ETS Bungles Additional GRE Short Form Practice Exam Launch
At long last, ETS has converted more of its official online practice exams to the short form of the test launched September 22, 2023 - and it only took about four months! That said, as of today, January 22, the ets.org/gre PowerPrep practice exam purchasing website could not be more confusing.
Let's break this down: The title is correct. Students do finally have access to a second free shorter GRE exam. However, the supplementary text immediately identifies that exam as a "practice test
that simulates the actual GRE General Test administered on or before September 20, 2023". This information is absolutely incorrect as it directly contradicts the title. Many MyGuru students (and I'm
sure other GRE test takers) have been anxiously waiting for these updated tests, so it's beyond frustrating that ETS continues to confuse the people who are ultimately its customers. One other point to note: the ETS website automatically selects for purchase its (most expensive!) $100 GRE Mentor course whenever someone goes to purchase any GRE materials, free or otherwise, even though the Mentor course recycles free content!
As a GRE prep professional, I could not be more disappointed in how ETS has managed its short form conversion to the GRE. Throughout 2023 students waited patiently for new practice materials. Yet for
more than three months after the conversion, ETS offered more practice materials for the retired exam than the new exam and blithely asserted that practicing for a longer test would make students
more ready for the shorter exam. This couldn't be further from the truth. I understand that ETS laid off 6% of its workforce this past fall, but perhaps it shouldn't have changed one of its flagship tests at that very moment! All of these factors and more are why we at MyGuru encourage MBA and JD candidates to prep for and submit the GMAT Focus or LSAT instead of the GRE.
These exams are developed with much more care than ETS has shown to the GRE and will help develop skills that will be useful in your graduate school and professional endeavors, whereas the GRE is an
outdated test that is simply trying to hold onto market share.
bug in WarpAffineQuad
03-11-2010 03:58 AM
I think there is a bug in WarpAffineQuad.
If one or more corners of the source quad lie outside the source ROI and an ippStsAffineQuadChanged warning is returned (which seems sensitive enough to occur fairly regularly), then sometimes the destination ROI is recalculated, causing a one-pixel-wide strip on the left and/or bottom not to be drawn.
I can work around it using GetAffineTransform to calculate the transform and WarpAffineBack to draw it, so this is not critical for me, but it would still be nice to have WarpAffineQuad working.
Adaptive estimation of VMD modes number based on cross correlation coefficient
The recently proposed variational mode decomposition (VMD) is a time-frequency signal analysis method. VMD has advantages in signal decomposition such as high precision and noise robustness, but its serious shortcoming is that the number of modes ($K$) must be given in advance; if the number is chosen inappropriately, VMD produces larger decomposition errors. In this paper, the VMD method is introduced and its over- and under-segmentation behavior is discussed. Cross correlation coefficients, which express the similarity between two signals, are computed among the VMD components and the original signal to judge whether over-segmentation has taken place. On this basis, an estimation method for the VMD parameter $K$ is proposed. Using the method, a tri-harmonic signal and vibration signals of ball bearings are analyzed in detail. The results show that the proposed method is feasible and effective.
1. Introduction
Time-frequency analysis of vibration signals is widely used in many fields [1-3]. Variational mode decomposition (VMD), proposed by Dragomiretskiy et al. in 2014 [4], is a time-frequency analysis method in signal processing. It assumes the signal to be composed of intrinsic mode functions, each an amplitude-modulated-frequency-modulated (AM-FM) signal with a different center frequency. The center frequency and bandwidth of each mode (component) are determined by a recursive search algorithm. VMD has a solid theoretical foundation, shows clear advantages in tone detection, tone separation and noise robustness, and has attracted wide attention since it was put forward [5-9]. However, its serious shortcoming is that the mode number $K$ must be given before decomposition, and the result is sensitive to that number. How to accurately forecast the value of $K$ is a key problem in VMD signal decomposition [4]. Tang uses a particle swarm optimization algorithm to find the best combination of the penalty parameter and the number of components [5]. Most scholars give an empirical value after analyzing or observing the processed signal, which means that if the number is inappropriate, modification and retries are indispensable [6-9]. To solve the problem, the behavior of VMD under over- and under-segmentation should be studied further, and the cross correlation coefficients among the modes and the original signal analyzed, so that a relationship between the coefficients and over-segmentation can be built. This paper proposes an automatic estimation method for the VMD parameter $K$ based on cross correlation coefficients. The method is shown to be feasible and effective by analyzing simulation signals and extracting the fault feature of a rolling bearing.
The rest of this article is organized as follows. Section 2 introduces VMD, observes the effect of over- and under-segmentation, and evaluates the outcome of VMD with too few or too many modes $K$. Section 3 investigates the relationship between over-segmentation and the cross correlation coefficients, and presents the estimation method of the VMD parameter $K$ in detail. Section 4 contains experiments and results for a noise-free signal, a noisy signal and rolling bearing fault signals. Section 5 concludes.
2. Variational mode decomposition
The intrinsic mode function (IMF) was originally defined in EMD (empirical mode decomposition, proposed by N. E. Huang in 1998); in VMD it is redefined as an amplitude-modulated-frequency-modulated (AM-FM) signal [4]:
${u}_{k}\left(t\right)={A}_{k}\left(t\right)\mathrm{c}\mathrm{o}\mathrm{s}\left({\varphi }_{k}\left(t\right)\right),$
where the phase $\varphi_k(t)$ is non-decreasing, $\varphi_k'(t) \ge 0$, the envelope is non-negative, $A_k(t) \ge 0$, and both the envelope $A_k(t)$ and the instantaneous frequency $\omega_k(t) := \varphi_k'(t)$ vary much more slowly than the phase $\varphi_k(t)$. In other words, on the interval $[t-\delta, t+\delta]$ with $\delta = 2\pi/\varphi_k'(t)$, $u_k(t)$ can be regarded as a harmonic signal with amplitude $A_k(t)$ and frequency $\omega_k(t)$.
To obtain the components, the steps are as follows: (1) for each mode function $u_k(t)$, compute the associated analytic signal via the Hilbert transform to obtain a unilateral frequency spectrum; (2) for each mode, shift the mode's spectrum to baseband by mixing with an exponential tuned to the respective estimated center frequency; (3) estimate the bandwidth through the Gaussian smoothness of the demodulated signal. The constrained variational problem is:
$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_k \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\}, \quad \mathrm{s.t.} \quad \sum_k u_k = f,$
where $\{u_k\} = \{u_1, u_2, \dots, u_K\}$ and $\{\omega_k\} = \{\omega_1, \omega_2, \dots, \omega_K\}$ denote all modes and their center frequencies, respectively, and $\sum_k$ is shorthand for $\sum_{k=1}^{K}$.
To obtain the optimal solution of the constrained variational problem, a quadratic penalty parameter $\alpha$ and Lagrangian multipliers $\lambda$ are used. The resulting augmented Lagrangian is:
$L(\{u_k\},\{\omega_k\},\lambda) = \alpha \sum_k \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_k u_k(t) \right\|_2^2 + \left\langle \lambda(t),\, f(t) - \sum_k u_k(t) \right\rangle.$
Eq. (3) is then solved with the alternating direction method of multipliers (ADMM), shown in Table 1. The modes obtained from the solution in the spectral domain are written as:
$\hat{u}_k^{n+1}(\omega) = \frac{\hat{f}(\omega) - \sum_{i \ne k} \hat{u}_i(\omega) + \frac{\hat{\lambda}(\omega)}{2}}{1 + 2\alpha(\omega - \omega_k)^2},$
$\omega_k^{n+1} = \frac{\int_0^{\infty} \omega \left| \hat{u}_k(\omega) \right|^2 d\omega}{\int_0^{\infty} \left| \hat{u}_k(\omega) \right|^2 d\omega},$
where $\omega_k^{n+1}$ is the center of gravity of the corresponding mode's power spectrum, and $\hat{u}_k(\omega)$ is the mode in the Fourier domain.
Table 1. ADMM optimization for VMD
Initialize $\left\{{\stackrel{^}{u}}_{k}^{1}\right\}$, $\left\{{\stackrel{^}{\omega }}_{k}^{1}\right\}$, ${\stackrel{^}{\lambda }}^{1}$,$n←0$;
for $k=1:K$ do
update ${\stackrel{^}{u}}_{k}$, according to Eq. (4)
update ${\omega }_{k}$, according to Eq. (5)
end for
Dual ascent for all $\omega \ge 0$:
$\hat{\lambda}^{n+1}(\omega) = \hat{\lambda}^{n}(\omega) + \tau \left( \hat{f}(\omega) - \sum_k \hat{u}_k^{n+1}(\omega) \right).$
Until convergence: $\sum_k \|\hat{u}_k^{n+1} - \hat{u}_k^{n}\|_2^2 / \|\hat{u}_k^{n}\|_2^2 < \epsilon$.
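The two inner updates of Table 1 can be sketched directly. Below is a minimal, illustrative NumPy version (assumed details: a normalized frequency grid from `np.fft.fftfreq`, and no signal mirroring, which the full algorithm uses): Eq. (4) is a Wiener filter centered on $\omega_k$, and Eq. (5) takes the center of gravity of the mode's power spectrum.

```python
import numpy as np

def wiener_mode_update(f_hat, sum_others, lam_hat, freqs, omega_k, alpha):
    # Eq. (4): Wiener-filter update of one mode in the spectral domain;
    # alpha controls the filter bandwidth around the center frequency omega_k
    return (f_hat - sum_others + lam_hat / 2) / (1 + 2 * alpha * (freqs - omega_k) ** 2)

def centre_frequency(u_hat_k, freqs):
    # Eq. (5): center of gravity of the mode's power spectrum over omega >= 0
    pos = freqs >= 0
    power = np.abs(u_hat_k[pos]) ** 2
    return np.sum(freqs[pos] * power) / np.sum(power)
```

Applied to a spectrum with a guess near one tone, a single update already pulls the center-frequency estimate onto that tone while strongly attenuating the others.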
Fig. 1. Too many modes ($K=10$) lead to over-segmentation of the signal:
a) VMD modes $u(t)$, $K=10$;
b) mode spectra $|\hat{u}_k(\omega)|$, $K=10$
From this algorithm it is clear that the mode number $K$ must be given in advance. To study the effect of $K$, we construct the tri-harmonic signal of Eq. (6), composed of three pure harmonics at 3 Hz, 26 Hz and 150 Hz:
$f = \cos(2\pi \cdot 3t) + 0.3\cos(2\pi \cdot 26t) + 0.02\cos(2\pi \cdot 150t).$
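For concreteness, the tri-harmonic signal of Eq. (6) can be generated as below; the 1000 Hz sampling rate and 1 s duration are assumptions, since the paper does not state them:

```python
import numpy as np

fs = 1000                     # sampling frequency in Hz (assumed)
t = np.arange(0, 1, 1 / fs)   # one second of samples
f = (np.cos(2 * np.pi * 3 * t)
     + 0.3 * np.cos(2 * np.pi * 26 * t)
     + 0.02 * np.cos(2 * np.pi * 150 * t))

# The three tones appear as the three largest peaks of the magnitude spectrum
spectrum = np.abs(np.fft.rfft(f)) / len(f)
freqs = np.fft.rfftfreq(len(f), 1 / fs)
peak_hz = freqs[np.argsort(spectrum)[-3:]]   # frequencies of the 3 largest peaks
```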
We set $K$ from 1 to 10 and observe the decomposition results. When the preset mode number is greater than the actual number of harmonics, VMD over-segments. Fig. 1 illustrates the result with $K=10$: Fig. 1(a) shows the ten modes in the time domain, and Fig. 1(b) their frequency spectra. From Fig. 1(a), modes 1, 4, 5 and 9 are incomplete harmonics, a result of over-segmentation. From Fig. 1(b), the center frequencies of components 1-3 are all 3 Hz and those of components 6-8 are all 26 Hz, which means the signal has been over-decomposed.
When the preset mode number is smaller than the actual number of harmonics, VMD under-segments. The decomposition with $K=2$ ($K<3$) is shown in Fig. 2. From Fig. 2(b), VMD splits the 150 Hz harmonic into two parts, which are absorbed into the 3 Hz and 26 Hz components respectively. With too few modes, some components are split up and contained in other components, or discarded as “noise”.
Fig. 2. Too few modes ($K=2$) lead to under-segmentation of the signal:
a) VMD modes $u(t)$, $K=2$;
b) mode spectra $|\hat{u}_k(\omega)|$, $K=2$
When the preset mode number equals the actual number of components, VMD recovers the exact tones. Fig. 3 shows the decomposition with $K=3$; the three components are recovered almost flawlessly. Fig. 3(b) shows the frequency spectra of the original signal and the three components. The center frequency of each component (3 Hz, 26 Hz and 150 Hz) matches the corresponding component of the original signal, and the amplitudes 1, 0.3 and 0.02 are likewise consistent.
The simulation shows that the value of $K$ strongly influences the decomposition: only when $K$ is chosen properly can accurate components be extracted from the original signal, without mode overlap or mode duplication. Since $K$ must be given in advance, a reasonable pre-estimation of the parameter is important and of practical significance. An adaptive estimation method is proposed in the following sections.
Fig. 3. Appropriate modes ($K=3$) lead to correct segmentation of the signal:
a) VMD modes $u(t)$, $K=3$;
b) mode spectra $|\hat{u}_k(\omega)|$, $K=3$
3. Adaptive $K$ estimation method of VMD based on cross correlation coefficient
The cross correlation coefficient is a statistical indicator of the affinity between two variables [10, 11], computed by the covariance method: the deviations of the two variables from their means are multiplied, and the product reflects the degree of relevance between them. The cross correlation coefficient between two sequences $x(n)$ and $y(n)$ is computed as [10]:
${\rho }_{xy}=\frac{\sum _{n=0}^{\infty }x\left(n\right)y\left(n\right)}{\sqrt{\sum _{n=0}^{\infty }{x}^{2}\left(n\right)\sum _{n=0}^{\infty }{y}^{2}\left(n\right)}}.$
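The formula above is the zero-lag normalized cross correlation and transcribes to one line of NumPy:

```python
import numpy as np

def cross_corr_coeff(x, y):
    # rho_xy = sum(x*y) / sqrt(sum(x^2) * sum(y^2)): the zero-lag
    # normalized cross correlation of two finite sequences
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y))
```

Two sinusoids of the same frequency give a coefficient near 1, while sinusoids of different frequencies (over whole periods) give a coefficient near 0, matching the property described next.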
According to the properties of the cross correlation coefficient, if $x(n)$ and $y(n)$ are periodic signals of the same frequency, or contain periodic components of the same frequency, the coefficient will be large; if they contain periodic components of different frequencies, it will be small. A large number of experiments were conducted, and a threshold of about 0.1 proved the most stable and the most consistent with expectations, so we recommend 0.1 as the threshold for judging whether two signals are relevant. In VMD, if the cross correlation coefficient between two components is larger than 0.1, we judge that over-segmentation has occurred.
In VMD, each component $u_i$ is a part of the original signal $f$. If the cross correlation coefficient $\rho_{if}$ between component $u_i$ and the original signal is smaller than the coefficient $\rho_{ij}$ between $u_i$ and another component $u_j$, then $u_i$ is closer to $u_j$ than to the original signal. In that case we also consider that over-segmentation has occurred.
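One possible reading of these two rules as code (a sketch; the paper gives no pseudocode, and the choice of flagging the later of the two modes is our assumption):

```python
def count_over_segmented(rho, rho_f, threshold=0.1):
    """Count components judged over-segmented.

    rho       : K x K matrix of coefficients among components
    rho_f     : length-K list of coefficients between each component and f
    threshold : 0.1, the value recommended in the text
    """
    K = len(rho_f)
    flagged = set()
    for i in range(K):
        for j in range(i + 1, K):
            # Rule 1: two components more similar than the threshold
            if rho[i][j] > threshold:
                flagged.add(j)   # flag the later mode as redundant (our choice)
            # Rule 2: a component closer to another mode than to the signal f
            elif rho[i][j] > min(rho_f[i], rho_f[j]):
                flagged.add(j)
    return len(flagged)
```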
The synthetic signal of Eq. (6) is decomposed by VMD with $K=10$; the cross correlation coefficients among the components and the original signal are listed in Table 2. The coefficients among components 1-3 ($\rho_{12}$, $\rho_{13}$, $\rho_{23}$) are 0.75, 0.52 and 0.90, so components 1-3 should form a single component. The coefficients among components 5-8 ($\rho_{56}$, $\rho_{57}$, $\rho_{58}$, $\rho_{67}$, $\rho_{68}$, $\rho_{78}$) are 0.51, 0.67, 0.73, 0.75, 0.94 and 0.87, so components 5-8 should also form a single component. Over-segmentation therefore occurs when the synthetic signal is decomposed with $K=10$, consistent with the graphical analysis in Section 2. Also, because $\rho_{45} > \rho_{4f}$ and $\rho_{45} = 0.17 > 0.1$, component 4 is likewise a product of over-segmentation, consistent with its appearance in Fig. 1(a).
Table 2. The cross correlation coefficients among VMD components and the original signal ($K=10$)
IMF1 IMF2 IMF3 IMF4 IMF5 IMF6 IMF7 IMF8 IMF9 IMF10 Original signal
IMF1 1 0.75 0.52 0.02 0.01 0.00 0.00 0.00 0.01 0.00 0.60
IMF2 1 0.90 0.03 0.02 0.00 0.01 0.01 0.01 0.00 0.96
IMF3 1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.02
IMF4 1 0.17 0.02 0.06 0.06 0.04 0.00 0.01
IMF5 1 0.51 0.67 0.73 0.11 0.00 0.22
IMF6 1 0.75 0.94 0.02 0.00 0.29
IMF7 1 0.87 0.08 0.00 0.28
IMF8 1 0.08 0.00 0.12
IMF9 1 0.00 0.01
IMF10 1 0.02
Original signal 0.60 0.96 0.02 0.01 0.22 0.29 0.28 0.12 0.01 0.02
Next, we examine the behavior of the cross correlation coefficients under under-segmentation. After VMD of the signal of Eq. (6) with $K=2$, the coefficients among the components and the original signal are shown in Table 3. The coefficient between components 1 and 2 is 0, so over-segmentation has not occurred, and there is no distinctive signature of under-segmentation. How, then, can a suitable $K$ be judged? Run VMD with $K+1$ and check for over-segmentation: if over-segmentation appears with $K+1$ modes, the current $K$ is the appropriate estimate; if not, keep increasing $K$ by 1 until it does.
Fig. 4. Flow chart of adaptive $K$ estimation
Table 3. The cross correlation coefficients among VMD components and the original signal ($K=2$)
IMF1 IMF2 Original signal
IMF1 1 0.00 0.96
IMF2 0 1 0.29
Original signal 0.96 0.29
Thus, the estimation algorithm for $K$ can be designed as in Fig. 4. First, set $K=K_0$, run VMD, compute the cross correlation coefficients among the components and the original signal, and judge whether the decomposition is over-segmented. If it is, count the over-segmented components, subtract that count from $K$, and run VMD again. Otherwise, check whether $K+1$ is over-segmented; if it is, the current value of $K$ is the desired estimate.
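The flow chart of Fig. 4 reduces to a short loop. In this sketch, `over_count(K)` is a stand-in for the whole pipeline (run VMD with $K$ modes, compute the coefficients, count over-segmented components); only the control flow is shown:

```python
def estimate_k(over_count, k0, k_max=20):
    # Fig. 4 as a loop: shrink K while over-segmented, grow K while even
    # K+1 shows no over-segmentation, stop when K is clean but K+1 is not
    k = k0
    while 1 < k < k_max:
        n_over = over_count(k)
        if n_over > 0:
            k -= n_over              # remove the redundant modes and retry
        elif over_count(k + 1) > 0:
            return k                 # K itself clean, K+1 over-segments: done
        else:
            k += 1                   # under-segmented: try one more mode
    return k
```

With a toy oracle whose true mode count is 3, the loop converges to 3 whether started too high or too low.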
The Hilbert-Huang transform (HHT) is an adaptive time-frequency method developed by Huang in 1998. It has two steps: empirical mode decomposition (EMD) and Hilbert spectral analysis (HSA) [11-14]. EMD, the first step, decomposes the data into a finite number of intrinsic mode function (IMF) components, which represent different oscillatory modes and act as a pre-processor for HSA. Based on local extrema, EMD decomposes signals adaptively; because the IMF components are obtained adaptively, their number is associated with the real modes of the signal and can be used as the initial $K$ (denoted $K_0$), avoiding a blind initial guess.
4. Adaptive $K$ estimation of simulation signal
4.1. Adaptive $K$ estimation of non-noise simulation signal
Now we estimate $K$ for the noise-free simulation signal of Eq. (6). Following Fig. 4, we first run EMD, which yields 3 IMFs, so we set $K_0 = 3$, run VMD with $K = K_0 = 3$, and compute the cross correlation coefficients shown in Table 4, from which we see that the decomposition is not over-segmented. We therefore set $K = K + 1 = 4$ and run VMD again; the resulting coefficients are shown in Table 5. Over-segmentation clearly appears, so the algorithm outputs 3. Compared with the actual signal, the estimated value ($K=3$) is correct.
Table 4. The cross correlation coefficients for the tri-harmonic signal ($K=3$)
IMF1 IMF2 IMF3 Original signal
IMF1 1 0.00 0.00 0.96
IMF2 0 1 0.00 0.29
IMF3 0 0 1 0.02
Original signal 0.96 0.29 0.02
Table 5. The cross correlation coefficients for the tri-harmonic signal ($K=4$)
IMF1 IMF2 IMF3 IMF4 Original signal
IMF1 1 0.00 0.00 0.00 0.96
IMF2 0 1 1.00 1.00 0.29
IMF3 0 0 1 1.00 0.29
IMF4 0 0 0 1 0.29
Original signal 0.96 0.29 0.29 0.29
4.2. Adaptive $K$ estimation of noisy simulation signal
To estimate $K$ for a noisy signal, we use the following tri-harmonic signal corrupted by noise:
$f=\mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi 3t\right)+0.3\mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi 26t\right)+0.02\mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi 150t\right)+\eta ,$
where $\eta \sim N(0,\sigma)$ is additive Gaussian noise and $\sigma$ controls the noise level (the standard deviation). Here we pick $\sigma = 0.1$.
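The noisy signal of Eq. (7) is generated like Eq. (6) plus seeded Gaussian noise (sampling rate again assumed to be 1000 Hz); note that the 150 Hz tone's amplitude (0.02) sits well below the noise level $\sigma = 0.1$, which is why an extra mode is needed to soak up the noise:

```python
import numpy as np

rng = np.random.default_rng(0)       # fixed seed for reproducibility
fs = 1000                            # sampling frequency in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
clean = (np.cos(2 * np.pi * 3 * t)
         + 0.3 * np.cos(2 * np.pi * 26 * t)
         + 0.02 * np.cos(2 * np.pi * 150 * t))
sigma = 0.1
noisy = clean + sigma * rng.standard_normal(t.size)   # eta ~ N(0, sigma)
```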
Following the algorithm of Fig. 4, we first perform EMD and obtain 5 IMFs, so $K_0 = 5$. All the cross correlation coefficients after VMD with $K=5$ are shown in Table 6. The coefficient between components 4 and 5 is higher than the one between component 5 and the original signal, so there is one over-decomposition and $K$ is reset to 4 ($K = K - 1$). We then run VMD with $K=4$; the coefficients are shown in Table 7. Every coefficient is below 0.1, so no over-segmentation appears. Since the decomposition with $K+1=5$ is over-segmented, the final output is $K=4$.
Table 6. The cross correlation coefficients for the noisy tri-harmonic signal ($K=5$)
IMF1 IMF2 IMF3 IMF4 IMF5 Original signal
IMF1 1 0.00 0.00 0.00 0.00 0.95
IMF2 1 0.00 0.00 0.00 0.29
IMF3 1 0.02 0.01 0.07
IMF4 1 0.08 0.06
IMF5 1 0.06
Original signal 0.95 0.29 0.07 0.06 0.06
Table 7. The cross correlation coefficients for the noisy tri-harmonic signal ($K=4$)
IMF1 IMF2 IMF3 IMF4 Original signal
IMF1 1 0.00 0.00 0.00 0.96
IMF2 1 0.00 0.00 0.29
IMF3 1 0.02 0.04
IMF4 1 0.03
Original signal 0.96 0.29 0.04 0.03
However, we know the signal contains three harmonics. Let us observe what happens with $K=3$. Performing VMD with $K=3$ and computing the coefficients in Table 8, the decomposition is judged not over-segmented. Yet with $K=3$ the three extracted center frequencies are 3 Hz, 26 Hz and 359 Hz: the original 150 Hz harmonic is not extracted and remains mixed with the noise. With $K=4$, the four extracted center frequencies are 3 Hz, 26 Hz, 150 Hz and 318 Hz; the three original harmonics are all extracted and the remaining 318 Hz component can be regarded as noise. $K=4$ is therefore more reasonable.
Table 8. The cross correlation coefficients for the noisy tri-harmonic signal ($K=3$)
IMF1 IMF2 IMF3 Original signal
IMF1 1 0.00 0.00 0.95
IMF2 0 1 0.00 0.30
IMF3 0 0 1 0.06
Original signal 0.95 0.30 0.06
4.3. Adaptive $K$ estimation for rolling bearing fault signal
Rolling element bearing fault data are obtained from the Case Western Reserve University Bearing Data Center. The test bearing is an SKF 6205-2RS JEM deep-groove ball bearing; the rotating speed is 1730 rpm and the sampling frequency is 12 kHz. Three experiments were conducted, on inner raceway, ball and outer raceway fault signals; here we discuss only the inner raceway case in detail. The theoretical fault feature frequency of the inner raceway is 156.1 Hz.
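The 156.1 Hz figure follows from the standard ball-pass frequency of the inner race, $\mathrm{BPFI} = \frac{n}{2} f_r \left(1 + \frac{d}{D}\cos\varphi\right)$. The geometry below (9 rolling elements, 7.94 mm ball diameter, 39.04 mm pitch diameter, 0° contact angle) is the publicly documented CWRU description of the 6205-2RS bearing and should be treated as assumed values:

```python
import math

def bpfi(n_balls, ball_d, pitch_d, shaft_hz, contact_deg=0.0):
    # BPFI = (n/2) * f_r * (1 + (d/D) * cos(phi))
    return (n_balls / 2) * shaft_hz * (
        1 + (ball_d / pitch_d) * math.cos(math.radians(contact_deg)))

f_rot = 1730 / 60                      # shaft speed: 1730 rpm -> ~28.83 Hz
f_inner = bpfi(9, 7.94, 39.04, f_rot)  # close to the 156.1 Hz quoted above
```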
First, EMD is performed to obtain the initial $K$ ($K_0$); it yields 12 IMFs, so the initial value of $K$ is 12. The cross correlation coefficients after VMD with $K=12$ are shown in Table 9. Six coefficients are greater than or equal to 0.1, so $K$ is reset to 6 ($K = 12 - 6 = 6$). VMD is then run with $K=6$ (Table 10); one coefficient, between components 5 and 6, exceeds 0.1, so $K$ is reset to 5 ($K = 6 - 1 = 5$). With $K=5$ (Table 11), one over-segmentation again occurs, so we set $K = K - 1 = 4$ and run VMD once more. The coefficients with $K=4$ (Table 12) show no over-segmentation. Because the decomposition with $K+1=5$ is over-segmented, the output is $K=4$.
To verify that this value is reasonable, a contrastive analysis is given. The envelope spectrum is often used in rolling bearing fault analysis. Fig. 5 and Fig. 6 illustrate the envelope spectra after VMD with $K=4$ and $K=5$, respectively. The practical fault feature frequency (155.3 Hz) and its double frequency (310.5 Hz) both appear.
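An envelope spectrum of the kind plotted in Figs. 5 and 6 can be sketched with an FFT-based Hilbert transform; this is a generic implementation under our own conventions, not the authors' code:

```python
import numpy as np

def envelope_spectrum(x, fs):
    # Analytic signal via the FFT-based Hilbert transform: zero out negative
    # frequencies, double positive ones, then take the magnitude as envelope
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:N // 2] = 2.0
    h[N // 2] = 1.0            # Nyquist bin (N assumed even)
    env = np.abs(np.fft.ifft(X * h))
    env = env - env.mean()     # drop the DC term so modulation peaks stand out
    spec = np.abs(np.fft.rfft(env)) / N
    freqs = np.fft.rfftfreq(N, 1 / fs)
    return freqs, spec
```

For an amplitude-modulated test tone, the envelope spectrum peaks at the modulation frequency, which is exactly how a fault repetition rate such as 155.3 Hz shows up in this kind of plot.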
Table 9. The cross correlation coefficients for the rolling bearing fault signal ($K=12$)
IMF1 IMF2 IMF3 IMF4 IMF5 IMF6 IMF7 IMF8 IMF9 IMF10 IMF11 IMF12 Original signal
IMF1 1.00 0.16 0.04 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.12
IMF2 1.00 0.03 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.20
IMF3 1.00 0.13 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.26
IMF4 1.00 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.31
IMF5 1.00 0.06 0.01 0.00 0.00 0.00 0.01 0.01 0.36
IMF6 1.00 0.03 0.01 0.01 0.01 0.01 0.01 0.36
IMF7 1.00 0.13 0.04 0.03 0.03 0.01 0.34
IMF8 1.00 0.22 0.06 0.03 0.01 0.47
IMF9 1.00 0.10 0.03 0.01 0.51
IMF10 1.00 0.11 0.01 0.39
IMF11 1.00 0.04 0.20
IMF12 1.00 0.08
Original signal 0.12 0.20 0.26 0.31 0.36 0.36 0.34 0.47 0.51 0.39 0.20 0.08
Table 10. The cross correlation coefficients for the rolling bearing fault signal ($K=6$)
IMF1 IMF2 IMF3 IMF4 IMF5 IMF6 Original signal
IMF1 1 0.08 0.02 0.01 0.00 0.00 0.23
IMF2 1 0.07 0.01 0.00 0.00 0.37
IMF3 1 0.03 0.00 0.00 0.37
IMF4 1 0.00 0.00 0.37
IMF5 1 0.11 0.48
IMF6 1 0.66
Original signal 0.23 0.37 0.37 0.37 0.48 0.66
Table 11. The cross correlation coefficients for the rolling bearing fault signal ($K=5$)
IMF1 IMF2 IMF3 IMF4 IMF5 Original signal
IMF1 1 0.04 0.01 0.00 0.00 0.23
IMF2 0 1 0.17 0.01 0.00 0.38
IMF3 0 0 1 0.06 0.02 0.46
IMF4 0 0 0 1 0.02 0.49
IMF5 0 0 0 0 1 0.68
Original signal 0.23 0.38 0.46 0.49 0.68
Table 12. The cross correlation coefficients for the rolling bearing fault signal ($K=4$)
IMF1 IMF2 IMF3 IMF4 Original signal
IMF1 1 0.04 0.01 0.00 0.24
IMF2 1 0.02 0.01 0.38
IMF3 1 0.04 0.50
IMF4 1 0.74
Original signal 0.24 0.38 0.50 0.74
Fig. 5. Envelope spectra of each component after VMD operation with $K=4$
Fig. 6. Envelope spectra of each component after VMD operation with $K=5$
In both figures (Figs. 5 and 6), the 155.3 Hz characteristic frequency is prominent in every component. The double-frequency amplitudes of components 4 and 5 in Fig. 6 are similar to those of components 3 and 4 in Fig. 5, while the double-frequency amplitude of component 2 in Fig. 6 is very weak, only 0.006. Moreover, from Table 11, the cross correlation coefficient between components 2 and 3 is 0.17, higher than 0.1. Thus component 2 in Fig. 6 can be considered over-segmented from component 3, and $K=4$ is more suitable than $K=5$.
5. Conclusions
As a newly proposed time-frequency analysis method, VMD performs well in tone detection, tone separation and noise robustness. However, its result is sensitive to the mode number $K$. Over- and under-segmentation in VMD have been discussed, and the cross correlation coefficients among the modes and the original signal have been analyzed, establishing a relationship between over-segmentation and the coefficients, on which the proposed estimation method of the VMD parameter $K$ is based. In the algorithm, the initial value of $K$ is determined by the number of IMFs produced by EMD. The method is shown to be feasible and effective by analyzing simulation signals and extracting the fault feature of a rolling bearing.
• Wang G. B., He Z. J., Chen X. F., Lai Y. N. Basic research on machinery fault diagnosis – what is the prescription. Journal of Mechanical Engineering, Vol. 49, Issue 1, 2013, p. 63-72.
• Yayli Mustafa Ozgur On the axial vibration of carbon nanotubes with different boundary conditions. Micro and Nano Letters, Vol. 9, Issue 11, 2014, p. 807-811.
• Yayli Mustafa Ozgur Free vibration behavior of a gradient elastic beam with varying cross section. Shock and Vibration, Vol. 2014, 2014.
• Dragomiretskiy K., Zosso D. Variational mode decomposition. IEEE Transactions on Signal Processing, Vol. 62, Issue 3, 2014, p. 531-544.
• Tang Guiji, Wang Xiaolong Parameter optimized variational mode decomposition method with application to incipient fault diagnosis of rolling bearing. Journal of Xi’an Jiaotong University, Vol. 49, Issue 5, 2015, p. 73-81.
• Liu Changliang, Wu Yingjie, Zhen Chengang Rolling bearing fault diagnosis based on variational mode decomposition and fuzzy C means clustering. Proceedings of the CSEE, Vol. 35, Issue 13, 2015, p.
• An Lian-suo, Feng Qiang, et al. Variational mode decomposition and its application to monitor the furnace pressure pipeline weak signal detection of leakage. Boiler Technology, Vol. 46, Issue 4,
2015, p. 1-6.
• Yin Aijun, Ren Hongji A propagating mode extraction algorithm for microwave waveguide using variational mode decomposition. Measurement Science and Technology, Vol. 46, 2015, p. 1-10.
• Wang Yanxue, Markert Richard, Xiang Jiawei, Zhang Weiguang Research on variational mode decomposition and its application in detecting rub-impact fault of the rotor system. Mechanical System and
Signal Processing, Vols. 60-61, 2015, p. 243-251.
• Ding Chang-Fu, Cai Zhi-Cheng Methods of selecting valid IMF in EMD. Thermal Power Generation, Vol. 43, Issue 1, 2014, p. 36-40.
• Meng Lingjie, Xiang Jiawei, Wang Yanxue, et al. A hybrid fault diagnosis method using morphological filter-translation invariant wavelet and improved ensemble empirical mode decomposition.
Mechanical Systems and Signal Process, Vol. 50, Issue 51, 2015, p. 101-115.
• Huang N. E., Shen Z., et al. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society of London A, Vol. 454, 1998, p. 903-995.
• Huang N. E. A new view of nonlinear waves: the Hilbert spectrum. Annual Review of Fluid Mechanics, Vol. 31, 1999, p. 417-457.
• Peng Z. K., Tse Peter W., Chu F. L. A comparison study of improved Hilbert-Huang transform: Application to fault diagnosis for rolling bearing. Mechanical Systems and Signal Process. Vol. 19,
Issue 5, 2055, p. 974-988.
About this article
Modal analysis and applications
variational mode decomposition (VMD)
cross correlation coefficient
parameter estimation
rolling bearing
Copyright © 2017 JVE International Ltd.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
AutoGen Graph - Mervin Praison
pip install -U "pyautogen[graph]>=0.2.11" matplotlib networkx
import random
import matplotlib.pyplot as plt
import networkx as nx
import autogen
from autogen.agentchat.conversable_agent import ConversableAgent
from autogen.agentchat.assistant_agent import AssistantAgent
from autogen.agentchat.groupchat import GroupChat
from autogen.graph_utils import visualize_speaker_transitions_dict
config_list_gpt4 = {
    "timeout": 600,
    "cache_seed": 44,
    "temperature": 0,
    "config_list": [{"model": "gpt-4-turbo-preview"}],
}
def get_agent_of_name(agents, name) -> ConversableAgent:
    for agent in agents:
        if agent.name == name:
            return agent
agents = []
speaker_transitions_dict = {}
secret_values = {}
for prefix in ["A", "B", "C"]:
    for i in range(3):
        node_id = f"{prefix}{i}"
        secret_value = random.randint(1, 5)
        secret_values[node_id] = secret_value
        # Reconstructed: the extracted snippet dropped the agent constructor
        # that wraps this system message, so it is restored here.
        agents.append(
            AssistantAgent(
                name=node_id,
                llm_config=config_list_gpt4,
                system_message=f"""Your name is {node_id}.
Do not respond as the speaker named in the NEXT tag if your name is not in the NEXT tag. Instead, suggest a relevant team leader to handle the mis-tag, with the NEXT: tag.
You have {secret_value} chocolates.
The list of players are [A0, A1, A2, B0, B1, B2, C0, C1, C2].
Your first character of your name is your team, and your second character denotes that you are a team leader if it is 0.
CONSTRAINTS: Team members can only talk within the team, whilst team leader can talk to team leaders of other teams but not team members of other teams.
You can use NEXT: to suggest the next speaker. You have to respect the CONSTRAINTS, and can only suggest one player from the list of players, i.e., do not suggest A3 because A3 is not from the list of players.
Team leaders must make sure that they know the sum of the individual chocolate count of all three players in their own team, i.e., A0 is responsible for team A only.
Keep track of the player's tally using a JSON format so that others can check the total tally. Use
A0:?, A1:?, A2:?,
B0:?, B1:?, B2:?,
C0:?, C1:?, C2:?
If you are the team leader, you should aggregate your team's total chocolate count to cooperate.
Once the team leader knows their team's tally, they can suggest another team leader for them to find their team tally, because we need all three team tallies to succeed.
Use NEXT: to suggest the next speaker, e.g., NEXT: A0.
Once we have the total tally from all nine players, sum up all three teams' tally, then terminate the discussion using TERMINATE.""",
            )
        )
        speaker_transitions_dict[agents[-1]] = []
for prefix in ["A", "B", "C"]:
    for i in range(3):
        source_id = f"{prefix}{i}"
        for j in range(3):
            target_id = f"{prefix}{j}"
            if i != j:
                speaker_transitions_dict[get_agent_of_name(agents, source_id)].append(get_agent_of_name(agents, target_id))
speaker_transitions_dict[get_agent_of_name(agents, "A0")].append(get_agent_of_name(agents, "B0"))
speaker_transitions_dict[get_agent_of_name(agents, "A0")].append(get_agent_of_name(agents, "C0"))
speaker_transitions_dict[get_agent_of_name(agents, "B0")].append(get_agent_of_name(agents, "A0"))
speaker_transitions_dict[get_agent_of_name(agents, "B0")].append(get_agent_of_name(agents, "C0"))
speaker_transitions_dict[get_agent_of_name(agents, "C0")].append(get_agent_of_name(agents, "A0"))
speaker_transitions_dict[get_agent_of_name(agents, "C0")].append(get_agent_of_name(agents, "B0"))
graph = nx.DiGraph()
graph.add_nodes_from([agent.name for agent in agents])
for key, value in speaker_transitions_dict.items():
    for agent in value:
        graph.add_edge(key.name, agent.name)
plt.figure(figsize=(12, 10))
pos = nx.spring_layout(graph)
nx.draw(graph, pos, with_labels=True, font_weight="bold")
for node, (x, y) in pos.items():
    secret_value = secret_values[node]
    plt.text(x, y + 0.1, s=f"Secret: {secret_value}", horizontalalignment="center")
plt.show()  # reconstructed: display the transition graph
def is_termination_msg(content) -> bool:
    have_content = content.get("content", None) is not None
    if have_content and "TERMINATE" in content["content"]:
        return True
    return False
user_proxy = autogen.UserProxyAgent(
    name="User_proxy",  # reconstructed: matches the agent named in the warnings below
    system_message="Terminator admin.",
    code_execution_config=False,
    human_input_mode="NEVER",
    is_termination_msg=is_termination_msg,
)
agents.append(user_proxy)
group_chat = GroupChat(
    agents=agents,
    messages=[],
    max_round=25,  # reconstructed value; any bound large enough for the game works
    allowed_or_disallowed_speaker_transitions=speaker_transitions_dict,
    speaker_transitions_type="allowed",
)
manager = autogen.GroupChatManager(
    groupchat=group_chat,
    llm_config=config_list_gpt4,
    is_termination_msg=is_termination_msg,
)

# Team leader A0 starts the conversation (message matches the run output below)
agents[0].initiate_chat(
    manager,
    message="""There are 9 players in this game, split equally into Teams A, B, C. Therefore each team has 3 players, including the team leader.
The task is to find out the sum of chocolate count from all nine players. I will now start with my team.
NEXT: A1""",
)
❯ python graph.py
WARNING:root:Warning: There are isolated agent nodes, there are not incoming nor outgoing edges. Isolated agents: ['User_proxy']
WARNING:root:Warning: The set of agents in allowed_speaker_transitions do not match agents. Offending agents: ['User_proxy']
A0 (to chat_manager):
There are 9 players in this game, split equally into Teams A, B, C. Therefore each team has 3 players, including the team leader.
The task is to find out the sum of chocolate count from all nine players. I will now start with my team.
NEXT: A1
A1 (to chat_manager):
A0:?, A1:4, A2:?,
B0:?, B1:?, B2:?,
C0:?, C1:?, C2:?
I have 4 chocolates. Let's find out how many chocolates A2 has.
NEXT: A2
A2 (to chat_manager):
A0:?, A1:4, A2:5,
B0:?, B1:?, B2:?,
C0:?, C1:?, C2:?
I have 5 chocolates. Now, we need to report back to our team leader, A0, to aggregate our team's total chocolate count.
NEXT: A0
A0 (to chat_manager):
A0:5, A1:4, A2:5,
B0:?, B1:?, B2:?,
C0:?, C1:?, C2:?
As the team leader of Team A, I have aggregated our team's total chocolate count. Team A has a total of 14 chocolates. It's time to find out Team B's total chocolate count.
NEXT: B0
B0 (to chat_manager):
A0:5, A1:4, A2:5,
B0:3, B1:?, B2:?,
C0:?, C1:?, C2:?
As the team leader of Team B, I will start by sharing that I have 3 chocolates. Now, let's find out how many chocolates B1 has.
NEXT: B1
B1 (to chat_manager):
A0:5, A1:4, A2:5,
B0:3, B1:1, B2:?,
C0:?, C1:?, C2:?
I have 1 chocolate. Let's find out how many chocolates B2 has.
NEXT: B2
B2 (to chat_manager):
A0:5, A1:4, A2:5,
B0:3, B1:1, B2:2,
C0:?, C1:?, C2:?
I have 2 chocolates. Now, we need to report back to our team leader, B0, to aggregate our team's total chocolate count.
NEXT: B0
B0 (to chat_manager):
A0:5, A1:4, A2:5,
B0:3, B1:1, B2:2,
C0:?, C1:?, C2:?
As the team leader of Team B, I have aggregated our team's total chocolate count. Team B has a total of 6 chocolates. It's time to find out Team C's total chocolate count.
NEXT: C0
C0 (to chat_manager):
A0:5, A1:4, A2:5,
B0:3, B1:1, B2:2,
C0:1, C1:?, C2:?
As the team leader of Team C, I have 1 chocolate. Now, let's find out how many chocolates C1 has.
NEXT: C1
C1 (to chat_manager):
A0:5, A1:4, A2:5,
B0:3, B1:1, B2:2,
C0:1, C1:4, C2:?
I have 4 chocolates. Let's find out how many chocolates C2 has.
NEXT: C2
C2 (to chat_manager):
A0:5, A1:4, A2:5,
B0:3, B1:1, B2:2,
C0:1, C1:4, C2:5
I have 5 chocolates. Now, we need to report back to our team leader, C0, to aggregate our team's total chocolate count.
NEXT: C0
C0 (to chat_manager):
A0:5, A1:4, A2:5,
B0:3, B1:1, B2:2,
C0:1, C1:4, C2:5
As the team leader of Team C, I have aggregated our team's total chocolate count. Team C has a total of 10 chocolates. Now that we have the total tally from all three teams, we can sum up all three teams' tally.
Team A: 14 chocolates
Team B: 6 chocolates
Team C: 10 chocolates
Total: 30 chocolates
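As a quick arithmetic check on the transcript above, the team totals reported by the leaders agree with the individual counts (the dictionary below just restates the tally from the run output):

```python
# Tally of chocolates as reported in the transcript above
tally = {"A0": 5, "A1": 4, "A2": 5,
         "B0": 3, "B1": 1, "B2": 2,
         "C0": 1, "C1": 4, "C2": 5}

# Per-team totals and the grand total
team_totals = {t: sum(v for k, v in tally.items() if k.startswith(t)) for t in "ABC"}
grand_total = sum(tally.values())
print(team_totals, grand_total)  # {'A': 14, 'B': 6, 'C': 10} 30
```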
Will Donovan
Associate professor at Yau MSC, Tsinghua University, Beijing.
Adjunct faculty at BIMSA, Yanqi Lake, Huairou, Beijing.
Visiting associate scientist at Kavli IPMU, University of Tokyo.
I work in geometry and its intersections with physics and algebra, with particular interests as follows.
• Algebraic geometry
• Noncommutative geometry
• Representation theory
• Symplectic geometry
• Quantum field theory
I completed my PhD in 2011 at Imperial College London, supervised by Richard Thomas and Ed Segal. I was a postdoc at University of Edinburgh with Iain Gordon, and then at Kavli IPMU, University of
Tokyo until 2018.
My curriculum vitae.
Different Types of Magic Rectangles in Construction of Magic Squares of Order 22
Never Seen Before
This work brings magic squares in very different way. These are based on three different types of magic rectangles:
1. Bordered Magic Rectangles:
2. Double Digits Magic Rectangles:
3. Cornered magic rectangles.
First of all, let's understand these magic rectangles one by one. Removing the external borders in each case, we are still left with lower-order magic rectangles. See below a few examples of each case.
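All three variants share the defining property of a magic rectangle: every row has one common sum and every column another. A minimal checker for that property (the 2×4 example is my own, not taken from this article):

```python
def is_magic_rectangle(rect):
    """True when every row shares one common sum and every column another."""
    row_sums = {sum(row) for row in rect}
    col_sums = {sum(col) for col in zip(*rect)}
    return len(row_sums) == 1 and len(col_sums) == 1

# A 2x4 magic rectangle over 1..8: each row sums to 18, each column to 9
example = [[1, 7, 6, 4],
           [8, 2, 3, 5]]
print(is_magic_rectangle(example))  # True
```

For an m×n rectangle filled with 1..mn, the forced row and column sums are n(mn+1)/2 and m(mn+1)/2, which is why m and n must have the same parity.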
1. Bordered Magic Rectangles
Below are two examples of bordered magic rectangles
Example 1. Bordered Magic Rectangle of Order 12×18
The entries according to colors are as follows:
Example 2. Bordered Magic Rectangle of Order 6×16
The entries according to colors are as follows:
More details are given in author’s recent work:
2. Double Digits Magic Rectangles
Below are two examples of double digits magic rectangles
Example 1. Double Digits Magic Rectangle of Order 14×20
Except for the corners, the entries' sums are understood as
Example 2. Double Digits Magic Rectangle of Order 12×18
Except for the corners, the entries' sums are understood as
After reorganizing the internal magic rectangle of order 4×16, we have the following double digits magic rectangle:
More details are given in author’s recent work:
3. Cornered Magic Rectangles
Below are two examples of cornered magic rectangles
Example 1. Cornered Magic Rectangle of Order 8×12
Let’s consider a cornered magic rectangle of order 8×12 formed by 96 sequential entries, i.e., from 1 to 96:
Distributions in colors as follows:
Example 2. Cornered Magic Rectangle of Order 10×24
Let’s consider a cornered magic rectangle of order 10×24 formed by 240 sequential entries, i.e., from 1 to 240:
Distributions in colors as follows:
More details are given in author’s recent work:
Based on the ideas given above, we shall construct magic squares of order 22. These are constructed with four magic rectangles which, as explained above, are of equal sums. They are made with the help of a script by H. White (Downloads (budshaw.ca) – NestedCornerRectangles). These kinds of magic squares have not been seen before, and they come in different styles. All three types of magic rectangles are used to produce magic squares of order 22. See below:
Magic Squares of Order 22
Initially below are two double digits and cornered magic squares of order 22:
1. Bordered Magic Rectangles and
Magic Squares of Order 22
Below are examples of magic squares of order 22 centered in magic squares of order 6, 10 and 14 respectively. These are constructed with four equal sums bordered magic rectangles of orders 8×14, 6×16
and 4×18 respectively.
More examples are in pdf file attached at the end of work.
2. Double Digits Magic Rectangles and
Magic Squares of Order 22
Below are examples of magic squares of order 22 centered in magic squares of order 6 and 10 respectively. These are constructed with four equal sums double digits magic rectangles of orders 8×14 and
6×16 respectively.
In this case we don’t have a magic square of order 22 centered in a magic square of order 14 built from four equal-sums double digits magic rectangles.
3. Cornered Magic Rectangles and
Magic Squares of Order 22: First Type
Below are examples of magic squares of order 22 centered in magic squares of order 6, 10 and 14 respectively. These are constructed with four equal sums cornered magic rectangles of orders 8×14, 6×16
and 4×18 respectively.
3. Cornered Magic Rectangles and
Magic Squares of Order 22: Second Type
Below are examples of magic squares of order 22 centered in magic squares of order 6 and 10 respectively. These are constructed with four equal sums cornered magic rectangles of orders 8×14 and 6×16 respectively.
3. Cornered Magic Rectangles and
Magic Squares of Order 22: Third Type
Below are examples of magic squares of order 22 centered in magic squares of order 6, 10 and 14 respectively. These are constructed with four equal sums cornered magic rectangles of orders 8×14, 6×16
and 4×18 respectively.
PDF File of Magic Squares of Order 22
Below is a pdf file of 132 magic squares of order 22 for download. These are constructed with 4 equal sums magic rectangles of three types as explained above. Magic squares number 1 and 2 are basic.
Video Series on the HP graphing calculators
05-03-2020, 03:19 AM
Post: #1
stephenmendes Posts: 17
Junior Member Joined: Apr 2020
Video Series on the HP graphing calculators
I decided there might be some interest ..... because the HP50G comes with a small fraction of the documentation that the original 48SX came with.
So having used every model since they came out ..... I decided to make an educational series for them ..... maybe there are still some users/owners who have no idea what they can actually do
Wish me luck
Here is the first episode
05-03-2020, 10:51 PM
Post: #2
cdmackay Posts: 775
Senior Member Joined: Sep 2018
RE: Video Series on the HP graphing calculators
thanks for doing this, I enjoyed the video. Nice solid approach on which to build. Look forward to more (please)…
request: please consider a higher video quality, for calculator shots.
Cambridge, UK
41CL/DM41X 12/15C/16C DM15/16 17B/II/II+ 28S 42S/DM42 32SII 48GX 50g 35s WP34S PrimeG2 WP43S/pilot/C47
Casio, Rockwell 18R
05-04-2020, 10:38 PM
Post: #3
stephenmendes Posts: 17
Junior Member Joined: Apr 2020
RE: Video Series on the HP graphing calculators
(05-03-2020 10:51 PM)cdmackay Wrote: thanks for doing this, I enjoyed the video. Nice solid approach on which to build. Look forward to more (please)…
request: please consider a higher video quality, for calculator shots.
Thanks for your encouragement .... the original calculator shot was hi-res but was compromised loading it onto the "Whiteboard"..... but as the video series progresses I will be filming the actual
calculator screen with an HD video camera mounted on a tripod .... so quality will be better
Best Wishes,
Stephen Mendes
05-05-2020, 02:16 PM
Post: #4
cdmackay Posts: 775
Senior Member Joined: Sep 2018
RE: Video Series on the HP graphing calculators
sounds good, thanks Stephen.
Cambridge, UK
41CL/DM41X 12/15C/16C DM15/16 17B/II/II+ 28S 42S/DM42 32SII 48GX 50g 35s WP34S PrimeG2 WP43S/pilot/C47
Casio, Rockwell 18R
05-05-2020, 06:52 PM
Post: #5
jch Posts: 98
Member Joined: Dec 2014
RE: Video Series on the HP graphing calculators
Thank you for sharing, I find your approach very educational, clearly introducing the fundamentals.
Looking forward to more episodes!
05-07-2020, 02:08 PM
Post: #6
joeres Posts: 47
Junior Member Joined: Oct 2016
RE: Video Series on the HP graphing calculators
(05-03-2020 03:19 AM)stephenmendes Wrote: ..... maybe there are still some users/owners who have no idea what they can actually do
I am happy about the planned HP50g video series. I use the real devices almost every day and want to learn more about how to use them. The whole concept including RPL is really fascinating.
It's just a shame that HP stopped the production. Could SwissMicros theoretically also replicate an HP50g? Or is this not possible for licensing reasons?
By the way, I came to the HP50g via your video
, thanks for that.
good luck
05-10-2020, 01:59 PM
(This post was last modified: 05-10-2020 02:00 PM by stephenmendes.)
Post: #7
stephenmendes Posts: 17
Junior Member Joined: Apr 2020
RE: Video Series on the HP graphing calculators
Hi Guys thanks for the encouragement
Here is the second episode ..... several items mentioned in this video :
Input methods
Command line editing and viewing of larger stack objects
Time and Date setup
CAS versus RPN differences
Plotting / solving for independent variable
are to be addressed in their own respective videos .... and I will need to find some index system so viewers can move from one video to the other in an easy and efficient manner.
I do not think that many people who own the 50G (but never owned previous 48 calculators) have any idea of its incredible capabilities.
The 50G expands on every previous calculator that went before it...... backward compatibility is maintained with all the commands and flags ...... and additional flags and commands have been
added.... along with more memory (and the SD card subsystem).
I blame the LACK OF COMPREHENSION as to the capabilities of this calculator on extremely POOR DOCUMENTATION..... as shipped, there was nothing that came with it that even remotely showed how to
leverage its capabilities.
Texas Instruments calculators ONLY dominate the market because of TI's tremendous attempt to EDUCATE the public about how to use them.
Had a similar attempt been made for HP the superiority of the 50G would have been immediately obvious to every college student ..... and instead of being obsolete, they would be selling by the
millions..... SOMEBODY HAS DROPPED THE BALL WITH THIS CALCULATOR
By the time I am finished with my video series (if I ever am) "used" 50G's will be selling $2000 a pop ??
05-10-2020, 07:03 PM
Post: #8
grsbanks Posts: 1,219
Senior Member Joined: Jan 2017
RE: Video Series on the HP graphing calculators
(05-10-2020 01:59 PM)stephenmendes Wrote: By the time I am finished with my video series (if I ever am) "used" 50G's will be selling $2000 a pop ??
I hope so! I think there are 13 of them in my collection so that's quite a nest egg I'll be sitting on if they do
There are only 10 types of people in this world. Those who understand binary and those who don't.
05-13-2020, 03:56 PM
Post: #9
stephenmendes Posts: 17
Junior Member Joined: Apr 2020
RE: Video Series on the HP graphing calculators
Just letting you know the THIRD episode of "hp Tutorial video" is here..... this one looks at ENTRY MODES .... and should work with all the HP graphing calculator models (although I am specifically
using the 50G for all my examples)
05-14-2020, 05:36 PM
Post: #10
cdmackay Posts: 775
Senior Member Joined: Sep 2018
RE: Video Series on the HP graphing calculators
thanks Stephen, keep 'em coming please.
Not a criticism, but for the last one, I would have preferred to see you walk-through it on the calc on-screen, rather than just having it on the white-board.
It's not entirely due to my laziness
I was feeling silly with four 50g, but grsbanks makes me feel better
Cambridge, UK
41CL/DM41X 12/15C/16C DM15/16 17B/II/II+ 28S 42S/DM42 32SII 48GX 50g 35s WP34S PrimeG2 WP43S/pilot/C47
Casio, Rockwell 18R
05-17-2020, 01:26 PM
Post: #11
stephenmendes Posts: 17
Junior Member Joined: Apr 2020
RE: Video Series on the HP graphing calculators
The next 50g video is here ...... Basic graphing of Functions..... and in the following subsequent video, I will show how to add elements to your plot (labels, text or graphics) and save your graphs
as GRaphics OBjects (GROB) from where they can recalled to the screen at anytime.
05-17-2020, 01:33 PM
Post: #12
stephenmendes Posts: 17
Junior Member Joined: Apr 2020
RE: Video Series on the HP graphing calculators
(05-14-2020 05:36 PM)cdmackay Wrote: thanks Stephen, keep 'em coming please.
Not a criticism, but for the last one, I would have preferred to see you walk-through it on the calc on-screen, rather than just having it on the white-board.
It's not entirely due to my laziness
I was feeling silly with four 50g, but grsbanks makes me feel better
Would it help if I use light colored writing on a DARK background (instead of the whiteboard) ? ..... I am not sure that punching keys on the Calculator in real time is a better way .... rather than
just giving the sequence "on the board" along with a few "calculator shots" ?
If I do all the problem in real time on the calculator..... persons trying to repeat the steps may have to to do a lot of "rewinding" and "rewatching" ..... so by writing the key (softkey) sequence
.... they can have one screen paused and key it in ..... just like if they were using calculator manual
suggestions are always welcome.... from any one
Best Wishes.
Stephen Mendes
05-18-2020, 12:27 AM
Post: #13
Cristi Neagu Posts: 18
Junior Member Joined: May 2020
RE: Video Series on the HP graphing calculators
(05-17-2020 01:33 PM)stephenmendes Wrote: Would it help if I use light colored writing on a DARK background (instead of the whiteboard) ?
Oh yes. Much, much easier on the eyes of anyone watching.
(05-17-2020 01:33 PM)stephenmendes Wrote: I am not sure that punching keys on the Calculator in real time is a better way .... rather than just giving the sequence "on the board" along with a
few "calculator shots" ?
Maybe try to use one of the emulators available. The one from HP seems to be the best. Barring that, a walkthrough of the examples by using a real calculator should be good.
Either way, it's a good series. There doesn't seem to be many tutorials on the 50g. I think it's because it was already old by the time YouTube really started to kick off, and it's not yet classic
enough for people to show it off.
05-19-2020, 09:58 PM
Post: #14
cdmackay Posts: 775
Senior Member Joined: Sep 2018
RE: Video Series on the HP graphing calculators
hi Stephen, yes, I take the point that having the instructions on the whiteboard is useful; I'm just not able to follow along, since I can't easily see the calc and the screen at the same time, which
is annoying (and my fault).
I'd thought that perhaps having both: the instructions on the whiteboard, and a calc screen on the left, where you just perform the instructions, so optically-challenged folk like me can see both at once?
It's not a big deal though, just something to consider for the future.
thanks again, will watch the new one asap.
Cambridge, UK
41CL/DM41X 12/15C/16C DM15/16 17B/II/II+ 28S 42S/DM42 32SII 48GX 50g 35s WP34S PrimeG2 WP43S/pilot/C47
Casio, Rockwell 18R
05-21-2020, 12:09 AM
Post: #15
stephenmendes Posts: 17
Junior Member Joined: Apr 2020
RE: Video Series on the HP graphing calculators
Hello everybody,
For the next installment in the tutorial video series I have done something different.
I have taken a problem in ac steady state circuit analysis and done the computations on the actual calculator (as has been suggested).
During the problem solving I demonstrate the following:
RPN operational procedures
Directory creation and variable creation
Re-ordering the variables in a directory
Changing modes with programs attached to the User keyboard
(for the programs to change the mode settings visit my Facebook page "Mendes Computers")
Here is problem explained:
Here is the Calculator video to solve it:
05-21-2020, 12:17 AM
Post: #16
stephenmendes Posts: 17
Junior Member Joined: Apr 2020
RE: Video Series on the HP graphing calculators
(05-18-2020 12:27 AM)Cristi Neagu Wrote:
(05-17-2020 01:33 PM)stephenmendes Wrote: Would it help if I use light colored writing on a DARK background (instead of the whiteboard) ?
Oh yes. Much, much easier on the eyes of anyone watching.
(05-17-2020 01:33 PM)stephenmendes Wrote: I am not sure that punching keys on the Calculator in real time is a better way .... rather than just giving the sequence "on the board" along with
a few "calculator shots" ?
Maybe try to use one of the emulators available. The one from HP seems to be the best. Barring that, a walkthrough of the examples by using a real calculator should be good.
Either way, it's a good series. There doesn't seem to be many tutorials on the 50g. I think it's because it was already old by the time YouTube really started to kick off, and it's not yet
classic enough for people to show it off.
I used the actual calculator to solve the problem in this episode (it's in HD so switch in the Youtube control bar and use a bigger screen if you want) .....
but I did not see your post in time to switch from my whiteboard to blackboard (for the initial problem specification and preparation) I will make sure and do that from now on .... thanks for your
05-21-2020, 12:19 AM
Post: #17
stephenmendes Posts: 17
Junior Member Joined: Apr 2020
RE: Video Series on the HP graphing calculators
(05-19-2020 09:58 PM)cdmackay Wrote: hi Stephen, yes, I take the point that having the instructions on the whiteboard is useful; I'm just not able to follow along, since I can't easily see the
calc and the screen at the same time, which is annoying (and my fault).
I'd thought that perhaps having both: the instructions on the whiteboard, and a calc screen on the left, where you just perform the instructions, so optically-challenged folk like me can see both
at once?
It's not a big deal though, just something to consider for the future.
thanks again, will watch the new one asap.
I used the actual calculator to do the demonstrations this time around......
and will switch to the BLACKBOARD for future videos ..... thanks
05-21-2020, 01:08 AM
Post: #18
DanO Posts: 4
Junior Member Joined: May 2020
RE: Video Series on the HP graphing calculators
(05-10-2020 01:59 PM)stephenmendes Wrote: I do not think that many people who own the 50G (but never owned previous 48 calculators) have any idea of its incredible capabilities.
Recently I've been looking for an additional 50g online. In the dozens of pictures of used 50g's on sale, I think I haven't seen a single picture showing the RPN stack. Almost every display picture
of a powered-on 50g shows ALG mode. That alone speaks volumes. I got a 50g in great condition for 65 USD.
05-21-2020, 04:49 AM
Post: #19
Thomas Radtke Posts: 855
Senior Member Joined: Dec 2013
RE: Video Series on the HP graphing calculators
Just watched your first episode. Nice work, thank you Stephen!
05-21-2020, 01:14 PM
Post: #20
stephenmendes Posts: 17
Junior Member Joined: Apr 2020
RE: Video Series on the HP graphing calculators
(05-21-2020 04:49 AM)Thomas Radtke Wrote: Just watched your first episode. Nice work, thank you Stephen!
Thanks...... I am aiming for two episodes per week and although I am not numbering them individually.... I have a PLAYLIST (to keep them all in one place because the channel has very diversified
Here is the link to the PLAYLIST for "HP GRAPHING CALCULATORS"
Iterate through the elements (i1, i2, ..., in), with i1 ... in unsigned integers satisfying the constraint (i1 > i2 > ... > in)
Definition at line 855 of file grid_sm.hpp.
bool Iterator_g_const::isNext() [inline]
Check if there is a next element.
Returns: true if there is a next element (the end of the grid has not been reached), false otherwise.
Definition at line 957 of file grid_sm.hpp.
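The constraint i1 > i2 > ... > in means the iterator walks over strictly decreasing index tuples. As an illustrative sketch (not the OpenFPM implementation), such an iteration can be expressed via combinations:

```python
from itertools import combinations

def iterate_decreasing(n, k):
    """Yield all k-tuples (i1, ..., ik) with n > i1 > i2 > ... > ik >= 0."""
    for combo in combinations(range(n), k):
        # combinations are emitted in increasing order; reverse each one
        yield tuple(reversed(combo))

print(list(iterate_decreasing(3, 2)))  # [(1, 0), (2, 0), (2, 1)]
```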
MetricGate, LLC
Linear Discriminant Analysis (LDA)
Linear Discriminant Analysis (LDA) is a classification technique that projects high-dimensional data onto a lower-dimensional space, optimizing separation between predefined groups. It is commonly
used to determine which variables best separate multiple classes in the dataset.
The Mathematics of Linear Discriminant Analysis (Theory)
This section delves into the mathematical foundations of Linear Discriminant Analysis (LDA), including deriving the optimal linear combinations that maximize class separation. We'll cover the
calculation of between-class and within-class scatter matrices, the optimization process to maximize the ratio of between-class to within-class variance, and how the resulting discriminant functions
are used for classification.
1. Mean Vectors
LDA starts by calculating the mean vectors for each class. Let μ[i] represent the mean vector of class i, calculated as:
\mu_i = \frac{1}{n_i} \sum_{x \in C_i} x
where n[i] is the number of samples in class i, and C[i] is the set of all samples in class i.
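As a minimal NumPy sketch of this step (the toy dataset below is my own, not from the article), the per-class means are just averages over the samples of each class:

```python
import numpy as np

# Toy 2-D dataset with two classes (illustrative values only)
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 8.5]])
y = np.array([0, 0, 1, 1])

# mu_i = (1/n_i) * sum of the samples in class i
means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
print(means)  # {0: array([1.25, 1.9 ]), 1: array([5.5 , 8.25])}
```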
2. Within-Class Scatter Matrix
The within-class scatter matrix S[W] measures the variance within each class. It is defined as:
S_W = \sum_{i=1}^{c} \sum_{x \in C_i} (x - \mu_i)(x - \mu_i)^T
where c is the number of classes, and x represents each sample in class i.
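The same toy setup can be used to compute the within-class scatter directly from the definition (again, the data values are my own illustration):

```python
import numpy as np

X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 8.5]])
y = np.array([0, 0, 1, 1])

# S_W = sum over classes of sum over samples of (x - mu_i)(x - mu_i)^T
S_W = np.zeros((2, 2))
for c in np.unique(y):
    Xc = X[y == c]
    d = Xc - Xc.mean(axis=0)  # deviations from the class mean
    S_W += d.T @ d
```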
3. Between-Class Scatter Matrix
The between-class scatter matrix S[B] captures the variance between the different class means and the overall mean. It is defined as:
S_B = \sum_{i=1}^{c} n_i (\mu_i - \mu)(\mu_i - \mu)^T
where μ is the overall mean of all samples.
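A matching sketch for the between-class scatter, weighting each class's mean deviation from the overall mean by its sample count (same illustrative dataset as above):

```python
import numpy as np

X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 8.5]])
y = np.array([0, 0, 1, 1])

mu = X.mean(axis=0)  # overall mean of all samples
# S_B = sum over classes of n_i * (mu_i - mu)(mu_i - mu)^T
S_B = np.zeros((2, 2))
for c in np.unique(y):
    Xc = X[y == c]
    diff = (Xc.mean(axis=0) - mu).reshape(-1, 1)
    S_B += len(Xc) * (diff @ diff.T)
```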
4. Maximizing the Discriminant
LDA finds the linear discriminants by maximizing the ratio of the determinant of S[B] to the determinant of S[W]. This objective can be formulated as:
\text{arg max}_{W} \frac{|W^T S_B W|}{|W^T S_W W|}
where W represents the transformation matrix that projects the data onto the new subspace, maximizing class separability.
5. Solving for the Linear Discriminants
The linear discriminants are the eigenvectors corresponding to the largest eigenvalues of the matrix S[W]^-1S[B]. These eigenvectors define the directions in the feature space that maximize
separation between classes.
6. Projection of Data
Once the linear discriminants are identified, each sample x can be projected into the new space as follows:
y = W^T x
where y represents the coordinates of x in the LDA-transformed space.
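Putting steps 2 through 6 together, a minimal end-to-end sketch (toy data, my own values) computes both scatter matrices, extracts the leading eigenvector of S_W^-1 S_B, and projects the samples onto it:

```python
import numpy as np

# Toy dataset (illustrative values only)
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 8.5]])
y = np.array([0, 0, 1, 1])

# Steps 2-3: scatter matrices
mu = X.mean(axis=0)
S_W = np.zeros((2, 2))
S_B = np.zeros((2, 2))
for c in np.unique(y):
    Xc = X[y == c]
    mu_c = Xc.mean(axis=0)
    d = Xc - mu_c
    S_W += d.T @ d
    diff = (mu_c - mu).reshape(-1, 1)
    S_B += len(Xc) * (diff @ diff.T)

# Step 5: eigenvectors of S_W^{-1} S_B give the discriminant directions
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
w = eigvecs[:, np.argmax(eigvals.real)].real  # leading discriminant

# Step 6: project each sample onto the leading discriminant
Y_proj = X @ w
```

On this toy data the two classes end up cleanly separated along the single projected coordinate, which is exactly the objective of the optimization in step 4.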
7. Decision Boundaries
In the transformed space, LDA separates classes using linear decision boundaries. New samples are classified based on which side of the boundary they fall on relative to the discriminants.
Linear Discriminant Analysis is a powerful technique for supervised dimensionality reduction, especially when there are multiple classes. By transforming the data to maximize class separation, LDA
improves classification accuracy and simplifies complex datasets.
Model Overview
In this R code, Linear Discriminant Analysis (LDA) is implemented to classify the levels of the dependent variable am (automatic vs. manual transmission) based on the independent variables mpg (miles
per gallon), hp (horsepower), and wt (weight).
The lda() function from the MASS package calculates the optimal linear combinations of these predictors to maximize the separation between the transmission types.
The resulting model summary provides coefficients and insights that can be used to interpret the contribution of each variable to the classification and apply the model to new data points.
# Linear Discriminant Analysis (LDA) Summary
library(MASS)  # provides lda()
cat("Linear Discriminant Analysis (LDA) Summary\n")
# Load and prepare data from mtcars
data <- na.omit(mtcars)
# Perform LDA with am as the dependent variable and mpg, hp, wt as predictors
lda_model <- lda(am ~ mpg + hp + wt, data = data)
# Print the LDA model summary
print(lda_model)
Linear Discriminant Analysis (LDA) Summary Interpretation
This summary of the Linear Discriminant Analysis (LDA) model provides key insights into the classification of the variable am (transmission type) based on predictors mpg (miles per gallon), hp
(horsepower), and wt (weight).
Prior Probabilities of Groups
The prior probabilities indicate the proportion of observations in each class before classification:
• Class 0 (Automatic): 59.38%
• Class 1 (Manual): 40.63%
Group Means
The group means represent the average values of mpg, hp, and wt for each class:
• Automatic (0):mpg = 17.15, hp = 160.26, wt = 3.77
• Manual (1):mpg = 24.39, hp = 126.85, wt = 2.41
These averages highlight that manual transmission vehicles generally have higher fuel efficiency (mpg) and lower weight (wt) compared to automatic vehicles.
Discriminant Functions
LDA generates discriminant functions that combine the independent variables to maximize the separation between classes. These functions can be interpreted by examining the coefficients.
# Discriminant Functions Coefficients
cat("Coefficients of Linear Discriminants:\n")
print(lda_model$scaling)
Interpretation: The coefficients of the linear discriminant function (LD1) represent the contribution of each variable to the discriminant function:
• mpg: 0.1457
• hp: 0.0156
• wt: -1.3594
These coefficients suggest that wt has the most significant impact on the classification, with a negative coefficient indicating that higher weight is associated with automatic transmissions. In
contrast, mpg and hp contribute positively to distinguishing between the two groups, but with smaller effects.
Assumption Checks
Before using LDA, we verify the assumptions of multivariate normality and homogeneity of covariances:
• Multivariate Normality: Mardia's test checks if the variables are normally distributed across groups.
• Homogeneity of Covariances: Box's M test verifies whether the covariance matrices are equal across groups.
# Assumption Checks for LDA
library(MVN)       # provides mvn() for Mardia's test
library(biotools)  # provides boxM() for Box's M test
# Multivariate Normality Check (Mardia's test)
cat("Multivariate Normality Check (Mardia's test)\n")
mvn_result <- mvn(data[, c("mpg", "hp", "wt")], mvnTest = "mardia")
print(mvn_result$multivariateNormality)
# Homogeneity of Covariances (Box's M Test)
cat("Homogeneity of Covariances Check (Box's M test)\n")
box_m_test <- boxM(as.matrix(data[, c("mpg", "hp", "wt")]), as.factor(data$am))
print(box_m_test)
In our example, Mardia's test shows:
• Mardia Skewness: Statistic = 30.47, p-value = 0.0007 (Result: Not Normal)
• Mardia Kurtosis: Statistic = 1.25, p-value = 0.2121 (Result: Normal)
Interpretation: The skewness component of Mardia's test indicates a significant departure from normality (p < 0.05), suggesting non-normality in the data distribution. The kurtosis component does not
indicate a violation. Overall, the data fails the multivariate normality test, which may influence the reliability of the LDA model.
Here, the Box's M test statistic is 15.429 with a chi-square distribution (degrees of freedom = 6), resulting in a p-value of 0.01717.
Interpretation: Since the p-value is less than the common significance level (e.g., 0.05), we reject the null hypothesis. This suggests that the assumption of equal covariance matrices across groups
is violated, which may impact the accuracy of the Linear Discriminant Analysis model.
LDA Plot
The LDA plot visualizes the separation between groups based on the linear discriminants. For datasets with more than one linear discriminant, it plots the first two.
# Plotting the LDA results
library(ggplot2)
lda_predict <- predict(lda_model)
# Create a dataframe with LDA results for plotting
lda_data <- data.frame(lda_predict$x, Group = data$am)
# With two classes there is a single discriminant function, so plot LD1 only
ggplot(lda_data, aes(x = LD1, fill = factor(Group))) +
  geom_histogram(binwidth = 0.5, position = "dodge", color = "black") +
  labs(title = "LDA Plot", x = "Linear Discriminant 1", y = "Count", fill = "Group")
LDA Plot Interpretation
This histogram displays the distribution of the data along the first linear discriminant (LD1) for the Linear Discriminant Analysis (LDA) model. Each bar represents the count of data points that fall
within a particular range of LD1 values. A higher count in the center suggests that most data points are distributed around this region, indicating that LD1 captures a significant amount of
separation among the groups.
The spread of the histogram provides insights into the overlap and separation between the groups. If distinct peaks were present or if the groups showed minimal overlap along LD1, this would suggest
a strong separation between classes in the dataset.
Confusion Matrix
The confusion matrix shows the actual vs. predicted classifications, indicating the model's classification accuracy.
# Confusion Matrix for LDA
cat("Confusion Matrix for LDA\n")
confusion_matrix <- table(Actual = data$am, Predicted = lda_predict$class)
print(confusion_matrix)
# Calculate and print accuracy
accuracy <- sum(diag(confusion_matrix)) / sum(confusion_matrix)
cat("Accuracy:", accuracy, "\n")
Confusion Matrix Interpretation
The confusion matrix above shows the performance of the Linear Discriminant Analysis (LDA) model in classifying data points. Here is the breakdown:
• True Negatives (Actual 0, Predicted 0): 18 instances where the model correctly predicted class 0.
• False Positives (Actual 0, Predicted 1): 1 instance where the model incorrectly predicted class 1 for an actual class 0.
• False Negatives (Actual 1, Predicted 0): 3 instances where the model incorrectly predicted class 0 for an actual class 1.
• True Positives (Actual 1, Predicted 1): 10 instances where the model correctly predicted class 1.
The accuracy of the model is 87.5%, calculated as the proportion of correctly classified instances out of the total predictions. This indicates that the LDA model performs well, with a high rate of
correct classifications.
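As a quick arithmetic check, the reported accuracy follows directly from the four confusion-matrix cells listed above:

```python
# Confusion-matrix cells reported in the text above.
tn, fp, fn, tp = 18, 1, 3, 10
accuracy = (tn + tp) / (tn + fp + fn + tp)
print(accuracy)  # 0.875, i.e. 87.5%
```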
Linear Discriminant Analysis (LDA) is a robust statistical method that not only separates data into distinct groups but also provides insights into the variables that contribute most to that separation.
By maximizing between-group variance and minimizing within-group variance, LDA allows for effective classification of new observations. It is a valuable tool for understanding group differences in
complex datasets and enhancing predictive accuracy in applications where clear categorization is essential.
Explore our Online R Compiler or Statistics Calculator to apply Linear Discriminant Analysis (LDA) to your own datasets.
Three Ecological Population Systems: MATLAB and C MEX-File Modeling of Time-Series
This example shows how to create nonlinear grey-box time series models. Time series models are models that use no measured inputs. Three idealized ecological systems are studied in which two species either
• compete for the same food, or
• are in a predator-prey situation.
The example shows modeling based on both MATLAB® and C MEX-files.
Ecological Population Systems
In all three population systems investigated, we are interested in how the populations of two species vary with time. To model this, let x1(t) and x2(t) denote the number of individuals of the
respective species at time t. Let l1 and l2 denote the birth-rates associated with x1(t) and x2(t), respectively, both assumed to be constant over time. The death-rates of the species depend both on
the availability of food and, if predators are present, on the risk of being eaten. Quite often (and in general), the death-rate for species i (i = 1 or 2) can be written as ui(x1(t), x2(t)), where
ui(.) is some appropriate function. In practice, this means that ui(x1(t), x2(t))*xi(t) animals of species i die every time unit. The net effect of these statements can be summarized in a state-space
type of model structure (a time series):
dx1(t)/dt = l1*x1(t) - u1(x1(t), x2(t))*x1(t)
dx2(t)/dt = l2*x2(t) - u2(x1(t), x2(t))*x2(t)
It is here natural to choose the two states as outputs, i.e., we let y1(t) = x1(t) and y2(t) = x2(t).
A.1. Two Species That Compete for the Same Food
In case two species compete for the same food, it is the overall population of the two species that controls the availability of food and, in turn, their death-rates. A simple yet common approach
is to assume that the death-rate can be written as:
ui(x1(t), x2(t)) = gi + di*(x1(t) + x2(t))
for both species (i = 1 or 2), where gi and di are unknown parameters. Altogether this gives the state space structure:
dx1(t)/dt = (l1-g1)*x1(t) - d1*(x1(t)+x2(t))*x1(t)
dx2(t)/dt = (l2-g2)*x2(t) - d2*(x1(t)+x2(t))*x2(t)
An immediate problem with this structure is that l1, g1, l2, and g2 cannot be identified separately. We can only hope to identify p1 = l1-g1 and p3 = l2-g2. By also letting p2 = d1 and p4 = d2, one
gets the reparameterized model structure:
dx1(t)/dt = p1*x1(t) - p2*(x1(t)+x2(t))*x1(t)
dx2(t)/dt = p3*x2(t) - p4*(x1(t)+x2(t))*x2(t)
y1(t) = x1(t)
y2(t) = x2(t)
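As a rough illustration (separate from the toolbox workflow used in this example), the competing-species equations can be integrated with a hand-rolled Runge-Kutta step in Python. The parameter values p = [2, 1, 1, 1] and initial state [0.1, 2] are the "true" values quoted later in section A.5:

```python
def rk4_step(f, x, t, h):
    """One classical 4th-order Runge-Kutta step for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h/2, [xi + h/2 * ki for xi, ki in zip(x, k1)])
    k3 = f(t + h/2, [xi + h/2 * ki for xi, ki in zip(x, k2)])
    k4 = f(t + h, [xi + h * ki for xi, ki in zip(x, k3)])
    return [xi + h/6 * (a + 2*b + 2*c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# "True" parameter values and initial state quoted later in section A.5.
p1, p2, p3, p4 = 2.0, 1.0, 1.0, 1.0

def competition(t, x):
    """dx1/dt = p1*x1 - p2*(x1+x2)*x1, dx2/dt = p3*x2 - p4*(x1+x2)*x2."""
    x1, x2 = x
    total = x1 + x2
    return [p1*x1 - p2*total*x1, p3*x2 - p4*total*x2]

x, h = [0.1, 2.0], 0.01
for step in range(2000):  # 20 simulated years
    x = rk4_step(competition, x, step * h, h)
print(x)  # species 1 (higher net growth) excludes species 2
```

Since the ratio x2/x1 obeys d(x2/x1)/dt = (p3 - p1)*(x2/x1), the species with the smaller net growth rate dies out exponentially: competitive exclusion.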
In this first population example we resort to MATLAB file modeling. The equations above are then entered into a MATLAB file, preys_m.m, with the following content.
function [dx, y] = preys_m(t, x, u, p1, p2, p3, p4, varargin)
%PREYS_M Two species that compete for the same food.
% Output equations.
y = [x(1); ... % Prey species 1.
     x(2)  ... % Prey species 2.
    ];
% State equations.
dx = [p1*x(1)-p2*(x(1)+x(2))*x(1); ... % Prey species 1.
      p3*x(2)-p4*(x(1)+x(2))*x(2)  ... % Prey species 2.
     ];
The MATLAB file, along with an initial parameter vector, an adequate initial state, and some administrative information are next fed as inputs to the IDNLGREY object constructor. Notice that the
initial values of both the parameters and the initial states are specified as structure arrays with Npo (number of parameter objects = number of parameters if all parameters are scalars) and Nx
(number of states) elements, respectively. Through these structure arrays it is possible to completely assign non-default property values (of 'Name', 'Unit', 'Value', 'Minimum', 'Maximum', and
'Fixed') to each parameter and initial state. Here we have assigned the 'Minimum' value of each initial state to zero (populations are positive!), and also specified that both initial states are to
be estimated by default.
FileName = 'preys_m'; % File describing the model structure.
Order = [2 0 2]; % Model orders [ny nu nx].
Parameters = struct('Name', {'Survival factor, species 1' 'Death factor, species 1' ...
'Survival factor, species 2' 'Death factor, species 2'}, ...
'Unit', {'1/year' '1/year' '1/year' '1/year'}, ...
'Value', {1.8 0.8 1.2 0.8}, ...
'Minimum', {-Inf -Inf -Inf -Inf}, ...
'Maximum', {Inf Inf Inf Inf}, ...
'Fixed', {false false false false}); % Estimate all 4 parameters.
InitialStates = struct('Name', {'Population, species 1' 'Population, species 2'}, ...
'Unit', {'Size (in thousands)' 'Size (in thousands)'}, ...
'Value', {0.2 1.8}, ...
'Minimum', {0 0}, ...
'Maximum', {Inf Inf}, ...
'Fixed', {false false}); % Estimate both initial states.
Ts = 0; % Time-continuous system.
nlgr = idnlgrey(FileName, Order, Parameters, InitialStates, Ts, ...
'Name', 'Two species competing for the same food', ...
'OutputName', {'Population, species 1' 'Population, species 2'}, ...
'OutputUnit', {'Size (in thousands)' 'Size (in thousands)'}, ...
'TimeUnit', 'year');
The PRESENT command can be used to view information about the initial model:
nlgr =
Continuous-time nonlinear grey-box model defined by 'preys_m' (MATLAB file):
dx/dt = F(t, x(t), p1, ..., p4)
y(t) = H(t, x(t), p1, ..., p4) + e(t)
with 0 input(s), 2 state(s), 2 output(s), and 4 free parameter(s) (out of 4).
States: Initial value
x(1) Population, species 1(t) [Size (in t..] xinit@exp1 0.2 (estimated) in [0, Inf]
x(2) Population, species 2(t) [Size (in t..] xinit@exp1 1.8 (estimated) in [0, Inf]
y(1) Population, species 1(t) [Size (in thousands)]
y(2) Population, species 2(t) [Size (in thousands)]
Parameters: Value
p1 Survival factor, species 1 [1/year] 1.8 (estimated) in [-Inf, Inf]
p2 Death factor, species 1 [1/year] 0.8 (estimated) in [-Inf, Inf]
p3 Survival factor, species 2 [1/year] 1.2 (estimated) in [-Inf, Inf]
p4 Death factor, species 2 [1/year] 0.8 (estimated) in [-Inf, Inf]
Name: Two species competing for the same food
Created by direct construction or transformation. Not estimated.
More information in model's "Report" property.
A.2. Input-Output Data
We next load (simulated, though noise corrupted) data and create an IDDATA object describing one particular situation where two species compete for the same food. This data set contains 201 data
samples covering 20 years of evolution.
z = iddata(y, [], 0.1, 'Name', 'Two species competing for the same food');
set(z, 'OutputName', {'Population, species 1', 'Population, species 2'}, ...
'Tstart', 0, 'TimeUnit', 'Year');
A.3. Performance of the Initial Two Species Model
A simulation with the initial model clearly reveals that it cannot cope with the true population dynamics. See the plot figure. For a time-series type of IDNLGREY model notice that the model output
is determined by the initial state.
Figure 1: Comparison between true outputs and the simulated outputs of the initial two species model.
A.4. Parameter Estimation
In order to overcome the rather poor performance of the initial model we proceed to estimate the 4 unknown parameters and the 2 initial states using NLGREYEST. Specify estimation options using
NLGREYESTOPTIONS; in this case 'Display' is set to 'on', which means that estimation progress information is displayed in the progress window. You can use NLGREYESTOPTIONS to specify the basic
algorithm properties such as 'GradientOptions', 'SearchMethod', 'MaxIterations', 'Tolerance', 'Display'.
opt = nlgreyestOptions;
opt.Display = 'on';
opt.SearchOptions.MaxIterations = 50;
nlgr = nlgreyest(z, nlgr, opt);
A.5. Performance of the Estimated Two Species Model
The estimated values of the parameters and the initial states are well in line with those used to generate the true output data:
disp(' True Estimated parameter vector');
True Estimated parameter vector
ptrue = [2; 1; 1; 1];
fprintf(' %6.3f %6.3f\n', [ptrue'; getpvec(nlgr)']);
2.000 2.004
1.000 1.002
1.000 1.018
1.000 1.010
disp(' True Estimated initial states');
True Estimated initial states
x0true = [0.1; 2];
fprintf(' %6.3f %6.3f\n', [x0true'; cell2mat(getinit(nlgr, 'Value'))']);
To further evaluate the quality of the model (and to illustrate the improvement compared to the initial model) we also simulate the estimated model. The simulated outputs are compared to the true
outputs in a plot window. As can be seen, the estimated model is quite good.
Figure 2: Comparison between true outputs and the simulated outputs of the estimated two species model.
PRESENT provides further information about the estimated model, e.g., about parameter uncertainties, and other estimation related quantities, like the loss function and Akaike's FPE (Final Prediction
Error) measure.
nlgr =
Continuous-time nonlinear grey-box model defined by 'preys_m' (MATLAB file):
dx/dt = F(t, x(t), p1, ..., p4)
y(t) = H(t, x(t), p1, ..., p4) + e(t)
with 0 input(s), 2 state(s), 2 output(s), and 4 free parameter(s) (out of 4).
States: Initial value
x(1) Population, species 1(t) [Size (in t..] xinit@exp1 0.100729 (estimated) in [0, Inf]
x(2) Population, species 2(t) [Size (in t..] xinit@exp1 1.98855 (estimated) in [0, Inf]
y(1) Population, species 1(t) [Size (in thousands)]
y(2) Population, species 2(t) [Size (in thousands)]
Parameters: Value Standard Deviation
p1 Survival factor, species 1 [1/year] 2.00429 0.00971109 (estimated) in [-Inf, Inf]
p2 Death factor, species 1 [1/year] 1.00235 0.00501783 (estimated) in [-Inf, Inf]
p3 Survival factor, species 2 [1/year] 1.01779 0.0229598 (estimated) in [-Inf, Inf]
p4 Death factor, species 2 [1/year] 1.0102 0.0163506 (estimated) in [-Inf, Inf]
Name: Two species competing for the same food
Termination condition: Near (local) minimum, (norm(g) < tol)..
Number of iterations: 5, Number of function evaluations: 6
Estimated using Solver: ode45; Search: lsqnonlin on time domain data "Two species competing for the same food".
Fit to estimation data: [98.42;97.92]%
FPE: 7.747e-09, MSE: 0.0001743
More information in model's "Report" property.
B.1. A Classical Predator-Prey System
Assume now that the first species lives on the second one. The availability of food for species 1 is then proportional to x2(t) (the number of individuals of species 2), which means that the
death-rate of species 1 decreases when x2(t) increases. This fact is captured by the simple expression:
u1(x1(t), x2(t)) = g1 - a1*x2(t)
where g1 and a1 are unknown parameters. Similarly, the death-rate of species 2 will increase when the number of individuals of the first species increases, e.g., according to:
u2(x1(t), x2(t)) = g2 + a2*x1(t)
where g2 and a2 are two more unknown parameters. Using the linear birth-rate assumed above one gets the state space structure:
dx1(t)/dt = (l1-g1)*x1(t) + a1*x2(t)*x1(t)
dx2(t)/dt = (l2-g2)*x2(t) - a2*x1(t)*x2(t)
As in the previous population example, it is also here impossible to uniquely identify the six individual parameters. With the same kind of reparameterization as in the above case, i.e., p1 = l1-g1,
p2 = a1, p3 = l2-g2, and p4 = a2, the following model structure is obtained:
dx1(t)/dt = p1*x1(t) + p2*x2(t)*x1(t)
dx2(t)/dt = p3*x2(t) - p4*x1(t)*x2(t)
y1(t) = x1(t)
y2(t) = x2(t)
which is better suited from an estimation point of view.
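With the "true" parameter values quoted later in section B.5 (p1 = -1, p2 = p3 = p4 = 1), this is the classical Lotka-Volterra system, which conserves the quantity H(x1, x2) = x1 - ln(x1) + x2 - ln(x2) along trajectories. A small Python sketch (outside the toolbox workflow) can use this invariant as a sanity check on a numerical integration:

```python
import math

def deriv(x1, x2, p1=-1.0, p2=1.0, p3=1.0, p4=1.0):
    """Right-hand side of the reparameterized predator-prey model."""
    return p1*x1 + p2*x2*x1, p3*x2 - p4*x1*x2

def rk4(x1, x2, h):
    """One classical 4th-order Runge-Kutta step."""
    k1 = deriv(x1, x2)
    k2 = deriv(x1 + h/2*k1[0], x2 + h/2*k1[1])
    k3 = deriv(x1 + h/2*k2[0], x2 + h/2*k2[1])
    k4 = deriv(x1 + h*k3[0], x2 + h*k3[1])
    return (x1 + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x2 + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def H(x1, x2):
    # Conserved quantity of the Lotka-Volterra system (p1 = -1, p2 = p3 = p4 = 1).
    return x1 - math.log(x1) + x2 - math.log(x2)

x1, x2 = 2.0, 2.0       # initial populations (in thousands)
h0 = H(x1, x2)
for _ in range(20000):  # 20 years with step h = 0.001
    x1, x2 = rk4(x1, x2, 0.001)
print(abs(H(x1, x2) - h0))  # stays tiny: the orbit is a closed curve
```

The closed orbits are the well-known cyclic rise and fall of predator and prey populations; any substantial drift in H would signal a too-coarse integration step.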
This time we enter this information into a C MEX-file named predprey1_c.c. The model file is structured as the standard IDNLGREY C MEX-file (see example titled "Creating IDNLGREY Model Files" or
idnlgreydemo2.m), with the state and output update functions, compute_dx and compute_y, as follows.
void compute_dx(double *dx, double t, double *x, double **p,
                const mxArray *auxvar)
{
    /* Retrieve model parameters. */
    double *p1, *p2, *p3, *p4;
    p1 = p[0]; /* Survival factor, predators. */
    p2 = p[1]; /* Death factor, predators. */
    p3 = p[2]; /* Survival factor, preys. */
    p4 = p[3]; /* Death factor, preys. */

    /* x[0]: Predator species. */
    /* x[1]: Prey species. */
    dx[0] = p1[0]*x[0]+p2[0]*x[1]*x[0];
    dx[1] = p3[0]*x[1]-p4[0]*x[0]*x[1];
}

/* Output equations. */
void compute_y(double *y, double t, double *x, double **p,
               const mxArray *auxvar)
{
    /* y[0]: Predator species. */
    /* y[1]: Prey species. */
    y[0] = x[0];
    y[1] = x[1];
}
Since the model is of time-series type, neither compute_dx nor compute_y includes u in the input argument list. In fact, the main interface function of predprey1_c.c does not even declare u, even
though an empty u ([]) is always passed to predprey1_c by the IDNLGREY methods.
The compiled C MEX-file, along with an initial parameter vector, an adequate initial state, and some administrative information are next fed as input arguments to the IDNLGREY object constructor:
FileName = 'predprey1_c'; % File describing the model structure.
Order = [2 0 2]; % Model orders [ny nu nx].
Parameters = struct('Name', {'Survival factor, predators' 'Death factor, predators' ...
'Survival factor, preys' 'Death factor, preys'}, ...
'Unit', {'1/year' '1/year' '1/year' '1/year'}, ...
'Value', {-1.1 0.9 1.1 0.9}, ...
'Minimum', {-Inf -Inf -Inf -Inf}, ...
'Maximum', {Inf Inf Inf Inf}, ...
'Fixed', {false false false false}); % Estimate all 4 parameters.
InitialStates = struct('Name', {'Population, predators' 'Population, preys'}, ...
'Unit', {'Size (in thousands)' 'Size (in thousands)'}, ...
'Value', {1.8 1.8}, ...
'Minimum', {0 0}, ...
'Maximum', {Inf Inf}, ...
'Fixed', {false false}); % Estimate both initial states.
Ts = 0; % Time-continuous system.
nlgr = idnlgrey(FileName, Order, Parameters, InitialStates, Ts, ...
'Name', 'Classical 1 predator - 1 prey system', ...
'OutputName', {'Population, predators', 'Population, preys'}, ...
'OutputUnit', {'Size (in thousands)' 'Size (in thousands)'}, ...
'TimeUnit', 'year');
The predator prey model is next textually viewed via the PRESENT command.
nlgr =
Continuous-time nonlinear grey-box model defined by 'predprey1_c' (MEX-file):
dx/dt = F(t, x(t), p1, ..., p4)
y(t) = H(t, x(t), p1, ..., p4) + e(t)
with 0 input(s), 2 state(s), 2 output(s), and 4 free parameter(s) (out of 4).
States: Initial value
x(1) Population, predators(t) [Size (in t..] xinit@exp1 1.8 (estimated) in [0, Inf]
x(2) Population, preys(t) [Size (in t..] xinit@exp1 1.8 (estimated) in [0, Inf]
y(1) Population, predators(t) [Size (in thousands)]
y(2) Population, preys(t) [Size (in thousands)]
Parameters: Value
p1 Survival factor, predators [1/year] -1.1 (estimated) in [-Inf, Inf]
p2 Death factor, predators [1/year] 0.9 (estimated) in [-Inf, Inf]
p3 Survival factor, preys [1/year] 1.1 (estimated) in [-Inf, Inf]
p4 Death factor, preys [1/year] 0.9 (estimated) in [-Inf, Inf]
Name: Classical 1 predator - 1 prey system
Created by direct construction or transformation. Not estimated.
More information in model's "Report" property.
B.2. Input-Output Data
Our next step is to load (simulated, though noise corrupted) data and create an IDDATA object describing this particular predator-prey situation. This data set also contains 201 data samples covering
20 years of evolution.
z = iddata(y, [], 0.1, 'Name', 'Classical 1 predator - 1 prey system');
set(z, 'OutputName', {'Population, predators', 'Population, preys'}, ...
'Tstart', 0, 'TimeUnit', 'Year');
B.3. Performance of the Initial Classical Predator-Prey Model
A simulation with the initial model indicates that it cannot accurately cope with the true population dynamics. See the plot window.
Figure 3: Comparison between true outputs and the simulated outputs of the initial classical predator-prey model.
B.4. Parameter Estimation
To improve the performance of the initial model we continue to estimate the 4 unknown parameters and the 2 initial states using NLGREYEST.
nlgr = nlgreyest(z, nlgr, nlgreyestOptions('Display', 'on'));
B.5. Performance of the Estimated Classical Predator-Prey Model
The estimated values of the parameters and initial states are very close to those that were used to generate the true output data:
disp(' True Estimated parameter vector');
True Estimated parameter vector
ptrue = [-1; 1; 1; 1];
fprintf(' %6.3f %6.3f\n', [ptrue'; getpvec(nlgr)']);
-1.000 -1.000
1.000 1.000
1.000 1.000
1.000 0.999
disp(' True Estimated initial states');
True Estimated initial states
x0true = [2; 2];
fprintf(' %6.3f %6.3f\n', [x0true'; cell2mat(getinit(nlgr, 'Value'))']);
To further evaluate the model's quality (and to illustrate the improvement compared to the initial model) we also simulate the estimated model. The simulated outputs are compared to the true outputs
in a plot window. As can be seen again, the estimated model is quite good.
Figure 4: Comparison between true outputs and the simulated outputs of the estimated classical predator-prey model.
As expected, the prediction errors returned by PE are small and of a random nature.
Figure 5: Prediction errors obtained with the estimated IDNLGREY classical predator-prey model.
C.1. A Predator-Prey System with Prey Crowding
The last population study is also devoted to a 1 predator and 1 prey system, with the difference being that we here introduce a term -p5*x2(t)^2 representing retardation of prey growth due to
crowding. The reparameterized model structure from the previous example is then just complemented with this crowding term:
dx1(t)/dt = p1*x1(t) + p2*x2(t)*x1(t)
dx2(t)/dt = p3*x2(t) - p4*x1(t)*x2(t) - p5*x2(t)^2
y1(t) = x1(t)
y2(t) = x2(t)
The interpretation of these equations is essentially the same as above, except that in the absence of predators the growth of the prey population will be kept at bay.
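One way to see the effect of the crowding term: in the absence of predators (x1 = 0), the prey equation reduces to the logistic equation dx2/dt = p3*x2 - p5*x2^2, which saturates at the carrying capacity p3/p5. With the "true" values quoted later in section C.5 (p3 = 1, p5 = 0.1) that cap is 10 thousand individuals. A minimal Python sketch, independent of the toolbox code:

```python
# Logistic prey growth with crowding and no predators (x1 = 0).
p3, p5 = 1.0, 0.1   # "true" values quoted later in section C.5
capacity = p3 / p5  # equilibrium prey level in the predator-free case

x2, h = 2.0, 0.001
for _ in range(50000):  # 50 years of forward-Euler integration
    x2 += h * (p3*x2 - p5*x2*x2)
print(round(x2, 3))  # approaches the carrying capacity p3/p5 = 10
```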
The new modeling situation is reflected by a C MEX-file called predprey2_c.c, which is almost the same as predprey1_c.c. Aside from changes related to the number of model parameters (5 instead of 4),
the main difference lies in the state update function compute_dx:
/* State equations. */
void compute_dx(double *dx, double t, double *x, double **p,
                const mxArray *auxvar)
{
    /* Retrieve model parameters. */
    double *p1, *p2, *p3, *p4, *p5;
    p1 = p[0]; /* Survival factor, predators. */
    p2 = p[1]; /* Death factor, predators. */
    p3 = p[2]; /* Survival factor, preys. */
    p4 = p[3]; /* Death factor, preys. */
    p5 = p[4]; /* Crowding factor, preys. */

    /* x[0]: Predator species. */
    /* x[1]: Prey species. */
    dx[0] = p1[0]*x[0]+p2[0]*x[1]*x[0];
    dx[1] = p3[0]*x[1]-p4[0]*x[0]*x[1]-p5[0]*pow(x[1],2);
}
Notice that the added retardation term is computed as p5[0]*pow(x[1],2). The power function pow(., .) is declared in the C header math.h, which must therefore be included at the top of the model file
through the statement #include "math.h" (this is not necessary in predprey1_c.c, which uses only standard C arithmetic).
The compiled C MEX-file, along with an initial parameter vector, an adequate initial state, and some administrative information are next fed as input arguments to the IDNLGREY object constructor:
FileName = 'predprey2_c'; % File describing the model structure.
Order = [2 0 2]; % Model orders [ny nu nx].
Parameters = struct('Name', {'Survival factor, predators' 'Death factor, predators' ...
'Survival factor, preys' 'Death factor, preys' ...
'Crowding factor, preys'}, ...
'Unit', {'1/year' '1/year' '1/year' '1/year' '1/year'}, ...
'Value', {-1.1 0.9 1.1 0.9 0.2}, ...
'Minimum', {-Inf -Inf -Inf -Inf -Inf}, ...
'Maximum', {Inf Inf Inf Inf Inf}, ...
'Fixed', {false false false false false}); % Estimate all 5 parameters.
InitialStates = struct('Name', {'Population, predators' 'Population, preys'}, ...
'Unit', {'Size (in thousands)' 'Size (in thousands)'}, ...
'Value', {1.8 1.8}, ...
'Minimum', {0 0}, ...
'Maximum', {Inf Inf}, ...
'Fixed', {false false}); % Estimate both initial states.
Ts = 0; % Time-continuous system.
nlgr = idnlgrey(FileName, Order, Parameters, InitialStates, Ts, ...
'Name', '1 predator - 1 prey system exhibiting crowding', ...
'OutputName', {'Population, predators', 'Population, preys'}, ...
'OutputUnit', {'Size (in thousands)' 'Size (in thousands)'}, ...
'TimeUnit', 'year');
By typing the name of the model object (nlgr), basic information about the model is displayed in the command window. Note that, as before, present(nlgr) provides a more comprehensive summary of the model.
nlgr =
Continuous-time nonlinear grey-box model defined by 'predprey2_c' (MEX-file):
dx/dt = F(t, x(t), p1, ..., p5)
y(t) = H(t, x(t), p1, ..., p5) + e(t)
with 0 input(s), 2 state(s), 2 output(s), and 5 free parameter(s) (out of 5).
Name: 1 predator - 1 prey system exhibiting crowding
Created by direct construction or transformation. Not estimated.
C.2. Input-Output Data
Next we load (simulated, though noise corrupted) data and create an IDDATA object describing this crowding type of predator-prey situation. This data set contains 201 data samples covering 20 years
of evolution.
z = iddata(y, [], 0.1, 'Name', '1 predator - 1 prey system exhibiting crowding');
set(z, 'OutputName', {'Population, predators', 'Population, preys'}, ...
'Tstart', 0, 'TimeUnit', 'Year');
C.3. Performance of the Initial Predator-Prey Model with Prey Crowding
A simulation with the initial model clearly shows that it cannot cope with the true population dynamics. See the figure.
Figure 6: Comparison between true outputs and the simulated outputs of the initial predator-prey model with prey crowding.
C.4. Parameter Estimation
To improve the performance of the initial model we proceed to estimate the 5 unknown parameters and the 2 initial states.
nlgr = nlgreyest(z, nlgr, nlgreyestOptions('Display', 'on'));
C.5. Performance of the Estimated Predator-Prey Model with Prey Crowding
The estimated values of the parameters and the initial states are again quite close to those that were used to generate the true output data:
disp(' True Estimated parameter vector');
True Estimated parameter vector
ptrue = [-1; 1; 1; 1; 0.1];
fprintf(' %6.3f %6.3f\n', [ptrue'; getpvec(nlgr)']);
-1.000 -1.000
1.000 1.001
1.000 1.002
1.000 1.002
0.100 0.101
disp(' True Estimated initial states');
True Estimated initial states
x0true = [2; 2];
fprintf(' %6.3f %6.3f\n', [x0true'; cell2mat(getinit(nlgr, 'Value'))']);
To further evaluate the quality of the model (and to illustrate the improvement compared to the initial model) we also simulate the estimated model. The simulated outputs are compared to the true
outputs in a plot window. As can be seen again, the estimated model is quite good.
Figure 7: Comparison between true outputs and the simulated outputs of the estimated predator-prey model with prey crowding.
We conclude the third population example by presenting the model information returned by PRESENT.
nlgr =
Continuous-time nonlinear grey-box model defined by 'predprey2_c' (MEX-file):
dx/dt = F(t, x(t), p1, ..., p5)
y(t) = H(t, x(t), p1, ..., p5) + e(t)
with 0 input(s), 2 state(s), 2 output(s), and 5 free parameter(s) (out of 5).
States: Initial value
x(1) Population, predators(t) [Size (in t..] xinit@exp1 2.00281 (estimated) in [0, Inf]
x(2) Population, preys(t) [Size (in t..] xinit@exp1 2.00224 (estimated) in [0, Inf]
y(1) Population, predators(t) [Size (in thousands)]
y(2) Population, preys(t) [Size (in thousands)]
Parameters: Value Standard Deviation
p1 Survival factor, predators [1/year] -0.999914 0.00280581 (estimated) in [-Inf, Inf]
p2 Death factor, predators [1/year] 1.00058 0.00276684 (estimated) in [-Inf, Inf]
p3 Survival factor, preys [1/year] 1.0019 0.00272154 (estimated) in [-Inf, Inf]
p4 Death factor, preys [1/year] 1.00224 0.00268423 (estimated) in [-Inf, Inf]
p5 Crowding factor, preys [1/year] 0.101331 0.0005023 (estimated) in [-Inf, Inf]
Name: 1 predator - 1 prey system exhibiting crowding
Termination condition: Change in cost was less than the specified tolerance..
Number of iterations: 8, Number of function evaluations: 9
Estimated using Solver: ode45; Search: lsqnonlin on time domain data "1 predator - 1 prey system exhibiting crowding".
Fit to estimation data: [97.53;97.36]%
FPE: 4.327e-08, MSE: 0.0004023
More information in model's "Report" property.
This example showed how to perform IDNLGREY time-series modeling based on MATLAB and MEX model files.
On the absolute stability regions corresponding to partial sums of the exponential function
Certain numerical methods for initial value problems have as stability function the nth partial sum of the exponential function. We study the stability region, i.e., the set in the complex plane over
which the nth partial sum has at most unit modulus. It is known that the asymptotic shape of the part of the stability region in the left half-plane is a semi-disk. We quantify this by providing
disks that enclose or are enclosed by the stability region or its left half-plane part. The radius of the smallest disk centered at the origin that contains the stability region (or its portion in
the left half-plane) is determined for 1 ≤ n ≤ 20. Bounds on such radii are proved for n ≥ 2; these bounds are shown to be optimal in the limit n → +∞. We prove that the stability region and its
complement, restricted to the imaginary axis, consist of alternating intervals of length tending to π, as n → ∞. Finally, we prove that a semi-disk in the left half-plane with vertical boundary being
the imaginary axis and centered at the origin is included in the stability region if and only if n ≡ 0 mod 4 or n ≡ 3 mod 4. The maximal radii of such semi-disks are exactly determined for 1 ≤ n ≤ 20.
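These statements are easy to probe numerically. The following sketch (mine, not from the paper) evaluates the nth partial sum and checks points of the imaginary axis against the unit-modulus condition, consistent with the semi-disk criterion for n mod 4:

```python
from math import factorial

def partial_sum(n, z):
    """s_n(z) = sum_{k=0}^{n} z^k / k!, the nth partial sum of exp(z)."""
    return sum(z**k / factorial(k) for k in range(n + 1))

# Points iy near the origin lie in the stability region for n = 3 (n ≡ 3 mod 4)
# but not for n = 1 (n ≡ 1 mod 4), as the semi-disk criterion predicts.
print(abs(partial_sum(3, 1j)) <= 1)  # → True
print(abs(partial_sum(1, 1j)) <= 1)  # → False
```

For instance |s_1(iy)| = sqrt(1 + y^2) exceeds 1 for every y ≠ 0, while |s_3(iy)|^2 = 1 - y^4/12 + y^6/36 stays at most 1 for y^2 ≤ 3.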
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01
Dive into the research topics of 'On the absolute stability regions corresponding to partial sums of the exponential function'. Together they form a unique fingerprint.
|
{"url":"https://academia.kaust.edu.sa/en/publications/on-the-absolute-stability-regions-corresponding-to-partial-sums-o","timestamp":"2024-11-14T04:23:46Z","content_type":"text/html","content_length":"57651","record_id":"<urn:uuid:774b10a6-a9bb-420a-aa22-92c6329eff4e>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00820.warc.gz"}
|
Mathematics and Statistics: Loyola University Chicago
Mathematics and Statistics
The Department of Mathematics and Statistics offers a wide variety of undergraduate and graduate degree programs designed for students with diverse career or higher educational goals. The faculty
helps students achieve these goals with up-to-date courses, seminars, and research or internship projects. Our faculty members maintain highly productive research programs which regularly result in
publications in leading journals and academic presses. Much of the faculty research is supported by external agencies. The faculty is also actively engaged in modernizing our elementary course
offerings both in content and in the use of technology.
Mathematics and Statistics Colloquium (Oct 17)
Join the Math & Stat Department for our Colloquium on Thursday, October 17, which will feature a talk by Dr. Sven Leyffer (Argonne National Laboratory). His topic will be "Topological Design Problems
and Integer Optimization". Click to learn more details about the talk!
SIAM Informational Meeting (Oct 9)
Join Loyola's chapter of SIAM (Society for Industrial and Applied Mathematics) for an Informational Meeting. Learn more about how to become a member and the exciting opportunities included with
membership. Food and beverages will be provided!
Undergraduate Research Colloquium (Sep 26)
Our second Undergraduate Research Colloquium this semester will be happening this coming Thursday (September 26). We will hear about more exciting research by our students! Click to learn more
about the speakers and their topics.
|
{"url":"https://www.luc.edu/math/","timestamp":"2024-11-07T00:47:44Z","content_type":"text/html","content_length":"54700","record_id":"<urn:uuid:f07843d7-ef51-4227-a759-13cc3f8e322c>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00375.warc.gz"}
|
ALG_ID (Windows CE .NET 4.2)
This data type specifies algorithm identifiers. Most of the functions in the CryptoAPI pass parameters of this data type that are defined in the Wincrypt.h header file as follows.
typedef unsigned int ALG_ID;
Authors of custom cryptographic service providers (CSPs) can define algorithm identifiers. The ALG_ID values used by custom CSPs for the key specs AT_KEYEXCHANGE and AT_SIGNATURE are
provider-dependent. The following table shows the algorithm identifiers that are currently defined.
Constant Description
CALG_AGREEDKEY_ANY Temporary algorithm identifier for handles of Diffie-Hellman–agreed keys.
CALG_CYLINK_MEK* An algorithm to create a 40-bit DES key that has parity bits and zeroed key bits to make its key length 64 bits.
CALG_DES DES encryption algorithm.
CALG_DESX DESX encryption algorithm.
CALG_3DES Triple DES encryption algorithm.
CALG_3DES_112 Two-key triple DES with effective key length equal to 112 bits.
CALG_DH_EPHEM Diffie-Hellman ephemeral key exchange algorithm.
CALG_DH_SF Diffie-Hellman store and forward key exchange algorithm.
CALG_DSS_SIGN DSA public-key signature algorithm.
CALG_HMAC* HMAC keyed hash algorithm.
CALG_KEA_KEYX KEA key exchange algorithm (FORTEZZA).
CALG_MAC* MAC keyed hash algorithm.
CALG_MD2* MD2 hashing algorithm.
CALG_MD4 MD4 hashing algorithm.
CALG_MD5* MD5 hashing algorithm.
CALG_RC2* RC2 block encryption algorithm.
CALG_RC4* RC4 stream encryption algorithm.
CALG_RC5 RC5 block encryption algorithm.
CALG_RSA_KEYX* RSA public-key key exchange algorithm.
CALG_RSA_SIGN* RSA public-key signature algorithm.
CALG_SEAL SEAL encryption algorithm.
CALG_SHA* SHA hashing algorithm.
CALG_SHA1* Same as CALG_SHA.
CALG_SKIPJACK Skipjack block encryption algorithm (FORTEZZA).
CALG_SSL3_SHAMD5 SSL3 client authentication.
CALG_TEK TEK (FORTEZZA).
CALG_SSL3_SHAMD5 Used internally by schannel.dll. This ALG_ID should not be used by applications.
CALG_SSL3_MASTER Used internally by schannel.dll. This ALG_ID should not be used by applications.
CALG_SCHANNEL_MASTER_HASH Used internally by schannel.dll. This ALG_ID should not be used by applications.
CALG_SCHANNEL_MAC_KEY Used internally by schannel.dll. This ALG_ID should not be used by applications.
CALG_SCHANNEL_ENC_KEY Used internally by schannel.dll. This ALG_ID should not be used by applications.
CALG_PCT1_MASTER Used internally by schannel.dll. This ALG_ID should not be used by applications.
CALG_SSL2_MASTER Used internally by schannel.dll. This ALG_ID should not be used by applications.
CALG_TLS1_MASTER Used internally by schannel.dll. This ALG_ID should not be used by applications.
CALG_TLS1PRF Used internally by schannel.dll. This ALG_ID should not be used by applications.
* Algorithms supported by the Microsoft Base Cryptographic Provider.
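The identifier values themselves are not arbitrary: each ALG_ID packs an algorithm class, type, and sub-identifier into bit fields via macros in Wincrypt.h. The sketch below reproduces that scheme in Python for a few well-known identifiers; the values shown are the published ones, but consult the real Wincrypt.h for the authoritative definitions.

```python
# ALG_ID bit layout (per Wincrypt.h): class in bits 13-15, type in bits 9-12,
# sub-identifier (SID) in bits 0-8. A sketch mirroring the header's macros.
ALG_CLASS_DATA_ENCRYPT = 3 << 13
ALG_CLASS_HASH = 4 << 13
ALG_TYPE_ANY = 0
ALG_TYPE_STREAM = 4 << 9
ALG_SID_RC4, ALG_SID_MD5, ALG_SID_SHA1 = 1, 3, 4

CALG_MD5 = ALG_CLASS_HASH | ALG_TYPE_ANY | ALG_SID_MD5              # 0x8003
CALG_SHA1 = ALG_CLASS_HASH | ALG_TYPE_ANY | ALG_SID_SHA1            # 0x8004
CALG_RC4 = ALG_CLASS_DATA_ENCRYPT | ALG_TYPE_STREAM | ALG_SID_RC4   # 0x6801

print(hex(CALG_MD5), hex(CALG_SHA1), hex(CALG_RC4))
```

This is why custom CSPs can mint their own identifiers: any value with an unused class/type/SID combination is structurally valid.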
For the Microsoft Base Cryptographic Provider and the Microsoft Enhanced Cryptographic Provider, the following ALG_IDs are used for the key specs AT_KEYEXCHANGE and AT_SIGNATURE:
• CALG_RSA_KEYX for AT_KEYEXCHANGE
• CALG_RSA_SIGN for AT_SIGNATURE
For the Microsoft DSS Cryptographic Provider and the Diffie-Hellman Provider, the following ALG_IDs are used for the key specs AT_KEYEXCHANGE and AT_SIGNATURE:
• CALG_DH_SF for AT_KEYEXCHANGE
• CALG_DSS_SIGN for AT_SIGNATURE
OS Versions: Windows CE 3.0 and later.
Header: Wincrypt.h.
See Also
CryptFindOIDInfo | CRYPT_ALGORITHM_IDENTIFIER
Last updated on Thursday, April 08, 2004
© 1992-2003 Microsoft Corporation. All rights reserved.
|
{"url":"https://learn.microsoft.com/en-us/previous-versions/windows/embedded/ms884389(v=msdn.10)","timestamp":"2024-11-02T18:02:56Z","content_type":"text/html","content_length":"43180","record_id":"<urn:uuid:830dbc6e-451d-4f6f-b708-f6262a4626ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00827.warc.gz"}
|
Excel Formula: Average Cells in Column G based on Condition in Column C
In this tutorial, you will learn how to use an Excel formula to average cells in column G based on a condition in column C. The formula uses the AVERAGEIF function in Excel to calculate the average
of cells in column G where the corresponding value in column C matches a specific value.
To write the formula, you need to use the AVERAGEIF function in Excel. This function takes three arguments: range, criteria, and average_range. The range argument specifies the range of cells in
column C where the condition is checked. The criteria argument is the specific value in column C that is used as the criteria for averaging the cells in column G. The average_range argument
represents the range of cells in column G from which the values are averaged.
For example, let's say we have a dataset with columns C and G. We want to calculate the average of values in column G where the corresponding value in column C is 1. To do this, we can use the
formula =AVERAGEIF(C:C, C1, G:G).
By using this formula, you can easily calculate the average of cells in column G based on a condition in column C. This can be useful for analyzing data and finding the average value for specific groups of rows.
In conclusion, the AVERAGEIF function in Excel is a powerful tool for averaging cells based on a condition. By using this formula, you can quickly calculate the average of cells in column G where the
corresponding value in column C matches a specific value.
Formula Explanation
This formula uses the AVERAGEIF function in Excel to calculate the average of cells in column G based on a condition in column C. It averages the cells in column G where the corresponding value in
column C matches a specific value.
Step-by-step explanation
1. C:C refers to the entire column C where the condition is checked.
2. C1 is the specific value in column C that is used as the criteria for averaging the cells in column G.
3. G:G represents the entire column G from which the values are averaged.
4. The AVERAGEIF function calculates the average of cells in column G that meet the specified condition.
For example, if we have the following data in columns C and G:
| C | D | E | G |
| | | | |
| 1 | A | | 5 |
| 2 | B | | 3 |
| 1 | C | | 7 |
| 3 | D | | 6 |
| 2 | E | | 2 |
| 1 | F | | 9 |
| 3 | G | | 1 |
| 2 | H | | 4 |
The formula =AVERAGEIF(C:C, C1, G:G) would calculate the average of values in column G where the corresponding value in column C is 1: (5 + 7 + 9) / 3 = 7.
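The same computation is easy to replicate outside Excel. A minimal Python sketch on the sample data above (the row values are the ones from the example table):

```python
# Replicating AVERAGEIF(C:C, C1, G:G) on the sample data: average the G values
# of rows whose C value matches the criteria.
rows = [  # (C, G) pairs from the example table
    (1, 5), (2, 3), (1, 7), (3, 6), (2, 2), (1, 9), (3, 1), (2, 4),
]

def averageif(rows, criteria):
    """Average the G values of rows whose C value equals `criteria`."""
    matches = [g for c, g in rows if c == criteria]
    return sum(matches) / len(matches)

print(averageif(rows, 1))  # → 7.0
```

As in Excel, only the rows that satisfy the condition contribute to both the sum and the count.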
|
{"url":"https://codepal.ai/excel-formula-generator/query/CgAiwRG0/excel-formula-averages-cells-column-g-based-condition-column-c","timestamp":"2024-11-09T04:43:03Z","content_type":"text/html","content_length":"93222","record_id":"<urn:uuid:6410f646-52ea-4037-9eef-d104afb6cec4>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00595.warc.gz"}
|
class sklearn.ensemble.ExtraTreesClassifier(n_estimators=100, *, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='sqrt',
max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=False, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None, monotonic_cst=None)
An extra-trees classifier.
This class implements a meta estimator that fits a number of randomized decision trees (a.k.a. extra-trees) on various sub-samples of the dataset and uses averaging to improve the predictive
accuracy and control over-fitting.
Read more in the User Guide.
n_estimatorsint, default=100
The number of trees in the forest.
Changed in version 0.22: The default value of n_estimators changed from 10 to 100 in 0.22.
criterion{“gini”, “entropy”, “log_loss”}, default=”gini”
The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “log_loss” and “entropy” both for the Shannon information gain, see Mathematical
formulation. Note: This parameter is tree-specific.
max_depthint, default=None
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
min_samples_splitint or float, default=2
The minimum number of samples required to split an internal node:
○ If int, then consider min_samples_split as the minimum number.
○ If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.
Changed in version 0.18: Added float values for fractions.
min_samples_leafint or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left
and right branches. This may have the effect of smoothing the model, especially in regression.
○ If int, then consider min_samples_leaf as the minimum number.
○ If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.
Changed in version 0.18: Added float values for fractions.
min_weight_fraction_leaffloat, default=0.0
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
max_features{“sqrt”, “log2”, None}, int or float, default=”sqrt”
The number of features to consider when looking for the best split:
○ If int, then consider max_features features at each split.
○ If float, then max_features is a fraction and max(1, int(max_features * n_features_in_)) features are considered at each split.
○ If “sqrt”, then max_features=sqrt(n_features).
○ If “log2”, then max_features=log2(n_features).
○ If None, then max_features=n_features.
Changed in version 1.1: The default of max_features changed from "auto" to "sqrt".
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than max_features features.
max_leaf_nodesint, default=None
Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.
min_impurity_decreasefloat, default=0.0
A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
The weighted impurity decrease equation is the following:
N_t / N * (impurity - N_t_R / N_t * right_impurity
- N_t_L / N_t * left_impurity)
where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child.
N, N_t, N_t_R and N_t_L all refer to the weighted sum, if sample_weight is passed.
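Plugging numbers into the expression makes the threshold concrete. A quick illustrative check (not scikit-learn code; the counts and Gini values are made up):

```python
def weighted_impurity_decrease(N, N_t, N_t_L, N_t_R, imp, imp_L, imp_R):
    """Evaluate the weighted impurity decrease formula quoted above."""
    return N_t / N * (imp - N_t_R / N_t * imp_R - N_t_L / N_t * imp_L)

# A perfectly separating split of a balanced root node (Gini 0.5 -> 0.0 in both
# children) yields the maximum possible decrease for that node:
print(weighted_impurity_decrease(100, 100, 50, 50, 0.5, 0.0, 0.0))  # → 0.5
```

A split is accepted only when this value is at least min_impurity_decrease, so deeper nodes (small N_t / N) need proportionally purer children to qualify.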
bootstrapbool, default=False
Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
oob_scorebool or callable, default=False
Whether to use out-of-bag samples to estimate the generalization score. By default, accuracy_score is used. Provide a callable with signature metric(y_true, y_pred) to use a custom
metric. Only available if bootstrap=True.
n_jobsint, default=None
The number of jobs to run in parallel. fit, predict, decision_path and apply are all parallelized over the trees. None means 1 unless in a joblib.parallel_backend context. -1 means using
all processors. See Glossary for more details.
random_stateint, RandomState instance or None, default=None
Controls 3 sources of randomness:
○ the bootstrapping of the samples used when building trees (if bootstrap=True)
○ the sampling of the features to consider when looking for the best split at each node (if max_features < n_features)
○ the draw of the splits for each of the max_features
See Glossary for details.
verboseint, default=0
Controls the verbosity when fitting and predicting.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just fit a whole new forest. See Glossary and Fitting additional
weak-learners for details.
class_weight{“balanced”, “balanced_subsample”}, dict or list of dicts, default=None
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in
the same order as the columns of y.
Note that for multioutput (including multilabel) weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification weights
should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1:1}, {2:5}, {3:1}, {4:1}].
The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y))
The “balanced_subsample” mode is the same as “balanced” except that weights are computed based on the bootstrap sample for every tree grown.
For multi-output, the weights of each column of y will be multiplied.
Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
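The "balanced" heuristic can be sanity-checked by hand. A small sketch of the quoted formula n_samples / (n_classes * np.bincount(y)) in pure Python (class labels here are made up):

```python
from collections import Counter

# "balanced" class weights: n_samples / (n_classes * count(class)).
y = [0, 0, 0, 1]                      # 3 samples of class 0, 1 sample of class 1
counts = Counter(y)
weights = {c: len(y) / (len(counts) * n) for c, n in counts.items()}
print(weights)  # the rare class 1 gets weight 2.0, the common class 0 gets 2/3
```

Each class's total weight (weight times count) comes out equal, which is exactly the inverse-frequency balancing the docstring describes.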
ccp_alphanon-negative float, default=0.0
Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ccp_alpha will be chosen. By default, no pruning is
performed. See Minimal Cost-Complexity Pruning for details.
max_samplesint or float, default=None
If bootstrap is True, the number of samples to draw from X to train each base estimator.
○ If None (default), then draw X.shape[0] samples.
○ If int, then draw max_samples samples.
○ If float, then draw max_samples * X.shape[0] samples. Thus, max_samples should be in the interval (0.0, 1.0].
monotonic_cstarray-like of int of shape (n_features), default=None
Indicates the monotonicity constraint to enforce on each feature.
■ 1: monotonically increasing
■ 0: no constraint
■ -1: monotonically decreasing
If monotonic_cst is None, no constraints are applied.
Monotonicity constraints are not supported for:
■ multiclass classifications (i.e. when n_classes > 2),
■ multioutput classifications (i.e. when n_outputs_ > 1),
■ classifications trained on data with missing values.
The constraints hold over the probability of the positive class.
Read more in the User Guide.
estimator_ExtraTreeClassifier
The child estimator template used to create the collection of fitted sub-estimators.
New in version 1.2: base_estimator_ was renamed to estimator_.
estimators_list of DecisionTreeClassifier
The collection of fitted sub-estimators.
classes_ndarray of shape (n_classes,) or a list of such arrays
The classes labels (single output problem), or a list of arrays of class labels (multi-output problem).
n_classes_int or list
The number of classes (single output problem), or a list containing the number of classes for each output (multi-output problem).
feature_importances_ndarray of shape (n_features,)
The impurity-based feature importances.
n_features_in_int
Number of features seen during fit.
feature_names_in_ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings.
n_outputs_int
The number of outputs when fit is performed.
oob_score_float
Score of the training dataset obtained using an out-of-bag estimate. This attribute exists only when oob_score is True.
oob_decision_function_ndarray of shape (n_samples, n_classes) or (n_samples, n_classes, n_outputs)
Decision function computed with out-of-bag estimate on the training set. If n_estimators is small it might be possible that a data point was never left out during the bootstrap. In this
case, oob_decision_function_ might contain NaN. This attribute exists only when oob_score is True.
estimators_samples_list of arrays
The subset of drawn samples for each base estimator.
See also
ExtraTreesRegressor
An extra-trees regressor with random splits.
RandomForestClassifier
A random forest classifier with optimal splits.
RandomForestRegressor
Ensemble regressor using trees with optimal splits.
The default values for the parameters controlling the size of the trees (e.g. max_depth, min_samples_leaf, etc.) lead to fully grown and unpruned trees which can potentially be very large on some
data sets. To reduce memory consumption, the complexity and size of the trees should be controlled by setting those parameter values.
P. Geurts, D. Ernst., and L. Wehenkel, “Extremely randomized trees”, Machine Learning, 63(1), 3-42, 2006.
>>> from sklearn.ensemble import ExtraTreesClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_features=4, random_state=0)
>>> clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
>>> clf.fit(X, y)
>>> clf.predict([[0, 0, 0, 0]])
apply(X) Apply trees in the forest to X, return leaf indices.
decision_path(X) Return the decision path in the forest.
fit(X, y[, sample_weight]) Build a forest of trees from the training set (X, y).
get_metadata_routing() Get metadata routing of this object.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict class for X.
predict_log_proba(X) Predict class log-probabilities for X.
predict_proba(X) Predict class probabilities for X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_fit_request(*[, sample_weight]) Request metadata passed to the fit method.
set_params(**params) Set the parameters of this estimator.
set_score_request(*[, sample_weight]) Request metadata passed to the score method.
Apply trees in the forest to X, return leaf indices.
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
X_leavesndarray of shape (n_samples, n_estimators)
For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.
Return the decision path in the forest.
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
indicatorsparse matrix of shape (n_samples, n_nodes)
Return a node indicator matrix where non zero elements indicates that the samples goes through the nodes. The matrix is of CSR format.
n_nodes_ptrndarray of shape (n_estimators + 1,)
The columns from indicator[n_nodes_ptr[i]:n_nodes_ptr[i+1]] gives the indicator value for the i-th estimator.
property estimators_samples_¶
The subset of drawn samples for each base estimator.
Returns a dynamically generated list of indices identifying the samples used for fitting each member of the ensemble, i.e., the in-bag samples.
Note: the list is re-created at each call to the property in order to reduce the object memory footprint by not storing the sampling data. Thus fetching the property may be slower than expected.
property feature_importances_¶
The impurity-based feature importances.
The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance.
Warning: impurity-based feature importances can be misleading for high cardinality features (many unique values). See sklearn.inspection.permutation_importance as an alternative.
feature_importances_ndarray of shape (n_features,)
The values of this array sum to 1, unless all trees are single node trees consisting of only the root node, in which case it will be an array of zeros.
fit(X, y, sample_weight=None)[source]¶
Build a forest of trees from the training set (X, y).
X{array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csc_matrix.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression).
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node.
In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node.
Fitted estimator.
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
A MetadataRequest encapsulating routing information.
Get parameters for this estimator.
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
Predict class for X.
The predicted class of an input sample is a vote by the trees in the forest, weighted by their probability estimates. That is, the predicted class is the one with highest mean probability
estimate across the trees.
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
yndarray of shape (n_samples,) or (n_samples, n_outputs)
The predicted classes.
Predict class log-probabilities for X.
The predicted class log-probabilities of an input sample is computed as the log of the mean predicted class probabilities of the trees in the forest.
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
pndarray of shape (n_samples, n_classes), or a list of such arrays
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
Predict class probabilities for X.
The predicted class probabilities of an input sample are computed as the mean predicted class probabilities of the trees in the forest. The class probability of a single tree is the fraction
of samples of the same class in a leaf.
X{array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
pndarray of shape (n_samples, n_classes), or a list of such arrays
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
score(X, y, sample_weight=None)[source]¶
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights.
Mean accuracy of self.predict(X) w.r.t. y.
set_fit_request(*, sample_weight: bool | None | str = '$UNCHANGED$') ExtraTreesClassifier[source]¶
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
☆ True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
☆ False: metadata is not requested and the meta-estimator will not pass it to fit.
☆ None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
☆ str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
sample_weightstr, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in fit.
The updated object.
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each
component of a nested object.
Estimator parameters.
selfestimator instance
Estimator instance.
set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → ExtraTreesClassifier[source]¶
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
☆ True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
☆ False: metadata is not requested and the meta-estimator will not pass it to score.
☆ None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
☆ str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
sample_weightstr, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in score.
The updated object.
Examples using sklearn.ensemble.ExtraTreesClassifier¶
|
{"url":"https://scikit-learn.org/1.4/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html","timestamp":"2024-11-06T00:52:17Z","content_type":"text/html","content_length":"85863","record_id":"<urn:uuid:df7489da-df70-456a-8ca3-f2e46e542e9d>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00133.warc.gz"}
|
Problem G – Gotta Pick'em All
IPSC 2006
One day, Mrs. Goa Waymaths was chosen to be the substitute teacher in a class full of young maths enthusiasts. In order to keep the kids quiet and busy, she gave each of them a deck of cards and
asked them to play a game of solitaire she used to play when she was in their age.
The game is very simple. You get a deck of 20 cards, numbered 1 to 20, and place them all on the table. Your goal is to pick up as many cards as possible. There is only one requirement: You are not
allowed to pick up a card, if it would lead to a situation where you hold three cards with numbers A, B and C such that A+B=C. (For example, out of the cards 4, 5 and 9 you may pick at most two.)
One of the best students in the class, little William Strathmore Xaverol (known as Johnny among friends), considered the game too simple. Instead of playing with 20 cards he started to play it with N
(virtual) cards, numbered 1 to N. Soon, he found the maximal number of cards he can pick up.
Once he had the answer, he posed a more difficult question: in how many different ways can he take that maximum number of cards? (Order in which he picks them up doesn't matter.)
For example, if N=3, he can't pick up all three cards (because 1+2=3), but he can pick up any two cards. There are three ways of selecting two out of three cards (1 and 2, 1 and 3, 2 and 3). Thus the
answer to Johnny's "hard" question for N=3 is 3.
Problem specification (G1)
In the easy subproblem (G1), your task is to determine the maximum number of cards one can take without breaking the aforementioned rule.
Problem specification (G2)
In the hard subproblem (G2), your task is to determine the number of distinct sets of cards having the maximal possible cardinality (= count of elements) and satisfying the rules of the game.
Input specification
The first line of the input file contains an integer T specifying the number of test cases. Each test case is preceded by a blank line.
Each test case consists of one line containing one integer N.
For both subproblems use the same input file.
Output specification
For each test case output a line containing a single number – the answer to the particular test case.
Output (G1):
Output (G2):
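For small N, both subproblems can be brute-forced directly from the rule. A sketch (hopeless for contest-scale N, but handy for checking the N=3 example above):

```python
from itertools import combinations

def solve(n):
    """Brute-force both subproblems for small n.

    Returns (max pickable cards, number of maximum-size valid sets). A set of
    cards is valid if it contains no three distinct cards A, B, C with A+B=C.
    Since all cards are positive, A+B always exceeds A and B, so it suffices to
    check whether any pairwise sum lies in the set.
    """
    def valid(cards):
        s = set(cards)
        return not any(a + b in s for a, b in combinations(cards, 2))

    for size in range(n, 0, -1):            # try the largest sizes first
        sets = [c for c in combinations(range(1, n + 1), size) if valid(c)]
        if sets:
            return size, len(sets)

print(solve(3))  # → (2, 3), matching the example in the statement
```

Note that the rule requires three distinct cards, so e.g. {2, 3, 4} is allowed even though 2+2=4; this is weaker than the classical sum-free condition.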
Problemsetter(s): g00ber
Contest-related materials: g00ber, yulka
Moduli Spaces Seminar
Room 162, Peter Hall Building
• 11 October
SPEAKER: Thorsten Hertl
TITLE: Meromorphic Differentials.
ABSTRACT: In this talk, we will introduce meromorphic differentials on Riemann surfaces. After providing local models for them, we compare the moduli space of these differentials (with
prescribed poles and zeros) to its holomorphic counterpart. We introduce a topological invariant that allows us to find infinitely many components in the genus 1 case.
Past Seminars
□ 4 October
SPEAKER: Quan Nguyen
TITLE: Affine invariant manifolds.
ABSTRACT: In this talk, we will define affine invariant manifolds, which are interesting linear subspaces in the stratum of differentials. We give examples and discuss some of their
properties, including a theorem by Eskin-Mirzakhani-Mohammadi which allows us to redefine affine invariant manifolds as $\text{GL}^+(2,\mathbb{R})$-invariant subspaces. Affine invariant
manifolds also give rise to Veech surfaces, which have some interesting dynamical properties. Some other elementary constructions of affine invariant manifolds come from real
multiplication in genus 2, which relate to special properties of the Jacobian of a Riemann surface.
20 September
SPEAKER: Marcel Dang
TITLE: Compactifications of strata of differentials.
ABSTRACT: Whenever one encounters noncompact spaces in nature, one can ask the question of how to compactify them. But why do we care? One can think of it as a generalization of the
concept of finiteness. These spaces tend to be more well behaved than their noncompact counterparts, and they allow new techniques and constructions. Putting it into the words of Angelo
Vistoli: "Working with noncompact spaces is like trying to keep change with holes in your pockets." We have seen that a typical stratum of differentials is not compact. As there is no
canonical construction to compactify a space, we will present multiple compactifications: Deligne-Mumford, "You get what you see", incidence variety and the multiscale compactification.
13 September
SPEAKER: Paul Norbury
TITLE: Square-tiled surfaces.
ABSTRACT: I will discuss square-tiled surfaces, Veech groups commensurable to SL(2, Z), and conditions on a flat surface to be a square-tiled surface. Enumeration of square-tiled surfaces
produces generating functions with modular properties and leads to calculations of Masur-Veech volumes.
6 September, 2024
SPEAKER: Scott Mullane
TITLE: Saddle connections and Veech groups for flat surfaces.
ABSTRACT: After discussing three examples of different sources of non-compactness in the strata of differentials, we'll state Masur's compactness criterion, the action of GL^+(2,R) on the
set of saddle connections of a flat surface, and define and discuss properties of the Veech group, the stabiliser of a surface under the GL^+(2,R) action.
30 August, 2024
SPEAKER: Marcel Dang
TITLE: The GL^+(2,R)-action on the strata of differentials.
ABSTRACT: Originally motivated by dynamics, GL^+(2,R) acts on the strata of differentials naturally by acting on the polygons in any polygon presentation of a flat surface in the plane
(the first definition). After defining this action and showing it is well-defined, we'll discuss how square-tiled surfaces have closed orbits in the strata of differentials, as well as
how some interesting 1-parameter subgroups of GL^+(2,R) act on the strata.
23 August, 2024
SPEAKER: Thorsten Hertl
TITLE: Connected components of the strata of differentials II.
ABSTRACT: Building on the last lecture, I will introduce spin structures from different perspectives. I will define and give an example of computing the Arf invariant of a flat surface.
Then I will define the spin structure from an algebraic perspective, given by the parity of the number of global sections of the associated theta divisor. Finally I will state Kontsevich
and Zorich's theorem.
16 August, 2024
SPEAKER: Scott Mullane
TITLE: Connected components of the strata of differentials I.
ABSTRACT: A seminal theorem of Kontsevich and Zorich classifies the number of connected components of each stratum of differentials showing that each stratum of differentials has up to
three connected components due to spin and hyperelliptic components. In this talk, we will introduce the concepts of hyperellipticity and spin structures and how they are used to classify
the connected components in the example of the stratum H(4).
9 August, 2024
SPEAKER: Paul Norbury
TITLE: Moduli spaces of flat structures, period coordinates and volumes.
ABSTRACT: In previous lectures we have described how a collection of n vectors in R^2, or equivalently n points in C, describe 2n-sided polygons in R^2. This gives a rather elementary
description of the period coordinates of a flat structure, defined as the periods of a holomorphic differential on a Riemann surface. These equivalent viewpoints lead to natural volume
measures on the moduli spaces of flat structures. I will describe this measure, and also give calculations of (finite) volumes of the moduli spaces of unit area flat structures.
2 August, 2024
SPEAKER: Scott Mullane
TITLE: Flat Surfaces
ABSTRACT: This talk will give three definitions of a flat surface and a proof of equivalence of these definitions. Flat surfaces produce concrete constructions of Riemann surfaces
equipped with a holomorphic differential. The concrete constructions include: square tiled surfaces, translation coverings, and slit torus and slit cylinder constructions.
26 July, 2024
SPEAKER: Paul Norbury
TITLE: Abelian differentials and flat structures.
ABSTRACT: In this talk I will give an introduction to the topic of abelian differentials and flat structures. To describe billiards bouncing off an edge of a rectangular table, one can
instead continue along a straight line path past the edge to return from the opposite edge (as in some old video games). This describes a path on a flat torus. Equivalently, the surface
is equipped with a conformal structure = well-defined angles, plus further structure, which is neatly captured by the notion of a Riemann surface equipped with an abelian differential.
This viewpoint allows for more general shaped billiard tables, and gives a concrete approach to these algebraic and geometric objects.
Re: Generating LR parsers from EBNF?
"Karsten Nyblad, TFL, Denmark" <KARSTEN@tfl.dk>
10 May 91 11:11:49 +0200
From comp.compilers
| List of all articles for this month |
Newsgroups: comp.compilers
From: "Karsten Nyblad, TFL, Denmark" <KARSTEN@tfl.dk>
Keywords: EBNF, parse, bibliography
Organization: TFL, A Danish Telecommunication Research Laboratory
References: <29305.9105072033@pessoa.ecs.soton.ac.uk>
Date: 10 May 91 11:11:49 +0200
In article <29305.9105072033@pessoa.ecs.soton.ac.uk>, S.R.Adams@ecs.southampton.ac.uk (Stephen Adams) writes:
> [re generating parsing tables directly from EBNF]
> I thought about the idea and tried a few tiny examples by hand. The main
> trick is to reason about the placing of the `dot' in the EBNF items.
> Shifting the dot in an item may produce more than one derived item. For
> example, consider the simple grammar for nested parenthesized lists of
> numbers:
> s -> ( {s} )
> s -> number
> `{s}' means 0 or more s's. This grammar might generate `5' or
> `(1 2 (3 4) 5)'.
> The item `s -> . ( {s} )' shifted to two items, `s -> ( { . s} )' and
> `s -> ( {s} . )' The state containing these two items would usually be two
> states in a CFG generated automaton.
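As a side note, a small recursive-descent recognizer (recursive descent rather than LR, so no tables involved) is enough to sanity-check what this toy grammar accepts; a sketch:

```python
def tokenize(text):
    # Split "(1 2 (3 4) 5)" into tokens: "(", ")", and numbers.
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse_s(tokens, i):
    # s -> ( {s} )  |  number.  Returns the index just past one s.
    if i < len(tokens) and tokens[i] == "(":
        i += 1
        while i < len(tokens) and tokens[i] != ")":
            i = parse_s(tokens, i)          # {s}: zero or more s's
        if i == len(tokens):
            raise ValueError("missing ')'")
        return i + 1
    if i < len(tokens) and tokens[i].isdigit():
        return i + 1
    raise ValueError("expected '(' or number")

def accepts(text):
    tokens = tokenize(text)
    try:
        return parse_s(tokens, 0) == len(tokens)
    except ValueError:
        return False

print(accepts("5"), accepts("(1 2 (3 4) 5)"), accepts("(1 2"))  # True True False
```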
> The parse tables came out smaller. The EBNF generated table was smaller
> because some table slots in the CFG generated table were being replaced
> with sequences of operations, for example a reduction of a null production
> might be replaced by a compound operation `reduce and goto'. After these
> changes some states and some columns of the goto table are never used and
> may be removed.
> I have looked in several books and a couple of conference proceedings but
> I have failed to find any references on generating LR parse tables
> directly from an EBNF grammar. What I would like to know is:
> 1. Are there any references?
O.L. Madsen and B.B. Kristensen: LR-Parsing of Extended Context Free
Grammars. Acta Informatica 7, 61-73 (1976).
Wilf R. LaLonde: Constructing LR Parsers for Regular Right Part Grammars.
Acta Informatica 11, 177-193 (1979).
Wilf R. LaLonde: Regular Right Part Grammars and Their Parsers.
Communications of the ACM, Volume 20, Number 10.
N.P. Chapman: LALR(1,1) Parser Generation for Regular Right Part Grammars.
Acta Informatica, Vol 21, Fasc 1 (1984).
Ikuo Nakata and Masataka Sassa: Generation of Efficient LALR Parsers for
Regular Right Part Grammars. Acta Informatica, Vol 23, 149-162 (1986).
(Take care: their algorithm can't handle the following grammar:
A -> a a
A -> a A
There is a bug in their optimizing algorithm too; it can't handle the
grammar of their example. Otherwise it is the most advanced algorithm.)
It is possible to fix both the bugs. I am currently writing a
parser generator, which does it.
> 2. Is the EBNF approach equivalent to the CFG one? Is the difference in
> the tables always a due to small set of `optimizations'?
> 3. Which is faster? The EBNF item sets are smaller but
> more complex to handle. Using the EBNF approach seems
> to reduce the need for `optimizing' the generated table.
It is hard to tell. I can confirm that the number of states is
smaller. My parser generator generates 592 states for an ADA grammar,
which I typed in directly from the standard, and which does not handle
pragmas. I prefer generators which let the user write his grammar
as he likes, without loss of performance. In order to obtain that,
you will still need to remove unit reductions, and to turn stackings
that are always followed by a reduction into one operation. Removing
unit reductions is harder than in normal LR parsing because you will
need to split states that recognize repetition of a symbol to get the
unit productions, i.e., assume the production:
expression -> term { or term }
When this production is reduced, the length of the right hand side is
frequently 1, and depending on the algorithm you use for generating
the LR0 automaton you might need to split states. I do.
I don't know whether anybody has ever completed writing a generator.
My generator in itself is rather fast. When generating the ADA
parser, it takes 20 seconds from start until the lookahead sets are
calculated, when executed on a VAX with 2.8 times the power of the
VAX 780 and with optimization disabled when compiling the parser
generator. I am currently not capable of doing anything but generating
lookahead sets, and that takes 5700 lines of C code. I think I will
need 2-3000 lines of code more before having a parser generator
without error recovery and without any facilities for handling
attributes.
> 4. I only generated the LR(0) automaton by hand. I have not thought
> about `higher' grammar categories like LALR(1), LR(2), regular-LR etc.
> Are these kinds easily generated using the EBNF method?
Well, generating the basic LR(0) automaton is easy. You generate a DFA
for each right hand side of the grammar. You use the states in the
DFAs to substitute for items, i.e., each DFA state represents one or
more positions of dots in the original regular expression. Now it is
easy to generate the LR(0) automaton. Calculating the lookahead sets
is as easy as with the normal LALR algorithm.
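For the toy grammar quoted above, the DFA for the right hand side `( {s} )` can be written down directly (state numbering mine); note how the `{s}` loop makes one DFA state cover two dot positions:

```python
# DFA for the right hand side of  s -> ( {s} ):
#   state 0:  s -> . ( {s} )
#   state 1:  s -> ( {. s} )  and  s -> ( {s} . )   -- one state, two dots
#   state 2:  s -> ( {s} ) .                        -- accepting: reduce
dfa = {
    (0, "("): 1,
    (1, "s"): 1,   # the {s} loop re-enters the same state
    (1, ")"): 2,
}

state = 0
for symbol in ["(", "s", "s", ")"]:   # e.g. the sentential form ( s s )
    state = dfa[(state, symbol)]
print(state)  # 2: the whole right hand side has been matched
```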
You can also generate the LR0 automaton as you describe, Stephen.
Madsen's and Kristensen's paper is about that. They put some
restrictions on the regular expressions. I do not remember why, but
it has the advantage that there are no ambiguities in how to build the
parse trees recognized by their parsers.
The real problem is deciding how much to pop from the stack when
reducing a production. Chapman uses DFAs that recognize the reverse
sequence of symbols of the right hand side, or rather recognize not
the symbols but the states representing them on the parser stack.
When the parser reduces, it starts by recognizing the right hand part
starting from the top of the parser stack. Then the recognized part
of the parser stack is substituted by the state representing the
nonterminal of the reduced production.
Nakata and Sassa prove a theorem that says that when reducing, the
state to restart the parser in will always be the topmost of a set of
possible states (topmost on the parser stack). As I wrote previously,
there is a bug in the proof, but after correcting that you will get
a parser that is faster than Chapman's, because you save recognizing
the right hand side once more.
Karsten Nyblad
TFL, A Danish Telecommunication Research Laboratory
E-mail: karsten@tfl.dk
Course Catalog - 2024-2025
COMP 647 - DEEP LEARNING
Long Title: DEEP LEARNING
Department: Computer Science
Grade Mode: Standard Letter
Language of Instruction: Taught in English
Course Type: Lecture
Credit Hours: 3
Must be enrolled in one of the following Program(s):
Online Master of Data Science
Online Master Computer Science
Must be enrolled in one of the following Level(s):
Description: In this course, students will learn the fundamentals of neural networks and deep learning along with their applications in several domains, such as computer vision and natural language processing. In order to enroll in an online section of this course, you are expected to have a working camera and microphone. During class sessions, you must be able to participate using your microphone and you are expected to have your camera on for the duration of the class so that you are visible to the instructor and other students in the class, just as you would be in an in-person class.
Recommended Prerequisite(s):
1. Proficiency in Python 3.
2. Familiarity with fundamental concepts of calculus, including partial derivatives, chain rule, total derivatives, derivatives and partial derivatives of vectors and matrices.
3. Familiarity with fundamental concepts of probability and statistics, including probability distributions, density functions, computing probabilities, expectation, variance, multivariate distributions, random variables and multivariate random variables.
4. Familiarity with fundamental concepts of linear algebra, such as inner products, vector spaces, vector and matrix norms, rank of a matrix, positive definite matrices, and matrix factorization, e.g., spectral decomposition and singular value decomposition.
5. Familiarity with fundamental concepts of machine learning and optimization theory, such as loss functions, gradient descent, maximum likelihood estimation, MAP estimation, and principal component analysis.
Mutually Exclusive: Cannot register for COMP 647 if student has credit for COMP 646/ELEC 576.
Supporting Fluency and Reasoning through Rich Tasks
Contributed by:
Mathematics is a creative and highly inter-connected discipline that has been developed over centuries, providing the solution to some of history’s most intriguing problems. It is essential to
everyday life, critical to science, technology, and engineering, and necessary for financial literacy and most forms of employment. A high-quality mathematics education, therefore, provides a
foundation for understanding the world, the ability to reason mathematically, an appreciation of the beauty and power of mathematics, and a sense of enjoyment and curiosity about the subject.
1. Haringey
Session 1
Supporting fluency and
reasoning through rich tasks
8 October 2014
Lynne McClure
Director, NRICH project
© University of Cambridge
2. National Curriculum
Become fluent in the fundamentals of
mathematics, including through varied and
frequent practice with increasingly complex
problems over time, so that pupils develop
conceptual understanding and the ability to
recall and apply knowledge rapidly and accurately
© University of Cambridge
3. National Curriculum
Reason mathematically by following a line of
enquiry, conjecturing relationships and
generalisations, and developing an
argument, justification or proof using
mathematical language
© University of Cambridge
4. Reach 100
Choose four different digits from 1−9 and
put one in each box. For example:
This gives four two-digit
numbers: 52, 19, 51, 29
In this case their sum is 151.
Can you find four different digits that give
four two-digit numbers which add to a total
of 100?
© University of Cambridge
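A brute-force search shows the Reach 100 puzzle is solvable. This sketch assumes, as in the slide's example, that the four two-digit numbers are the grid's two rows and two columns read left to right and top to bottom:

```python
from itertools import permutations

def reach_100():
    # Grid [[a, b], [c, d]] gives rows ab, cd and columns ac, bd.
    # The slide's example (5, 2, 1, 9) gives 52 + 19 + 51 + 29 = 151.
    solutions = []
    for a, b, c, d in permutations(range(1, 10), 4):
        total = (10*a + b) + (10*c + d) + (10*a + c) + (10*b + d)
        if total == 100:
            solutions.append((a, b, c, d))
    return solutions

print(reach_100())  # e.g. (1, 2, 4, 7): 12 + 47 + 14 + 27 = 100
```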
5. • What is the mathematical
knowledge that is needed?
• Who would this be for?
• What is the ‘value added’ for higher
attaining children/struggling children?
© University of Cambridge
6. Strike It Out
6 + 4 = 10
10 take away 9 makes 1
1 add 17 is 18
Competitive aim – stop your partner
from going
Collaborative aim – cross off as many
as possible
© University of Cambridge
7. • What is the mathematical knowledge
that is needed to play?
• Who would this game be for?
• What is the ‘value added’ for able
children/struggling children of playing
the game?
• How could you adapt this game to use
it in your classroom?
© University of Cambridge
How do these rich tasks contribute to fluency?
© University of Cambridge
9. An efficient strategy is one that the student
can carry out easily, keeping track of sub-
problems and making use of intermediate
results to solve the problem.
© University of Cambridge
10. depends on careful recording, the
knowledge of basic number combinations
and other important number relationships,
and checking results.
© University of Cambridge
11. requires the knowledge of more than one
approach and being able to choose
appropriately between them
(Russell, 2000)
© University of Cambridge
12. Procedural &
conceptual fluency
Automaticity with recall
© University of Cambridge
13. Fluency
Procedural without conceptual:
• Computation without meaning
• Inability to adapt skills to unfamiliar contexts
• Difficulty reconstructing forgotten knowledge or skills
Conceptual without procedural:
• Computation which is slow, effortful and frustrating
• Inability to focus on the bigger picture when solving problems
• Difficulty progressing to new or more complex ideas
© University of Cambridge
14. Using the same rules is it possible to cross all
the numbers off?
How do you know?
© University of Cambridge
15. Two types of reasoning
Inductive reasoning
• Can be incorrect
• Can’t be used to ‘prove’
Deductive reasoning
• Follows rules of logic
• Can be used to prove
© University of Cambridge
16. In a problem:
• Reasoning is necessary when:
• The route through the problem is not clear
• There are some conflicts in what you are given
or know
• There are some things you don’t know
• There’s no structure to what you’re given
• There are different possible solutions
• All of which require mental work….
© University of Cambridge
17. Reasoning is…
• A critical skill to knowing and doing maths
• Enabling – it allows children to make use of all
the other mathematical skills – it’s the glue that
helps maths to make sense.
© University of Cambridge
18. Structuring
children’s reasoning
• Questioning: closed v open
• Listening
• Acknowledging
• Improving
• Modelling KS1: good 'because' statements,
short chains
• KS2: logic, convincing
© University of Cambridge
19. Session 2
Problem solving
© University of Cambridge
20. National Curriculum
Can solve problems by applying their
mathematics to a variety of routine and non-
routine problems with increasing
sophistication, including breaking down
problems into a series of simpler steps and
persevering in seeking solutions
© University of Cambridge
21. Historically
• learning the content v problem solving
• theory versus practice, reason versus
experience, acquiring knowledge versus
applying knowledge.
• problems seen as vehicles for practicing
applications, i.e., computation procedures are
acquired first and then applied
• problem-based learning
© University of Cambridge
22. Dominoes
• Dominoes – have a play….
• Have you got a full set?
• How do you know?
• Can you arrange them in some way to
convince yourself/others that you have/
haven’t got full set?
© University of Cambridge
23. • What number knowledge/skills did you
• What mathematical processes did you
• What ‘soft skills’ did you use?
© University of Cambridge
24. Amy has a box containing ordinary
domino pieces but she does not think it is
a complete set. She has 24 dominoes in
her box and there are 125 spots on them
altogether. Which of her domino pieces
are missing?
© University of Cambridge
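Amy's puzzle can be settled by exhaustive search, assuming a standard double-six set (28 dominoes, 168 spots in all):

```python
from itertools import combinations

# A full double-six set: one domino for each pair 0 <= a <= b <= 6.
full_set = [(a, b) for a in range(7) for b in range(a, 7)]
assert len(full_set) == 28 and sum(a + b for a, b in full_set) == 168

# Amy holds 24 dominoes with 125 spots, so 28 - 24 = 4 dominoes
# carrying 168 - 125 = 43 spots are missing.
missing = [four for four in combinations(full_set, 4)
           if sum(a + b for a, b in four) == 43]
print(missing)  # [((4, 6), (5, 5), (5, 6), (6, 6))] -- a unique answer
```

The answer is forced: 43 spots on four distinct dominoes is only achievable as 12 + 11 + 10 + 10, and only one domino carries 12 pips, one carries 11, and exactly two carry 10.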
25. • What number knowledge/skills did you
• What mathematical processes did you
• What ‘soft skills’ did you use?
© University of Cambridge
26. Order of events
• Free play – Montessori ‘work’
• Closed activity: structure of the
• Task which uses that knowledge
• Multistep
• With or without apparatus
© University of Cambridge
27. Dominoes v houses
Sort – have you got them all?
How do you know?
Tasks using that knowledge
Guess the dominoes/ houses
© University of Cambridge
28. Rich tasks….
• combine fluency, problem solving and
mathematical reasoning
• are accessible
• promote success through supporting
thinking at different levels of challenge (low
threshold - high ceiling tasks)
• encourage collaboration and discussion
• use intriguing contexts or intriguing maths
© University of Cambridge
29. • allow for:
• learners to pose their own problems,
• different methods and different responses
• identification of elegant or efficient solutions,
• creativity and imaginative application of
• have the potential for revealing patterns or
lead to generalisations or unexpected
• have the potential to reveal underlying
principles or make connections between
areas of mathematics
(adapted from Jenny Piggott, NRICH)
© University of Cambridge
30. Tasks
• Non-routine
• Accessible
• Challenging
• Curriculum linked
• Rich tasks/LTHC tasks
Implications for your teaching?
© University of Cambridge
31. Valuing mathematical
• Process as well as end product
• Talk as well as recording
• Questioning as well as answering
• …………
© University of Cambridge
32. Purposeful activity
Give the pupils something to do,
not something to learn;
and if the doing is of such a nature
as to demand thinking;
learning naturally results.
John Dewey
© University of Cambridge
33. Session 4
Games are more than fillers
© University of Cambridge
34. Dotty
Green wins!
© University of Cambridge
35. • What is the mathematical knowledge that
is needed to play?
• Who would this game be for?
• What is the value added of playing the
• Could you adapt it to use it in your
• Contribute to F, R, PS?
© University of Cambridge
36. Board Block
© University of Cambridge
37. • What is the mathematical knowledge that
is needed to play?
• Who would this game be for?
• What is the value added of playing the
• Could you adapt it to use it in your
• Contribute to F, R, PS?
© University of Cambridge
38. Four Go
© University of Cambridge
39. • What is the mathematical knowledge that
is needed to play?
• Who would this game be for?
• What is the value added of playing the
• Could you adapt it to use it in your
• Contribute to F, R, PS?
© University of Cambridge
40. Nice and nasty
© University of Cambridge
41. • What is the mathematical knowledge that
is needed to play?
• Who would this game be for?
• What is the value added of playing the
• Could you adapt it to use it in your
• Contribute to F, R, PS?
© University of Cambridge
42. © University of Cambridge
43. “If I ran a school, I’d give all the average
grades to the ones who gave me all the right
answers, for being good parrots. I’d give the
top grades to those who made lots of
mistakes and told me about them and then
told me what they had learned from them.”
Buckminster Fuller, Inventor
© University of Cambridge
44. • What were these children’s views of
• Would you get the same answers?
© University of Cambridge
45. Session 3
Maths Working Group
© University of Cambridge
46. Purpose of study
Mathematics is a creative and highly inter-
connected discipline that has been developed over
centuries, providing the solution to some of
history’s most intriguing problems. It is essential to
everyday life, critical to science, technology and
engineering, and necessary for financial literacy
and most forms of employment. A high-quality
mathematics education therefore provides a
foundation for understanding the world, the ability
to reason mathematically, an appreciation of the
beauty and power of mathematics, and a sense of
enjoyment and curiosity about the subject.
© University of Cambridge
48. • interconnected subject in which pupils need
to be able to move fluently between
representations of mathematical ideas.
• make rich connections across mathematical
ideas to develop fluency, mathematical
reasoning and competence in solving
increasingly sophisticated problems
• apply their mathematical knowledge to
science and other subjects.
© University of Cambridge
50. The new National Curriculum
What’s important to teachers?
© University of Cambridge
51. Aims
• All equally important
• First two feed into third
© University of Cambridge
52. Big ideas
• Fluency
• Reasoning
• Problem solving
• Arithmetic/calculation (fractions)
• Multiplicative/proportional reasoning
• Pre-algebra/algebra
• Connections within and without
• No probability at KS1/2
• Reduced data handling at 1/2/3
© University of Cambridge
53. Year 6
Pupils should be taught to: • Pupils should be introduced to
•use simple formulae the use of symbols and letters to
•generate and describe represent variables and
linear number sequences unknowns in mathematical
situations that they already
•express missing number understand, such as:
problems algebraically • missing numbers, lengths,
•find pairs of numbers that coordinates and angles,
satisfy an equation with two • formulae in mathematics and
unknowns science
•enumerate all possibilities of • equivalent expressions (for
combinations of two example, a + b = b + a)
variables. • generalisations of number
• number puzzles (for example,
what two numbers can add up
to). © University of Cambridge
55. © University of Cambridge
56. 10 + 10 + 8 + 8
25 + 25 + 23 + 23
s + s + (s-2) + (s-2)
= 4s - 4
© University of Cambridge
57. 10 + 9 + 8 + 9
25 + 24 + 23 + 24
s + (s-1) + (s-2) + (s-1)
= 4s - 4
© University of Cambridge
58. 9 + 9 + 9+ 9
24 + 24 + 24 + 24
(s-1) + (s-1) + (s-1) + (s-1)
= 4s - 4
© University of Cambridge
59. 10 + 10 + 10 + 10 – 4
25 + 25 + 25 + 25 - 4
= 4s - 4
© University of Cambridge
s + s + (s-2) + (s-2)
= 4s - 4
s + (s-1) + (s-2) + (s-1)
= 4s - 4
(s-1) + (s-1) + (s-1) + (s-1)
= 4s - 4
s + s + s + s - 4
= 4s - 4
© University of Cambridge
61. 102 - 82
62 - 4 2
182 - 162
s2 - (s-2)2
s2 - (s-2)2 = s2 - (s2 - 4s +4)
= s2 - s2 +4s – 4
= 4s - 4
© University of Cambridge
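All of the counting strategies on the preceding slides simplify to 4s - 4; a quick numerical check (each entry below restates one slide's expression):

```python
# Each entry is one pupil strategy for counting the border squares of an
# s-by-s grid; all should agree with the closed form 4s - 4.
def border_counts(s):
    return [
        s + s + (s - 2) + (s - 2),        # two full sides, two trimmed sides
        s + (s - 1) + (s - 2) + (s - 1),  # walk round: s, s-1, s-2, s-1
        4 * (s - 1),                      # four non-overlapping runs of s-1
        s + s + s + s - 4,                # four full sides minus 4 corners
        s * s - (s - 2) * (s - 2),        # big square minus inner square
    ]

print(border_counts(10))  # [36, 36, 36, 36, 36]
```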
62. The expectation is that the majority of pupils will move
through the programmes of study at broadly the same
pace. However, decisions about when to progress should
always be based on the security of pupils’ understanding
and their readiness to progress to the next stage. Pupils
who grasp concepts rapidly should be challenged through
being offered rich and sophisticated problems before any
acceleration through new content. Those who are not
sufficiently fluent with earlier material should consolidate
their understanding, including through additional practice,
before moving on.
© University of Cambridge
63. The programmes of study for mathematics are set
out year-by-year for Key Stages 1 and 2. Schools
are, however, only required to teach the relevant
programme of study by the end of the key stage.
Within each key stage, schools therefore have the
flexibility to introduce content earlier or later than
set out in the programme of study.
© University of Cambridge
64. IWADWADWAGWAG
If we always do what we’ve always done
we’ll always get what we always got…..
© University of Cambridge
65. Session 3
National Collaborative Projects
a.Mastery pedagogy for primary mathematics 1 – China-
England research and innovation project
b.Mastery pedagogy for primary mathematics 2 – Use of
high quality textbooks (linked to Singapore) to support
teacher professional development and deep conceptual
and procedural knowledge for pupils
© University of Cambridge
66. 1. Increasing supply of specialist teachers of mathematics
(including primary, secondary convertors, Post-16) (SO1a)
2. Developing specialist subject knowledge of teachers of
mathematics (all phases and including particular areas)
3. Developing pedagogical knowledge of teachers of
mathematics (especially understanding of mastery
pedagogy and Shanghai & Singapore pedagogy) (SO1c)
4. Improving quality of mathematics teaching practice
(including the move from good to outstanding) (SO1d)
5. Supporting teachers to address new curriculum and
© University of Cambridge
67. 6. Improving quality of curriculum resources and activities (especially
to support mastery teaching) (SO3b)
8. Improving supply and developing specialist leadership knowledge
of mathematics subject leaders (SO2a/b)
9. Improving quality of and access to mathematics enrichment
experiences (SO3c)
10. Increased progress and achievement in primary and secondary
(including sustained progress through transition phases) (PO1a/b)
11. Reducing the gap in achievement between disadvantaged pupils
and others (PO4c)
14. Developing confidence (can-do attitude) and resilience in learning
mathematics (PO3a)
68. Key Findings
Successful schools
• Hands on crucial in FS and KS1
• ‘Traditional’ methods need to be underpinned by place
value, mental methods fluency, facts
• Inverse operations important
• Confidence, fluency and versatility nurtured through
problem solving and investigations
• Clear coherent calculation policy
69. Key findings
Made to Measure
• Inconsistency within schools
• Need to increase emphasis on problem solving
• Teachers to be enabled to choose approaches that foster
deeper understanding
• Checking understanding and reacting immediately
• Attention on most and least able
A Look Into the Logic of Computer Science
Diving into discrete mathematics
Contrary to popular belief, computer science is not all about raw code. In my first year of computer science, I was surprised to see that my university required many “non-CS courses,” including
calculus, linear algebra, and most interestingly, a class on logic and discrete mathematics. I had always known that computer science and mathematics were deeply intertwined, which is partly why I
decided to do a joint major in both. However, this class gave me the underlying foundation to think deeply about the logic behind certain computer science principles, and I hope to give a brief
introduction to some of the more fundamental topics in this article.
Truth Tables
Let’s begin with going over truth tables and propositions. In discrete mathematics, a proposition is simply a statement that has a truth value, and that truth value is either true or false. For
example, the statement “two plus two equals 4” is a proposition because it has a truth value. In this case, its truth value happens to be true. However, note that the statement “two plus two equals
five” is still a proposition because it retains the property of having a truth value; only in this case, the truth value is false. Common examples of statements that are not propositions include
questions. For example…
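To make this concrete, here is a small Python sketch (our illustration, not from the original article) that enumerates the truth table for the compound proposition p ∧ (p → q) over every assignment of truth values:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false
    return (not p) or q

# Enumerate every assignment of truth values to p and q
rows = []
for p, q in product([True, False], repeat=2):
    rows.append((p, q, p and implies(p, q)))

for p, q, value in rows:
    print(f"{p!s:5}  {q!s:5}  {value}")
```

Each row pairs an assignment with the resulting truth value, exactly as one would fill in a truth table by hand.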
Congruent Lines
Congruent Lines in Geometry
What are Congruent Lines?
Congruent lines are lines that are identical in length and angle. Congruent lines are one of the fundamental concepts in geometry. They are used to measure and compare the lengths, angles, and other
properties of geometric shapes. Geometric shapes are made up of congruent lines and angles, and these lines and angles must be the same in order for the shape to remain intact.
How to Identify Congruent Lines
Congruent lines can be identified by their length and angle. The length of congruent lines must be equal, and the angles of the lines must be equal as well. If two lines are the same length and
angle, then they are congruent.
Congruent lines can also be identified by their symmetry. Symmetry is when two lines or angles are mirror images of each other. If two lines or angles are symmetrical, then they are congruent.
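As a concrete illustration of the length test (a hypothetical sketch; the function names are our own, not from the original tutorial), congruence of two segments given by endpoint coordinates can be checked in a few lines of Python:

```python
import math

def segment_length(p, q):
    # Euclidean distance between endpoints p and q
    return math.dist(p, q)

def are_congruent(seg1, seg2, tol=1e-9):
    # Two segments are congruent exactly when their lengths are equal
    return math.isclose(segment_length(*seg1), segment_length(*seg2), abs_tol=tol)

# AB from (0, 0) to (3, 4) and CD from (1, 1) to (4, 5) both have length 5
print(are_congruent(((0, 0), (3, 4)), ((1, 1), (4, 5))))
```

A small tolerance is used because floating-point lengths are rarely exactly equal.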
Examples of Congruent Lines
Congruent lines can be found in a variety of geometric shapes. For example, a square has four congruent sides and four congruent angles, and an equilateral triangle has three congruent sides and three congruent angles.
Congruent segments also appear alongside parallel lines. Parallel lines are lines that are always the same distance apart and never intersect. Parallel lines are not automatically congruent, but the opposite sides of a parallelogram, for example, are both parallel and congruent.
Geometric Proofs
Geometric proofs involve using congruent lines to prove the properties of geometric shapes. For example, a geometric proof can be used to prove that two triangles are congruent. In this proof,
congruent lines and angles are used to show that the two triangles have the same size and shape.
Geometric proofs can also be used to prove the properties of circles. For example, a geometric proof can be used to prove that two circles are congruent. In this proof, congruent lines are used to
show that the two circles have the same size and shape.
Practice Problems
1. Identify the congruent lines in the following diagram:
A. Lines AB and CD
B. Lines AD and BC
C. Lines AB and AD
D. Lines BC and CD
Answer: A. Lines AB and CD
2. Identify the congruent lines in the following diagram:
A. Lines AB and CD
B. Lines AD and BC
C. Lines AB and AD
D. Lines BC and CD
Answer: B. Lines AD and BC
3. Determine if the following lines are parallel:
A. Lines AB and CD
B. Lines AD and BC
C. Lines AB and AD
D. Lines BC and CD
Answer: A. Lines AB and CD are parallel.
4. Determine if the following lines are congruent:
A. Lines AB and CD
B. Lines AD and BC
C. Lines AB and AD
D. Lines BC and CD
Answer: B. Lines AD and BC are congruent.
5. Determine if the following lines are symmetrical:
A. Lines AB and CD
B. Lines AD and BC
C. Lines AB and AD
D. Lines BC and CD
Answer: C. Lines AB and AD are symmetrical.
6. Determine if the following angles are congruent:
A. Angles AB and CD
B. Angles AD and BC
C. Angles AB and AD
D. Angles BC and CD
Answer: A. Angles AB and CD are congruent.
In this article, we discussed congruent lines in geometry. Congruent lines are lines that are identical in length and angle. We discussed how to identify congruent lines, examples of congruent lines,
and how to use congruent lines in geometric proofs. We also provided practice problems to help you better understand congruent lines.
What are Congruent Lines?
Congruent lines are two straight line segments that are identical in length and shape.
What is the symbol for congruent lines?
The symbol for congruence is "≅", an equals sign with a tilde above it.
What is the difference between congruent lines and parallel lines?
Congruent lines are identical in length and shape, whereas parallel lines are lines that never meet and have the same slope; parallel lines are not necessarily congruent.
Linear Regression Calculator
This calculator performs linear regression by calculating the slope, intercept, correlation coefficient (r), and R-squared (R²) values using your predictor and response data.
To use the calculator, provide a list of values for the predictor and the response, ensuring they are the same length, and then click the “Do Linear Regression” button.
Linear Regression Explanation
Linear Regression is a statistical method used to model the relationship between a dependent variable (response) and one or more independent variables (predictors). It is one of the simplest forms of
regression analysis, commonly used to find the linear relationship between two variables.
Key Concepts
• Predictor Variable (X): The independent variable used to predict the outcome.
• Response Variable (Y): The dependent variable being predicted or explained.
• Regression Line: The straight line that best fits the data points on the plot.
• Slope (m): Indicates how much the response variable changes for a one-unit change in the predictor variable.
• Y-Intercept (b): The value of Y when the predictor variable (X) is zero.
Linear Regression Formula
The relationship between the predictor and response variables in linear regression is expressed using the equation of a line:
\[ Y = mX + b \]
where \( Y \) is the predicted value of the response, \( X \) is the predictor, \( m \) is the slope of the line, and \( b \) is the y-intercept.
Steps to Perform Linear Regression
1. Collect data for the predictor and response variables.
2. Use the least squares method to estimate the slope and intercept of the regression line.
3. Calculate the predicted values of the response variable for each value of the predictor.
4. Evaluate the model's goodness of fit using metrics like the correlation coefficient (r) and R-squared (R²).
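The steps above can be sketched directly in NumPy (a minimal illustration; the data values are made up for the example and are not part of the calculator):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # predictor values
y = np.array([2.1, 4.2, 5.9, 8.1, 9.9])   # response values

# Least-squares estimates of slope m and intercept b
m, b = np.polyfit(x, y, 1)

# Goodness of fit: correlation coefficient r and R-squared
r = np.corrcoef(x, y)[0, 1]
r_squared = r ** 2

y_hat = m * x + b  # predicted responses for each predictor value
print(f"y = {b:.3f} + {m:.3f}x, r = {r:.4f}, R^2 = {r_squared:.4f}")
```

For this toy data the fitted line is roughly y = 0.19 + 1.95x with R² close to 1, indicating a very strong linear fit.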
Goodness of Fit
Linear regression models are evaluated based on how well they fit the data:
• Correlation Coefficient (r): Measures the strength of the relationship between the predictor and response variables. A value close to 1 or -1 indicates a strong linear relationship.
• R-Squared (R²): Represents the proportion of the variance in the response variable that can be explained by the predictor variable(s). Higher values indicate a better fit.
Sobolev-Lorentz capacity and its regularity in the Euclidean setting
Published Paper
Inserted: 28 jul 2017
Last Updated: 4 dec 2020
Journal: Annales Academiae Scientiarum Fennicae Mathematica
Volume: 44
Pages: 537-568
Year: 2019
Doi: 10.5186/aasfm.2019.4433
v1, 42 pages; v2, 34 pages: introduction on pages 1-3 expanded and clarified, sections 3,4 and 5 shortened, result in subsection 4.3 improved (see Theorem 4.3), proof of Proposition 7.3 expanded and
clarified; v3, 28 pages: introduction expanded, sections 2-5 shortened, statement and proof of Theorem 7.1 (i) improved, proof of Proposition 7.3 clarified
Links: link to the version on arxiv.org
This paper studies the Sobolev-Lorentz capacity and its regularity in the Euclidean setting for $n \ge 1$ integer. We extend here our previous results on the Sobolev-Lorentz capacity obtained for $n
\ge 2.$ Moreover, for $n \ge 2$ integer we obtain the exact value of the $n,1$ capacity of a point relative to all its bounded open neighborhoods from ${\mathbf{R}}^n,$ improving another previous
result of ours. We show that this constant is also the value of the $n,1$ global capacity of any point from ${\mathbf{R}}^n,$ $n \ge 2.$ We also prove the embedding $H_{0}^{1,(n,1)}(\Omega) \
hookrightarrow C(\bar{\Omega}) \cap L^{\infty}(\Omega),$ where $\Omega \subset {\mathbf{R}}^n$ is open and $n \ge 2$ is an integer. In the last section of the paper we show that the relative and the
global $(p,1)$ and $p,1$ capacities are Choquet whenever $1 \le n<p<\infty$ or $1<n=p<\infty.$
Keywords: Sobolev spaces, Lorentz spaces, capacity
Calculate the Velocity of the Student
Can we calculate the velocity of the student based on the given information?
Options: A) The velocity cannot be calculated with the given information. B) The velocity is 0 m/s. C) The velocity is constant. D) The velocity depends on the specific equation used.
Due to the lack of specific data such as displacement or time duration, it's not possible to calculate the velocity directly. It can be inferred, though, that if the conditions were constant and
there was a displacement, the velocity would also likely be constant and non-zero.
When we look at the information provided, it becomes clear that without knowing the displacement or the time duration of the student's movement, we cannot accurately calculate the velocity. However,
we can make an educated guess based on the assumption that the student's first data points assumed constant conditions.
Velocity is defined as the rate of change of displacement, so if the conditions were constant and the student moved a certain distance, we can assume that the velocity would also be constant. It's
important to note that velocity is a vector quantity and includes both speed and direction.
Therefore, while we cannot pinpoint the exact velocity of the student without additional data, we can infer that it would be constant and non-zero if steady movement were involved. The specific equation used could affect the calculated value, but in general the average velocity would not be exactly zero unless the student ended up at the same spot they started.
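For reference, here is a minimal Python sketch of the underlying definition; the displacement and time values below are hypothetical, since the original problem supplies neither:

```python
def average_velocity(displacement_m, time_s):
    # Average velocity = displacement / elapsed time; in 1D the sign carries direction
    if time_s <= 0:
        raise ValueError("elapsed time must be positive")
    return displacement_m / time_s

# Hypothetical data: a student displaced 12 m in 8 s
print(average_velocity(12.0, 8.0))   # 1.5 m/s

# Ending at the starting point gives zero displacement, hence zero average velocity
print(average_velocity(0.0, 8.0))    # 0.0 m/s
```

This makes the answer's point explicit: without a displacement and a time interval, neither call can be made.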
Bandwidth-selective excitation in NMR is commonly accomplished through the use of shaped pulses. These pulses require careful calibration to ensure power levels and pulse lengths are correctly
determined for optimal excitation of only the desired frequency range. Simulating the shaped pulse over a range of frequencies is one way of accomplishing this task. Many spectrometer control
programs contain software capable of this (Bruker's TopSpin has ShapeTool and Agilent's VNMRJ has Pbox), but these simulations can also be performed quite easily in an IPython notebook.
This IPython notebook demonstrates how to simulate one of my favorite NMR shaped pulses, the Reburp pulse. This is a refocusing pulse, meaning it rotates magnetization 180° regardless of initial
state of the magnetization. The Reburp pulse has excellent bandwidth selectivity because its excitation profile closely approximates that of a top hat function, which will be evident when the
calculated excitation profile is plotted.
This simulation requires installation of a fortran compiler, the f2py package for converting fortran code into Python modules, and the f2pymagic extension for compiling the fortran code within this
IPython notebook. I installed gfortran and f2py via MacPorts. The f2pymagic extension was installed in the IPython profile used for notebooks by placing the file in the ~/.ipython/profile_notebook/
extensions directory, where profile_notebook should be changed to the name of the notebook profile. The extensions directory may need to be created. Additional required packages include NumPy,
Matplotlib, and, optionally, SymPy.
In [1]:
import numpy as np
import matplotlib.pyplot as plt
import sympy as spy
%matplotlib inline
The following command loads the f2pymagic extension:
In [2]:
%load_ext f2pymagic
Spectrometer Implementation and Simulation of Shaped Pulses¶
Due to the nature of spectrometer hardware, a shaped pulse is implemented as a large number ($N$ or n_pulse as used here) of rectangular pulses of length $\Delta t$, where the total pulse length, $\tau_{p}$, is equal to $N \Delta t$. The net rotation of the entire pulse is the product of the following five rotations about the y- and z-axes performed for each successive period $\Delta t$ at each
relative frequency:
$R(\tau_{p}) = \prod^{N}_{i=1} R_{z}(\phi_{i}) \space R_{y}(\theta_{i}) \space R_{z}(\alpha_{i}) \space R_{y}(-\theta_{i}) \space R_{z}(-\phi_{i})$
where $\phi_{i}$, $\theta_{i}$, and $\alpha_{i}$ are the respective phase, tilt angle and effective rotation for each time $\Delta t$ in the pulse. This calculation and its derivation are described
in equation 3.107 and surrounding text of Protein NMR Spectroscopy: Principles and Practice, Second Edition. The rotation matrices are:
$$\begin{eqnarray*} R_{x}(\beta) = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos(\beta) & -\sin(\beta) \\ 0 & \sin(\beta) & \cos(\beta) \end{array} \right]\\ R_{y}(\beta) = \left[ \begin{array}{ccc} \cos(\beta) & 0 & \sin(\beta) \\ 0 & 1 & 0 \\ -\sin(\beta) & 0 & \cos(\beta) \end{array} \right]\\ R_{z}(\beta) = \left[ \begin{array}{ccc} \cos(\beta) & -\sin(\beta) & 0 \\ \sin(\beta) & \cos(\beta) & 0 \\ 0 & 0 & 1 \end{array} \right] \end{eqnarray*}$$
where $\beta$ is an arbitrary angle, as in equation 1.35 of the same text. ($R_{x}$ is shown only for completeness and is not used in the above calculation.)
Note that, while the simulation in this notebook does consider the effects of resonance offset, it does not consider evolution during the pulse itself. Furthermore, this method will not work as
implemented to simulate adiabatic pulses.
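As a quick sanity check (ours, not part of the original notebook), the composite rotation $R_{z}(\phi) R_{y}(\theta) R_{z}(\alpha) R_{y}(-\theta) R_{z}(-\phi)$ for arbitrary angles should be a proper rotation, i.e. an orthogonal matrix with unit determinant:

```python
import numpy as np

def Ry(b):
    # Rotation by angle b about the y-axis
    return np.array([[np.cos(b), 0, np.sin(b)],
                     [0, 1, 0],
                     [-np.sin(b), 0, np.cos(b)]])

def Rz(b):
    # Rotation by angle b about the z-axis
    return np.array([[np.cos(b), -np.sin(b), 0],
                     [np.sin(b), np.cos(b), 0],
                     [0, 0, 1]])

alpha, theta, phi = 0.7, 0.3, 1.1  # arbitrary test angles
R = Rz(phi) @ Ry(theta) @ Rz(alpha) @ Ry(-theta) @ Rz(-phi)

# A proper rotation satisfies R^T R = I and det(R) = 1
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```

Any product of rotations must pass this test, so it is a useful guard when assembling the propagator below.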
Setup Pulse and Frequency Range Properties¶
Enter the length of the pulse (in $\mu$s), the minimum and maximum offset of the frequency range (both in hertz), the number of steps in the frequency range, and the input magnetization ('Mx', 'My', or 'Mz').
In [6]:
pulseLength = 1000. # in microseconds
offset = [-5000., 5000.] # in hertz
n_freq = 500
inputMagnetization = 'Mz' # 'Mx', 'My', or 'Mz'
Calculate the frequency range and spacing for each of these points.
In [7]:
deltaomega = np.abs(offset[1]-offset[0])/n_freq
relativeomega = np.arange(np.min(offset), np.max(offset), deltaomega)
Input Shaped Pulse Amplitude and Intensity¶
Two options for determining the amplitude and intensity of a shaped pulse are presented. The first (used in this demonstration) calculates the necessary values for a Reburp pulse. The second option
will read a Reburp pulse file (or any other shaped pulse file) from a Bruker instrument if the correct path to the file is given.
Alternative 1: Calculate the Shaped Pulse From a Fourier Series¶
The Reburp pulse is relatively easy to calculate--the pulse is the cosine portion of a Fourier series and the coefficients for this series are published in the original manuscript describing the
"burp" family of pulses.
In [8]:
n_pulse = 1000 # number of points in the pulse, set by user
totalRotation = 180. # in degrees, set by user
fourierCoeffA = np.array([0.49, -1.02, 1.11, -1.57, 0.83,
-0.42, 0.26, -0.16, 0.10, -0.07,
0.04, -0.03, 0.01, -0.02, 0.0, -0.01])
x = np.linspace(1,n_pulse,n_pulse)/n_pulse*2.*np.pi
nCosCoef = np.arange(1, len(fourierCoeffA))
cosMat = np.cos(nCosCoef[np.newaxis,:]*x[:,np.newaxis])
cosMat = np.append(np.ones((n_pulse, 1)), cosMat,axis=1)*fourierCoeffA
sumMat = np.sum(cosMat, axis=1)
This makes the data look just like the values from Bruker's shaped pulse file. The first column in pulseShapeArray is the magnitude of the intensity. The second column is phase, which in this case is
0 when intensity is positive, and 180 when it's negative (recall that $\cos(\pi) = -1$).
In [9]:
pulseShapeArray = np.zeros((n_pulse, 2))
pulseShapeArray[:,0] = np.abs(sumMat)
pulseShapeArray[sumMat<0,1] = 180.
Alternative 2: Read a Bruker Shaped Pulse File¶
Reading a Bruker shaped pulse file requires entering the path to the file. For most versions of TopSpin, the file resides in a path similar to the one below. This file can be copied onto a local
computer for reading by this notebook.
In [7]:
pulseFile = '/opt/topspin/exp/stan/nmr/lists/wave/Reburp.1000' # path to shaped pulse file, set by user
Read in the data from the pulse file and extract necessary information.
In [9]:
import re
from StringIO import StringIO
pulseString = open(pulseFile, 'r').read()
pulseShapeArray = np.loadtxt(StringIO(pulseString), comments='#', delimiter=', ')
n_pulse = pulseShapeArray.shape[0]
totalRotation = np.float(re.search("""SHAPE_TOTROT=\s?(.*)""", pulseString).group(1))
# Option to read the scaling factor from the file instead of calculating it below
# scalingFactor = np.float(re.search("""SHAPE_INTEGFAC=\s?(.*)""",pulseString).group(1))
Visualizing the Shaped Pulse¶
Normalize the magnitude of the pulse to 1 and convert the phase to radians.
In [10]:
pulseShapeInten = pulseShapeArray[:,0] / np.max(np.abs(pulseShapeArray[:,0]))
pulseShapePhase = pulseShapeArray[:,1] * np.pi/180
The magnitude and phase of the pulse can be seen by plotting.
In [11]:
f = plt.figure()
f.set_facecolor((1.0, 1.0, 1.0, 1.0))
ax1 = plt.axes()
ax2 = ax1.twinx()
l1, = ax1.plot(np.linspace(0, 1, n_pulse), pulseShapeInten, 'b-', label='Magnitude')
l2, = ax2.plot(np.linspace(0, 1, n_pulse), pulseShapePhase, 'r-', label='Phase')
ax1.set_ylim((1.1*np.min(pulseShapeInten), 1.1*np.max(pulseShapeInten)))
ax1.set_xlabel('Pulse Length')
ax2.legend([l1, l2], [l1.get_label(), l2.get_label()])
<matplotlib.legend.Legend at 0x10da59fd0>
The x- and y-components of the pulse are the products of the intensity and the complex phase.
In [12]:
xPulseShape = pulseShapeInten * np.cos(pulseShapePhase)
yPulseShape = pulseShapeInten * np.sin(pulseShapePhase)
These can also be plotted. Note that the amplitude is negated everywhere the phase was 180° and the value of the phase is now zero everywhere.
In [13]:
f = plt.figure()
f.set_facecolor((1.0, 1.0, 1.0, 1.0))
ax3 = plt.axes()
ax4 = ax3.twinx()
l1, = ax3.plot(np.linspace(0, 1, n_pulse), xPulseShape, 'b-', label='Amplitude')
l2, = ax4.plot(np.linspace(0, 1, n_pulse), yPulseShape, 'r-', label='Phase')
ax4.set_ylim((-np.pi, np.pi))
ax3.set_xlabel('Pulse Length')
ax4.legend([l1, l2],[l1.get_label(), l2.get_label()])
<matplotlib.legend.Legend at 0x10d6ebf50>
The shaped pulse has a bandwidth just like a rectangular (hard) pulse, except that this bandwidth is scaled by the intensity.
In [14]:
scalingFactor = np.sum(xPulseShape)/n_pulse
gammaB1max = 1./(pulseLength * 360./totalRotation)/scalingFactor * 1e6
nu1maxdt = 2*np.pi*1e-6*gammaB1max*pulseLength/n_pulse
Set up the starting magnetization on the x-, y-, or z-axis. The default is z magnetization if nothing is set or an invalid choice is made.
In [15]:
inputVector = np.array([[0],[0],[1]])
inputMagnetizationDict = {'mx':np.array([[1],[0],[0]]), 'my':np.array([[0],[1],[0]]), 'mz':np.array([[0],[0],[1]]) }
if inputMagnetization.lower() in inputMagnetizationDict.keys():
inputVector = inputMagnetizationDict[inputMagnetization.lower()]
vectorComponent = inputVector.argmax()
Calculating the Propagator: NumPy vs Fortran Functions Using F2PY Magic¶
This exercise was started as demonstration for incorporating f2pymagic in an IPython notebook to utilize the great increase in speed it can provide over Python. However, spending additional time
optimizing the NumPy function resulted in a considerable speed increase on its own. Since the web-centric nature of IPython notebooks allows technical information to be visualized and shared so
easily, the original (slow) function has been included for the sake of demonstrating what I've learned.
Alternative 1: A Slow NumPy Function¶
This was a first attempt at writing a function to calculate the effect of a shaped pulse on a frequency range. This function is written exactly how I think about the problem mathematically: as a
series of nested loops and rotations.
In [16]:
def pulseprop_py_slow(relativeomega, pulseShapeInten, pulseShapePhase,
gammaB1max, nu1maxdt, inputVector, n_pulse, n_freq):
# Functions for the y and z-rotations and the the function for a generic rotation
yrotation = lambda beta: np.array([[np.cos(beta), 0, np.sin(beta)],
[0, 1, 0],
[-np.sin(beta), 0, np.cos(beta)]])
zrotation = lambda beta: np.array([[np.cos(beta), -np.sin(beta), 0],
[np.sin(beta), np.cos(beta), 0],
[0, 0, 1]])
grotation = lambda alpha, theta, phi: np.dot(zrotation(phi), np.dot(yrotation(theta), np.dot(zrotation(alpha), np.dot(yrotation(-theta), zrotation(-phi)))))
xyzdata = np.zeros((3,len(relativeomega)))
phi = pulseShapePhase
# Loop through the entire frequency range calculating the rotation matrix (r) at each frequency
for ind in range(len(relativeomega)):
theta = np.arctan2(pulseShapeInten, relativeomega[ind]/gammaB1max)
alpha = nu1maxdt * np.sqrt(pulseShapeInten**2+(relativeomega[ind]/gammaB1max)**2)
prop = np.eye(3)
# The rotation matrix is a recursive loop through each step of the shaped pulse
for pulseindex in range(n_pulse):
r = grotation(alpha[pulseindex],theta[pulseindex],phi[pulseindex])
prop = np.dot(r,prop)
xyzdata[:,ind] = np.squeeze(np.dot(prop,inputVector))
return xyzdata
In [17]:
%timeit pulseprop_py_slow(relativeomega, pulseShapeInten,
pulseShapePhase, gammaB1max,
nu1maxdt, inputVector, n_pulse, n_freq)
1 loops, best of 3: 1min 26s per loop
As the name implies, this function is pretty slow. One could use python's profiler to determine where the bottlenecks are, but it's also easy to guess that the culprits are the pair of nested for
loops and, on the inside of these loops, a call to grotation, the generalized rotation function, that itself calls two functions, yrotation and zrotation, two times each. To speed up the calculation,
we should eliminate as much of the looping and function calls as possible.
Alternative 2: The Improved NumPy Function¶
The looping can be reduced by improving the vectorization of the function so that it operates on matrices instead of looping through each point of the pulse and frequency range. The calls to the
rotation functions can be eliminated by calculating each of the nine elements of the matrix resulting from grotation. This can even be done symbolically using SymPy.
In [18]:
a,b,t,p = spy.symbols('a,b,t,p')
zr = lambda b: spy.Matrix([[spy.cos(b), -spy.sin(b), 0], [spy.sin(b), spy.cos(b), 0], [0, 0, 1]])
xr = lambda b: spy.Matrix([[1, 0, 0], [0, spy.cos(b), -spy.sin(b)], [0, spy.sin(b), spy.cos(b)]])
yr = lambda b: spy.Matrix([[spy.cos(b), 0, spy.sin(b)], [0, 1, 0], [-spy.sin(b), 0, spy.cos(b)]])
gr = lambda a, t, p : zr(p)*yr(t)*zr(a)*yr(-t)*zr(-p)
This function can be evaluated numerically, if desired.
[6.12323399573676e-17, -1.0, 0]
[ 1.0, 6.12323399573676e-17, 0]
[ 0, 0, 1]
Unfortunately, it can't be evaluated numerically with NumPy's ndarrays because it hasn't been vectorized.
In [20]:
an = np.random.random((4,5))
tn = np.random.random((4,5))
pn = np.random.random((4,5))
print("Sad trombone. The SymPy function doesn't work with NumPy arrays.")
Sad trombone. The SymPy function doesn't work with NumPy arrays.
None of my attempts at automatically vectorizing this function were successful.^1 Since the matrix is only 3x3, it is not difficult to calculate each of the nine matrix elements. Their symbolic
values can be determined by printing each element of the SymPy function. Note the use of SymPy's simplify function for algebraic simplification.
In [21]:
r = gr(a,t,p)
for row in range(3):
for col in range(3):
print('r[%d,%d] = %s' % (row,col,spy.simplify(r[row,col])))
r[0,0] = sin(p)**2*cos(a) + sin(t)**2*cos(p)**2 + cos(a)*cos(p)**2*cos(t)**2
r[0,1] = -sin(a)*cos(t) - sin(p)*sin(t)**2*cos(a)*cos(p) + sin(p)*sin(t)**2*cos(p)
r[0,2] = (sin(a)*sin(p) - cos(a)*cos(p)*cos(t) + cos(p)*cos(t))*sin(t)
r[1,0] = sin(a)*cos(t) - sin(p)*sin(t)**2*cos(a)*cos(p) + sin(p)*sin(t)**2*cos(p)
r[1,1] = -sin(p)**2*sin(t)**2*cos(a) + sin(p)**2*sin(t)**2 + cos(a)
r[1,2] = (-sin(a)*cos(p) - sin(p)*cos(a)*cos(t) + sin(p)*cos(t))*sin(t)
r[2,0] = -((cos(a) - 1)*cos(p)*cos(t) + sin(a)*sin(p))*sin(t)
r[2,1] = (-(cos(a) - 1)*sin(p)*cos(t) + sin(a)*cos(p))*sin(t)
r[2,2] = sin(t)**2*cos(a) + cos(t)**2
Using the elements of the rotation matrix and a few other vectorization tricks, I improved the function. In my opinion, the individual calculation of matrix elements makes this function more
difficult to read, but readability wasn't the primary goal of this exercise.
In [22]:
def pulseprop_py_fast(relativeomega, pulseShapeInten, pulseShapePhase,
gammaB1max, nu1maxdt, vectorComponent, n_pulse, n_freq):
# Use broadcasting to create 2D arrays (n_freq x n_rows) for each of the angles
phi = np.tile(pulseShapePhase[np.newaxis,:],(n_freq,1))
theta = np.arctan2(pulseShapeInten[np.newaxis,:], relativeomega[:,np.newaxis]/gammaB1max)
alpha = nu1maxdt * np.sqrt(pulseShapeInten[np.newaxis,:]**2 + (relativeomega[:,np.newaxis]/gammaB1max)**2)
# Then calculate their cosine/sine functions
cosp = np.cos(phi)
sinp = np.sin(phi)
cosa = np.cos(alpha)
sina = np.sin(alpha)
cost = np.cos(theta)
sint = np.sin(theta)
# Calculate each element of the rotation matrix
r = np.empty((3,3,n_freq,n_pulse))
r[0,0] = sinp**2*cosa + sint**2*cosp**2 + cosa*cosp**2*cost**2
r[0,1] = -sina*cost - sinp*sint**2*cosa*cosp + sinp*sint**2*cosp
r[0,2] = (sina*sinp - cosa*cosp*cost + cosp*cost)*sint
r[1,0] = sina*cost - sinp*sint**2*cosa*cosp + sinp*sint**2*cosp
r[1,1] = -sinp**2*sint**2*cosa + sinp**2*sint**2 + cosa
r[1,2] = (-sina*cosp - sinp*cosa*cost + sinp*cost)*sint
r[2,0] = -((cosa - 1)*cosp*cost + sina*sinp)*sint
r[2,1] = (-(cosa - 1)*sinp*cost + sina*cosp)*sint
r[2,2] = sint**2*cosa + cost**2
# Calculate the propagator for the pulse--this is a recursive multiplication so I don't
# know of a way to vectorize this loop. If someone knows, please share in the comments
prop = np.tile(np.eye(3)[:,:,np.newaxis],(1,1,n_freq))
for pulseindex in range(n_pulse):
r_s = np.squeeze(r[:,:,:,pulseindex])
# Must create immutable copies of the views for multiplication
prop00 = prop[0,0].copy()
prop01 = prop[0,1].copy()
prop02 = prop[0,2].copy()
prop10 = prop[1,0].copy()
prop11 = prop[1,1].copy()
prop12 = prop[1,2].copy()
prop20 = prop[2,0].copy()
prop21 = prop[2,1].copy()
prop22 = prop[2,2].copy()
r00 = r_s[0,0].copy()
r01 = r_s[0,1].copy()
r02 = r_s[0,2].copy()
r10 = r_s[1,0].copy()
r11 = r_s[1,1].copy()
r12 = r_s[1,2].copy()
r20 = r_s[2,0].copy()
r21 = r_s[2,1].copy()
r22 = r_s[2,2].copy()
prop[0,0] = r00*prop00 + r01*prop10 + r02*prop20
prop[0,1] = r00*prop01 + r01*prop11 + r02*prop21
prop[0,2] = r00*prop02 + r01*prop12 + r02*prop22
prop[1,0] = r10*prop00 + r11*prop10 + r12*prop20
prop[1,1] = r10*prop01 + r11*prop11 + r12*prop21
prop[1,2] = r10*prop02 + r11*prop12 + r12*prop22
prop[2,0] = r20*prop00 + r21*prop10 + r22*prop20
prop[2,1] = r20*prop01 + r21*prop11 + r22*prop21
prop[2,2] = r20*prop02 + r21*prop12 + r22*prop22
# This is a looped alternative to the above, but it is slower
# for i in range(3):
# for j in range(3):
# for k in range(3):
# prop[i,j] += r_tmp[i,k]*prop[k,j]
# Since the starting magnetization is x, y, or z, just pick the right index
# rather than multiplying
xyzdata = np.squeeze(prop[:,vectorComponent])
return xyzdata
As expected, this subroutine runs much more quickly. (And I'd love to hear from anyone who can improve the speed of this function even more.)
In [23]:
%timeit pulseprop_py_fast(relativeomega, pulseShapeInten,
pulseShapePhase, gammaB1max, nu1maxdt,
vectorComponent, n_pulse, n_freq)
1 loops, best of 3: 428 ms per loop
In [31]:
print("Relative speed up of %.0f times for the vectorized NumPy function." % (86./0.428))
Relative speed up of 201 times for the vectorized NumPy function.
Alternative 3: The Fortran Function Using F2PY Magic¶
Before trying to optimize the NumPy version, I wrote a fortran version to calculate the propagator for the pulse. I was able to further improve the speed of the fortran function using some of the
techniques learned from the NumPy function optimization. This improved version is presented below. My fortran skills haven't been dusted off since my freshman year of college, so I'm certain there
are ways to further improve this function.
In [26]:
! -*- f90 -*-
subroutine propcalc(alpha, theta, phi, n_freq, n_pulse, prop)
integer, intent(in) :: n_freq, n_pulse
real, dimension(n_freq, n_pulse), intent(in) :: alpha(n_freq, n_pulse), theta(n_freq, n_pulse), phi(n_freq, n_pulse)
real, dimension(n_freq, n_pulse) :: cosp(n_freq, n_pulse), sinp(n_freq, n_pulse), cosa(n_freq, n_pulse)
real, dimension(n_freq, n_pulse) :: sina(n_freq, n_pulse), cost(n_freq, n_pulse), sint(n_freq, n_pulse)
real, dimension(3, 3, n_freq, n_pulse) :: r(3, 3, n_freq, n_pulse)
real, dimension(n_freq) :: r11(n_freq), r12(n_freq), r13(n_freq)
real, dimension(n_freq) :: r21(n_freq), r22(n_freq), r23(n_freq)
real, dimension(n_freq) :: r31(n_freq), r32(n_freq), r33(n_freq)
real, dimension(n_freq) :: p11(n_freq), p12(n_freq), p13(n_freq)
real, dimension(n_freq) :: p21(n_freq), p22(n_freq), p23(n_freq)
real, dimension(n_freq) :: p31(n_freq), p32(n_freq), p33(n_freq)
real, dimension(3, 3, n_freq), intent(out) :: prop(3, 3, n_freq)
! The following line is required so the function can return a variable to python:
! f2py real, dimension(3, 3, n_freq), intent(out) :: prop(3, 3, n_freq)
cosp = cos(phi)
sinp = sin(phi)
cosa = cos(alpha)
sina = sin(alpha)
cost = cos(theta)
sint = sin(theta)
r(1,1,:,:) = sinp**2*cosa + sint**2*cosp**2 + cosa*cosp**2*cost**2
r(1,2,:,:) = -sina*cost - sinp*sint**2*cosa*cosp + sinp*sint**2*cosp
r(1,3,:,:) = (sina*sinp - cosa*cosp*cost + cosp*cost)*sint
r(2,1,:,:) = sina*cost - sinp*sint**2*cosa*cosp + sinp*sint**2*cosp
r(2,2,:,:) = -sinp**2*sint**2*cosa + sinp**2*sint**2 + cosa
r(2,3,:,:) = (-sina*cosp - sinp*cosa*cost + sinp*cost)*sint
r(3,1,:,:) = -((cosa - 1)*cosp*cost + sina*sinp)*sint
r(3,2,:,:) = (-(cosa - 1)*sinp*cost + sina*cosp)*sint
r(3,3,:,:) = sint**2*cosa + cost**2
prop = spread(reshape([1,0,0,0,1,0,0,0,1],[3,3]),3,n_freq)
do i=1,n_pulse
! Is there a way to do this without reshaping?
r11 = reshape(r(1,1,:,i), [n_freq])
r12 = reshape(r(1,2,:,i), [n_freq])
r13 = reshape(r(1,3,:,i), [n_freq])
r21 = reshape(r(2,1,:,i), [n_freq])
r22 = reshape(r(2,2,:,i), [n_freq])
r23 = reshape(r(2,3,:,i), [n_freq])
r31 = reshape(r(3,1,:,i), [n_freq])
r32 = reshape(r(3,2,:,i), [n_freq])
r33 = reshape(r(3,3,:,i), [n_freq])
p11 = reshape(prop(1,1,:), [n_freq])
p12 = reshape(prop(1,2,:), [n_freq])
p13 = reshape(prop(1,3,:), [n_freq])
p21 = reshape(prop(2,1,:), [n_freq])
p22 = reshape(prop(2,2,:), [n_freq])
p23 = reshape(prop(2,3,:), [n_freq])
p31 = reshape(prop(3,1,:), [n_freq])
p32 = reshape(prop(3,2,:), [n_freq])
p33 = reshape(prop(3,3,:), [n_freq])
prop(1,1,:) = r11*p11 + r12*p21 + r13*p31
prop(1,2,:) = r11*p12 + r12*p22 + r13*p32
prop(1,3,:) = r11*p13 + r12*p23 + r13*p33
prop(2,1,:) = r21*p11 + r22*p21 + r23*p31
prop(2,2,:) = r21*p12 + r22*p22 + r23*p32
prop(2,3,:) = r21*p13 + r22*p23 + r23*p33
prop(3,1,:) = r31*p11 + r32*p21 + r33*p31
prop(3,2,:) = r31*p12 + r32*p22 + r33*p32
prop(3,3,:) = r31*p13 + r32*p23 + r33*p33
end do
end subroutine propcalc
propcalc is ready for use
In [27]:
def pulseprop_fort(relativeomega, pulseShapeInten, pulseShapePhase,
gammaB1max, nu1maxdt, vectorComponent, n_pulse, n_freq):
# Setup the angles using broadcasting as before
phi = np.tile(pulseShapePhase[np.newaxis,:],(n_freq,1))
theta = np.arctan2(pulseShapeInten[np.newaxis,:], relativeomega[:,np.newaxis]/gammaB1max)
alpha = nu1maxdt * np.sqrt(pulseShapeInten[np.newaxis,:]**2 + (relativeomega[:,np.newaxis]/gammaB1max)**2)
# Calculate the propagator and select the appropriate magnetization
prop = propcalc(alpha, theta, phi, n_freq, n_pulse)
xyzdata = np.squeeze(prop[:,vectorComponent])
return xyzdata
In [28]:
%timeit pulseprop_fort(relativeomega, pulseShapeInten,
pulseShapePhase, gammaB1max, nu1maxdt, vectorComponent, n_pulse, n_freq)
10 loops, best of 3: 140 ms per loop
In [32]:
print "Relative speed up of %.0f times for fortran over the slow numpy function." % (86./0.140)
print "Relative speed up of %.0f times for fortran over the vectorized numpy function." % (0.428/0.140)
Relative speed up of 614 times for fortran over the slow numpy function.
Relative speed up of 3 times for fortran over the vectorized numpy function.
As expected, this function is much faster than the original NumPy version. In this case, the speed increase over the improved NumPy version is rather modest.² The relative speed of the fortran
function will likely grow as the number of points increases, but this is also a lesson about spending time optimizing a particular technique before moving on to something more powerful. (If major
improvements can be made to either of the subroutines, then there is also a lesson about being familiar with the nuances of a language.)
Alternative 4: A Propagator Function Written in C
Joshua Adelman has kindly provided a Cython version of the propagator function.
The cythonmagic extension is already loaded. To reload it, use:
%reload_ext cythonmagic
In [34]:
%%cython -c=-O3
#cython: boundscheck=False
#cython: wraparound=False
#cython: cdivision=True
import numpy as np
cimport numpy as np
import cython
from libc.math cimport sqrt, sin, cos, atan2
ctypedef np.float64_t real_t
real_dtype = np.float64
cdef void grotation(double alpha, double theta, double phi, real_t[:,::1] r) nogil:
cdef double cosp, sinp, cosa, sina, cost, sint
cosp = cos(phi)
sinp = sin(phi)
cosa = cos(alpha)
sina = sin(alpha)
cost = cos(theta)
sint = sin(theta)
r[0,0] = cosa*cosp**2*cost**2 - cosa*cosp**2 + cosa - cosp**2*cost**2 + cosp**2
r[0,1] = -sina*cost + sinp*cosa*cosp*cost**2 - sinp*cosa*cosp - sinp*cosp*cost**2 + sinp*cosp
r[0,2] = (sina*sinp - cosa*cosp*cost + cosp*cost)*sint
r[1,0] = sina*cost + sinp*cosa*cosp*cost**2 - sinp*cosa*cosp - sinp*cosp*cost**2 + sinp*cosp
r[1,1] = sinp**2*sint**2 - cosa*cosp**2*cost**2 + cosa*cosp**2 + cosa*cost**2
r[1,2] = (-sina*cosp - sinp*cosa*cost + sinp*cost)*sint
r[2,0] = (-sina*sinp - cosa*cosp*cost + cosp*cost)*sint
r[2,1] = (sina*cosp - sinp*cosa*cost + sinp*cost)*sint
r[2,2] = -cosa*cost**2 + cosa + cost**2
cdef void matmul(real_t[:,::1] a, real_t[:,::1] b, real_t[:,::1] out) nogil:
cdef Py_ssize_t i,j,m
for i in range(3):
for j in range(3):
out[i,j] = 0.0
for m in range(3):
out[i,j] += a[i,m]*b[m,j]
cdef inline real_t sdot(real_t[::1] a, real_t[::1] b) nogil:
return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def pulseprop_cython(real_t[::1] relativeomega,
real_t[::1] pulseShapeInten,
real_t[::1] pulseShapePhase,
real_t gammaB1max,
real_t nu1maxdt,
real_t[::1] inputVector, int n_pulse, int n_freq):
cdef Py_ssize_t ind, pulseindex
cdef np.ndarray[real_t, ndim=2] xyzdata = np.zeros((3, relativeomega.shape[0]), dtype=real_dtype)
cdef real_t[::1] phi = pulseShapePhase
cdef real_t[:,::1] r = np.eye(3, dtype=real_dtype)
cdef real_t[:,::1] rtmp = np.empty((3,3), real_dtype)
cdef real_t[:,::1] rtmp2 = np.empty((3,3), real_dtype)
cdef double theta, alpha, romega
# Loop through the entire frequency range calculating the rotation matrix (r) at each frequency
for ind in range(relativeomega.shape[0]):
romega = relativeomega[ind]
r[:] = 0.0
r[0,0] = 1.0
r[1,1] = 1.0
r[2,2] = 1.0
# The rotation matrix is a recursive loop through each step of the shaped pulse
for pulseindex in range(n_pulse):
theta = atan2(pulseShapeInten[pulseindex], relativeomega[ind]/gammaB1max)
alpha = nu1maxdt * sqrt(pulseShapeInten[pulseindex]**2+(romega/gammaB1max)**2)
grotation(alpha, theta, phi[pulseindex], rtmp)
matmul(rtmp, r, rtmp2)
r[:] = rtmp2[:]
xyzdata[0,ind] = sdot(r[0], inputVector)
xyzdata[1,ind] = sdot(r[1], inputVector)
xyzdata[2,ind] = sdot(r[2], inputVector)
return xyzdata
In [35]:
inv2 = inputVector.T.squeeze().astype(np.float64)
%timeit pulseprop_cython(relativeomega, pulseShapeInten,
pulseShapePhase, gammaB1max,
nu1maxdt, inv2, n_pulse, n_freq)
10 loops, best of 3: 152 ms per loop
In [36]:
print("Relative speed up of %.0f times for cython over the slow numpy function." % (86./0.152))
print("Relative speed up of %.0f times for cython over the vectorized numpy function." % (0.428/0.152))
print("Relative speed up of %.0f times for cython over the fortran function." % (0.140/0.152))
Relative speed up of 566 times for cython over the slow numpy function.
Relative speed up of 3 times for cython over the vectorized numpy function.
Relative speed up of 1 times for cython over the fortran function.
Joshua reports that on his system, this cython function is about 15% faster than fortran. In my hands, it is slightly slower. Most likely this can be attributed to differences in our systems and
setup. Clearly, both functions offer some advantage over the vectorized numpy function, however.
Visualizing the Effects of the Shaped Pulse on Magnetization
The effects of the shaped pulse on the initial magnetization can be visualized by plotting. It is also a convenient way to compare the results of the NumPy and fortran functions.
In [37]:
xyzdata_numpyS = pulseprop_py_slow(relativeomega, pulseShapeInten,
pulseShapePhase, gammaB1max,
nu1maxdt, inputVector, n_pulse, n_freq)
xyzdata_numpyF = pulseprop_py_fast(relativeomega, pulseShapeInten,
pulseShapePhase, gammaB1max, nu1maxdt,
vectorComponent, n_pulse, n_freq)
xyzdata_fortran = pulseprop_fort(relativeomega, pulseShapeInten,
pulseShapePhase, gammaB1max, nu1maxdt,
vectorComponent, n_pulse, n_freq)
xyzdata_cython = pulseprop_cython(relativeomega, pulseShapeInten,
pulseShapePhase, gammaB1max, nu1maxdt,
inv2, n_pulse, n_freq)
This simulation started with z-magnetization, so let's see how the shaped pulse affected that.
In [38]:
f = plt.figure()
f.set_facecolor((1.0, 1.0, 1.0, 1.0))
ax5 = plt.axes()
ax5.plot(relativeomega, xyzdata_numpyS[2], 'k', linewidth=3.0, label='NumPy Slow, Mz')
ax5.plot(relativeomega, xyzdata_numpyF[2], 'r', label='NumPy Fast, Mz')
ax5.plot(relativeomega, xyzdata_fortran[2], 'bs', label='Fortran, Mz')
ax5.plot(relativeomega ,xyzdata_cython[2], 'c.', label='Cython, Mz')
ax5.set_xlabel('Omega (Hz)')
ax5.set_ylabel('Relative Intensity')
ax5.set_ylim(-1.1, 1.1)
ax5.legend()
<matplotlib.legend.Legend at 0x10e70cb50>
The result is an inversion of the magnetization from the z-axis for the frequency range of approximately -2000 to 2000 Hz. Note the inverted "top hat" shape referred to earlier in the post. This is
exactly what is expected from a 180° pulse with the added benefit of bandwidth selectivity. Just outside of this range, the pulse does not fully invert the magnetization and the results can be
unpredictable (see below). Further outside of the bandwidth limit, at around -4000 and 4000 Hz, the pulse has little effect on the magnetization. Again, exactly as expected.
The effect of the shaped pulse on the x- and y-components is quite different from that on the z-component, particularly near the limit of ±2500 Hz for the inverted top hat.
In [39]:
f = plt.figure()
f.set_facecolor((1.0, 1.0, 1.0, 1.0))
ax6 = plt.axes()
ax6.plot(relativeomega, xyzdata_numpyS[0], 'k', linewidth=3.0, label='NumPy Slow, Mx')
ax6.plot(relativeomega, xyzdata_numpyF[0], 'r', label='NumPy Fast, Mx')
ax6.plot(relativeomega, xyzdata_fortran[0], 'bs', label='Fortran, Mx')
ax6.plot(relativeomega, xyzdata_cython[0], 'c.', label='Cython, Mx')
ax6.set_xlabel('Omega (Hz)')
ax6.set_ylabel('Relative Intensity')
ax6.set_ylim(-1.1, 1.1)
ax6.legend()
<matplotlib.legend.Legend at 0x1096f1fd0>
In [40]:
f = plt.figure()
ax7 = plt.axes()
ax7.plot(relativeomega, xyzdata_numpyS[1], 'k', linewidth=3.0, label='NumPy Slow, My')
ax7.plot(relativeomega, xyzdata_numpyF[1], 'r', label='NumPy Fast, My')
ax7.plot(relativeomega, xyzdata_fortran[1], 'bs', label='Fortran, My')
ax7.plot(relativeomega, xyzdata_cython[1], 'c.', label='Cython, My')
ax7.set_xlabel('Omega (Hz)')
ax7.set_ylabel('Relative Intensity')
ax7.set_ylim(-1.1, 1.1)
ax7.legend()
<matplotlib.legend.Legend at 0x109728e50>
Edit: The original version of this notebook contained two errors. The first involved the modification of NumPy ndarray views during matrix multiplication within the faster NumPy function. The second
chose vectorComponent from the wrong axis at the conclusion of the faster NumPy function and in the Python wrapper that calls the Fortran function. This resulted in an inversion of the y-axis
magnetization. Both errors have been corrected. Thanks to Joshua Adelman for catching the second error. A minor change to the matrix simplification calculation due to an update of the SymPy package
was also made, however this change had no effect on the result.
Edit: A Cython version of the propagator function provided by Joshua Adelman has been added above.
This post was written in an IPython notebook, which can be downloaded here, or viewed statically here.
1. Ideally this function could be vectorized with something like SymPy's lambdify, autowrap, or ufuncify functions or with the Theano package. Unfortunately, lambdify isn't compatible with NumPy
arrays. Likewise, autowrap isn't compatible with SymPy's Matrix function yet. I was unsuccessful in getting ufuncify to work with this case, and my MacBook Air doesn't have the graphics card
necessary for GPU computing, so I didn't attempt to use Theano. ↩
2. It would also be interesting to compare the speed of these functions to versions implemented in Numba and Cython. My efforts at implementing this subroutine in Numba were not very successful and
I'm not fond of C, so I decided to pass on Cython. ↩
Deriving CDF of Kolmogorov-Smirnov Test Statistic
1. Introduction
The article’s goal is to present a comprehensive summary of deriving the distribution of the usual Kolmogorov-Smirnov test statistic, both in its exact and approximate form. We concentrate on
practical aspects of this exercise, meaning that
• reaching a modest (three significant digit) accuracy is usually considered quite adequate,
• computing critical and P-values of the test is the primary objective, implying that it is the upper tail of the distribution which is most important,
• methods capable of producing practically instantaneous results are preferable to those taking several seconds, minutes, or more,
• simple, easy to understand (and to code) techniques have a great conceptual advantage over complex, black-box type algorithms.
This is the reason why our review excludes some existing results (however deep and mathematically interesting they may be); we concentrate only on the most relevant techniques (this is also the
reason why our bibliography is deliberately far from complete).
1.1. Test Statistic
The Kolmogorov-Smirnov one-sample test works like this: the null hypothesis states that a random independent sample of size n has been drawn from a specific (including the value of each of its parameters, if any) continuous distribution. The test statistic (denoted ${D}_{n}$ ) is the largest (in the limit-superior sense) absolute-value difference between the corresponding empirical cumulative distribution function (CDF) and the theoretical CDF, denoted $F\left(x\right)$, of the hypothesized distribution; the former is defined by
${F}_{e}\left(x\right)\stackrel{\text{def}}{=}\frac{1}{n}\underset{i=1}{\overset{n}{\sum }}{I}_{{X}_{i}<x}$(1)
where ${X}_{1},{X}_{2},\cdots ,{X}_{n}$ are the individual sample values and ${I}_{{X}_{i}<x}$ is the usual indicator function (equal to 1 when ${X}_{i}$ is smaller than x, equal to 0 otherwise). Note that ${F}_{e}\left(x\right)$ is a step function which starts at 0 and increases, by $\frac{1}{n}$ at each ${X}_{i}$, until it reaches the value of 1.
To complete the test, we need to know the CDF of ${D}_{n}$ under the assumption that the null hypothesis is correct. Deriving this CDF is a difficult task; there are several exact techniques for
doing that; in this article, we expound only the major ones. We then derive the $n\to \infty$ limit of the resulting distribution, to serve as an approximation when n is relatively large. Since the
accuracy of this limit is not very impressive (unless n is extremely large), we show how to remove the $\frac{1}{\sqrt{n}}$ -proportional, $\frac{1}{n}$ -proportional, etc. error of this
approximation, making it sufficiently accurate for samples of practically any size.
1.2. Transforming to $\mathcal{U}\left(0,1\right)$
The first thing we do is to define
${U}_{i}\stackrel{\text{def}}{=}F\left({X}_{i}\right)$(2)
where $F\left(x\right)$ is the CDF of the hypothesized distribution; the ${U}_{1},{U}_{2},\cdots ,{U}_{n}$ then constitute (under the null hypothesis) a random independent sample from the uniform
distribution over the $\left(0,1\right)$ interval, the new theoretical CDF is then simply $F\left(u\right)=u$. It is important to realize that doing this does not change the vertical distances
between the empirical and theoretical CDFs; it transforms only the corresponding horizontal scale as Figure 1 and Figure 2 demonstrate (the original sample is from Exponential distribution).
This implies that the resulting value of ${D}_{n}$ (and consequently, its distribution) remains the same. We can then conveniently assume (from now on) that our sample has been drawn from $\mathcal{U}\left(0,1\right)$ ; yet the results apply to any hypothesized distribution.
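This invariance is easy to confirm numerically. The sketch below (Exponential(1) is chosen purely for illustration, echoing the figures) computes ${D}_{n}$ on the original scale against $F\left(x\right)=1-{\text{e}}^{-x}$ and on the transformed scale against $F\left(u\right)=u$, using the standard order-statistics form of the statistic, and finds them identical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
x = np.sort(rng.exponential(size=n))
i = np.arange(1, n + 1)

# D_n on the original scale, measured against F(x) = 1 - exp(-x):
F = 1.0 - np.exp(-x)
d_orig = max(np.max(i / n - F), np.max(F - (i - 1) / n))

# D_n after the transform U_i = F(X_i), measured against F(u) = u:
u = np.sort(1.0 - np.exp(-x))
d_unif = max(np.max(i / n - u), np.max(u - (i - 1) / n))

assert abs(d_orig - d_unif) < 1e-15   # identical, as the text argues
```

Since $F$ is monotone, the sorted transformed sample is exactly $F$ applied to the sorted original sample, so the two computations touch the same numbers.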
1.3. Discretization
In this article, we aim to find the CDF of ${D}_{n}$, namely
$Pr\left({D}_{n}\le d\right)$(3)
only for a discrete set of n values of d, namely for $d=\frac{1}{n},\frac{2}{n},\cdots ,\frac{n}{n}$, even though ${D}_{n}$ is a continuous random variable whose support is the $\left(\frac{1}{2n},1\right)$ interval. This proves to be sufficient for all but extremely small n, since our discrete results can be easily extended to all values of d by a sensible interpolation.
There are techniques capable of yielding exact results for any value of d (see [1] or [2]), but they have some of the disadvantages mentioned above and will not be discussed here in any detail;
nevertheless, for completeness, we present a Mathematica code of Durbin’s algorithm in the Appendix.
2. Linear-Algebra Solution
This, and the next two sections, are all based mainly on [3], later summarized by [4].
We start by defining $n+1$ integer-valued random variables
${T}_{i}\stackrel{\text{def}}{=}n\cdot \left({F}_{e}\left({d}_{i}\right)-{d}_{i}\right)$(4)
where ${d}_{i}=\frac{i}{n}$, $i=0,1,2,\cdots ,n$ ; note that $n\cdot {F}_{e}\left({d}_{i}\right)$ equals the number of the ${U}_{i}$ observations which are smaller than ${d}_{i}$, also note that ${T}_{0}$ and ${T}_{n}$ are always identically equal to 0. We can then show that
Claim 1. ${D}_{n}>{d}_{j}$ if and only if at least one of the ${T}_{i}$ values is equal to j or –j.
Proof. When ${T}_{i}=j$, then there is a value of d to the left of ${d}_{i}$ such that ${F}_{e}\left(d\right)-d>\frac{j}{n}$, implying that ${D}_{n}>\frac{j}{n}$ ; similarly, when ${T}_{i}=-j$ then there is a value of d to the right of ${d}_{i}$ such that ${F}_{e}\left(d\right)-d<-\frac{j}{n}$, implying the same.
To prove the reverse, we must first realize that no one-step decrease in the ${T}_{0},{T}_{1},\cdots ,{T}_{n}$ sequence can be bigger than 1 (this happens when there are no observations between the
corresponding ${d}_{i}$ and ${d}_{i+1}$ ); this implies that the T sequence must always pass through all integers between the smallest and the largest value ever reached by T.
Since $n\cdot {D}_{n}>j$ implies that either $n\cdot \left({F}_{e}\left(d\right)-d\right)$ has exceeded the value of j at some d, or it has reached a value smaller than −j, it then follows that at
least one ${T}_{i}$ has to be equal to either j or −j respectively. ■
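Claim 1 can also be checked numerically. In the sketch below (a single random sample with $n=8$; the seed is arbitrary), $n\cdot {D}_{n}$ exceeds j exactly when some $|{T}_{i}|$ reaches j; since T moves down by at most 1 per step and returns to 0, "some ${T}_{i}=\pm j$" is the same as $\mathrm{max}|{T}_{i}|\ge j$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
u = np.sort(rng.random(n))

# T_i = n*(F_e(d_i) - d_i) = #{U < i/n} - i,  for i = 0..n
T = np.array([np.sum(u < i / n) - i for i in range(n + 1)])

# n*D_n from the order statistics (the sup taken in the limit sense)
idx = np.arange(1, n + 1)
nD = max(np.max(idx - n * u), np.max(n * u - (idx - 1)))

# Claim 1: D_n > j/n exactly when max|T_i| >= j
for j in range(1, n + 1):
    assert (nD > j) == (np.max(np.abs(T)) >= j)
print("Claim 1 holds for this sample")
```

(Equality events such as $n\cdot {D}_{n}$ landing exactly on an integer have probability zero for a continuous sample, so the strict/non-strict distinction never bites in practice.)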
2.1. Total-Probability Formula
Now, consider the sample space of all possible (integer) values of ${T}_{1},{T}_{2},\cdots ,{T}_{n-1}$, and a fixed integer J between 1 and $n-1$ inclusive (we use the capital font to emphasize J’s
special role in all subsequent formulas). If ${T}_{i}$ is the first of the ${T}_{1},{T}_{2},\cdots ,{T}_{n-1}$ random variables to reach the value of either J or –J, we denote the corresponding event
${A}_{i}$ and ${B}_{i}$ respectively ( $C$ means that none of the ${T}_{i}$ have ever reached either J or −J); ${A}_{1},{A}_{2},\cdots ,{A}_{n-1},{B}_{1},{B}_{2},\cdots ,{B}_{n-1},C$ then constitute a
partition of this sample space.
By a routine application of the formula of total probability, we can write, for any k between 1 and $n-J$ ( ${T}_{k}=J$ cannot happen for any other T)
$Pr\left({T}_{k}=J\right)=\underset{i=1}{\overset{n-1}{\sum }}Pr\left({A}_{i}\right)\cdot Pr\left({T}_{k}=J|{A}_{i}\right)+\underset{i=1}{\overset{n-1}{\sum }}Pr\left({B}_{i}\right)\cdot Pr\left({T}_{k}=J|{B}_{i}\right)+Pr\left(C\right)\cdot Pr\left({T}_{k}=J|C\right)$(5)
We know that, given $C$, ${T}_{k}=J$ could not have happened. Similarly, given ${A}_{i}$ (given ${B}_{i}$ ), ${T}_{k}=J$ cannot happen any earlier than at $k\ge i$ ( $k>i$ ). And finally, $Pr\left({B}_{i}\right)$ is equal to 0 when $i<J$ (we need at least J steps to reach ${T}_{i}=-J$ from ${T}_{0}=0$ ). We can thus simplify (5) to read
$Pr\left({T}_{k}=J\right)=\underset{i=1}{\overset{k}{\sum }}Pr\left({A}_{i}\right)\cdot Pr\left({T}_{k}=J|{A}_{i}\right)+\underset{i=J}{\overset{k-1}{\sum }}Pr\left({B}_{i}\right)\cdot Pr\left({T}_{k}=J|{B}_{i}\right)$(6)
where $1\le k\le n-J$, with the understanding that an empty sum (lower limit exceeding the upper limit) equals 0.
From (4) it is obvious that ${T}_{k}=J$ is equivalent to having (exactly, to be understood from now on) $k+J$ observations smaller than $\frac{k}{n}$. The corresponding probability is the same as that of getting $k+J$ successes in a binomial-type experiment with n trials and a single-success probability of $\frac{k}{n}$ ; we will denote it ${\mathbb{B}}_{k+J}^{n}\left(\frac{k}{n}\right)$.
Similarly, ${T}_{k}=J|{A}_{i}$ has the same probability as ${T}_{k}=J|{T}_{i}=J$ (earlier values of T becoming irrelevant), which means that, out of the remaining $n-i-J$ observations, $k-i$ must be
in the $\left({d}_{i},{d}_{k}\right)$ interval; this probability is equal to ${\mathbb{B}}_{k-i}^{n-i-J}\left(\frac{k-i}{n-i}\right)$.
Finally, $Pr\left({T}_{k}=J|{B}_{i}\right)=Pr\left({T}_{k}=J|{T}_{i}=-J\right)$, which means that, out of the remaining $n-i+J$ observations, $k-i+2J$ must be in the $\left({d}_{i},{d}_{k}\right)$ interval; this probability equals ${\mathbb{B}}_{k-i+2J}^{n-i+J}\left(\frac{k-i}{n-i}\right)$.
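In code, these binomial coefficients are a one-liner. The helper below (the function name is ours, not the paper's) evaluates ${\mathbb{B}}_{i}^{n}\left(\frac{k}{m}\right)$, returning 0 when the number of successes is out of range, which is the convention the later derivations rely on:

```python
from math import comb

def B(i, n, k, m):
    """The binomial probability written B_i^n(k/m) in the text:
    i successes in n trials with single-success probability k/m,
    taken to be 0 when i is negative or bigger than n."""
    if i < 0 or i > n:
        return 0.0
    p = k / m
    return comb(n, i) * p**i * (1 - p)**(n - i)

print(B(2, 4, 1, 2))   # C(4,2) * (1/2)^4 = 0.375
```

The edge convention matters because the sums in (7) and (9) routinely generate out-of-range subscripts, which must contribute nothing.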
2.2. Resulting Equations
We can thus simplify (6) to
${\mathbb{B}}_{k+J}^{n}\left(\frac{k}{n}\right)=\underset{i=1}{\overset{k}{\sum }}Pr\left({A}_{i}\right)\cdot {\mathbb{B}}_{k-i}^{n-i-J}\left(\frac{k-i}{n-i}\right)+\underset{i=J}{\overset{k-1}{\sum }}Pr\left({B}_{i}\right)\cdot {\mathbb{B}}_{k-i+2J}^{n-i+J}\left(\frac{k-i}{n-i}\right)$(7)
(with $1\le k\le n-J$ ), where the $\mathbb{B}$ coefficients are readily computable. This constitutes $n-J$ linear equations for the unknown values of $Pr\left({A}_{1}\right),Pr\left({A}_{2}\right),\cdots ,Pr\left({A}_{n-J}\right)$, $Pr\left({B}_{J}\right),Pr\left({B}_{J+1}\right),\cdots ,Pr\left({B}_{n-1}\right)$.
By the same kind of reasoning we can show that, for any k between J and $n-1$
$Pr\left({T}_{k}=-J\right)=\underset{i=1}{\overset{k-2J}{\sum }}Pr\left({A}_{i}\right)\cdot Pr\left({T}_{k}=-J|{A}_{i}\right)+\underset{i=J}{\overset{k}{\sum }}Pr\left({B}_{i}\right)\cdot Pr\left({T}_{k}=-J|{B}_{i}\right)$(8)
(note that the T sequence needs at least 2J steps to reach -J at ${T}_{k}$ from J at ${T}_{i}$ ), leading to
${\mathbb{B}}_{k-J}^{n}\left(\frac{k}{n}\right)=\underset{i=1}{\overset{k-2J}{\sum }}Pr\left({A}_{i}\right)\cdot {\mathbb{B}}_{k-i-2J}^{n-i-J}\left(\frac{k-i}{n-i}\right)+\underset{i=J}{\overset{k}{\sum }}Pr\left({B}_{i}\right)\cdot {\mathbb{B}}_{k-i}^{n-i+J}\left(\frac{k-i}{n-i}\right)$(9)
when $J\le k\le n-1$.
Combining (7) and (9), we end up with the total of $2\left(n-J\right)$ linear equations for the same number of unknowns. Furthermore, these equations have a “doubly triangular” form, meaning that
proceeding in the right order, i.e. $Pr\left({A}_{1}\right),Pr\left({B}_{J}\right),Pr\left({A}_{2}\right),Pr\left({B}_{J+1}\right),\cdots$, we are always solving only for a single unknown (this is
made obvious by the next Mathematica code).
Having found the solution, we can then compute (based on Claim 1)
$Pr\left({D}_{n}>{d}_{J}\right)=\underset{i=1}{\overset{n-J}{\sum }}Pr\left({A}_{i}\right)+\underset{i=J}{\overset{n-1}{\sum }}Pr\left({B}_{i}\right)$(10)
which yields a single value of the desired CDF (or rather, of its complement) of ${D}_{n}$. To get the full (at least in the discretized sense) picture of the distribution, the procedure now needs to
be repeated for each possible value of J.
The whole algorithm can be summarized by the following Mathematica code (note that instead of superscripts, interpreted by Mathematica as powers, we have to use “overscripts”).
(for improved efficiency, we use only the relevant range of J values).
The program takes over one minute to execute; the results are displayed in Figure 3.
We can easily interpolate values of the corresponding table to convert it into a continuous function, thereby finding any desired value to a sufficient accuracy.
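Such an interpolation is a one-liner. In the sketch below the tail values are placeholders only (shaped like the classical one-sided bound ${\text{e}}^{-2n{d}^{2}}$, not values produced by the paper's algorithm), but the mechanics are the same for a real table:

```python
import numpy as np

n = 10
d_grid = np.arange(1, n + 1) / n   # the discretized points d_J = J/n

# Placeholder tail values at each d_J -- a stand-in, NOT the paper's
# computed probabilities:
tail = np.exp(-2.0 * n * d_grid**2)

# Linear interpolation turns the table into a function of any d:
print(np.interp(0.37, d_grid, tail))
```

For upper-tail probabilities a log-scale interpolation may be preferable, since the tail decays roughly exponentially in $n{d}^{2}$.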
The main problem with this algorithm lies in its execution time, which increases (like most matrix-based computation) with roughly the third power of n. This makes the current approach rather
prohibitive when dealing with samples consisting of thousands of observations.
In this context it is fair to mention that none of our programs have been optimized for run-time efficiency; even though some improvement in this regard is definitely possible, we do not believe that
it would substantially change our general conclusions.
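As a cross-check of this section (a Python sketch under our own naming, not the paper's Mathematica code), the system (7) and (9) can be assembled as a single matrix and handed to a generic linear solver, after which (10) gives the tail probability. For $n=2$, $J=1$, elementary enumeration gives $Pr\left({D}_{2}>\frac{1}{2}\right)=\frac{1}{2}$ (the sample must lie entirely above or entirely below $\frac{1}{2}$), which the sketch reproduces.

```python
import numpy as np
from math import comb

def Bprob(i, n, p):
    """Binomial probability of i successes in n trials (0 if out of range)."""
    if i < 0 or i > n:
        return 0.0
    return comb(n, i) * p**i * (1.0 - p)**(n - i)

def ks_upper_tail(n, J):
    """Pr(D_n > J/n): assemble equations (7) and (9) as one linear system
    in Pr(A_1..A_{n-J}), Pr(B_J..B_{n-1}), then sum the solution per (10)."""
    A_idx = range(1, n - J + 1)
    B_idx = range(J, n)
    nA = len(A_idx)
    M = np.zeros((nA + len(B_idx), nA + len(B_idx)))
    rhs = np.zeros(M.shape[0])
    row = 0
    for k in range(1, n - J + 1):            # equations (7)
        rhs[row] = Bprob(k + J, n, k / n)
        for c, i in enumerate(A_idx):
            if i <= k:
                M[row, c] = Bprob(k - i, n - i - J, (k - i) / (n - i))
        for c, i in enumerate(B_idx):
            if i <= k - 1:
                M[row, nA + c] = Bprob(k - i + 2 * J, n - i + J, (k - i) / (n - i))
        row += 1
    for k in range(J, n):                    # equations (9)
        rhs[row] = Bprob(k - J, n, k / n)
        for c, i in enumerate(A_idx):
            if i <= k - 2 * J:
                M[row, c] = Bprob(k - i - 2 * J, n - i - J, (k - i) / (n - i))
        for c, i in enumerate(B_idx):
            if i <= k:
                M[row, nA + c] = Bprob(k - i, n - i + J, (k - i) / (n - i))
        row += 1
    return np.linalg.solve(M, rhs).sum()     # equation (10)

print(ks_upper_tail(2, 1))   # elementary enumeration gives exactly 0.5
```

A dense solve ignores the doubly triangular structure described above, so it is even more cubic-cost than the paper's ordered substitution; it is meant only as a correctness check for small n.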
3. Generating-Function Solution
We now present an alternate way of building the same (discretized, but otherwise exact) solution. We start by defining the following function of two integer arguments
${\mathfrak{p}}_{j}^{i}\stackrel{\text{def}}{=}\frac{{i}^{i+j}}{\left(i+j\right)!}$(11)
Note that, when $i+j$ is negative (i is always positive), ${\mathfrak{p}}_{j}^{i}$ is equal to 0.
Claim 2. The binomial probability ${\mathbb{B}}_{i}^{n}\left(\frac{k}{n}\right)$ can be expressed in terms of three such $\mathfrak{p}$ functions, as follows
${\mathbb{B}}_{i}^{n}\left(\frac{k}{m}\right)=\frac{{\mathfrak{p}}_{i-k}^{k}\cdot {\mathfrak{p}}_{n-i-m+k}^{m-k}}{{\mathfrak{p}}_{n-m}^{m}}$(12)
${\mathbb{B}}_{i}^{n}\left(\frac{k}{m}\right)=\frac{n!}{i!\left(n-i\right)!}{\left(\frac{k}{m}\right)}^{i}{\left(\frac{m-k}{m}\right)}^{n-i}=\frac{\frac{{k}^{i}}{i!}\cdot \frac{{\left(m-k\right)}^{n-i}}{\left(n-i\right)!}}{\frac{{m}^{n}}{n!}}=\frac{{\mathfrak{p}}_{i-k}^{k}\cdot {\mathfrak{p}}_{n-i-m+k}^{m-k}}{{\mathfrak{p}}_{n-m}^{m}}$(13)
Note that $\mathbb{B}$ has the value of 0 whenever the number of successes (the subscript) is either negative or bigger than n (the superscript). Similarly, ${\mathbb{B}}_{0}^{0}$ is always equal to 1.
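Claim 2 is easy to verify numerically. The sketch below (function names are ours) implements ${\mathfrak{p}}_{j}^{i}={i}^{i+j}/\left(i+j\right)!$ and checks the identity (12) on a few argument sets:

```python
from math import comb, factorial

def p(i, j):
    """p_j^i = i**(i+j) / (i+j)! -- superscript i is the first argument,
    subscript j the second; 0 when i+j is negative."""
    if i + j < 0:
        return 0.0
    return i**(i + j) / factorial(i + j)

def B(i, n, k, m):
    """Binomial probability B_i^n(k/m), 0 when i is out of range."""
    if i < 0 or i > n:
        return 0.0
    q = k / m
    return comb(n, i) * q**i * (1 - q)**(n - i)

# Check (12): B_i^n(k/m) = p_{i-k}^k * p_{n-i-m+k}^{m-k} / p_{n-m}^m
for (i, n, k, m) in [(2, 5, 1, 3), (0, 4, 2, 6), (3, 7, 2, 5)]:
    lhs = B(i, n, k, m)
    rhs = p(k, i - k) * p(m - k, n - i - m + k) / p(m, n - m)
    assert abs(lhs - rhs) < 1e-12
print("Claim 2 verified on the test cases")
```

The zero convention for a negative subscript of $\mathfrak{p}$ mirrors the zero convention for an out-of-range $\mathbb{B}$, so the identity holds at the boundary cases too.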
3.1. Modified Equations
The new function (11) enables us to express (7) and (9) in the following manner:
$\frac{{\mathfrak{p}}_{J}^{k}\cdot {\mathfrak{p}}_{-J}^{n-k}}{{\mathfrak{p}}_{0}^{n}}=\underset{i=1}{\overset{k}{\sum }}Pr\left({A}_{i}\right)\cdot \frac{{\mathfrak{p}}_{0}^{k-i}\cdot {\mathfrak{p}}_{-J}^{n-k}}{{\mathfrak{p}}_{-J}^{n-i}}+\underset{i=J}{\overset{k-1}{\sum }}Pr\left({B}_{i}\right)\cdot \frac{{\mathfrak{p}}_{2J}^{k-i}\cdot {\mathfrak{p}}_{-J}^{n-k}}{{\mathfrak{p}}_{J}^{n-i}}$(14)
$\frac{{\mathfrak{p}}_{-J}^{k}\cdot {\mathfrak{p}}_{J}^{n-k}}{{\mathfrak{p}}_{0}^{n}}=\underset{i=1}{\overset{k-1}{\sum }}Pr\left({A}_{i}\right)\cdot \frac{{\mathfrak{p}}_{-2J}^{k-i}\cdot {\mathfrak{p}}_{J}^{n-k}}{{\mathfrak{p}}_{-J}^{n-i}}+\underset{i=J}{\overset{k}{\sum }}Pr\left({B}_{i}\right)\cdot \frac{{\mathfrak{p}}_{0}^{k-i}\cdot {\mathfrak{p}}_{J}^{n-k}}{{\mathfrak{p}}_{J}^{n-i}}$(15)
Cancelling ${\mathfrak{p}}_{-J}^{n-k}$ in each term of (14) and multiplying by ${\mathfrak{p}}_{0}^{n}$ yields
${\mathfrak{p}}_{J}^{k}=\underset{i=1}{\overset{k}{\sum }}\frac{{\mathfrak{p}}_{0}^{n}}{{\mathfrak{p}}_{-J}^{n-i}}Pr\left({A}_{i}\right)\cdot {\mathfrak{p}}_{0}^{k-i}+\underset{i=J}{\overset{k-1}{\sum }}\frac{{\mathfrak{p}}_{0}^{n}}{{\mathfrak{p}}_{J}^{n-i}}Pr\left({B}_{i}\right)\cdot {\mathfrak{p}}_{2J}^{k-i}$(16)
which can be written as
${\mathfrak{p}}_{J}^{k}=\underset{i=1}{\overset{k}{\sum }}\text{ }\text{ }{a}_{i}\cdot {\mathfrak{p}}_{0}^{k-i}+\underset{i=J}{\overset{k-1}{\sum }}\text{ }\text{ }{b}_{i}\cdot {\mathfrak{p}}_{2J}^{k-i}$(17)
(for any positive integer k), by defining
${a}_{i}\stackrel{\text{def}}{=}\frac{{\mathfrak{p}}_{0}^{n}}{{\mathfrak{p}}_{-J}^{n-i}}Pr\left({A}_{i}\right)$(18)
${b}_{i}\stackrel{\text{def}}{=}\frac{{\mathfrak{p}}_{0}^{n}}{{\mathfrak{p}}_{J}^{n-i}}Pr\left({B}_{i}\right)$(19)
Note that n has disappeared from (17), making ${a}_{i}$ and ${b}_{i}$ potentially infinite sequences (consider letting n have any positive value; in that sense ${a}_{i}$ is well defined for any i
from 1 to $\infty$ and ${b}_{i}$ for any i from J to $\infty$ ). Once we solve for these two sequences, converting them back to $Pr\left({A}_{i}\right)$ and $Pr\left({B}_{i}\right)$ for any specific
value of n is a simple task; this approach thus effectively deals with all n at the same time!
Similarly modifying (15) results in
${\mathfrak{p}}_{-J}^{k}=\underset{i=1}{\overset{k-1}{\sum }}\text{ }\text{ }{a}_{i}\cdot {\mathfrak{p}}_{-2J}^{k-i}+\underset{i=J}{\overset{k}{\sum }}\text{ }\text{ }{b}_{i}\cdot {\mathfrak{p}}_{0}^{k-i}$(20)
(for any $k>J$ ), utilizing the previous definition of ${a}_{i}$ and ${b}_{i}$. The equations, together with (17), constitute an infinite set of linear equations for elements of the two sequences. To
find the corresponding solution, we reach for a different mathematical tool.
3.2. Generating Functions
Let us introduce the following generating functions
${G}_{a}\left(t\right)\stackrel{\text{def}}{=}\underset{k=1}{\overset{\infty }{\sum }}\text{ }\text{ }{a}_{k}\cdot {t}^{k}$(21)
${G}_{b}\left(t\right)\stackrel{\text{def}}{=}\underset{k=1}{\overset{\infty }{\sum }}\text{ }\text{ }{b}_{k}\cdot {t}^{k}$
${G}_{j}\left(t\right)\stackrel{\text{def}}{=}{\delta }_{j,0}+\underset{k=1}{\overset{\infty }{\sum }}\text{ }\text{ }{\mathfrak{p}}_{j}^{k}\cdot {t}^{k}$
where j is a non-negative integer, and ${\delta }_{j,0}$ (Kronecker’s $\delta$ ) is equal to 1 when $j=0$, equal to 0 otherwise.
Multiplying (17) by ${t}^{k}$ and summing over k from 1 to $\infty$ yields
${G}_{J}\left(t\right)={G}_{a}\left(t\right)\cdot {G}_{0}\left(t\right)+{G}_{b}\left(t\right)\cdot {G}_{2J}\left(t\right)$(Gj)
since ${\sum }_{i=1}^{k}\text{ }\text{ }{a}_{i}\cdot {\mathfrak{p}}_{0}^{k-i}$ is the coefficient of ${t}^{k}$ in the expansion of ${G}_{a}\left(t\right)\cdot {G}_{0}\left(t\right)$, and ${\sum }_{i=J}^{k-1}\text{ }\text{ }{b}_{i}\cdot {\mathfrak{p}}_{2J}^{k-i}$ is the coefficient of ${t}^{k}$ in the expansion of ${G}_{b}\left(t\right)\cdot {G}_{2J}\left(t\right)$ ; combining two sequences in this manner is called their convolution. Note the importance (for correctness of the ${G}_{a}\cdot {G}_{0}$ result) of including ${\delta }_{j,0}$ in the definition of ${G}_{0}\left(t\right)$.
Similarly, it follows from (20) that
${G}_{-J}\left(t\right)={G}_{a}\left(t\right)\cdot {G}_{-2J}\left(t\right)+{G}_{b}\left(t\right)\cdot {G}_{0}\left(t\right)$(22)
3.3. Resulting Solution
The last two (simple, linear) equations can be so easily solved for ${G}_{a}\left(t\right)$ and ${G}_{b}\left(t\right)$ that we do not even quote the answer.
Going back to a specific sample size n, we now need to find the value of (10), namely
$\frac{{\sum }_{i=1}^{n-1}{a}_{i}\cdot {\mathfrak{p}}_{-J}^{n-i}+{\sum }_{i=1}^{n-1}{b}_{i}\cdot {\mathfrak{p}}_{J}^{n-i}}{{\mathfrak{p}}_{0}^{n}}$(23)
which follows from solving (18) and (19) for $Pr\left({A}_{i}\right)$ and $Pr\left({B}_{i}\right)$ respectively. The numerator of the last expression is clearly (by the same convolution argument) the
coefficient of ${t}^{n}$ in the expansion of
${G}_{a}\left(t\right)\cdot {G}_{-J}\left(t\right)+{G}_{b}\left(t\right)\cdot {G}_{J}\left(t\right)$(24)
An important point is that, in actual computation, the G functions need to be expanded only up to and including the ${t}^{n}$ term, making them long but otherwise simple polynomials.
The algorithm to find $Pr\left({D}_{n}>{d}_{J}\right)$ then requires us to build ${G}_{0}\left(t\right)$, ${G}_{J}\left(t\right)$, ${G}_{-J}\left(t\right)$, ${G}_{2J}\left(t\right)$ and ${G}_{-2J}\left(t\right)$, and Taylor-expand, up to the same ${t}^{n}$ term,
$\frac{2{G}_{0}\left(t\right){G}_{J}\left(t\right){G}_{-J}\left(t\right)-{G}_{-J}{\left(t\right)}^{2}{G}_{2J}\left(t\right)-{G}_{J}{\left(t\right)}^{2}{G}_{-2J}\left(t\right)}{\left({G}_{0}{\left(t\right)}^{2}-{G}_{2J}\left(t\right){G}_{-2J}\left(t\right)\right)\cdot {\mathfrak{p}}_{0}^{n}}$(25)
which is obtained by substituting the solution to (Gj) and (22) into (24), and further dividing by ${\mathfrak{p}}_{0}^{n}$ ; $\mathrm{Pr}\left({D}_{n}>{d}_{J}\right)$ is then provided by the resulting coefficient of ${t}^{n}$.
Note that, based on the same expansion, we can get $\mathrm{Pr}\left({D}_{n}>{d}_{J}\right)$ for any smaller n as well, just by correspondingly replacing the value of ${\mathfrak{p}}_{0}^{n}$.
Nevertheless, the process still needs to be repeated with all relevant values of J.
The corresponding Mathematica code looks as follows:
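The Mathematica listing itself did not survive this extraction. As a rough stand-in (all function names ours, and assuming ${d}_{J}=J/n$ as implied below), here is a Python sketch of the same generating-function algorithm, worked in exact rational arithmetic; the ${\text{e}}^{-i}$ factors of the definition are dropped, since they cancel in (25):

```python
from fractions import Fraction
from math import factorial

def g_poly(j, n):
    """Truncated G_j(t) of (21): coefficient of t^k is k^(k+j)/(k+j)!
    for k >= 1 (zero when k + j < 0); the t^0 term is delta_{j,0}."""
    c = [Fraction(0)] * (n + 1)
    c[0] = Fraction(1 if j == 0 else 0)
    for k in range(1, n + 1):
        if k + j >= 0:
            c[k] = Fraction(k ** (k + j), factorial(k + j))
    return c

def mul(a, b, n):
    """Polynomial product truncated at degree n."""
    out = [Fraction(0)] * (n + 1)
    for i, ai in enumerate(a):
        if ai:
            for k in range(n + 1 - i):
                out[i + k] += ai * b[k]
    return out

def div(num, den, n):
    """Power-series quotient num/den up to degree n (den[0] = 1 here)."""
    q = [Fraction(0)] * (n + 1)
    for k in range(n + 1):
        q[k] = (num[k] - sum(den[k - i] * q[i] for i in range(k))) / den[0]
    return q

def ks_exceed_prob(n, J):
    """Pr(D_n > J/n): the coefficient of t^n in (25)."""
    G0 = g_poly(0, n)
    GJ, GmJ = g_poly(J, n), g_poly(-J, n)
    G2J, Gm2J = g_poly(2 * J, n), g_poly(-2 * J, n)
    num = [2 * a - b - c for a, b, c in
           zip(mul(mul(GJ, GmJ, n), G0, n),
               mul(mul(GmJ, GmJ, n), G2J, n),
               mul(mul(GJ, GJ, n), Gm2J, n))]
    den = [a - b for a, b in zip(mul(G0, G0, n), mul(G2J, Gm2J, n))]
    p0n = Fraction(n ** n, factorial(n))   # p_0^n with e^{-n} dropped
    return float(div(num, den, n)[n] / p0n)
```

Being built on exact fractions, the result carries no round-off error, at the cost of a run-time that grows quickly with n.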
It produces results identical to those of the matrix-algebra algorithm, but has several advantages: the coding is somewhat easier, it (almost) automatically yields results for any $n\le 300$ (not a part of our code), and it executes faster (taking about 17 seconds). Nevertheless, its run-time still increases with roughly the third power of n, thus preventing us from using it with a much larger value of n.
We now proceed to find several approximate solutions of increasing accuracy, all based on (25).
4. Asymptotic Solution
As we have seen, neither of the previous two solutions is very practical (and ultimately not even feasible) as the sample size increases. In that case, we have to switch to using an approximate (also
referred to as asymptotic) solution.
Large-n Formulas
First, we must replace the old definition of ${\mathfrak{p}}_{j}^{i}$, namely (11), by
${\mathfrak{p}}_{j}^{i}\stackrel{\text{def}}{=}\frac{{i}^{i+j}\cdot {\text{e}}^{-i}}{\left(i+j\right)!}$(26)
Note that this does not affect (12), nor any of the subsequent formulas up to and including (25), since the various ${\text{e}}^{-i}$ factors always cancel out.
Also note that the definition can be easily extended to real (not just integer) arguments by using $\Gamma \left(i+j+1\right)$ in place of $\left(i+j\right)!$, where $\Gamma$ denotes the usual gamma function.
1) Laplace representation
Note that, from now on, the summations defining the G functions in (21) stay infinite (no longer truncated to the first n terms only).
Consider a (rather general) generating function
$G\left(t\right)\stackrel{\text{def}}{=}\underset{k=0}{\overset{\infty }{\sum }}\text{ }\text{ }{p}_{k}\cdot {t}^{k}$(27)
and an integer n (each ${p}_{k}$ may implicitly be a function of n as well as of k); our goal is to find an approximation for ${p}_{n}$ as n increases.
After replacing k and t with two new variables x and s, thus
$k=n\cdot x$(28)
$G\left({\text{e}}^{-s/n}\right)$ becomes
$\underset{\begin{array}{c}x=0\\ \text{in steps of}\text{\hspace{0.17em}}\frac{1}{n}\end{array}}{\overset{\infty }{\sum }}{p}_{x\cdot n}exp\left(-s\cdot x\right)$(29)
Making the assumption that expanding ${p}_{x\cdot n}$ in powers of $\frac{1}{\sqrt{n}}$ results in
${p}_{x\cdot n}=\frac{q\left(x\right)}{n}+O\left(\frac{1}{{n}^{3/2}}\right)\simeq \frac{q\left(x\right)}{n}$(30)
(and our results do have this property), then (29) is approximately equal to
$\frac{1}{n}\cdot \underset{\begin{array}{c}x=0\\ \text{in steps of}\text{\hspace{0.17em}}\frac{1}{n}\end{array}}{\overset{\infty }{\sum }}q\left(x\right)exp\left(-s\cdot x\right)+\cdots$(31)
which, in the $n\to \infty$ limit, yields the following (large-n) approximation to $G\left({\text{e}}^{-s/n}\right)$ :
$L\left(s\right)\stackrel{\text{def}}{=}{\int }_{0}^{\infty }\text{ }\text{ }q\left(x\right)exp\left(-s\cdot x\right)\text{d}x$(32)
Note that $L\left(s\right)$ is the so-called Laplace transform of $q\left(x\right)$ ; we call it the Laplace representation of G.
To find an approximate value of the coefficient of ${t}^{n}$ (i.e. ${p}_{n}\simeq \frac{q\left(1\right)}{n}$ ) of (27),
we need to find the so-called inverse Laplace transform (ILT) of $L\left(s\right)$ yielding the corresponding $q\left(x\right)$ then substitute 1 for x and divide by n (this is the gist of the
technique of this section).
To improve this approximation, $q\left(x\right)$ itself and consequently $L\left(s\right)$ can be expanded in further powers of $\frac{1}{\sqrt{n}}$ (done eventually; but currently we concentrate on
the $n\to \infty$ limit).
2) Approximating G[j]
Let us now find the Laplace representation of our ${G}_{j}$, i.e. the last line of (21), further divided by $\sqrt{n}$ (this is necessary to meet (30), yet it does not change (25) as long as ${\mathfrak{p}}_{0}^{n}$ of that formula is divided by $\sqrt{n}$ as well). To find the corresponding $q\left(x\right)$, we need the $n\to \infty$ limit of
$n\cdot \frac{{\mathfrak{p}}_{j}^{k}}{\sqrt{n}}=\frac{{\left(n\cdot x\right)}^{n\cdot x+j}exp\left(-n\cdot x\right)\sqrt{n}}{\left(n\cdot x+j\right)!}$(33)
To be able to reach a finite answer, j itself needs to be replaced by $z\sqrt{n}$ ; note that doing that with our J changes $\mathrm{Pr}\left({D}_{n}>{d}_{J}\right)$ to $\mathrm{Pr}\left(\sqrt{n}\cdot {D}_{n}>z\right)$.
It happens to be easier to take the limit of the natural logarithm of (33), namely
$\left(x\cdot n+z\sqrt{n}\right)ln\left(x\cdot n\right)-x\cdot n+\frac{1}{2}lnn-ln\left(x\cdot n+z\sqrt{n}\right)!$(34)
With the help of the following version of Stirling’s formula (ignore its last term for the time being)
$ln\left(m!\right)\simeq mlnm-m-\frac{1}{2}lnm+ln\sqrt{2\pi }+\frac{1}{12m}+\cdots$(35)
and of (we do not need the last two terms as yet)
$ln\left(x\cdot n+z\sqrt{n}\right)\simeq ln\left(x\cdot n\right)+\frac{z}{x\sqrt{n}}-\frac{{z}^{2}}{2{x}^{2}n}+\frac{{z}^{3}}{3{x}^{3}{n}^{3/2}}-\frac{{z}^{4}}{4{x}^{4}{n}^{2}}+\cdots$(36)
we get (this kind of tedious algebra is usually delegated to a computer)
$lnq\left(x\right)\simeq -\frac{{z}^{2}}{2x}-ln\sqrt{2\pi x}+\cdots$(37)
We thus end up with
$\frac{{G}_{j}\left({\text{e}}^{-s/n}\right)}{\sqrt{n}}\underset{n\to \infty }{\to }\frac{1}{\sqrt{2\pi }}{\int }_{0}^{\infty }\frac{exp\left(-\frac{{z}^{2}}{2x}-x\cdot s\right)}{\sqrt{x}}\text{d}x=\frac{exp\left(-\sqrt{2{z}^{2}s}\right)}{\sqrt{2s}}$(38)
where $z=\frac{j}{\sqrt{n}}$ ; this follows from (32) and the following result:
Claim 3.
${I}_{v}\stackrel{\text{def}}{=}{\int }_{0}^{\infty }\frac{exp\left(-\frac{v}{x}-x\cdot s\right)}{\sqrt{x}}\text{d}x=\sqrt{\frac{\pi }{s}}\cdot exp\left(-2\sqrt{v\cdot s}\right)$(39)
when v and s are positive
Proof. Since
$\frac{\text{d}{I}_{v}}{\text{d}v}=-{\int }_{0}^{\infty }\frac{exp\left(-\frac{v}{x}-x\cdot s\right)}{{x}^{3/2}}\text{d}x$(40)
and
${I}_{v}={\int }_{0}^{\infty }\frac{exp\left(-s\cdot y-\frac{v}{y}\right)}{\sqrt{\frac{v}{s\cdot y}}}\cdot \frac{v}{s\cdot {y}^{2}}\text{d}y=-\sqrt{\frac{v}{s}}\cdot \frac{\text{d}{I}_{v}}{\text{d}v}$(41)
after the $x=\frac{v}{s\cdot y}$ substitution. Solving the resulting simple differential equation for ${I}_{v}$ yields
${I}_{v}=c\cdot exp\left(-2\sqrt{v\cdot s}\right)$(42)
where c is equal to
${I}_{0}={\int }_{0}^{\infty }\frac{exp\left(-x\cdot s\right)}{\sqrt{x}}\text{d}x={\int }_{0}^{\infty }\frac{exp\left(-{u}^{2}\cdot s\right)}{u}\cdot 2u\text{d}u=\sqrt{\frac{\pi }{s}}$(43)
the last being a well-known integral (related to Normal distribution). ■
To find the $n\to \infty$ limit of (25), we first evaluate the right hand side of (38) with $j=-2J,-J,0,J$ and 2J, getting
$\frac{{G}_{0}\left({\text{e}}^{-s/n}\right)}{\sqrt{n}}\underset{n\to \infty }{\to }\frac{1}{\sqrt{2s}}$(44)
$\frac{{G}_{J}\left({\text{e}}^{-s/n}\right)}{\sqrt{n}}=\frac{{G}_{-J}\left({\text{e}}^{-s/n}\right)}{\sqrt{n}}\underset{n\to \infty }{\to }\frac{exp\left(-z\sqrt{2s}\right)}{\sqrt{2s}}$
$\frac{{G}_{2J}\left({\text{e}}^{-s/n}\right)}{\sqrt{n}}=\frac{{G}_{-2J}\left({\text{e}}^{-s/n}\right)}{\sqrt{n}}\underset{n\to \infty }{\to }\frac{exp\left(-2z\sqrt{2s}\right)}{\sqrt{2s}}$
where $z=\frac{J}{\sqrt{n}}$ (always positive).
3) Approximating G[D]
The corresponding Laplace representation of (25) further divided by n, let us denote it ${L}_{D/n}\left(s\right)$, is then equal to
$\frac{2\cdot \frac{E}{2s\sqrt{2s}}-2\cdot \frac{{E}^{2}}{2s\sqrt{2s}}}{\left(\frac{1-{E}^{2}}{2s}\right)\cdot \frac{1}{\sqrt{2\pi }}}=\frac{2\cdot E}{1+E}\cdot \sqrt{\frac{2\pi }{2s}}=2\cdot \sqrt{\frac{2\pi }{2s}}\cdot \underset{k=1}{\overset{\infty }{\sum }}{\left(-1\right)}^{k-1}{E}^{k}$(45)
where $E\stackrel{\text{def}}{=}\mathrm{exp}\left(-2z\sqrt{2s}\right)$. This is based on substituting the right-hand sides of (44) into (25), and on the following result:
$\underset{n\to \infty }{lim}\frac{{\mathfrak{p}}_{0}^{n}}{\sqrt{n}}\cdot n=\underset{n\to \infty }{lim}\frac{{n}^{n}{\text{e}}^{-n}\sqrt{n}}{n!}=\frac{1}{\sqrt{2\pi }}$(46)
(Stirling’s formula again); the last limit also makes it clear why we had to divide (25) by n: to ensure getting a finite result again.
We now need to find the ${q}_{D/n}\left(x\right)$ function corresponding to (45), i.e. the latter’s ILT, and convert it to ${p}_{n}=\frac{{q}_{D/n}\left(1\right)}{n}$ according to (30); this yields an approximation for the coefficient of ${t}^{n}$ in the expansion of (25), still divided by n. The ultimate answer to $Pr\left(\sqrt{n}{D}_{n}>z\right)$ is thus $\frac{{q}_{D/n}\left(1\right)}{n}\cdot n={q}_{D/n}\left(1\right)$.
Since the ILT of
$\sqrt{\frac{\pi }{s}}\cdot {E}^{k}=\sqrt{\frac{\pi }{s}}\cdot \mathrm{exp}\left(-2kz\sqrt{2s}\right)$(47)
(where k is a positive integer) is equal to
$\frac{exp\left(-\frac{2{k}^{2}{z}^{2}}{x}\right)}{\sqrt{x}}$(48)
(this follows from (32) and (38), after replacing z by $z\cdot k$ ), its contribution to ${q}_{D/n}\left(1\right)$ is
$exp\left(-2{z}^{2}{k}^{2}\right)$(49)
Applied to the last line of (45), this leads to
$Pr\left(\sqrt{n}{D}_{n}>z\right)\underset{n\to \infty }{\to }2{\mathbb{T}}_{0}\left(z\right)$(50)
or, equivalently,
$Pr\left(\sqrt{n}{D}_{n}\le z\right)\simeq 1-2{\mathbb{T}}_{0}\left(z\right)$(51)
where
${\mathbb{T}}_{0}\left(z\right)\stackrel{\text{def}}{=}\underset{k=1}{\overset{\infty }{\sum }}{\left(-1\right)}^{k-1}exp\left(-2{z}^{2}{k}^{2}\right)$(52)
Note that the error of this approximation is of the $O\left(\frac{1}{\sqrt{n}}\right)$ type, which means that it decreases, roughly (since there are also terms proportional to $\frac{1}{n}$, $\frac{1}{{n}^{3/2}}$, etc.), with $\frac{1}{\sqrt{n}}$. Also note that the right hand side of (51) can be easily evaluated by calling a special function readily available (under various names) with most symbolic programming languages, for example “JacobiTheta4(0, exp(−2∙z^2))” of Maple or “EllipticTheta[4, 0, Exp[−2z^2]]” of Mathematica.
The last formula has several advantages over the approach of the previous two sections: firstly, it is easy and practically instantaneous to evaluate (the infinite series converges rather quickly: only between 2 and 10 terms are required to reach a sufficient accuracy when $0.3<z$, the CDF being practically zero otherwise); secondly, it is automatically a continuous function of z (no need to interpolate); and finally, it provides an approximate distribution of $\sqrt{n}{D}_{n}$ for all values of n (the larger the n, the better the approximation).
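For readers without a symbolic package, the series (52) is equally easy to sum directly; a Python sketch (function names ours):

```python
import math

def T0(z, terms=20):
    """Partial sum of (52); the terms decay like exp(-2 z^2 k^2)."""
    return sum((-1) ** (k - 1) * math.exp(-2.0 * z * z * k * k)
               for k in range(1, terms + 1))

def ks_cdf_asymptotic(z):
    """The large-n approximation (51): Pr(sqrt(n) * D_n <= z)."""
    return 1.0 - 2.0 * T0(z)
```

A quick sanity check: the classical Kolmogorov critical values 1.3581 and 1.6276 should sit at the 95th and 99th percentiles.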
But a big disappointment is the formula’s accuracy, becoming adequate only when the sample size n reaches thousands of observations; for smaller samples, an improvement is clearly necessary. To
demonstrate this, we have computed the difference between the exact and approximate CDF when $n=300$ ; see Figure 4, which is in agreement with a similar graph of [2].
We can see that the maximum possible error of the approximation is over 1.5% (when computing the probability of ${D}_{300}>0.046$ ); errors of this size are generally not considered acceptable.
5. High-Accuracy Solution
Results of this section were obtained (in a slightly different form, and building on previously published results) by [5] and further expounded by a more accessible [6]; their method is based on
expanding (in powers of $\frac{1}{\sqrt{n}}$ ) the matrix-algebra solution. Here we present an alternate approach, similarly expanding the generating-function solution instead; this appears an easier
way of deriving the individual $\frac{1}{\sqrt{n}}$ and $\frac{1}{n}$ -proportional corrections to (50). We should mention that the cited articles include the $\frac{1}{{n}^{3/2}}$ -proportional
correction as well; it would not be difficult to extend our results in the same manner, if deemed beneficial.
To improve accuracy of our previous asymptotic solution, (34) and, consequently, (38) have to be extended by extra $\frac{1}{\sqrt{n}}$ and $\frac{1}{n}$ -proportional terms (note that (35) and (36)
were already presented in this extended form), getting
$\begin{array}{l}\frac{{G}_{j}\left({\text{e}}^{-s/n}\right)}{\sqrt{n}}\\ \simeq {\int }_{0}^{\infty }\frac{exp\left(-\frac{{z}^{2}}{2x}-s\cdot x\right)\cdot \left(1+\frac{{z}^{3}-3z\cdot x}{6{x}^{2}\sqrt{n}}+\frac{{z}^{6}-12{z}^{4}x+27{z}^{2}{x}^{2}-6{x}^{3}}{72{x}^{4}n}+\cdots \right)}{\sqrt{2\pi \cdot x}}\text{d}x\\ =\frac{exp\left(-\sqrt{2{z}^{2}s}\right)}{\sqrt{2s}}\cdot \left(1+\frac{s\cdot z\mp \sqrt{2s}}{3\sqrt{n}}+\frac{{z}^{2}{s}^{2}-3\sqrt{2{z}^{2}{s}^{3}}+3s}{18n}+\cdots \right)\end{array}$(53)
where the $\mp$ sign corresponds to a positive (negative) $z\stackrel{\text{def}}{=}\frac{j}{\sqrt{n}}$, respectively. The corresponding tedious algebra is usually delegated to a computer (it is no longer feasible to show all the details here); the necessary integrals are found by differentiating each side of the equation in (38) with respect to ${z}^{2}$, from one up to four times.
The last expression represents an excellent approximation to the G functions of (44), with the exception of
${G}_{0}\left({\text{e}}^{-s/n}\right)\stackrel{\text{def}}{=}\underset{k=0}{\overset{\infty }{\sum }}\text{ }\text{ }{\mathfrak{p}}_{0}^{k}\cdot exp\left(-\frac{ks}{n}\right)$(54)
which now requires a different approach.
Claim 4.
$\frac{{G}_{0}\left({\text{e}}^{-s/n}\right)}{\sqrt{n}}\simeq \frac{1}{\sqrt{2s}}+\frac{1}{3\sqrt{n}}+\frac{\sqrt{2s}}{12n}+\cdots$(55)
Proof. The following elegant proof has been suggested by [7].
It is well known that the Lambert $W\left(z\right)$ function is defined as a solution to $w{\text{e}}^{w}=z$, and that its Taylor expansion is given by
$\underset{k=1}{\overset{\infty }{\sum }}\frac{{\left(-k\right)}^{k-1}}{k!}{z}^{k}$(56)
implying that
$\underset{k=0}{\overset{\infty }{\sum }}\frac{{k}^{k}}{k!}{\text{e}}^{-k\left(1+\lambda \right)}=1+\frac{\text{d}}{\text{d}\lambda }W\left(-{\text{e}}^{-\lambda -1}\right)$(57)
Differentiating
$w{\text{e}}^{w}=-{\text{e}}^{-\lambda -1}$(58)
with respect to $\lambda$, cancelling ${\text{e}}^{w}$, and solving for $\frac{\text{d}w}{\text{d}\lambda }$ yields
$\frac{\text{d}w}{\text{d}\lambda }=-\frac{w}{1+w}$(59)
implying that
$\underset{k=0}{\overset{\infty }{\sum }}\frac{{k}^{k}}{k!}{\text{e}}^{-k\left(1+\lambda \right)}=\frac{1}{1+w}\stackrel{\text{def}}{=}\frac{1}{u}$(60)
where u (being equal to $1+w$ ) is now the solution of
$\left(u-1\right){\text{e}}^{u}=-{\text{e}}^{-\lambda }$(61)
rather than (58). Solving the last equation for $\lambda$ and expanding the answer in powers of u results in
$\lambda =\frac{{u}^{2}}{2}+\frac{{u}^{3}}{3}+\frac{{u}^{4}}{4}+\cdots$(62)
Inverting the last power series (which can be easily done to any number of terms) yields the following expansion:
$u=\sqrt{2\lambda }-\frac{2\lambda }{3}+\frac{{\left(2\lambda \right)}^{3/2}}{36}+\cdots$(63)
Similarly expanding $\frac{1}{u}$, replacing $\lambda$ by $\frac{s}{n}$ and further dividing by $\sqrt{n}$ proves our claim.
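The chain of substitutions in this proof is easy to spot-check numerically; in the following Python sketch (ours) we sum the series in (60) directly, solve (61) by bisection, and compare the root with the expansion (63):

```python
import math

def series_sum(lam, terms=400):
    """Left-hand side of (60): sum over k >= 0 of k^k e^{-k(1+lam)} / k!."""
    total = 1.0                      # the k = 0 term
    for k in range(1, terms + 1):
        total += math.exp(k * math.log(k) - k * (1 + lam)
                          - math.lgamma(k + 1))
    return total

def u_root(lam):
    """Solve (u - 1) e^u = -e^{-lam} for the root u in (0, 1) by bisection."""
    target = -math.exp(-lam)
    lo, hi = 0.0, 1.0                # the left side increases from -1 to 0
    for _ in range(100):
        mid = (lo + hi) / 2
        if (mid - 1) * math.exp(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

At, say, $\lambda =0.1$ the series, the reciprocal of the exact root, and the three-term expansion (63) all agree closely.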
Having achieved more accurate approximation for all our G functions, and with the following extension of (46)
$\frac{{n}^{n}{\text{e}}^{-n}\sqrt{n}}{n!}\simeq \frac{1}{\sqrt{2\pi }}\cdot \left(1-\frac{1}{12n}+\cdots \right)$(64)
we can now complete the corresponding refinement of (45) by substituting all these expansions into (25), further divided by n. This results in
$\begin{array}{c}{L}_{D/n}\left(s\right)\simeq \sqrt{2\pi }\cdot \frac{2{E}_{+}}{1+{E}_{+}}\cdot \frac{1}{\sqrt{2s}}+\frac{\sqrt{2\pi }}{n}\cdot \frac{E}{6\left(1+E\right)}\cdot \frac{1}{\sqrt{2s}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{\sqrt{2\pi }}{n}\cdot \left(\frac{E}{9\left(1+E\right)}-\frac{E}{18\left(1-E\right)}\right)\sqrt{2s}-\frac{\sqrt{2\pi }}{n}\cdot \frac{z\cdot E}{9{\left(1+E\right)}^{2}}\cdot 2s+\cdots \end{array}$(65)
The last formula consists of two types of corrections: replacing E by
${E}_{+}\stackrel{\text{def}}{=}\mathrm{exp}\left(-2\left(z+\frac{1}{6\sqrt{n}}\right)\sqrt{2s}\right)$(66)
in its leading term removes the $\frac{1}{\sqrt{n}}$ -proportional error of (45); the remaining terms similarly represent the $\frac{1}{n}$ -proportional correction; the error of (65) is thus of the $O\left(\frac{1}{{n}^{3/2}}\right)$ type.
Note that
${E}_{+}\simeq E\left(1-\frac{\sqrt{2s}}{3\sqrt{n}}+\frac{s}{9n}-\cdots \right)$(67)
enables us to express (65) in terms of E only; this is needed for its explicit verification (something we leave to a computer).
What we must do now is to convert (65) to the corresponding ${q}_{D/n}\left(1\right)$, thus approximating the coefficient of ${t}^{n}$ in the expansion of (25). We already possess the answer for the
first two terms of (65), which are both identical to (45), except that ${\mathbb{T}}_{0}\left(z\right)$ needs to be replaced by ${\mathbb{T}}_{0}\left(z+\frac{1}{6\sqrt{n}}\right)$ in the first case,
and divided by 12 in the second one.
To convert the remaining terms of (65) to their ${q}_{D/n}\left(1\right)$ contribution, we must first expand them in powers of E, then take the ILT of individual terms of these expansions, and
finally set x equal to 1; the following table helps with the last two steps:
$\begin{array}{ll}\sqrt{\frac{\pi }{s}}\cdot {E}^{k}& \to exp\left(-2{k}^{2}{z}^{2}\right)\\ \sqrt{2\pi }\cdot {E}^{k}& \to 2kz\cdot exp\left(-2{k}^{2}{z}^{2}\right)\\ \sqrt{2\pi }\cdot \sqrt{2s}\cdot {E}^{k}& \to \left(4{k}^{2}{z}^{2}-1\right)\cdot exp\left(-2{k}^{2}{z}^{2}\right)\\ \sqrt{2\pi }\cdot 2s\cdot {E}^{k}& \to \left(8{k}^{3}{z}^{3}-6kz\right)\cdot exp\left(-2{k}^{2}{z}^{2}\right)\end{array}$(68)
(the first row has already been proven; the remaining three follow by differentiating both of its sides with respect to zk (taken as a single variable), up to three times).
This results in the following replacements
$\sqrt{2\pi }\cdot \frac{E}{1+E}\cdot \sqrt{2s}\to \underset{k=1}{\overset{\infty }{\sum }}{\left(-1\right)}^{k-1}{\text{e}}^{-2{z}^{2}{k}^{2}}\left(4{k}^{2}{z}^{2}-1\right)$
$\sqrt{2\pi }\cdot \frac{E}{1-E}\cdot \sqrt{2s}\to \underset{k=1}{\overset{\infty }{\sum }}\text{ }\text{ }{\text{e}}^{-2{z}^{2}{k}^{2}}\left(4{k}^{2}{z}^{2}-1\right)$(69)
$\sqrt{2\pi }\cdot \frac{zE}{{\left(1+E\right)}^{2}}\cdot 2s\to z\underset{k=1}{\overset{\infty }{\sum }}\left(\begin{array}{c}-2\\ k-1\end{array}\right){\text{e}}^{-2{z}^{2}{k}^{2}}\left(8{k}^{3}{z}^{3}-6kz\right)$
where all three series are still fast-converging. Note that the binomial coefficient of the last sum equals ${\left(-1\right)}^{k-1}k$.
We can then present our final answer for $\mathrm{Pr}\left(\sqrt{n}{D}_{n}>z\right)$ in the manner of the following Mathematica code; the resulting KS function can then compute (practically
instantaneously) this probability for any n and z.
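The Mathematica code is again missing from this extraction; as a hedged reconstruction (names ours), the following Python sketch assembles the approximation from the pieces derived above — the leading term with its shifted argument, the /12 term, and the two series of (69) — while omitting the ${n}^{-3/2}$ term (70):

```python
import math

def T0(z, terms=20):
    """The series (52)."""
    return sum((-1) ** (k - 1) * math.exp(-2 * z * z * k * k)
               for k in range(1, terms + 1))

def ks_survival(n, z, terms=20):
    """Approximate Pr(sqrt(n) * D_n > z) with an O(n^{-3/2}) error."""
    e = [math.exp(-2 * z * z * k * k) for k in range(terms + 1)]
    lead = 2 * T0(z + 1 / (6 * math.sqrt(n)), terms)  # shifted leading term
    corr = T0(z, terms) / (6 * n)                     # the /12-scaled second term
    s1 = sum((-1) ** (k - 1) * e[k] * (4 * k * k * z * z - 1)
             for k in range(1, terms + 1))
    s2 = sum(e[k] * (4 * k * k * z * z - 1) for k in range(1, terms + 1))
    s3 = sum((-1) ** (k - 1) * k * e[k] * (8 * k ** 3 * z ** 3 - 6 * k * z)
             for k in range(1, terms + 1))
    return lead + corr + (s1 / 9 - s2 / 18) / n - z * s3 / (9 * n)
```

In the $n\to \infty$ limit the corrections vanish and the function falls back to the asymptotic result (50).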
The resulting improvement in accuracy over the previous, asymptotic approximation is quite dramatic; Figure 5 again displays the difference between the exact and approximate CDF of ${D}_{300}$.
This time, the maximum error has been reduced to an impressive 0.0036%, this happens when computing $\mathrm{Pr}\left(0.027<{D}_{300}<0.0475\right)$ ; note that potential errors become substantially
smaller in the right hand tail (the critical part) of the distribution. Most importantly, when the same computation is repeated with $n=10$, the corresponding graph indicates that errors of the new approximation can never exceed 0.20%; such accuracy would normally be considered quite adequate (approximating Student’s ${t}_{30}$ by the Normal distribution can yield an error almost as large as 1%).
As mentioned already, the approximation of $Pr\left(\sqrt{n}{D}_{n}>z\right)$ can be made even more accurate by adding, to the current expansion, the following extra ${n}^{-3/2}$ -proportional term:
$\begin{array}{l}+\frac{z}{27{n}^{3/2}}\underset{k=1}{\overset{\infty }{\sum }}{\left(-1\right)}^{k-1}\mathrm{exp}\left(-2{z}^{2}{k}^{2}\right)\\ \text{ }×{k}^{2}\left\{\left({k}^{2}+\frac{107}{5}+3{\left(-1\right)}^{k}\right)\cdot \left(1-\frac{4}{3}{k}^{2}{z}^{2}\right)-\frac{78}{5}+16{k}^{4}{z}^{4}\right\}\end{array}$(70)
At $n=300$, this reduces the corresponding error by a factor of 4; nevertheless, from a practical point of view, such high accuracy is hardly ever required. Furthermore, the new term reduces the
maximum error of the $n=10$ result from the previous 0.17% only to 0.10%; even though this represents an indisputable improvement, it is achieved at the expense of increased complexity. Note that
adding higher ( $\frac{1}{{n}^{2}}$ -proportional, etc.) terms of the expansion would no longer (at $n=10$ ) improve its accuracy, since the expansion starts diverging (a phenomenon also observed
with, and effectively inherited from, the Stirling expansion); this happens quite early when n is small (and, when n is large, higher accuracy is no longer needed).
When simplicity, speed of computation, and reasonable accuracy are desired in a single formula, the next section presents a possible solution.
Final Simplification
We have already seen that the $\frac{1}{\sqrt{n}}$ -proportional error is removed by the following trivial modification of (50)
$Pr\left(\sqrt{n}{D}_{n}\le z\right)\simeq 1-2{\mathbb{T}}_{0}\left(z+\frac{1}{6\sqrt{n}}\right)$(71)
Note that this amounts only to a slight shift of the whole curve to the left, but leaves us with a full $O\left(\frac{1}{n}\right)$ -type error.
When willing to compromise, [8] has taken this one step further: it is possible to show that extending the argument of ${\mathbb{T}}_{0}$ to
$z+\frac{1}{6\sqrt{n}}+\frac{z-1}{4n}$(72)
yields results which are very close to achieving the full $\frac{1}{n}$ -proportional correction of (65) as well; this is a fortuitous empirical result which can be easily verified computationally (when $n=10$, the maximum error of the last approximation increases to 0.27%; for $n=300$ it goes up to 0.0096%, still practically negligible).
6. Conclusions and Summary
In this article, we hope to have met two goals:
• explaining, in every possible detail, the traditional derivations (two of them yielding exact results, several of them being approximate) of the ${D}_{n}$ distribution,
• proposing the following simple modification of the commonly used formula:
$Pr\left(\sqrt{n}{D}_{n}\le z\right)\simeq 1+2\underset{k=1}{\overset{\infty }{\sum }}{\left(-1\right)}^{k}exp\left(-2{\left(z+\frac{1}{6\sqrt{n}}+\frac{z-1}{4n}\right)}^{2}{k}^{2}\right)$(73)
making it accurate enough to be used as a practical substitute for exact results even with relatively small samples. Furthermore, the right hand side of this formula can be easily evaluated by
computer software (see the comment following (52)).
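In a general-purpose language, (73) is a one-liner; a Python sketch (function name ours):

```python
import math

def ks_cdf_simple(n, z, terms=20):
    """Formula (73): approximate Pr(sqrt(n) * D_n <= z)."""
    zs = z + 1 / (6 * math.sqrt(n)) + (z - 1) / (4 * n)
    return 1 + 2 * sum((-1) ** k * math.exp(-2 * zs * zs * k * k)
                       for k in range(1, terms + 1))
```

For very large n the shift disappears and the Kolmogorov limit is recovered.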
The following Mathematica function computes the exact $Pr\left({D}_{n}\le d\right)$ for any value of d; using it to produce a full graph of the corresponding CDF will work only for a sample size not
much bigger than 700, since the algorithm’s computational time increases exponentially with not only n, but also with increasing values of d.
Nevertheless, computing only a single value of this function (such as a P value of an observed ${D}_{n}$ ) becomes feasible even for a substantially bigger sample size; for example: typing KS[3000,
0.031467] results in 0.994855, taking about 13 seconds on an average computer. Increasing n any further would necessitate switching to one of the (at that point, extremely accurate) approximations of
our article.
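The Mathematica function itself is likewise missing from this extraction. As a stand-in — and it must be stressed that this is a different, though well-known, exact algorithm, not the article's generating-function code — here is a Python sketch of the Marsaglia–Tsang–Wang matrix method for the same quantity $Pr\left({D}_{n}\le d\right)$:

```python
import math

def ks_cdf_exact(n, d):
    """Exact Pr(D_n <= d) as (n!/n^n) * (H^n)[k-1][k-1], where H is the
    (2k-1) x (2k-1) matrix of the Marsaglia-Tsang-Wang method."""
    k = math.ceil(n * d)
    h = k - n * d                       # 0 <= h < 1
    m = 2 * k - 1
    H = [[1.0 / math.factorial(i - j + 1) if i - j + 1 >= 0 else 0.0
          for j in range(m)] for i in range(m)]
    for i in range(m):                  # adjust first column and last row
        H[i][0] -= h ** (i + 1) / math.factorial(i + 1)
        H[m - 1][i] -= h ** (m - i) / math.factorial(m - i)
    if 2 * h > 1:                       # the corner gets a third adjustment
        H[m - 1][0] += (2 * h - 1) ** m / math.factorial(m)
    # Multiply H into an identity matrix n times, rescaling as we go
    # (sequential products are O(n m^3) but simple; fine for moderate n).
    V = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    log_scale = 0.0
    for _ in range(n):
        V = [[sum(V[i][r] * H[r][j] for r in range(m)) for j in range(m)]
             for i in range(m)]
        c = V[k - 1][k - 1]
        log_scale += math.log(c)
        V = [[x / c for x in row] for row in V]
    return math.exp(math.lgamma(n + 1) - n * math.log(n) + log_scale)
```

Repeated squaring, as in the published method, would be the faster choice for the very large n discussed above.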
SADT & SAPT Testing Services | ASTM D 4809 | Franklin OH
SADT / SAPT Testing Process
Kinetica, Inc. specializes in Self-Accelerating Decomposition Temperature (SADT) and Self-Accelerating Polymerization Temperature (SAPT) testing.
Testing involves selected UN protocols (H.1 and H.4) described in the UN Recommendations on the Transport of Dangerous Goods, Model Regulations (Orange Source Book) Volumes 1 & 2 and tests developed
from calorimetric data measured with differential scanning calorimetry (DSC) and adiabatic calorimetry (AdC).
UN sanctioned tests and applicability consist of the following:
H.1 — United States SADT, package transport
H.2 — Adiabatic storage test (AST), packages, IBCs and tanks
H.3 — Isothermal storage test (IST), packages, IBCs and tanks
H.4 — Heat accumulation storage test, packages, IBCs and small tanks
The H.1 test is applicable to packages with a maximum volume of 225 liters; H.4 is a Dewar flask test that correlates with 25 kg packages.
In-house testing based on DSC and AdC methods involve computation of the temperature of no return (T[NR]) and the SADT and SAPT from the criticality parameters for convective and conductive heat
transfer given in equations 1-2,
where ΔH is reaction enthalpy, r is radius, ρ is density, C is concentration, A and E are Arrhenius parameters, U is the surface heat-transfer coefficient, S is surface area, e is the base of the
natural logarithm, T is temperature, λ is thermal conductivity, R is the ideal gas constant, and δ is the shape factor. Thermokinetic parameters may be evaluated from appropriate calorimetric data
and employed in equations 1 and 2 to determine the T[NR].
It follows that SADT and/or SAPT are determined from equation 3.
Geographic coordinates of Ivanteyevka, Moscow Oblast, Russia
Latitude: 55°58′15″ N
Longitude: 37°55′14″ E
Elevation above sea level: 140 m = 459 ft
Coordinates of Ivanteyevka in decimal degrees
Latitude: 55.9711100°
Longitude: 37.9208300°
Coordinates of Ivanteyevka in degrees and decimal minutes
Latitude: 55°58.2666′ N
Longitude: 37°55.2498′ E
UTM coordinates of Ivanteyevka
UTM Zone: 37U
Easting: 432645.81205767
Northing: 6203389.9293824
Geographic coordinate systems
WGS 84 coordinate reference system is the latest revision of the World Geodetic System, which is used in mapping and navigation, including the GPS satellite navigation system (the Global Positioning System).
Geographic coordinates (latitude and longitude) define a position on the Earth’s surface. Coordinates are angular units. The canonical form of latitude and longitude representation uses degrees (°),
minutes (′), and seconds (″). GPS systems widely use coordinates in degrees and decimal minutes, or in decimal degrees.
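Splitting decimal degrees into the canonical degrees–minutes–seconds form is a short routine in any language; a Python sketch (function name ours):

```python
def dd_to_dms(dd):
    """Split decimal degrees into degrees, minutes, and decimal seconds."""
    sign = -1 if dd < 0 else 1
    dd = abs(dd)
    degrees = int(dd)
    minutes_full = (dd - degrees) * 60   # degrees-and-decimal-minutes form
    minutes = int(minutes_full)
    seconds = (minutes_full - minutes) * 60
    return sign * degrees, minutes, seconds
```

Applied to the latitude above, 55.9711100° yields 55° and 58.2666′, matching the degrees-and-decimal-minutes figures listed for Ivanteyevka.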
Latitude varies from −90° to 90°. The latitude of the Equator is 0°; the latitude of the South Pole is −90°; the latitude of the North Pole is 90°. Positive latitude values correspond to the
geographic locations north of the Equator (abbrev. N). Negative latitude values correspond to the geographic locations south of the Equator (abbrev. S).
Longitude is counted from the prime meridian (IERS Reference Meridian for WGS 84) and varies from −180° to 180°. Positive longitude values correspond to the geographic locations east of the prime
meridian (abbrev. E). Negative longitude values correspond to the geographic locations west of the prime meridian (abbrev. W).
UTM or Universal Transverse Mercator coordinate system divides the Earth’s surface into 60 longitudinal zones. The coordinates of a location within each zone are defined as a planar coordinate pair
related to the intersection of the equator and the zone’s central meridian, and measured in meters.
Elevation above sea level is a measure of a geographic location’s height. We are using the global digital elevation model GTOPO30.
Leetcode: 202. Happy Number
Determining Happy Numbers in C++
Introduction: In this blog post, we'll explore a C++ solution to determine whether a given number is happy or not. A happy number is a positive integer that, when repeatedly replaced by the sum of
the squares of its digits, eventually reaches 1. Understanding how to efficiently check for happy numbers can be useful in various scenarios, such as in number theory or programming challenges
involving mathematical properties.
Problem Description: The problem can be defined as follows: Given a positive integer n, we need to determine if it is a happy number. A happy number is one where the sum of the squares of its digits,
when calculated repeatedly, eventually leads to 1. If the process results in a cycle that doesn't reach 1, the number is not happy.
Approach: To solve this problem, we will implement two functions: getNext and isHappy. The getNext function calculates the sum of the squares of the digits of a given number, and the isHappy function
uses the getNext function to determine if the number is happy.
#include <unordered_set>

class Solution {
public:
    int getNext(int n) {
        int totalSum = 0;
        while (n > 0) {
            int d = n % 10;   // extract the last digit
            n = n / 10;
            totalSum += d * d;
        }
        return totalSum;
    }

    bool isHappy(int n) {
        std::unordered_set<int> seen;
        while (n != 1 && seen.find(n) == seen.end()) {
            seen.insert(n);   // remember n so a repeat signals a cycle
            n = getNext(n);
        }
        return n == 1;
    }
};
Explanation: Let's break down the implementation step by step:
1. We define a helper function called getNext that calculates the sum of the squares of the digits of a given number n.
2. The while loop inside getNext runs until n becomes 0, extracting each digit and adding its square to totalSum.
3. The totalSum represents the sum of the squares of the digits of the original number.
4. The function returns totalSum, which is used in the isHappy function.
5. In the isHappy function, we use an unordered_set to keep track of the numbers we've seen so far during the calculation process.
6. The while loop in isHappy runs until the number becomes 1 (a happy number) or we detect a cycle (a non-happy number).
7. Within the loop, we insert the current number into the seen set and update the number to the next one using the getNext function.
8. If the loop terminates with n == 1, the function returns true, indicating that the number is happy.
9. If the loop terminates with a number that has been seen before (cycle detected), the function returns false, indicating that the number is not happy.
Example: Let's illustrate the implementation with an example. Consider the input number n = 19:
1. Initially, we start with n = 19.
2. getNext(19) calculates the sum of the squares of its digits: 1^2 + 9^2 = 82.
3. Now, we update n to 82.
4. getNext(82) calculates the sum of the squares of its digits: 8^2 + 2^2 = 68.
5. Now, we update n to 68.
6. getNext(68) calculates the sum of the squares of its digits: 6^2 + 8^2 = 100.
7. Now, we update n to 100.
8. getNext(100) calculates the sum of the squares of its digits: 1^2 + 0^2 + 0^2 = 1.
9. The loop terminates as n becomes 1, and the function returns true, indicating that the number 19 is a happy number.
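The same walk is easy to reproduce in a few lines of Python (a sketch mirroring the C++ above, names ours):

```python
def get_next(n):
    """Sum of the squares of the digits of n."""
    total = 0
    while n > 0:
        n, d = divmod(n, 10)   # strip the last digit
        total += d * d
    return total

def is_happy(n):
    """Happy iff the iteration reaches 1; a repeated value means a cycle."""
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = get_next(n)
    return n == 1
```

Running it on 19 reproduces the chain 19 → 82 → 68 → 100 → 1 shown above.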
Conclusion: We have successfully implemented a C++ solution to determine whether a given number is a happy number or not. The provided functions, getNext and isHappy, efficiently calculate the sum of the squares of the digits and detect cycles using an unordered_set, making this an optimal solution. Understanding this cycle-detection technique can be useful in various programming tasks that involve mathematical properties and cyclic patterns. Whether you're working on number theory or solving programming challenges, this knowledge can prove invaluable. Happy coding!
Texas Hold'em Math Test - Texas Hold‘em Poker
To succeed at the poker table, you need to have logical thinking and problem-solving abilities.
Among all the skills, perhaps the most important is your mathematical ability.
However, even those who are not very good at math can still grasp some fundamental poker math principles.
In today's content, we will present 10 questions to test your understanding of the core mathematical principles of the game.
After completing the test, you can compare your final score to see which famous poker player you match up with. Different score ranges will correspond to different professional players.
♠ Question 1 ♠
The pot is $20. Suppose your opponent bets $10 on the flop. What is the cost of your call (expressed as a percentage)?
A. 20%
B. 25%
C. 33%
D. 50%
Answer: B, 25%. You need to invest $10 to compete for $30. The odds are 3:1, which is 25%. This means your winning percentage needs to be at least 25% for a call to break even.
♠ Question 2 ♠
The board is J♠8♠7♥3♥, and your hole cards are A♠9♠. What is the probability of hitting a straight or flush on the river?
A. 26%
B. 33%
C. 41%
Answer: A, 26%. A9s has 12 outs (9 flush outs + 3 straight outs). Therefore, the probability of hitting your draw on the river is 26%. Using the Rule of 2 and 4, it's 12*2=24%, which is close to the
actual probability.
♠ Question 3 ♠
On the river, suppose your opponent bets the amount of the entire pot. What win percentage do you need to call to break even?
A. 25%
B. 33%
C. 50%
D. 75%
Answer: B, 33%. If your opponent bets the size of the pot, the breakeven point is calculated as: Investment ÷ (Investment + Return) = 1 ÷ (2 + 1) = 1/3, which is approximately 33%.
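Questions 1 and 3 use the same break-even formula with different numbers; a quick sketch (the helper name is illustrative, not from the article):

```python
def call_breakeven(pot, bet):
    """Minimum win rate for a break-even call.

    You risk `bet` to win the pot plus the opponent's bet, so the final
    pot is pot + 2*bet and the break-even point is bet / (pot + 2*bet).
    """
    return bet / (pot + 2 * bet)

print(call_breakeven(20, 10))               # 0.25 (Question 1)
print(round(call_breakeven(100, 100), 2))   # 0.33 (Question 3: pot-sized bet)
```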
♠ Question 4 ♠
Suppose you have a pocket pair. What is the approximate probability of hitting a set or better on the flop?
A. 9%
B. 12%
C. 15%
D. 17%
Answer: B, 12%. More precisely, the probability of a pocket pair hitting a set or better on the flop is 11.8%.
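The 11.8% figure can be verified exactly with combinatorics: the flop misses both remaining cards of your rank with probability C(48,3)/C(50,3).

```python
from math import comb

# Two cards of our pair's rank remain among the 50 unseen cards; the flop
# gives a set (or better) unless all three flop cards avoid both of them.
p_set_or_better = 1 - comb(48, 3) / comb(50, 3)
print(round(p_set_or_better * 100, 1))  # 11.8
```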
♠ Question 5 ♠
On the river, you bluff by betting the amount equivalent to the pot. What is the success rate needed for your bluff to break even?
A. 33%
B. 50%
C. 75%
D. 100%
Answer: B, 50%. Because it's a bluff, your win rate when called is 0. The breakeven point for the bluff is calculated as: (Pot before the bet) ÷ (Bet amount + Pot before the bet) = 1/2 = 50%.
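For a bluff the roles flip: you risk your bet to win the current pot. A sketch (illustrative helper name):

```python
def bluff_breakeven(pot, bet):
    """Fold frequency needed for a zero-EV bluff: bet / (bet + pot)."""
    return bet / (bet + pot)

print(bluff_breakeven(pot=100, bet=100))  # 0.5: a pot-sized bluff must work 50%
```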
♠ Question 6 ♠
Pre-flop, the button opens with a raise, and only the BB calls. Which player is more likely to realize their equity in this situation?
A. Button
B. BB
Answer: A, Button. “Equity realization” refers to the probability that a hand will win the pot compared to its raw equity percentage.
For example, a hand's raw equity might be 45%, but due to other factors (positive or negative), the long-term realization of its equity in the pot might be higher or lower than 45%.
In this example, the pre-flop raiser on the button has a better chance of realizing their equity in the pot than the BB caller. The button's advantages in this situation include:
1. A stronger and uncapped range
2. Position: acting last on every postflop street, which makes equity easier to realize
♠ Question 7 ♠
In which of the following situations is your implied odds the highest?
A. Four-way pot, flop is K-Q-7 rainbow, and you hold JT with an open-ended straight draw.
B. Four-way pot, flop is 2-8-3 all of the same suit, and you have the third nut flush draw.
C. Four-way pot, flop is 6-7-9 rainbow, and you hold 45 with an open-ended straight draw.
Answer: A. Among all the options, holding JT on a K-Q-7 rainbow flop with an open-ended straight draw has the highest implied odds. You have 8 outs (all 9s and As), and if you hit, you make the nut
straight. In options B and C, opponents might have stronger flushes in B or stronger straights in C, and even if you hit, it's harder to get paid by weaker hands.
♠ Question 8 ♠
Pre-flop, you open-raise with TT from the button, and the BB 3-bets. Assuming your opponent 3-bets with hands JJ+ only, how many combinations of these hands are there?
A. 4 combinations
B. 16 combinations
C. 24 combinations
D. 32 combinations
Answer: C, 24 combinations. Generally, there are 16 combinations for non-pair starting hands (12 offsuit + 4 suited) and 6 combinations for pocket pairs. Since JJ+ includes JJ, QQ, KK, and AA, that's
4 pocket pairs × 6 combinations each = 24 combinations.
♠ Question 9 ♠
Suppose you call with a flush draw on the turn. What is the approximate probability of hitting your flush on the river?
A. 12%
B. 15%
C. 20%
D. 25%
Answer: C, 20%. With a flush draw, you have 9 outs. On the turn, using the Rule of 2, you calculate 9 outs × 2 = 18%, so the probability of hitting your flush is approximately 20%.
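The Rule of 2 and 4 used in several answers here can be written out directly, with the exact figure shown for comparison (the helper is illustrative):

```python
def rule_of_2_and_4(outs, cards_to_come):
    """Rough equity in percent: outs x 4 with two cards to come, outs x 2 with one."""
    return outs * (4 if cards_to_come == 2 else 2)

print(rule_of_2_and_4(9, 1))      # 18 (flush draw on the turn)
print(round(9 / 46 * 100, 1))     # 19.6 -- the exact river probability
```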
♠ Question 10 ♠
The pot is $100. Suppose your opponent bets $100 on the river. You estimate you have the best hand 40% of the time when you call. What is the EV (expected value) of your call?
A. -$20
B. $20
C. $80
D. $100
Answer: B, $20. There are two outcomes when you call: ① Winning $200 with a 40% probability; ② Losing $100 with a 60% probability.
So, EV = 0.4 × 200 + 0.6 × -100 = 80 – 60 = $20.
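The expected-value arithmetic generalizes to any pot, bet, and win probability (a sketch with illustrative names):

```python
def call_ev(pot, bet, win_prob):
    """EV of calling `bet`: win the pot plus the opponent's bet, or lose your call."""
    return win_prob * (pot + bet) - (1 - win_prob) * bet

print(call_ev(pot=100, bet=100, win_prob=0.4))  # 20.0
```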
Score Assessment
01 Answered 1-2 questions correctly
Matched Player: Guy Laliberté. This businessman lost over $31 million online between 2006-2012 (less than 3% of his net worth), playing under seven or eight different accounts. The players who won
this money include Phil Ivey, Tom Dwan, the Dang brothers, Phil Galfond, and Patrik Antonius. So, if your score matches this businessman, you still have some work to do.
02 Answered 3-5 questions correctly
Matched Player: Mike Matusow. Nicknamed “The Mouth,” Mike had a significant impact in the early part of his career, but in recent years… it seems like he has burned out.
03 Answered 6-8 questions correctly
Matched Player: Patrik Antonius. The Finnish poker legend had his golden years from 2003-2010.
04 Answered 9-10 questions correctly
Matched Player: Phil Ivey. Does this man even need an introduction?!
KSEEB Solutions for Class 6 Maths Chapter 10 Mensuration Ex 10.2
Students can Download Chapter 10 Mensuration Ex 10.2 Questions and Answers, Notes Pdf, KSEEB Solutions for Class 6 Maths helps you to revise the complete Karnataka State Board Syllabus and score more
marks in your examinations.
Karnataka State Syllabus Class 6 Maths Chapter 10 Mensuration Ex 10.2
Question 1.
Find the areas of the following figures by counting square
(a) The figure contains 9 fully filled squares only. Therefore, the area of this figure will be 9 square units.
(b) The figure contains 5 fully filled squares only. Therefore, the area of this figure will be 5 square units.
(c) The figure contains 2 fully filled squares and 4 half-filled squares. Therefore, the area of this
figure will be 4 square units.
(d) The figure contains 8 fully filled squares only. Therefore, the area of this figure will be 8 square units.
(e) The figure contains 10 fully filled squares only. Therefore, the area of this figure will be 10 square units.
(f) The figure contains 2 fully filled squares and 4 half-filled squares. Therefore, the area of this figure will be 4 square units.
(g) The figure contains 4 fully filled squares and 4 half-filled squares. Therefore, the area of this figure will be 6 square units.
(h) The figure contains 5 fully filled squares only. Therefore, the area of this figure will be 5 square units.
(i) The figure contains 9 fully filled squares only. Therefore, the area of this figure will be 9 square units.
(j) The figure contains 2 fully filled squares and 4 half-filled squares. Therefore, the area of this figure will be 4 square units.
(k) The figure contains 4 fully filled squares and 2 half-filled squares. Therefore, the area of this figure will be 5 square units.
(l) From the given figure, it can be observed that,
Total area = 2 + 6 = 8 square units
(m) From the given figure, it can be observed that,
Total area = 5 + 9 = 14 square units
(n) From the given figure, it can be observed that,
Total area = 8 + 10 = 18 square units
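Every part above uses the same counting rule: a fully filled square contributes one square unit and a half-filled square contributes half. As a quick sketch (a hypothetical helper, not part of the KSEEB solutions):

```python
def area_by_counting(full_squares, half_squares):
    """Area in square units: full squares count 1, half-filled squares count 1/2."""
    return full_squares + half_squares / 2

print(area_by_counting(2, 4))  # 4.0, matching figure (c): 2 full + 4 half
```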
• noun
A statement that expresses the mathematical relationship between two or more variables.
- "The equation y = mx + b represents the line of best fit for a set of data points."
- "Solve the equation 2x + 3 = 5 for x."
1. /ɪˈkweɪʃən/
Source: "https://commons.wikimedia.org/w/index.php?curid=50497461"
List of all variants of equation that lead to the same result
equation of time
equations of time
characteristic equation
characteristic equations
cubic equation
cubic equations
differential equation
differential equations
Diophantine equation
Diophantine equations
linear equation
linear equations
parametric equation
parametric equations
partial differential equation
partial differential equations
personal equation
personal equations
Etymology: the origin of the word and the way in which its meanings have changed throughout history.
The term 'equation' comes from the Latin word 'aequatio' meaning 'making equal'.
Any details, considerations, events or pieces of information regarding the word
1. The Riemann Hypothesis, one of the most famous open problems in mathematics, can be stated as an equation about the zeros of the Riemann zeta function; it has been neither proven nor disproven.
2. The equation E = mc², which relates energy (E) to mass (m) and the speed of light (c), is one of the most famous equations in physics.
Related Concepts
Information on related concepts or terms closely associated with the word. Discuss semantic fields or domains that the word belongs to.
1. Algebra: Equations are a fundamental concept in algebra, which deals with the study of symbols and their relationships.
2. Calculus: Equations are also used extensively in calculus, which deals with the study of rates of change and the optimization of functions.
Any cultural, historical, or symbolic significance of the word. Explore how the word has been used in literature, art, music, or other forms of expression.
Equations have been used throughout history to describe various scientific and mathematical concepts, from simple arithmetic equations to complex mathematical formulas. They have been used to
describe the laws of physics, solve problems in engineering, and model real-world phenomena.
How to Memorize "equation"
1. visualize
- To visualize an equation, imagine the variables as objects that can be manipulated to maintain equality.
- For example, to visualize the equation x + 3 = 5, imagine adding 3 to x until x = 2, then subtracting 3 from 5 to get the other side equal to x.
2. associate
- To memorize an equation, associate it with a mnemonic or a memorable phrase.
- For example, to remember the quadratic formula x = (-b ± √(b² - 4ac)) / 2a, one could use the mnemonic 'Buns Are Good, Apples Cost Extra' to remember the order of the terms.
3. mnemonics
- Create a mnemonic to help remember the order of operations in PEMDAS (Parentheses, Exponents, Multiplication and Division, Addition and Subtraction): 'Please Excuse My Dear Aunt Sally'.
Memorize "equation" using Dictozo
The best and recommended way to memorize equation is, by using Dictozo. Just save the word in Dictozo extension and let the app handle the rest. It enhances the memorization process in two ways:
1. Highlighting:
Whenever users encounters the saved word on a webpage, Dictozo highlights it, drawing the user's attention and reinforcing memorization.
2. Periodic Reminders:
Dictozo will send you periodic reminders to remind you the saved word, it will ask you quiz. These reminders could be in the form of notifications or emails, prompting users to recall and
reinforce their knowledge.
Finding all shortest paths between given (specific) pair of vertices
I am working with graphs in sage and need a method of finding all shortest paths between some pair (or all pairs) of vertices.
Note that it is important to have all shortest paths registred, not just one, as seen in many Bellman-Ford/Dijkstra implementations (for instance Graph.shortest_path_all_pairs or
networkx.algorithms.shortest_paths.all_pairs_shortest_path), and not just a number of those paths.
I am also satisfied with only a list of "optimal" predecessors... as long as the list is complete.
Thank you for answers!
3 Answers
Sort by » oldest newest most voted
You can easily find the list of all predecessors : just call the "distance_all_pairs" method, and you can then get all predecessors of x in a shortest u-v path as the list of all neighbors p of x
such that d(u,p)+1+d(x,v)=d(u,v).
That is a good approach. Little more work on my side, but it gets the job done.
Morgoth ( 2013-07-09 11:19:48 +0100 )edit
Would you show a minimum working example so that the answer implementation is clear(er)?
rickhg12hs ( 2013-07-12 02:31:15 +0100 )edit
I do not know how to paste the code here, therefore I wrote an answer below.
Morgoth ( 2013-07-15 09:57:05 +0100 )edit
There seem not to be an existing method in Sage for doing that out of the box, so you should write your own algorithm. If you are lazy, you can still use the following slow brute force method,
depending on the size of your graph:
sage: G = graphs.GrotzschGraph()
sage: allp = G.all_paths(1,4)
sage: m = min(len(p) for p in allp) ; m
sage: [p for p in allp if len(p) == m]
[[1, 0, 4], [1, 10, 4]]
Thank you, but this is too much brute force for me. Anyway, it is good to know all_paths function.
Morgoth ( 2013-07-09 11:20:39 +0100 )edit
The method suggested by Nathann is the following:

all_dist = G.distance_all_pairs()
predec_list = {}
for v in G.vertices():
    for u in G.vertices():
        predec_list[u, v] = [i for i in G.neighbors(v) if all_dist[u][i] == all_dist[u][v] - 1]

Given such a list (p_l), it is easy to recursively construct a list of all shortest paths from vertex a to vertex b, as follows:

def s_p_d(p_l, a, b):
    if p_l[a, b] == [a]:
        return [[a, b]]
    r = []
    for i in p_l[a, b]:
        for j in s_p_d(p_l, a, i):
            r.append(j + [b])
    return r
rickhg12hs: here is my code snippet. This works for me :) Make sure to have networkx imported.
def list_of_predecessors(G):
    all_dist = G.distance_all_pairs()
    predec_list = {}
    for v in G.vertices():
        for u in G.vertices():
            predec_list[u, v] = [i for i in G.neighbors(v) if all_dist[u][i] == all_dist[u][v] - 1]
    return predec_list

def shortest_path_d(G):
    a_p = {}
    p_l = list_of_predecessors(G)
    for u in G.vertices():
        for v in G.vertices():
            if u != v:
                a_p[u, v] = s_p_d(p_l, u, v)
    return a_p

def s_p_d(p_l, a, b):
    if p_l[a, b] == [a]:
        return [[a, b]]
    r = []
    for i in p_l[a, b]:
        for j in s_p_d(p_l, a, i):
            r.append(j + [b])
    return r
Sorry, I'm a graph theory noob. Where does all_dist come from? [The above snippet crashes.]
rickhg12hs ( 2013-07-16 00:27:43 +0100 )edit
Hello! The all_dist dictionary comes from "distance_all_pairs" method, mentioned by Nathann. If you add something like "all_dist = G.distance_all_pairs()" in the beginning, it should work. If you
wish, I can publish full code here, but I would rather not, since I am not really sage master myself :)
Morgoth ( 2013-07-26 07:50:30 +0100 )edit
Defining all_dist as above isn't quite enough to make the code work (still crashes). Wish @Nathann would post a complete minimum working example (MWE). Perhaps @tmonteil would be kind enough to
provide a MWE for the method of @Nathann.
rickhg12hs ( 2013-07-26 16:22:56 +0100 )edit
there you go :)
Morgoth ( 2013-08-14 17:33:03 +0100 )edit
Took me a while to see that this produces the shortest path for all pairs. Nice!
rickhg12hs ( 2013-08-21 20:07:08 +0100 )edit
Worksheets for 7th Class
8th Grade 1.8 Square Roots and Cube roots
Square Roots & Cube Roots
Powers, Exponents, and Roots
Perfect Squares & Square Roots
Estimating/Simplifying Square Roots
Prefixes Suffixes and Roots
Greek & Latin Roots: spec
Exponents and Square roots
Explore Roots Worksheets by Grades
Explore Other Subject Worksheets for class 7
Explore printable Roots worksheets for 7th Class
Roots worksheets for Class 7 are an essential tool for teachers looking to enhance their students' understanding of math and number sense. These worksheets provide a variety of exercises and problems
that challenge students to apply their knowledge of roots, exponents, and other mathematical concepts. By incorporating these worksheets into their lesson plans, teachers can ensure that their Class
7 students develop a strong foundation in math, which will serve them well as they progress through their academic careers. Additionally, these roots worksheets for Class 7 are designed to be
engaging and interactive, making it easier for students to grasp complex mathematical concepts and retain the information they've learned.
Quizizz is an excellent platform for teachers to access a wide range of resources, including roots worksheets for Class 7, math games, and other interactive learning tools. This platform allows
teachers to create customized quizzes and assignments that align with their curriculum and cater to their students' individual needs. By utilizing Quizizz in conjunction with roots worksheets for
Class 7, teachers can monitor their students' progress and identify areas where additional support may be needed. Furthermore, Quizizz offers a variety of features that make learning math and number
sense more enjoyable and engaging for Class 7 students, such as gamification elements and real-time feedback. By incorporating Quizizz into their lesson plans, teachers can ensure that their Class 7
students develop a strong foundation in math and number sense, setting them up for success in their future academic endeavors.
Basic Math: How to Perform PEMDAS Correctly - Engineer Dee's Blog
There are lots of math problems going around Facebook that can essentially be answered by the concept of PEMDAS. This should be easy for engineers who experienced a considerable amount of math in
college; but some may have banged a wall that caused amnesia and totally forgot about this elementary math concept.
There is only one definite answer to every arithmetic problem that requires the concept called order of operations.
Others are already familiar with PEMDAS, or Parentheses, Exponents, Multiplication, Division, Addition, and Subtraction, which indicates the sequence. Others even use the mnemonic “Please Excuse My
Dear Aunt Sally,” but they may be missing something: it’s not like reading English that is always done from left to right. The arithmetic of order of operations has special rules.
Solving such problems start by performing first the operations inside the parentheses or the brackets. This is followed by exponents. Then, the multiplication AND/OR division. Finally, the addition
AND/OR subtraction.
What is usually missed is the rule of AND/OR. Once the exponents are addressed, the next thing to do is to assess whichever between multiplication and division comes first, then that is to be
performed. This is also true to addition and subtraction.
So the concept of PEMDAS is not actually absolute. It can either be PEDMAS, PEDMSA, or PEMDSA.
To explain further, let’s take for example the thumbnail photo of this article. Or we can use the photo below with different calculators used.
The equation is: 6÷2(1+2), but it showed two different answers, 1 and 9. Which one is correct?
Following the PEMDAS rule, the answer is 9.
First the equation inside the parenthesis is performed, leaving us with:
= 6÷2(3)
But this is where it gets tricky. Other may interpret that the 2(3) is a parenthetical operation so it should be done first, but actually it is a multiplication. Since according to the PEMDAS rule,
division and multiplication have the same precedence, the correct order is to evaluate from left to right.
= 6÷2×3
= 3×3
= 9
Now a question is raised, “Why would the other calculator show that the answer is 1?”
One Quora user from 2012 (yes, this PEMDAS problem has plagued that community that long) said that, “The calculator on the left is interpreting everything after the division sign as a group.
[Handwritten], it would be 6/(2*(2+1)).” In this interpretation, this will yield the answer to be 1.
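Python applies the same left-to-right rule for operators of equal precedence, so both readings are easy to compare:

```python
print(6 / 2 * (1 + 2))    # 9.0 -- division and multiplication, left to right
print(6 / (2 * (1 + 2)))  # 1.0 -- the "implicit grouping" reading
```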
Source: Quora
Disclaimer: This is rehashed from an article I wrote in June 2016 for GineersNow. Some parts are added or edited for clarity.
Open Journal of Discrete Mathematics
Vol.3 No.1(2013), Article ID:27368,3 pages DOI:10.4236/ojdm.2013.31004
Accelerated Series for Riemann Zeta Function at Odd Integer Arguments
^1VTT Technical Research Centre of Finland, Espoo, Finland
^2Department of Applied Physics, University of Eastern Finland, Kuopio, Finland
Email: juuso.olkkonen@vtt.fi, hannu.olkkonen@uef.fi
Received August 29, 2012; revised September 29, 2012; accepted October 12, 2012
Keywords: Riemann Zeta Function; Converging Series; Number Theory; Cryptography; Signal Processing; Compressive Sensing
Riemann zeta function is an important tool in signal analysis and number theory. Applications of the zeta function include e.g. the generation of irrational and prime numbers. In this work we present
a new accelerated series for Riemann zeta function. As an application we describe the recursive algorithm for computation of the zeta function at odd integer arguments.
1. Introduction
The Riemann zeta function
has a central role in number theory and appears in many areas of science and technology [1]. The Riemann zeta function is closely related to the prime numbers via (1) and it is an important tool in
cryptography. Algorithms for evaluating
In this work we describe a new accelerated series for the Riemann zeta function at integer arguments. The main result is stated in Theorem 1.
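For reference, the standard definition of the Riemann zeta function, which connects it to the primes via the Euler product (this is the textbook definition, valid for Re(s) > 1, not the article's own displayed equation):

```latex
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}}
\;=\; \prod_{p\ \mathrm{prime}} \left(1 - p^{-s}\right)^{-1},
\qquad \operatorname{Re}(s) > 1 .
```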
Theorem 1: Let us suppose that
In Section 2 we give the proof of Theorem 1. In Section 3 we present derivatives of Theorem 1 and describe the method for accelerating the zeta function series given by Theorem 1. In Section 4 we
describe the recursive algorithm for evaluation of the Riemann zeta function at integer arguments.
2. Proof of Theorem 1
We may deduce
The series (2) converges very slowly. However, we may write
which has an accelerated convergence. The proof is now completed.
3. Derivatives of Theorem 1
Lemma 1: For
Proof: Similar as Theorem 1.
Lemma 2: For
Proof: Follows directly from Lemma 1 by elimination of the first term in series (4).
Lemma 3:
Proof: Similar as Theorem 1.
Lemma 4:
Proof: Follows directly Lemma 3.
Lemmas 3 and 4 can be generalised as Lemma 5: For
Lemma 6: For
The last series (Lemma 6) can be further generalized as Lemma 7: For
The series can be further accelerated by noting that
which gives Lemma 8: For
4. Recursive Algorithm
From Lemma 8 we may deduce Lemma 9: For
The last series can be computed if
Both series in (13) have accelerated convergence and to obtain the required accuracy only a few previously computed
5. Discussion
In this work we present a new accelerated series for Riemann zeta function. The key observation is presented in Theorem 1. The infinite summation of the zeta functions weighted by
The zeta function values at odd integers are generally believed to be irrational, though a proof is known only for ζ(3) [6,7].
Recently a close connection with the log-time sampled signals and the zeta function has been observed [8]. The zeta transform allows the analysis and synthesis of the log-time sampled signals for
example in compressive sensing applications.
6. Acknowledgements
This work was supported by the National Technology Agency of Finland (TEKES). The authors would like to thank the anonymous reviewers for their valuable comments.
1. J. M. Borwein, D. M. Bradley and R. E. Crandall, “Computational Strategies for the Riemann Zeta Function,” Journal of Computational and Applied Mathematics, Vol. 121, No. 1-2, 2000, pp. 247-296.
2. E. Grosswald, “Remarks Concerning the Values of the Riemann Zeta Function at Integral, Odd Arguments,” Journal of Number Theory, Vol. 4, No. 3, 1972, pp. 225- 235. doi:10.1016/0022-314X(72)
3. D. Cvijović and J. Klinowski, “Integral Representations of the Riemann Zeta Function for Odd-Integer Arguments,” Journal of Computational and Applied Mathematics, Vol. 142, No. 2, 2002, pp.
435-439. doi:10.1016/S0377-0427(02)00358-8
4. T. Ito, “On an Integral Representation of Special Values of the Zeta Function at Odd Integers,” Journal of Mathematical Society of Japan, Vol. 58, No. 3, 2006, pp. 681- 691. doi:10.2969/jmsj/
5. H. Olkkonen and J. T. Olkkonen, “Fast Converging Series for Riemann Zeta Function,” Open Journal of Discrete Mathematics, Vol. 2, No. 4, 2012, pp. 131-133. doi:10.4236/ojdm.2012.24025
6. R. Apéry, “Irrationalité de ζ(2) et ζ(3),” Astérisque, Vol. 61, 1979, pp. 11-13.
7. F. Beukers, “A Note on the Irrationality of ζ(2) and ζ(3),” Bulletin London Mathematical Society, Vol. 11, No. 3, 1979, pp. 268-272. doi:10.1112/blms/11.3.268
8. H. Olkkonen and J. T. Olkkonen, “Log-Time Sampling of Signals: Zeta Transform,” Open Journal of Discrete Mathematics, Vol. 1, No. 2, 2011, pp. 62-65. doi:10.4236/ojdm.2011.12008
Steady state of a two-species annihilation process with separated reactants
We describe the steady state of the annihilation process of a one-dimensional system of two initially separated reactants A and B. The parameters that define the dynamical behavior of the system are
the diffusion constant, the reaction rate, and the deposition rate. Depending on the ratio between those parameters, the system exhibits a crossover between a diffusion-limited (DL) regime and a
reaction-limited (RL) regime. We found that a key quantity to describe the reaction process in the system is the probability p(xA, xB) to find the rightmost A (RMA) particle and the leftmost B (LMB)
particle at the positions xA and xB, respectively. The statistical behavior of the system in both regimes is described using the density of particles, the gap length distribution xB − xA, the
marginal probabilities pA(xA) and pB(xB), and the reaction kernel. For both regimes, this kernel can be approximated by using p(xA, xB). We found an excellent agreement between the numerical and
analytical results for all calculated quantities despite the reaction process being quite different in both regimes. In the DL regime, the reaction kernel can be approximated by the probability to
find the RMA and LMB particles in adjacent sites. In the RL regime, the kernel depends on the marginal probabilities pA(xA) and pB(xB).
Most Recent Proofs - Metamath Proof Explorer
Most recent proofs These are the 100 (Unicode, GIF) or 1000 (Unicode, GIF) most recent proofs in the set.mm database for the Metamath Proof Explorer (and the Hilbert Space Explorer). The set.mm
database is maintained on GitHub with master (stable) and develop (development) versions. This page was created from develop commit 212ee7d9, also available here: set.mm (43MB) or set.mm.bz2
(compressed, 13MB).
The original proofs of theorems with recently shortened proofs can often be found by appending "OLD" to the theorem name, for example 19.43OLD for 19.43. The "OLD" versions are usually deleted after
a year.
Other links Email: Norm Megill. Mailing list: Metamath Google Group Updated 7-Dec-2021 . Contributing: How can I contribute to Metamath? Syndication: RSS feed (courtesy of Dan Getz) Related wikis:
Ghilbert site; Ghilbert Google Group.
Recent news items (7-Aug-2021) Version 0.198 of the metamath program fixes a bug in "write source ... /rewrap" that prevented end-of-sentence punctuation from appearing in column 79, causing some
rewrapped lines to be shorter than necessary. Because this affects about 2000 lines in set.mm, you should use version 0.198 or later for rewrapping before submitting to GitHub.
(7-May-2021) Mario Carneiro has written a Metamath verifier in Lean.
(5-May-2021) Marnix Klooster has written a Metamath verifier in Zig.
(24-Mar-2021) Metamath was mentioned in a couple of articles about OpenAI: Researchers find that large language models struggle with math and What Is GPT-F?.
(26-Dec-2020) Version 0.194 of the metamath program adds the keyword "htmlexturl" to the $t comment to specify external versions of theorem pages. This keyward has been added to set.mm, and you must
update your local copy of set.mm for "verify markup" to pass with the new program version.
(19-Dec-2020) Aleksandr A. Adamov has translated the Wikipedia Metamath page into Russian.
(19-Nov-2020) Eric Schmidt's checkmm.cpp was used as a test case for C'est, "a non-standard version of the C++20 standard library, with enhanced support for compile-time evaluation." See C++20
Compile-time Metamath Proof Verification using C'est.
(10-Nov-2020) Filip Cernatescu has updated the XPuzzle (Android app) to version 1.2. XPuzzle is a puzzle with math formulas derived from the Metamath system. At the bottom of the web page is a link
to the Google Play Store, where the app can be found.
(7-Nov-2020) Richard Penner created a cross-reference guide between Frege's logic notation and the notation used by set.mm.
(4-Sep-2020) Version 0.192 of the metamath program adds the qualifier '/extract' to 'write source'. See 'help write source' and also this Google Group post.
(23-Aug-2020) Version 0.188 of the metamath program adds keywords Conclusion, Fact, Introduction, Paragraph, Scolia, Scolion, Subsection, and Table to bibliographic references. See 'help write
bibliography' for the complete current list.
Last updated on 13-Nov-2024 at 5:20 AM ET.
Recent Additions to the Metamath Proof Explorer Notes (last updated 7-Dec-2020 )
│ Date │ Label │ Description │
│ Theorem │
│ 9-Nov-2024 │ bj-flddrng 35230 │ Fields are division rings (elemental version). (Contributed by BJ, 9-Nov-2024.) │
│ ⊢ (𝐹 ∈ Field → 𝐹 ∈ DivRing) │
│ 9-Nov-2024 │ bj-dfid2ALT │ Alternate version of dfid2 5473. (Contributed by BJ, 9-Nov-2024.) (Proof modification is discouraged.) Use df-id 5471 instead to make the semantics of the │
│ │ 35006 │ construction df-opab 5132 clearer. (New usage is discouraged.) │
│ ⊢ I = {〈𝑥, 𝑥〉 ∣ ⊤} │
│ 6-Nov-2024 │ sn-iotaex 39958 │ iotaex 6380 without ax-10 2143, ax-11 2160, ax-12 2177. (Contributed by SN, 6-Nov-2024.) │
│ ⊢ (℩𝑥𝜑) ∈ V │
│ 6-Nov-2024 │ sn-iotassuni │ iotassuni 6379 without ax-10 2143, ax-11 2160, ax-12 2177. (Contributed by SN, 6-Nov-2024.) │
│ │ 39957 │ │
│ ⊢ (℩𝑥𝜑) ⊆ ∪ {𝑥 ∣ 𝜑} │
│ 6-Nov-2024 │ sn-iotanul 39956 │ Version of iotanul 6378 using df-iota 6358 instead of dfiota2 6359. (Contributed by SN, 6-Nov-2024.) │
│ ⊢ (¬ ∃𝑦{𝑥 ∣ 𝜑} = {𝑦} → (℩𝑥𝜑) = ∅) │
│ 6-Nov-2024 │ sn-iotauni 39955 │ Version of iotauni 6375 using df-iota 6358 instead of dfiota2 6359. (Contributed by SN, 6-Nov-2024.) │
│ ⊢ (∃𝑦{𝑥 ∣ 𝜑} = {𝑦} → (℩𝑥𝜑) = ∪ {𝑥 ∣ 𝜑}) │
│ 6-Nov-2024 │ sn-iotaval 39954 │ Version of iotaval 6374 using df-iota 6358 instead of dfiota2 6359. (Contributed by SN, 6-Nov-2024.) │
│ ⊢ ({𝑥 ∣ 𝜑} = {𝑦} → (℩𝑥𝜑) = 𝑦) │
│ 6-Nov-2024 │ sn-iotalemcor │ Corollary of sn-iotalem 39952. Compare sb8iota 6370. (Contributed by SN, 6-Nov-2024.) │
│ │ 39953 │ │
│ ⊢ (℩𝑥𝜑) = (℩𝑦{𝑥 ∣ 𝜑} = {𝑦}) │
│ 6-Nov-2024 │ sn-iotalem 39952 │ An unused lemma showing that many equivalences involving df-iota 6358 are potentially provable without ax-10 2143, ax-11 2160, ax-12 2177. (Contributed by SN, │
│ │ │ 6-Nov-2024.) │
│ ⊢ {𝑦 ∣ {𝑥 ∣ 𝜑} = {𝑦}} = {𝑧 ∣ {𝑦 ∣ {𝑥 ∣ 𝜑} = {𝑦}} = {𝑧}} │
│ 6-Nov-2024 │ eqimssd 39946 │ Equality implies inclusion, deduction version. (Contributed by SN, 6-Nov-2024.) │
│ ⊢ (𝜑 → 𝐴 = 𝐵) ⇒ ⊢ (𝜑 → 𝐴 ⊆ 𝐵) │
│ │ │ Alternate definition of the identity relation. Instance of dfid3 5474 not requiring auxiliary axioms. (Contributed by NM, 15-Mar-2007.) Reduce axiom usage. │
│ 5-Nov-2024 │ dfid2 5473 │ (Revised by Gino Giotto, 4-Nov-2024.) (Proof shortened by BJ, 5-Nov-2024.) │
│ │ │ │
│ │ │ Use df-id 5471 instead to make the semantics of the constructor df-opab 5132 clearer. (New usage is discouraged.) │
│ ⊢ I = {〈𝑥, 𝑥〉 ∣ 𝑥 = 𝑥} │
│ 5-Nov-2024 │ r19.30 3267 │ Restricted quantifier version of 19.30 1889. (Contributed by Scott Fenton, 25-Feb-2011.) (Proof shortened by Wolf Lammen, 5-Nov-2024.) │
│ ⊢ (∀𝑥 ∈ 𝐴 (𝜑 ∨ 𝜓) → (∀𝑥 ∈ 𝐴 𝜑 ∨ ∃𝑥 ∈ 𝐴 𝜓)) │
│ 4-Nov-2024 │ sbthfi 8897 │ Schroeder-Bernstein Theorem for finite sets, proved without using the Axiom of Power Sets (unlike sbth 8791). (Contributed by BTernaryTau, 4-Nov-2024.) │
│ ⊢ ((𝐵 ∈ Fin ∧ 𝐴 ≼ 𝐵 ∧ 𝐵 ≼ 𝐴) → 𝐴 ≈ 𝐵) │
│ 4-Nov-2024 │ sbthfilem 8896 │ Lemma for sbthfi 8897. (Contributed by BTernaryTau, 4-Nov-2024.) │
│ ⊢ 𝐴 ∈ V & ⊢ 𝐷 = {𝑥 ∣ (𝑥 ⊆ 𝐴 ∧ (𝑔 “ (𝐵 ∖ (𝑓 “ 𝑥))) ⊆ (𝐴 ∖ 𝑥))} & ⊢ 𝐻 = ((𝑓 ↾ ∪ 𝐷) ∪ (^◡𝑔 ↾ (𝐴 ∖ ∪ 𝐷))) & ⊢ 𝐵 ∈ V ⇒ ⊢ ((𝐵 ∈ Fin ∧ 𝐴 ≼ 𝐵 ∧ 𝐵 ≼ 𝐴) → 𝐴 ≈ 𝐵) │
│ 4-Nov-2024 │ r19.29vva 3265 │ A commonly used pattern based on r19.29 3184, version with two restricted quantifiers. (Contributed by Thierry Arnoux, 26-Nov-2017.) (Proof shortened by Wolf │
│ │ │ Lammen, 4-Nov-2024.) │
│ ⊢ ((((𝜑 ∧ 𝑥 ∈ 𝐴) ∧ 𝑦 ∈ 𝐵) ∧ 𝜓) → 𝜒) & ⊢ (𝜑 → ∃𝑥 ∈ 𝐴 ∃𝑦 ∈ 𝐵 𝜓) ⇒ ⊢ (𝜑 → 𝜒) │
│ 4-Nov-2024 │ r19.29d2r 3263 │ Theorem 19.29 of [Margaris] p. 90 with two restricted quantifiers, deduction version. (Contributed by Thierry Arnoux, 30-Jan-2017.) (Proof shortened by Wolf │
│ │ │ Lammen, 4-Nov-2024.) │
│ ⊢ (𝜑 → ∀𝑥 ∈ 𝐴 ∀𝑦 ∈ 𝐵 𝜓) & ⊢ (𝜑 → ∃𝑥 ∈ 𝐴 ∃𝑦 ∈ 𝐵 𝜒) ⇒ ⊢ (𝜑 → ∃𝑥 ∈ 𝐴 ∃𝑦 ∈ 𝐵 (𝜓 ∧ 𝜒)) │
│ 4-Nov-2024 │ r19.12 3254 │ Restricted quantifier version of 19.12 2328. (Contributed by NM, 15-Oct-2003.) (Proof shortened by Andrew Salmon, 30-May-2011.) Avoid ax-13 2373, ax-ext 2710. │
│ │ │ (Revised by Wolf Lammen, 17-Jun-2023.) (Proof shortened by Wolf Lammen, 4-Nov-2024.) │
│ ⊢ (∃𝑥 ∈ 𝐴 ∀𝑦 ∈ 𝐵 𝜑 → ∀𝑦 ∈ 𝐵 ∃𝑥 ∈ 𝐴 𝜑) │
│ 4-Nov-2024 │ ralrexbid 3251 │ Formula-building rule for restricted existential quantifier, using a restricted universal quantifier to bind the quantified variable in the antecedent. │
│ │ │ (Contributed by AV, 21-Oct-2023.) Reduce axiom usage. (Revised by SN, 13-Nov-2023.) (Proof shortened by Wolf Lammen, 4-Nov-2024.) │
│ ⊢ (𝜑 → (𝜓 ↔ 𝜃)) ⇒ ⊢ (∀𝑥 ∈ 𝐴 𝜑 → (∃𝑥 ∈ 𝐴 𝜓 ↔ ∃𝑥 ∈ 𝐴 𝜃)) │
│ 4-Nov-2024 │ reximdvai 3200 │ Deduction quantifying both antecedent and consequent, based on Theorem 19.22 of [Margaris] p. 90. (Contributed by NM, 14-Nov-2002.) Reduce dependencies on axioms. │
│ │ │ (Revised by Wolf Lammen, 8-Jan-2020.) (Proof shortened by Wolf Lammen, 4-Nov-2024.) │
│ ⊢ (𝜑 → (𝑥 ∈ 𝐴 → (𝜓 → 𝜒))) ⇒ ⊢ (𝜑 → (∃𝑥 ∈ 𝐴 𝜓 → ∃𝑥 ∈ 𝐴 𝜒)) │
│ 4-Nov-2024 │ exexw 2059 │ Existential quantification is idempotent. Weak version of bj-exexbiex 34652, requiring fewer axioms. (Contributed by Gino Giotto, 4-Nov-2024.) │
│ ⊢ (𝑥 = 𝑦 → (𝜑 ↔ 𝜓)) ⇒ ⊢ (∃𝑥𝜑 ↔ ∃𝑥∃𝑥𝜑) │
│ 3-Nov-2024 │ nelb 3195 │ A definition of ¬ 𝐴 ∈ 𝐵. (Contributed by Thierry Arnoux, 20-Nov-2023.) (Proof shortened by SN, 23-Jan-2024.) (Proof shortened by Wolf Lammen, 3-Nov-2024.) │
│ ⊢ (¬ 𝐴 ∈ 𝐵 ↔ ∀𝑥 ∈ 𝐵 𝑥 ≠ 𝐴) │
│ 3-Nov-2024 │ rexbi 3170 │ Distribute restricted quantification over a biconditional. (Contributed by Scott Fenton, 7-Aug-2024.) (Proof shortened by Wolf Lammen, 3-Nov-2024.) │
│ ⊢ (∀𝑥 ∈ 𝐴 (𝜑 ↔ 𝜓) → (∃𝑥 ∈ 𝐴 𝜑 ↔ ∃𝑥 ∈ 𝐴 𝜓)) │
│ 2-Nov-2024 │ rexab 3624 │ Existential quantification over a class abstraction. (Contributed by Mario Carneiro, 23-Jan-2014.) (Revised by Mario Carneiro, 3-Sep-2015.) Reduce axiom usage. │
│ │ │ (Revised by Gino Giotto, 2-Nov-2024.) │
│ ⊢ (𝑦 = 𝑥 → (𝜑 ↔ 𝜓)) ⇒ ⊢ (∃𝑥 ∈ {𝑦 ∣ 𝜑}𝜒 ↔ ∃𝑥(𝜓 ∧ 𝜒)) │
│ 2-Nov-2024 │ ralab 3621 │ Universal quantification over a class abstraction. (Contributed by Jeff Madsen, 10-Jun-2010.) Reduce axiom usage. (Revised by Gino Giotto, 2-Nov-2024.) │
│ ⊢ (𝑦 = 𝑥 → (𝜑 ↔ 𝜓)) ⇒ ⊢ (∀𝑥 ∈ {𝑦 ∣ 𝜑}𝜒 ↔ ∀𝑥(𝜓 → 𝜒)) │
│ 31-Oct-2024 │ aks4d1p7 39860 │ Technical step in AKS lemma 4.1 (Contributed by metakunt, 31-Oct-2024.) │
│ ⊢ (𝜑 → 𝑁 ∈ (ℤ[≥]‘3)) & ⊢ 𝐴 = ((𝑁↑(⌊‘(2 log[b] 𝐵))) · ∏𝑘 ∈ (1...(⌊‘((2 log[b] 𝑁)↑2)))((𝑁↑𝑘) − 1)) & ⊢ 𝐵 = (⌈‘((2 log[b] 𝑁)↑5)) & ⊢ 𝑅 = inf({𝑟 ∈ (1...𝐵) ∣ ¬ 𝑟 ∥ 𝐴}, ℝ, < ) ⇒ ⊢ (𝜑 → ∃𝑝 ∈ ℙ (𝑝 ∥ 𝑅 ∧ │
│ ¬ 𝑝 ∥ 𝑁)) │
│ 31-Oct-2024 │ aks4d1p7d1 39859 │ Technical step in AKS lemma 4.1 (Contributed by metakunt, 31-Oct-2024.) │
│ ⊢ (𝜑 → 𝑁 ∈ (ℤ[≥]‘3)) & ⊢ 𝐴 = ((𝑁↑(⌊‘(2 log[b] 𝐵))) · ∏𝑘 ∈ (1...(⌊‘((2 log[b] 𝑁)↑2)))((𝑁↑𝑘) − 1)) & ⊢ 𝐵 = (⌈‘((2 log[b] 𝑁)↑5)) & ⊢ 𝑅 = inf({𝑟 ∈ (1...𝐵) ∣ ¬ 𝑟 ∥ 𝐴}, ℝ, < ) & ⊢ (𝜑 → ∀𝑝 ∈ ℙ (𝑝 ∥ 𝑅 → │
│ 𝑝 ∥ 𝑁)) ⇒ ⊢ (𝜑 → 𝑅 ∥ (𝑁↑(⌊‘(2 log[b] 𝐵)))) │
│ │ │ If 𝑅 is set-like over 𝐴, then the transitive closure of the restriction of 𝑅 to 𝐴 is set-like over 𝐴. │
│ 31-Oct-2024 │ ttrclse 33556 │ │
│ │ │ This theorem requires the axioms of infinity and replacement for its proof. (Contributed by Scott Fenton, 31-Oct-2024.) │
│ ⊢ (𝑅 Se 𝐴 → t++(𝑅 ↾ 𝐴) Se 𝐴) │
│ 31-Oct-2024 │ ttrclselem2 │ Lemma for ttrclse 33556. Show that a suc 𝑁 element long chain gives membership in the 𝑁-th predecessor class and vice-versa. (Contributed by Scott Fenton, │
│ │ 33555 │ 31-Oct-2024.) │
│ ⊢ 𝐹 = rec((𝑏 ∈ V ↦ ∪ 𝑤 ∈ 𝑏 Pred(𝑅, 𝐴, 𝑤)), Pred(𝑅, 𝐴, 𝑋)) ⇒ ⊢ ((𝑁 ∈ ω ∧ 𝑅 Se 𝐴 ∧ 𝑋 ∈ 𝐴) → (∃𝑓(𝑓 Fn suc suc 𝑁 ∧ ((𝑓‘∅) = 𝑦 ∧ (𝑓‘suc 𝑁) = 𝑋) ∧ ∀𝑎 ∈ suc 𝑁(𝑓‘𝑎)(𝑅 ↾ 𝐴)(𝑓‘suc 𝑎)) ↔ 𝑦 ∈ (𝐹‘𝑁))) │
│ 31-Oct-2024 │ ttrclselem1 │ Lemma for ttrclse 33556. Show that all finite ordinal function values of 𝐹 are subsets of 𝐴. (Contributed by Scott Fenton, 31-Oct-2024.) │
│ │ 33554 │ │
│ ⊢ 𝐹 = rec((𝑏 ∈ V ↦ ∪ 𝑤 ∈ 𝑏 Pred(𝑅, 𝐴, 𝑤)), Pred(𝑅, 𝐴, 𝑋)) ⇒ ⊢ (𝑁 ∈ ω → (𝐹‘𝑁) ⊆ 𝐴) │
│ 31-Oct-2024 │ rdg0n 33441 │ If 𝐴 is a proper class, then the recursive function generator at ∅ is the empty set. (Contributed by Scott Fenton, 31-Oct-2024.) │
│ ⊢ (¬ 𝐴 ∈ V → (rec(𝐹, 𝐴)‘∅) = ∅) │
│ 31-Oct-2024 │ reximia 3173 │ Inference quantifying both antecedent and consequent. (Contributed by NM, 10-Feb-1997.) (Proof shortened by Wolf Lammen, 31-Oct-2024.) │
│ ⊢ (𝑥 ∈ 𝐴 → (𝜑 → 𝜓)) ⇒ ⊢ (∃𝑥 ∈ 𝐴 𝜑 → ∃𝑥 ∈ 𝐴 𝜓) │
│ 31-Oct-2024 │ ralcom4 3162 │ Commutation of restricted and unrestricted universal quantifiers. (Contributed by NM, 26-Mar-2004.) (Proof shortened by Andrew Salmon, 8-Jun-2011.) Reduce axiom │
│ │ │ dependencies. (Revised by BJ, 13-Jun-2019.) (Proof shortened by Wolf Lammen, 31-Oct-2024.) │
│ ⊢ (∀𝑥 ∈ 𝐴 ∀𝑦𝜑 ↔ ∀𝑦∀𝑥 ∈ 𝐴 𝜑) │
│ 31-Oct-2024 │ ralbida 3157 │ Formula-building rule for restricted universal quantifier (deduction form). (Contributed by NM, 6-Oct-2003.) (Proof shortened by Wolf Lammen, 31-Oct-2024.) │
│ ⊢ Ⅎ𝑥𝜑 & ⊢ ((𝜑 ∧ 𝑥 ∈ 𝐴) → (𝜓 ↔ 𝜒)) ⇒ ⊢ (𝜑 → (∀𝑥 ∈ 𝐴 𝜓 ↔ ∀𝑥 ∈ 𝐴 𝜒)) │
│ │ │ Similar to Lemma 24 of [Monk2] p. 114, except the quantification of the antecedent is restricted. Derived automatically from hbra2VD 42200. Version of nfra2 3155 │
│ 31-Oct-2024 │ nfra2w 3152 │ with a disjoint variable condition not requiring ax-13 2373. (Contributed by Alan Sare, 31-Dec-2011.) (Revised by Gino Giotto, 24-Sep-2024.) (Proof shortened by │
│ │ │ Wolf Lammen, 31-Oct-2024.) │
│ ⊢ Ⅎ𝑦∀𝑥 ∈ 𝐴 ∀𝑦 ∈ 𝐵 𝜑 │
│ 30-Oct-2024 │ aks4d1p6 39858 │ The maximal prime power exponent is smaller than the binary logarithm floor of 𝐵. (Contributed by metakunt, 30-Oct-2024.) │
│ ⊢ (𝜑 → 𝑁 ∈ (ℤ[≥]‘3)) & ⊢ 𝐴 = ((𝑁↑(⌊‘(2 log[b] 𝐵))) · ∏𝑘 ∈ (1...(⌊‘((2 log[b] 𝑁)↑2)))((𝑁↑𝑘) − 1)) & ⊢ 𝐵 = (⌈‘((2 log[b] 𝑁)↑5)) & ⊢ 𝑅 = inf({𝑟 ∈ (1...𝐵) ∣ ¬ 𝑟 ∥ 𝐴}, ℝ, < ) & ⊢ (𝜑 → 𝑃 ∈ ℙ) & ⊢ (𝜑 → │
│ 𝑃 ∥ 𝑅) & ⊢ 𝐾 = (𝑃 pCnt 𝑅) ⇒ ⊢ (𝜑 → 𝐾 ≤ (⌊‘(2 log[b] 𝐵))) │
│ 30-Oct-2024 │ aks4d1p5 39857 │ Show that 𝑁 and 𝑅 are coprime for AKS existence theorem. Precondition will be eliminated in further theorem. (Contributed by metakunt, 30-Oct-2024.) │
│ ⊢ (𝜑 → 𝑁 ∈ (ℤ[≥]‘3)) & ⊢ 𝐴 = ((𝑁↑(⌊‘(2 log[b] 𝐵))) · ∏𝑘 ∈ (1...(⌊‘((2 log[b] 𝑁)↑2)))((𝑁↑𝑘) − 1)) & ⊢ 𝐵 = (⌈‘((2 log[b] 𝑁)↑5)) & ⊢ 𝑅 = inf({𝑟 ∈ (1...𝐵) ∣ ¬ 𝑟 ∥ 𝐴}, ℝ, < ) & ⊢ (((𝜑 ∧ 1 < (𝑁 gcd 𝑅)) │
│ ∧ (𝑅 / (𝑁 gcd 𝑅)) ∥ 𝐴) → ¬ (𝑅 / (𝑁 gcd 𝑅)) ∥ 𝐴) ⇒ ⊢ (𝜑 → (𝑁 gcd 𝑅) = 1) │
│ 30-Oct-2024 │ pm13.181 3026 │ Theorem *13.181 in [WhiteheadRussell] p. 178. (Contributed by Andrew Salmon, 3-Jun-2011.) (Proof shortened by Wolf Lammen, 30-Oct-2024.) │
│ ⊢ ((𝐴 = 𝐵 ∧ 𝐵 ≠ 𝐶) → 𝐴 ≠ 𝐶) │
│ 29-Oct-2024 │ pm13.18 3025 │ Theorem *13.18 in [WhiteheadRussell] p. 178. (Contributed by Andrew Salmon, 3-Jun-2011.) (Proof shortened by Wolf Lammen, 29-Oct-2024.) │
│ ⊢ ((𝐴 = 𝐵 ∧ 𝐴 ≠ 𝐶) → 𝐵 ≠ 𝐶) │
│ 28-Oct-2024 │ aks4d1p4 39856 │ There exists a small enough number such that it does not divide 𝐴. (Contributed by metakunt, 28-Oct-2024.) │
│ ⊢ (𝜑 → 𝑁 ∈ (ℤ[≥]‘3)) & ⊢ 𝐴 = ((𝑁↑(⌊‘(2 log[b] 𝐵))) · ∏𝑘 ∈ (1...(⌊‘((2 log[b] 𝑁)↑2)))((𝑁↑𝑘) − 1)) & ⊢ 𝐵 = (⌈‘((2 log[b] 𝑁)↑5)) & ⊢ 𝑅 = inf({𝑟 ∈ (1...𝐵) ∣ ¬ 𝑟 ∥ 𝐴}, ℝ, < ) ⇒ ⊢ (𝜑 → (𝑅 ∈ (1...𝐵) ∧ ¬ │
│ 𝑅 ∥ 𝐴)) │
│ 28-Oct-2024 │ predpo 6201 │ Property of the predecessor class for partial orders. (Contributed by Scott Fenton, 28-Apr-2012.) (Proof shortened by Scott Fenton, 28-Oct-2024.) │
│ ⊢ ((𝑅 Po 𝐴 ∧ 𝑋 ∈ 𝐴) → (𝑌 ∈ Pred(𝑅, 𝐴, 𝑋) → Pred(𝑅, 𝐴, 𝑌) ⊆ Pred(𝑅, 𝐴, 𝑋))) │
│ 28-Oct-2024 │ predtrss 6200 │ If 𝑅 is transitive over 𝐴 and 𝑌𝑅𝑋, then Pred(𝑅, 𝐴, 𝑌) is a subclass of Pred(𝑅, 𝐴, 𝑋). (Contributed by Scott Fenton, 28-Oct-2024.) │
│ ⊢ ((((𝑅 ∩ (𝐴 × 𝐴)) ∘ (𝑅 ∩ (𝐴 × 𝐴))) ⊆ 𝑅 ∧ 𝑌 ∈ Pred(𝑅, 𝐴, 𝑋) ∧ 𝑋 ∈ 𝐴) → Pred(𝑅, 𝐴, 𝑌) ⊆ Pred(𝑅, 𝐴, 𝑋)) │
│ 28-Oct-2024 │ necon3ai 2968 │ Contrapositive inference for inequality. (Contributed by NM, 23-May-2007.) (Proof shortened by Andrew Salmon, 25-May-2011.) (Proof shortened by Wolf Lammen, │
│ │ │ 28-Oct-2024.) │
│ ⊢ (𝜑 → 𝐴 = 𝐵) ⇒ ⊢ (𝐴 ≠ 𝐵 → ¬ 𝜑) │
│ 28-Oct-2024 │ sbabel 2941 │ Theorem to move a substitution in and out of a class abstraction. (Contributed by NM, 27-Sep-2003.) (Revised by Mario Carneiro, 7-Oct-2016.) (Proof shortened by │
│ │ │ Wolf Lammen, 28-Oct-2024.) │
│ ⊢ Ⅎ𝑥𝐴 ⇒ ⊢ ([𝑦 / 𝑥]{𝑧 ∣ 𝜑} ∈ 𝐴 ↔ {𝑧 ∣ [𝑦 / 𝑥]𝜑} ∈ 𝐴) │
│ 27-Oct-2024 │ aks4d1p3 39855 │ There exists a small enough number such that it does not divide 𝐴. (Contributed by metakunt, 27-Oct-2024.) │
│ ⊢ (𝜑 → 𝑁 ∈ (ℤ[≥]‘3)) & ⊢ 𝐴 = ((𝑁↑(⌊‘(2 log[b] 𝐵))) · ∏𝑘 ∈ (1...(⌊‘((2 log[b] 𝑁)↑2)))((𝑁↑𝑘) − 1)) & ⊢ 𝐵 = (⌈‘((2 log[b] 𝑁)↑5)) ⇒ ⊢ (𝜑 → ∃𝑟 ∈ (1...𝐵) ¬ 𝑟 ∥ 𝐴) │
│ 27-Oct-2024 │ aks4d1p2 39854 │ Technical lemma for existence of non-divisor. (Contributed by metakunt, 27-Oct-2024.) │
│ ⊢ (𝜑 → 𝑁 ∈ (ℤ[≥]‘3)) & ⊢ 𝐴 = ((𝑁↑(⌊‘(2 log[b] 𝐵))) · ∏𝑘 ∈ (1...(⌊‘((2 log[b] 𝑁)↑2)))((𝑁↑𝑘) − 1)) & ⊢ 𝐵 = (⌈‘((2 log[b] 𝑁)↑5)) ⇒ ⊢ (𝜑 → (2↑𝐵) ≤ (lcm‘(1...𝐵))) │
│ 27-Oct-2024 │ dfwrecs2 35012 │ TODO: Replace df-wrecs 8070 with this definition, and shorten theorems using wrecs with it. (Contributed by BJ, 27-Oct-2024.) │
│ ⊢ wrecs(𝑅, 𝐴, 𝐹) = frecs(𝑅, 𝐴, (𝐹 ∘ 2^nd )) │
│ 27-Oct-2024 │ opco2 7914 │ Value of an operation precomposed with the projection on the second component. (Contributed by BJ, 27-Oct-2024.) │
│ ⊢ (𝜑 → 𝐴 ∈ 𝑉) & ⊢ (𝜑 → 𝐵 ∈ 𝑊) ⇒ ⊢ (𝜑 → (𝐴(𝐹 ∘ 2^nd )𝐵) = (𝐹‘𝐵)) │
│ 27-Oct-2024 │ opco1 7913 │ Value of an operation precomposed with the projection on the first component. (Contributed by Mario Carneiro, 28-May-2014.) Generalize to closed form. (Revised by │
│ │ │ BJ, 27-Oct-2024.) │
│ ⊢ (𝜑 → 𝐴 ∈ 𝑉) & ⊢ (𝜑 → 𝐵 ∈ 𝑊) ⇒ ⊢ (𝜑 → (𝐴(𝐹 ∘ 1^st )𝐵) = (𝐹‘𝐴)) │
│ 27-Oct-2024 │ predexg 6195 │ The predecessor class exists when 𝐴 does. (Contributed by Scott Fenton, 8-Feb-2011.) Generalize to closed form. (Revised by BJ, 27-Oct-2024.) │
│ ⊢ (𝐴 ∈ 𝑉 → Pred(𝑅, 𝐴, 𝑋) ∈ V) │
│ 26-Oct-2024 │ sticksstones22 │ Non-exhaustive sticks and stones. (Contributed by metakunt, 26-Oct-2024.) │
│ │ 39887 │ │
│ ⊢ (𝜑 → 𝑁 ∈ ℕ[0]) & ⊢ (𝜑 → 𝑆 ∈ Fin) & ⊢ (𝜑 → 𝑆 ≠ ∅) & ⊢ 𝐴 = {𝑓 ∣ (𝑓:𝑆⟶ℕ[0] ∧ Σ𝑖 ∈ 𝑆 (𝑓‘𝑖) ≤ 𝑁)} ⇒ ⊢ (𝜑 → (♯‘𝐴) = ((𝑁 + (♯‘𝑆))C(♯‘𝑆))) │
│ 26-Oct-2024 │ dfttrcl2 33553 │ When 𝑅 is a set and a relationship, then its transitive closure can be defined by an intersection. (Contributed by Scott Fenton, 26-Oct-2024.) │
│ ⊢ ((𝑅 ∈ 𝑉 ∧ Rel 𝑅) → t++𝑅 = ∩ {𝑧 ∣ (𝑅 ⊆ 𝑧 ∧ (𝑧 ∘ 𝑧) ⊆ 𝑧)}) │
│ 26-Oct-2024 │ ttrclexg 33552 │ If 𝑅 is a set, then so is t++𝑅. (Contributed by Scott Fenton, 26-Oct-2024.) │
│ ⊢ (𝑅 ∈ 𝑉 → t++𝑅 ∈ V) │
│ 26-Oct-2024 │ rnttrcl 33551 │ The range of a transitive closure is the same as the range of the original class. (Contributed by Scott Fenton, 26-Oct-2024.) │
│ ⊢ ran t++𝑅 = ran 𝑅 │
│ 26-Oct-2024 │ dmttrcl 33550 │ The domain of a transitive closure is the same as the domain of the original class. (Contributed by Scott Fenton, 26-Oct-2024.) │
│ ⊢ dom t++𝑅 = dom 𝑅 │
│ 26-Oct-2024 │ nfttrcld 33539 │ Bound variable hypothesis builder for transitive closure. Deduction form. (Contributed by Scott Fenton, 26-Oct-2024.) │
│ ⊢ (𝜑 → Ⅎ𝑥𝑅) ⇒ ⊢ (𝜑 → Ⅎ𝑥t++𝑅) │
│ 26-Oct-2024 │ nfopab 5138 │ Bound-variable hypothesis builder for class abstraction. (Contributed by NM, 1-Sep-1999.) Remove disjoint variable conditions. (Revised by Andrew Salmon, │
│ │ │ 11-Jul-2011.) (Revised by Scott Fenton, 26-Oct-2024.) │
│ ⊢ Ⅎ𝑧𝜑 ⇒ ⊢ Ⅎ𝑧{〈𝑥, 𝑦〉 ∣ 𝜑} │
│ 26-Oct-2024 │ nfopabd 5137 │ Bound-variable hypothesis builder for class abstraction. Deduction form. (Contributed by Scott Fenton, 26-Oct-2024.) │
│ ⊢ Ⅎ𝑥𝜑 & ⊢ Ⅎ𝑦𝜑 & ⊢ (𝜑 → Ⅎ𝑧𝜓) ⇒ ⊢ (𝜑 → Ⅎ𝑧{〈𝑥, 𝑦〉 ∣ 𝜓}) │
│ 26-Oct-2024 │ sbceqal 3778 │ Class version of one implication of equvelv 2039. (Contributed by Andrew Salmon, 28-Jun-2011.) (Proof shortened by SN, 26-Oct-2024.) │
│ ⊢ (𝐴 ∈ 𝑉 → (∀𝑥(𝑥 = 𝐴 → 𝑥 = 𝐵) → 𝐴 = 𝐵)) │
│ 26-Oct-2024 │ sbcim1 3767 │ Distribution of class substitution over implication. One direction of sbcimg 3762 that holds for proper classes. (Contributed by NM, 17-Aug-2018.) Avoid ax-10 │
│ │ │ 2143, ax-12 2177. (Revised by SN, 26-Oct-2024.) │
│ ⊢ ([𝐴 / 𝑥](𝜑 → 𝜓) → ([𝐴 / 𝑥]𝜑 → [𝐴 / 𝑥]𝜓)) │
│ 26-Oct-2024 │ sbievg 2364 │ Substitution applied to expressions linked by implicit substitution. The proof was part of a former cbvabw 2814 version. (Contributed by GG and WL, 26-Oct-2024.) │
│ ⊢ Ⅎ𝑦𝜑 & ⊢ Ⅎ𝑥𝜓 & ⊢ (𝑥 = 𝑦 → (𝜑 ↔ 𝜓)) ⇒ ⊢ ([𝑧 / 𝑥]𝜑 ↔ [𝑧 / 𝑦]𝜓) │
│ 25-Oct-2024 │ hbab1 2725 │ Bound-variable hypothesis builder for a class abstraction. (Contributed by NM, 26-May-1993.) (Proof shortened by Wolf Lammen, 25-Oct-2024.) │
│ ⊢ (𝑦 ∈ {𝑥 ∣ 𝜑} → ∀𝑥 𝑦 ∈ {𝑥 ∣ 𝜑}) │
│ │ │ If 𝑧 is not free in 𝜑, it is not free in [𝑦 / 𝑥]𝜑 when 𝑧 is distinct from 𝑥 and 𝑦. Version of nfsb 2528 requiring more disjoint variables, but fewer axioms. │
│ 25-Oct-2024 │ nfsbv 2331 │ (Contributed by Mario Carneiro, 11-Aug-2016.) (Revised by Wolf Lammen, 7-Feb-2023.) Remove disjoint variable condition on 𝑥, 𝑦. (Revised by Steven Nguyen, │
│ │ │ 13-Aug-2023.) (Proof shortened by Wolf Lammen, 25-Oct-2024.) │
│ ⊢ Ⅎ𝑧𝜑 ⇒ ⊢ Ⅎ𝑧[𝑦 / 𝑥]𝜑 │
│ 24-Oct-2024 │ sticksstones21 │ Lift sticks and stones to arbitrary finite non-empty sets. (Contributed by metakunt, 24-Oct-2024.) │
│ │ 39886 │ │
│ ⊢ (𝜑 → 𝑁 ∈ ℕ[0]) & ⊢ (𝜑 → 𝑆 ∈ Fin) & ⊢ (𝜑 → 𝑆 ≠ ∅) & ⊢ 𝐴 = {𝑓 ∣ (𝑓:𝑆⟶ℕ[0] ∧ Σ𝑖 ∈ 𝑆 (𝑓‘𝑖) = 𝑁)} ⇒ ⊢ (𝜑 → (♯‘𝐴) = ((𝑁 + ((♯‘𝑆) − 1))C((♯‘𝑆) − 1))) │
│ 24-Oct-2024 │ sticksstones20 │ Lift sticks and stones to arbitrary finite non-empty sets. (Contributed by metakunt, 24-Oct-2024.) │
│ │ 39885 │ │
│ ⊢ (𝜑 → 𝑁 ∈ ℕ[0]) & ⊢ (𝜑 → 𝑆 ∈ Fin) & ⊢ (𝜑 → 𝐾 ∈ ℕ) & ⊢ 𝐴 = {𝑔 ∣ (𝑔:(1...𝐾)⟶ℕ[0] ∧ Σ𝑖 ∈ (1...𝐾)(𝑔‘𝑖) = 𝑁)} & ⊢ 𝐵 = {ℎ ∣ (ℎ:𝑆⟶ℕ[0] ∧ Σ𝑖 ∈ 𝑆 (ℎ‘𝑖) = 𝑁)} & ⊢ (𝜑 → (♯‘𝑆) = 𝐾) ⇒ ⊢ (𝜑 → (♯‘𝐵) = ((𝑁 + (𝐾 │
│ − 1))C(𝐾 − 1))) │
│ 24-Oct-2024 │ eldifsucnn 33440 │ Condition for membership in the difference of ω and a nonzero finite ordinal. (Contributed by Scott Fenton, 24-Oct-2024.) │
│ ⊢ (𝐴 ∈ ω → (𝐵 ∈ (ω ∖ suc 𝐴) ↔ ∃𝑥 ∈ (ω ∖ 𝐴)𝐵 = suc 𝑥)) │
│ 24-Oct-2024 │ eqtr3 2765 │ A transitive law for class equality. (Contributed by NM, 20-May-2005.) (Proof shortened by Wolf Lammen, 24-Oct-2024.) │
│ ⊢ ((𝐴 = 𝐶 ∧ 𝐵 = 𝐶) → 𝐴 = 𝐵) │
│ 24-Oct-2024 │ eqtr2 2763 │ A transitive law for class equality. (Contributed by NM, 20-May-2005.) (Proof shortened by Andrew Salmon, 25-May-2011.) (Proof shortened by Wolf Lammen, │
│ │ │ 24-Oct-2024.) │
│ ⊢ ((𝐴 = 𝐵 ∧ 𝐴 = 𝐶) → 𝐵 = 𝐶) │
│ 23-Oct-2024 │ sticksstones19 │ Extend sticks and stones to finite sets, bijective builder. (Contributed by metakunt, 23-Oct-2024.) │
│ │ 39884 │ │
│ ⊢ (𝜑 → 𝑁 ∈ ℕ[0]) & ⊢ (𝜑 → 𝐾 ∈ ℕ[0]) & ⊢ 𝐴 = {𝑔 ∣ (𝑔:(1...𝐾)⟶ℕ[0] ∧ Σ𝑖 ∈ (1...𝐾)(𝑔‘𝑖) = 𝑁)} & ⊢ 𝐵 = {ℎ ∣ (ℎ:𝑆⟶ℕ[0] ∧ Σ𝑖 ∈ 𝑆 (ℎ‘𝑖) = 𝑁)} & ⊢ (𝜑 → 𝑍:(1...𝐾)–1-1-onto→𝑆) & ⊢ 𝐹 = (𝑎 ∈ 𝐴 ↦ (𝑥 ∈ 𝑆 ↦ (𝑎‘ │
│ (^◡𝑍‘𝑥)))) & ⊢ 𝐺 = (𝑏 ∈ 𝐵 ↦ (𝑦 ∈ (1...𝐾) ↦ (𝑏‘(𝑍‘𝑦)))) ⇒ ⊢ (𝜑 → 𝐹:𝐴–1-1-onto→𝐵) │
│ 23-Oct-2024 │ sticksstones18 │ Extend sticks and stones to finite sets, bijective builder. (Contributed by metakunt, 23-Oct-2024.) │
│ │ 39883 │ │
│ ⊢ (𝜑 → 𝑁 ∈ ℕ[0]) & ⊢ (𝜑 → 𝐾 ∈ ℕ[0]) & ⊢ 𝐴 = {𝑔 ∣ (𝑔:(1...𝐾)⟶ℕ[0] ∧ Σ𝑖 ∈ (1...𝐾)(𝑔‘𝑖) = 𝑁)} & ⊢ 𝐵 = {ℎ ∣ (ℎ:𝑆⟶ℕ[0] ∧ Σ𝑖 ∈ 𝑆 (ℎ‘𝑖) = 𝑁)} & ⊢ (𝜑 → 𝑍:(1...𝐾)–1-1-onto→𝑆) & ⊢ 𝐹 = (𝑎 ∈ 𝐴 ↦ (𝑥 ∈ 𝑆 ↦ (𝑎‘ │
│ (^◡𝑍‘𝑥)))) ⇒ ⊢ (𝜑 → 𝐹:𝐴⟶𝐵) │
│ 23-Oct-2024 │ sticksstones17 │ Extend sticks and stones to finite sets, bijective builder. (Contributed by metakunt, 23-Oct-2024.) │
│ │ 39882 │ │
│ ⊢ (𝜑 → 𝑁 ∈ ℕ[0]) & ⊢ (𝜑 → 𝐾 ∈ ℕ[0]) & ⊢ 𝐴 = {𝑔 ∣ (𝑔:(1...𝐾)⟶ℕ[0] ∧ Σ𝑖 ∈ (1...𝐾)(𝑔‘𝑖) = 𝑁)} & ⊢ 𝐵 = {ℎ ∣ (ℎ:𝑆⟶ℕ[0] ∧ Σ𝑖 ∈ 𝑆 (ℎ‘𝑖) = 𝑁)} & ⊢ (𝜑 → 𝑍:(1...𝐾)–1-1-onto→𝑆) & ⊢ 𝐺 = (𝑏 ∈ 𝐵 ↦ (𝑦 ∈ (1...𝐾) │
│ ↦ (𝑏‘(𝑍‘𝑦)))) ⇒ ⊢ (𝜑 → 𝐺:𝐵⟶𝐴) │
│ 23-Oct-2024 │ eqeq12 2756 │ Equality relationship among four classes. (Contributed by NM, 3-Aug-1994.) (Proof shortened by Wolf Lammen, 23-Oct-2024.) │
│ ⊢ ((𝐴 = 𝐵 ∧ 𝐶 = 𝐷) → (𝐴 = 𝐶 ↔ 𝐵 = 𝐷)) │
│ 23-Oct-2024 │ eqeq12d 2755 │ A useful inference for substituting definitions into an equality. (Contributed by NM, 5-Aug-1993.) (Proof shortened by Andrew Salmon, 25-May-2011.) (Proof │
│ │ │ shortened by Wolf Lammen, 23-Oct-2024.) │
│ ⊢ (𝜑 → 𝐴 = 𝐵) & ⊢ (𝜑 → 𝐶 = 𝐷) ⇒ ⊢ (𝜑 → (𝐴 = 𝐶 ↔ 𝐵 = 𝐷)) │
│ 23-Oct-2024 │ eqeqan12d 2753 │ A useful inference for substituting definitions into an equality. See also eqeqan12dALT 2761. (Contributed by NM, 9-Aug-1994.) (Proof shortened by Andrew Salmon, │
│ │ │ 25-May-2011.) Shorten other proofs. (Revised by Wolf Lammen, 23-Oct-2024.) │
│ ⊢ (𝜑 → 𝐴 = 𝐵) & ⊢ (𝜓 → 𝐶 = 𝐷) ⇒ ⊢ ((𝜑 ∧ 𝜓) → (𝐴 = 𝐶 ↔ 𝐵 = 𝐷)) │
│ 21-Oct-2024 │ unifndxnbasendx │ The slot for the uniform set is not the slot for the base set in an extensible structure. (Contributed by AV, 21-Oct-2024.) │
│ │ 16940 │ │
│ ⊢ (UnifSet‘ndx) ≠ (Base‘ndx) │
│ 21-Oct-2024 │ dsndxnbasendx │ The slot for the distance is not the slot for the base set in an extensible structure. (Contributed by AV, 21-Oct-2024.) │
│ │ 16937 │ │
│ ⊢ (dist‘ndx) ≠ (Base‘ndx) │
│ 21-Oct-2024 │ plendxnbasendx │ The slot for the order is not the slot for the base set in an extensible structure. (Contributed by AV, 21-Oct-2024.) │
│ │ 16927 │ │
│ ⊢ (le‘ndx) ≠ (Base‘ndx) │
│ 21-Oct-2024 │ tsetndxnbasendx │ The slot for the topology is not the slot for the base set in an extensible structure. (Contributed by AV, 21-Oct-2024.) │
│ │ 16919 │ │
│ ⊢ (TopSet‘ndx) ≠ (Base‘ndx) │
│ 21-Oct-2024 │ ipndxnbasendx │ The slot for the inner product is not the slot for the base set in an extensible structure. (Contributed by AV, 21-Oct-2024.) │
│ │ 16900 │ │
│ ⊢ (·[𝑖]‘ndx) ≠ (Base‘ndx) │
│ 21-Oct-2024 │ scandxnbasendx │ The slot for the scalar is not the slot for the base set in an extensible structure. (Contributed by AV, 21-Oct-2024.) │
│ │ 16889 │ │
│ ⊢ (Scalar‘ndx) ≠ (Base‘ndx) │
│ 20-Oct-2024 │ sticksstones16 │ Sticks and stones with collapsed definitions for positive integers. (Contributed by metakunt, 20-Oct-2024.) │
│ │ 39881 │ │
│ ⊢ (𝜑 → 𝑁 ∈ ℕ[0]) & ⊢ (𝜑 → 𝐾 ∈ ℕ) & ⊢ 𝐴 = {𝑔 ∣ (𝑔:(1...𝐾)⟶ℕ[0] ∧ Σ𝑖 ∈ (1...𝐾)(𝑔‘𝑖) = 𝑁)} ⇒ ⊢ (𝜑 → (♯‘𝐴) = ((𝑁 + (𝐾 − 1))C(𝐾 − 1))) │
│ 20-Oct-2024 │ ttrclss 33549 │ If 𝑅 is a subclass of 𝑆 and 𝑆 is transitive, then the transitive closure of 𝑅 is a subclass of 𝑆. (Contributed by Scott Fenton, 20-Oct-2024.) │
│ ⊢ ((𝑅 ⊆ 𝑆 ∧ (𝑆 ∘ 𝑆) ⊆ 𝑆) → t++𝑅 ⊆ 𝑆) │
│ 20-Oct-2024 │ cottrcl 33548 │ Composition law for the transitive closure of a relationship. (Contributed by Scott Fenton, 20-Oct-2024.) │
│ ⊢ (𝑅 ∘ t++𝑅) ⊆ t++𝑅 │
│ 20-Oct-2024 │ ttrclco 33547 │ Composition law for the transitive closure of a relationship. (Contributed by Scott Fenton, 20-Oct-2024.) │
│ ⊢ (t++𝑅 ∘ 𝑅) ⊆ t++𝑅 │
│ 20-Oct-2024 │ ttrclresv 33546 │ The transitive closure of 𝑅 restricted to V is the same as the transitive closure of 𝑅 itself. (Contributed by Scott Fenton, 20-Oct-2024.) │
│ ⊢ t++(𝑅 ↾ V) = t++𝑅 │
│ 19-Oct-2024 │ resseqnbas 16825 │ The components of an extensible structure except the base set remain unchanged on a structure restriction. (Contributed by Mario Carneiro, 26-Nov-2014.) (Revised │
│ │ │ by Mario Carneiro, 2-Dec-2014.) (Revised by AV, 19-Oct-2024.) │
│ ⊢ 𝑅 = (𝑊 ↾[s] 𝐴) & ⊢ 𝐶 = (𝐸‘𝑊) & ⊢ 𝐸 = Slot (𝐸‘ndx) & ⊢ (𝐸‘ndx) ≠ (Base‘ndx) ⇒ ⊢ (𝐴 ∈ 𝑉 → 𝐶 = (𝐸‘𝑅)) │
│ 18-Oct-2024 │ vscandxnbasendx │ The slot for the scalar product is not the slot for the base set in an extensible structure. Formerly part of proof for rmodislmod 19999. (Contributed by AV, │
│ │ 16892 │ 18-Oct-2024.) │
│ ⊢ ( ·[𝑠] ‘ndx) ≠ (Base‘ndx) │
│ 18-Oct-2024 │ starvndxnbasendx │ The slot for the involution function is not the slot for the base set in an extensible structure. Formerly part of proof for ressstarv 16881. (Contributed by AV, │
│ │ 16879 │ 18-Oct-2024.) │
│ ⊢ (*[𝑟]‘ndx) ≠ (Base‘ndx) │
│ 17-Oct-2024 │ ttrcltr 33545 │ The transitive closure of a class is transitive. (Contributed by Scott Fenton, 17-Oct-2024.) │
│ ⊢ (t++𝑅 ∘ t++𝑅) ⊆ t++𝑅 │
│ 17-Oct-2024 │ ssttrcl 33544 │ If 𝑅 is a relation, then it is a subclass of its transitive closure. (Contributed by Scott Fenton, 17-Oct-2024.) │
│ ⊢ (Rel 𝑅 → 𝑅 ⊆ t++𝑅) │
│ 17-Oct-2024 │ relttrcl 33541 │ The transitive closure of a class is a relation. (Contributed by Scott Fenton, 17-Oct-2024.) │
│ ⊢ Rel t++𝑅 │
│ 17-Oct-2024 │ nfttrcl 33540 │ Bound variable hypothesis builder for transitive closure. (Contributed by Scott Fenton, 17-Oct-2024.) │
│ ⊢ Ⅎ𝑥𝑅 ⇒ ⊢ Ⅎ𝑥t++𝑅 │
│ 17-Oct-2024 │ ttrcleq 33538 │ Equality theorem for transitive closure. (Contributed by Scott Fenton, 17-Oct-2024.) │
│ ⊢ (𝑅 = 𝑆 → t++𝑅 = t++𝑆) │
│ 17-Oct-2024 │ df-ttrcl 33537 │ Define the transitive closure of a class. This is the smallest relationship containing 𝑅 (or more precisely, the relation (𝑅 ↾ V) induced by 𝑅) and having the │
│ │ │ transitive property. Definition from [Levy] p. 59, who denotes it as 𝑅∗ and calls it the "ancestral" of 𝑅. (Contributed by Scott Fenton, 17-Oct-2024.) │
│ ⊢ t++𝑅 = {〈𝑥, 𝑦〉 ∣ ∃𝑛 ∈ (ω ∖ 1[o])∃𝑓(𝑓 Fn suc 𝑛 ∧ ((𝑓‘∅) = 𝑥 ∧ (𝑓‘𝑛) = 𝑦) ∧ ∀𝑚 ∈ 𝑛 (𝑓‘𝑚)𝑅(𝑓‘suc 𝑚))} │
│ 17-Oct-2024 │ nnasmo 33439 │ Finite ordinal subtraction cancels on the left. (Contributed by Scott Fenton, 17-Oct-2024.) │
│ ⊢ (𝐴 ∈ ω → ∃*𝑥 ∈ ω (𝐴 +[o] 𝑥) = 𝐵) │
│ 17-Oct-2024 │ nnuni 33438 │ The union of a finite ordinal is a finite ordinal. (Contributed by Scott Fenton, 17-Oct-2024.) │
│ ⊢ (𝐴 ∈ ω → ∪ 𝐴 ∈ ω) │
│ 16-Oct-2024 │ thincciso 46048 │ Two thin categories are isomorphic iff the induced preorders are order-isomorphic. Example 3.26(2) of [Adamek] p. 33. (Contributed by Zhi Wang, 16-Oct-2024.) │
│ ⊢ 𝐶 = (CatCat‘𝑈) & ⊢ 𝐵 = (Base‘𝐶) & ⊢ 𝑅 = (Base‘𝑋) & ⊢ 𝑆 = (Base‘𝑌) & ⊢ 𝐻 = (Hom ‘𝑋) & ⊢ 𝐽 = (Hom ‘𝑌) & ⊢ (𝜑 → 𝑈 ∈ 𝑉) & ⊢ (𝜑 → 𝑋 ∈ 𝐵) & ⊢ (𝜑 → 𝑌 ∈ 𝐵) & ⊢ (𝜑 → 𝑋 ∈ ThinCat) & ⊢ (𝜑 → 𝑌 ∈ ThinCat) ⇒ │
│ ⊢ (𝜑 → (𝑋( ≃[𝑐] ‘𝐶)𝑌 ↔ ∃𝑓(∀𝑥 ∈ 𝑅 ∀𝑦 ∈ 𝑅 ((𝑥𝐻𝑦) = ∅ ↔ ((𝑓‘𝑥)𝐽(𝑓‘𝑦)) = ∅) ∧ 𝑓:𝑅–1-1-onto→𝑆))) │
│ 16-Oct-2024 │ bj-elabd2ALT │ Alternate proof of elabd2 3594 bypassing elab6g 3593 (and using sbiedvw 2102 instead of the ∀𝑥(𝑥 = 𝑦 → 𝜓) idiom). (Contributed by BJ, 16-Oct-2024.) (Proof │
│ │ 34883 │ modification is discouraged.) (New usage is discouraged.) │
│ ⊢ (𝜑 → 𝐴 ∈ 𝑉) & ⊢ (𝜑 → 𝐵 = {𝑥 ∣ 𝜓}) & ⊢ ((𝜑 ∧ 𝑥 = 𝐴) → (𝜓 ↔ 𝜒)) ⇒ ⊢ (𝜑 → (𝐴 ∈ 𝐵 ↔ 𝜒)) │
│ 16-Oct-2024 │ omsinds 7686 │ Strong (or "total") induction principle over the finite ordinals. (Contributed by Scott Fenton, 17-Jul-2015.) (Proof shortened by BJ, 16-Oct-2024.) │
│ ⊢ (𝑥 = 𝑦 → (𝜑 ↔ 𝜓)) & ⊢ (𝑥 = 𝐴 → (𝜑 ↔ 𝜒)) & ⊢ (𝑥 ∈ ω → (∀𝑦 ∈ 𝑥 𝜓 → 𝜑)) ⇒ ⊢ (𝐴 ∈ ω → 𝜒) │
│ 16-Oct-2024 │ predon 7590 │ The predecessor of an ordinal under E and On is itself. (Contributed by Scott Fenton, 27-Mar-2011.) (Proof shortened by BJ, 16-Oct-2024.) │
│ ⊢ (𝐴 ∈ On → Pred( E , On, 𝐴) = 𝐴) │
│ 16-Oct-2024 │ elpred 6194 │ Membership in a predecessor class. (Contributed by Scott Fenton, 4-Feb-2011.) (Proof shortened by BJ, 16-Oct-2024.) │
│ ⊢ 𝑌 ∈ V ⇒ ⊢ (𝑋 ∈ 𝐷 → (𝑌 ∈ Pred(𝑅, 𝐴, 𝑋) ↔ (𝑌 ∈ 𝐴 ∧ 𝑌𝑅𝑋))) │
(29-Jul-2020) Mario Carneiro presented MM0 at the CICM conference. See this Google Group post which includes a YouTube link.
(20-Jul-2020) Rohan Ridenour found 5 shorter D-proofs in our Shortest known proofs... file. In particular, he reduced *4.39 from 901 to 609 steps. A note on the Metamath Solitaire page mentions a
tool that he worked with.
(19-Jul-2020) David A. Wheeler posted a video (https://youtu.be/3R27Qx69jHc) on how to (re)prove Schwabhäuser 4.6 for the Metamath Proof Explorer. See also his older videos.
(19-Jul-2020) In version 0.184 of the metamath program, "verify markup" now checks that mathboxes are independent, i.e., do not cross-reference each other. To turn off this check, use "/mathbox_skip".
(30-Jun-2020) In version 0.183 of the metamath program, (1) "verify markup" now has checking for (i) underscores in labels, (ii) that *ALT and *OLD theorems have both discouragement tags, and (iii)
that lines don't have trailing spaces. (2) "save proof.../rewrap" no longer left-aligns $p/$a comments that contain the string "<HTML>"; see this note.
(5-Apr-2020) Glauco Siliprandi added a new proof to the 100 theorem list, e is Transcendental etransc, bringing the Metamath total to 74.
(12-Feb-2020) A bug in the 'minimize' command of metamath.exe versions 0.179 (29-Nov-2019) and 0.180 (10-Dec-2019) may incorrectly bring in the use of new axioms. Version 0.181 fixes it.
(20-Jan-2020) David A. Wheeler created a video called Walkthrough of the tutorial in mmj2. See the Google Group announcement for more details. (All of his videos are listed on the Other
Metamath-Related Topics page.)
(18-Jan-2020) The FOMM 2020 talks are on YouTube now. Mario Carneiro's talk is Metamath Zero, or: How to Verify a Verifier. Since they are washed out in the video, the PDF slides are available.
(14-Dec-2019) Glauco Siliprandi added a new proof to the 100 theorem list, Fourier series convergence fourier, bringing the Metamath total to 73.
(25-Nov-2019) Alexander van der Vekens added a new proof to the 100 theorem list, The Cayley-Hamilton Theorem cayleyhamilton, bringing the Metamath total to 72.
(25-Oct-2019) Mario Carneiro's paper "Metamath Zero: The Cartesian Theorem Prover" (submitted to CPP 2020) is now available on arXiv: https://arxiv.org/abs/1910.10703. There is a related discussion
on Hacker News.
(30-Sep-2019) Mario Carneiro's talk about MM0 at ITP 2019 is available on YouTube: x86 verification from scratch (24 minutes). Google Group discussion: Metamath Zero.
(29-Sep-2019) David Wheeler created a fascinating Gource video that animates the construction of set.mm, available on YouTube: Metamath set.mm contributions viewed with Gource through 2019-09-26 (4
minutes). Google Group discussion: Gource video of set.mm contributions.
(24-Sep-2019) nLab added a page for Metamath. It mentions Stefan O'Rear's Busy Beaver work using the set.mm axiomatization (and fails to mention Mario's definitional soundness checker).
(1-Sep-2019) Xuanji Li published a Visual Studio Code extension to support metamath syntax highlighting.
(10-Aug-2019) (revised 21-Sep-2019) Version 0.178 of the metamath program has the following changes: (1) "minimize_with" will now prevent dependence on new $a statements unless the new qualifier "/
allow_new_axioms" is specified. For routine usage, it is suggested that you use "minimize_with * /allow_new_axioms * /no_new_axioms_from ax-*" instead of just "minimize_with *". See "help
minimize_with" and this Google Group post. Also note that the qualifier "/allow_growth" has been renamed to "/may_grow". (2) "/no_versioning" was added to "write theorem_list".
(8-Jul-2019) Jon Pennant announced the creation of a Metamath search engine. Try it and feel free to comment on it at https://groups.google.com/d/msg/metamath/cTeU5AzUksI/5GesBfDaCwAJ.
(16-May-2019) Set.mm now has a major new section on elementary geometry. This begins with definitions that implement Tarski's axioms of geometry (including concepts such as congruence and
betweenness). This uses set.mm's extensible structures, making them easier to use in many circumstances. The section then connects Tarski geometry with geometry in Euclidean spaces. Most of the work
in this section is due to Thierry Arnoux, with earlier work by Mario Carneiro and Scott Fenton. [Reported by DAW.]
(9-May-2019) We are sad to report that long-time contributor Alan Sare passed away on Mar. 23. There is some more information at the top of his mathbox (click on "Mathbox for Alan Sare") and his
obituary. We extend our condolences to his family.
(10-Mar-2019) Jon Pennant and Mario Carneiro added a new proof to the 100 theorem list, Heron's formula heron, bringing the Metamath total to 71.
(22-Feb-2019) Alexander van der Vekens added a new proof to the 100 theorem list, Cramer's rule cramer, bringing the Metamath total to 70.
(6-Feb-2019) David A. Wheeler has made significant improvements and updates to the Metamath book. Any comments, errors found, or suggestions are welcome and should be turned into an issue or pull
request at https://github.com/metamath/metamath-book (or sent to me if you prefer).
(26-Dec-2018) I added Appendix 8 to the MPE Home Page that cross-references new and old axiom numbers.
(20-Dec-2018) The axioms have been renumbered according to this Google Groups post.
(24-Nov-2018) Thierry Arnoux created a new page on topological structures. The page along with its SVG files are maintained on GitHub.
(11-Oct-2018) Alexander van der Vekens added a new proof to the 100 theorem list, the Friendship Theorem friendship, bringing the Metamath total to 69.
(1-Oct-2018) Naip Moro has written gramm, a Metamath proof verifier written in Antlr4/Java.
(16-Sep-2018) The definition df-riota has been simplified so that it evaluates to the empty set instead of an Undef value. This change affects a significant part of set.mm.
(2-Sep-2018) Thierry Arnoux added a new proof to the 100 theorem list, Euler's partition theorem eulerpart, bringing the Metamath total to 68.
(1-Sep-2018) The Kate editor now has Metamath syntax highlighting built in. (Communicated by Wolf Lammen.)
(15-Aug-2018) The Intuitionistic Logic Explorer now has a Most Recent Proofs page.
(4-Aug-2018) Version 0.163 of the metamath program now indicates (with an asterisk) which Table of Contents headers have associated comments.
(10-May-2018) George Szpiro, journalist and author of several books on popular mathematics such as Poincaré's Prize and Numbers Rule, used a genetic algorithm to find shorter D-proofs of "*3.37" and
"meredith" in our Shortest known proofs... file.
(19-Apr-2018) The EMetamath Eclipse plugin has undergone many improvements since its initial release as the change log indicates. Thierry uses it as his main proof assistant and writes, "I added
support for mmj2's auto-transformations, which allows it to infer several steps when building proofs. This added a lot of comfort for writing proofs.... I can now switch back and forth between the
proof assistant and editing the Metamath file.... I think no other proof assistant has this feature."
(11-Apr-2018) Benoît Jubin solved an open problem about the "Axiom of Twoness," showing that it is necessary for completeness. See item 14 on the "Open problems and miscellany" page.
(25-Mar-2018) Giovanni Mascellani has announced mmpp, a new proof editing environment for the Metamath language.
(27-Feb-2018) Bill Hale has released an app for the Apple iPad and desktop computer that allows you to browse Metamath theorems and their proofs.
(17-Jan-2018) Dylan Houlihan has kindly provided a new mirror site. He has also provided an rsync server; type "rsync uk.metamath.org::" in a bash shell to check its status (it should return
"metamath metamath").
(15-Jan-2018) The metamath program, version 0.157, has been updated to implement the file inclusion conventions described in the 21-Dec-2017 entry of mmnotes.txt.
(11-Dec-2017) I added a paragraph, suggested by Gérard Lang, to the distinct variable description here.
(10-Dec-2017) Per FL's request, his mathbox will be removed from set.mm. If you wish to export any of his theorems, today's version (master commit 1024a3a) is the last one that will contain it.
(11-Nov-2017) Alan Sare updated his completeusersproof program.
(3-Oct-2017) Sean B. Palmer created a web page that runs the metamath program under emulated Linux in JavaScript. He also wrote some programs to work with our shortest known proofs of the PM
propositional calculus theorems.
(28-Sep-2017) Ivan Kuckir wrote a tutorial blog entry, Introduction to Metamath, that summarizes the language syntax. (It may have been written some time ago, but I was not aware of it before.)
(26-Sep-2017) The default directory for the Metamath Proof Explorer (MPE) has been changed from the GIF version (mpegif) to the Unicode version (mpeuni) throughout the site. Please let me know if you
find broken links or other issues.
(24-Sep-2017) Saveliy Skresanov added a new proof to the 100 theorem list, Ceva's Theorem cevath, bringing the Metamath total to 67.
(3-Sep-2017) Brendan Leahy added a new proof to the 100 theorem list, Area of a Circle areacirc, bringing the Metamath total to 66.
(7-Aug-2017) Mario Carneiro added a new proof to the 100 theorem list, Principle of Inclusion/Exclusion incexc, bringing the Metamath total to 65.
(1-Jul-2017) Glauco Siliprandi added a new proof to the 100 theorem list, Stirling's Formula stirling, bringing the Metamath total to 64. Related theorems include 2 versions of Wallis' formula for π
(wallispi and wallispi2).
(7-May-2017) Thierry Arnoux added a new proof to the 100 theorem list, Bertrand's Ballot Problem ballotth, bringing the Metamath total to 63.
(20-Apr-2017) Glauco Siliprandi added a new proof in the supplementary list on the 100 theorem list, Stone-Weierstrass Theorem stowei.
(28-Feb-2017) David Moews added a new proof to the 100 theorem list, Product of Segments of Chords chordthm, bringing the Metamath total to 62.
(1-Jan-2017) Saveliy Skresanov added a new proof to the 100 theorem list, Isosceles triangle theorem isosctr, bringing the Metamath total to 61.
(1-Jan-2017) Mario Carneiro added 2 new proofs to the 100 theorem list, L'Hôpital's Rule lhop and Taylor's Theorem taylth, bringing the Metamath total to 60.
(28-Dec-2016) David A. Wheeler is putting together a page on Metamath (specifically set.mm) conventions. Comments are welcome on the Google Group thread.
(24-Dec-2016) Mario Carneiro introduced the abbreviation "F/ x ph" (symbols: turned F, x, phi) in df-nf to represent the "effectively not free" idiom "A. x ( ph -> A. x ph )". Theorem nf2 shows a
version without nested quantifiers.
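In conventional notation, the new abbreviation and the idiom it replaces read as follows (the turned F is rendered here with amssymb's \Finv):

```latex
% df-nf: "x is effectively not free in ph"
\Finv x\,\varphi \;\leftrightarrow\; \forall x\,(\varphi \rightarrow \forall x\,\varphi)
```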
(22-Dec-2016) Naip Moro has developed a Metamath database for G. Spencer-Brown's Laws of Form. You can follow the Google Group discussion here.
(20-Dec-2016) In metamath program version 0.137, 'verify markup *' now checks that ax-XXX $a matches axXXX $p when the latter exists, per the discussion at https://groups.google.com/d/msg/metamath/
(24-Nov-2016) Mingl Yuan has kindly provided a mirror site in Beijing, China. He has also provided an rsync server; type "rsync cn.metamath.org::" in a bash shell to check its status (it should
return "metamath metamath").
(14-Aug-2016) All HTML pages on this site should now be mobile-friendly and pass the Mobile-Friendly Test. If you find one that does not, let me know.
(14-Aug-2016) Daniel Whalen wrote a paper describing the use of deep learning to prove 14% of test theorems taken from set.mm: Holophrasm: a neural Automated Theorem Prover for higher-order logic. The associated program is called Holophrasm.
(14-Aug-2016) David A. Wheeler created a video called Metamath Proof Explorer: A Modern Principia Mathematica.
(12-Aug-2016) A Gitter chat room has been created for Metamath.
(9-Aug-2016) Mario Carneiro wrote a Metamath proof verifier in the Scala language as part of the ongoing Metamath -> MMT import project.
(9-Aug-2016) David A. Wheeler created a GitHub project called metamath-test (last execution run) to check that different verifiers both pass good databases and detect errors in defective ones.
(4-Aug-2016) Mario gave two presentations at CICM 2016.
(17-Jul-2016) Thierry Arnoux has written EMetamath, a Metamath plugin for the Eclipse IDE.
(16-Jul-2016) Mario recovered Chris Capel's collapsible proof demo.
(13-Jul-2016) FL sent me an updated version of PDF (LaTeX source) developed with Lamport's pf2 package. See the 23-Apr-2012 entry below.
(12-Jul-2016) David A. Wheeler produced a new video for mmj2 called "Creating functions in Metamath". It shows a more efficient approach than his earlier video of the same name, but it can be of interest to see both approaches.
(10-Jul-2016) Metamath program version 0.132 changes the command 'show restricted' to 'show discouraged' and adds a new command, 'set discouragement'. See the mmnotes.txt entry of 11-May-2016
(updated 10-Jul-2016).
(12-Jun-2016) Dan Getz has written Metamath.jl, a Metamath proof verifier written in the Julia language.
(10-Jun-2016) If you are using metamath program versions 0.128, 0.129, or 0.130, please update to version 0.131. (In the bad versions, 'minimize_with' ignores distinct variable violations.)
(1-Jun-2016) Mario Carneiro added new proofs to the 100 theorem list, the Prime Number Theorem pnt and the Perfect Number Theorem perfect, bringing the Metamath total to 58.
(12-May-2016) Mario Carneiro added a new proof to the 100 theorem list, Dirichlet's theorem dirith, bringing the Metamath total to 56. (Added 17-May-2016) An informal exposition of the proof can be
found at http://metamath-blog.blogspot.com/2016/05/dirichlets-theorem.html
(10-Mar-2016) Metamath program version 0.125 adds a new qualifier, /fast, to 'save proof'. See the mmnotes.txt entry of 10-Mar-2016.
(6-Mar-2016) The most recent set.mm has a large update converting variables from letters to symbols. See this Google Groups post.
(16-Feb-2016) Mario Carneiro's new paper "Models for Metamath" can be found here and on arxiv.org.
(6-Feb-2016) There are now 22 math symbols that can be used as variable names. See mmascii.html near the 50th table row, starting with "./\".
(29-Jan-2016) Metamath program version 0.123 adds /packed and /explicit qualifiers to 'save proof' and 'show proof'. See this Google Groups post.
(13-Jan-2016) The Unicode math symbols now provide for external CSS and use the XITS web font. Thanks to David A. Wheeler, Mario Carneiro, Cris Perdue, Jason Orendorff, and Frédéric Liné for
discussions on this topic. Two commands, htmlcss and htmlfont, were added to the $t comment in set.mm and are recognized by Metamath program version 0.122.
(21-Dec-2015) Axiom ax-12, now renamed ax-12o, was replaced by a new shorter equivalent, ax-12. The equivalence is provided by theorems ax12o and ax12.
(13-Dec-2015) A new section on the theory of classes was added to the MPE Home Page. Thanks to Gérard Lang for suggesting this section and improvements to it.
(17-Nov-2015) Metamath program version 0.121: 'verify markup' was added to check comment markup consistency; see 'help verify markup'. You are encouraged to make sure 'verify markup */f' has no
warnings prior to mathbox submissions. The date consistency rules are given in this Google Groups post.
(23-Sep-2015) Drahflow wrote, "I am currently working on yet another proof assistant, main reason being: I understand stuff best if I code it. If anyone is interested: https://github.com/Drahflow/
Igor (but in my own programming language, so expect a complicated build process :P)"
(23-Aug-2015) Ivan Kuckir created MM Tool, a Metamath proof verifier and editor written in JavaScript that runs in a browser.
(25-Jul-2015) Axiom ax-10 is shown to be redundant by theorem ax10, so it was removed from the predicate calculus axiom list.
(19-Jul-2015) Mario Carneiro gave two talks related to Metamath at CICM 2015, which are linked to at Other Metamath-Related Topics.
(18-Jul-2015) The metamath program has been updated to version 0.118. 'show trace_back' now has a '/to' qualifier to show the path back to a specific axiom such as ax-ac. See 'help show trace_back'.
(12-Jul-2015) I added the HOL Explorer for Mario Carneiro's hol.mm database. Although the home page needs to be filled out, the proofs can be accessed.
(11-Jul-2015) I started a new page, Other Metamath-Related Topics, that will hold miscellaneous material that doesn't fit well elsewhere (or is hard to find on this site). Suggestions welcome.
(23-Jun-2015) Metamath's mascot, Penny the cat (2007 photo), passed away today. She was 18 years old.
(21-Jun-2015) Mario Carneiro added 3 new proofs to the 100 theorem list: All Primes (1 mod 4) Equal the Sum of Two Squares 2sq, The Law of Quadratic Reciprocity lgsquad and the AM-GM theorem amgm,
bringing the Metamath total to 55.
(13-Jun-2015) Stefan O'Rear's smm, written in JavaScript, can now be used as a standalone proof verifier. This brings the total number of independent Metamath verifiers to 8, written in just as many languages (C, Java, JavaScript, Python, Haskell, Lua, C#, C++).
(12-Jun-2015) David A. Wheeler added 2 new proofs to the 100 theorem list: The Law of Cosines lawcos and Ptolemy's Theorem ptolemy, bringing the Metamath total to 52.
(30-May-2015) The metamath program has been updated to version 0.117. (1) David A. Wheeler provided an enhancement to speed up the 'improve' command by 28%; see README.TXT for more information. (2)
In web pages with proofs, local hyperlinks on step hypotheses no longer clip the Expression cell at the top of the page.
(9-May-2015) Stefan O'Rear has created an archive of older set.mm releases back to 1998: https://github.com/sorear/set.mm-history/.
(7-May-2015) The set.mm dated 7-May-2015 is a major revision, updated by Mario, that incorporates the new ordered pair definition df-op that was agreed upon. There were 700 changes, listed at the top
of set.mm. Mathbox users are advised to update their local mathboxes. As usual, if any mathbox user has trouble incorporating these changes into their mathbox in progress, Mario or I will be glad to
do them for you.
(7-May-2015) Mario has added 4 new theorems to the 100 theorem list: Ramsey's Theorem ramsey, The Solution of a Cubic cubic, The Solution of the General Quartic Equation quart, and The Birthday
Problem birthday. In the Supplementary List, Stefan O'Rear added the Hilbert Basis Theorem hbt.
(28-Apr-2015) A while ago, Mario Carneiro wrote up a proof of the unambiguity of set.mm's grammar, which has now been added to this site: grammar-ambiguity.txt.
(22-Apr-2015) The metamath program has been updated to version 0.114. In MM-PA, 'show new_proof/unknown' now shows the relative offset (-1, -2,...) used for 'assign' arguments, suggested by Stefan
(20-Apr-2015) I retrieved an old version of the missing "Metamath 100" page from archive.org and updated it to what I think is the current state: mm_100.html. Anyone who wants to edit it can email
updates to this page to me.
(19-Apr-2015) The metamath program has been updated to version 0.113, mostly with patches provided by Stefan O'Rear. (1) 'show statement %' (or any command allowing label wildcards) will select
statements whose proofs were changed in current session. ('help search' will show all wildcard matching rules.) (2) 'show statement =' will select the statement being proved in MM-PA. (3) The proof
date stamp is now created only if the proof is complete.
(18-Apr-2015) There is now a section for Scott Fenton's NF database: New Foundations Explorer.
(16-Apr-2015) Mario describes his recent additions to set.mm at https://groups.google.com/forum/#!topic/metamath/VAGNmzFkHCs. It includes 2 new additions to the Formalizing 100 Theorems list, Leibniz' series for pi (leibpi) and the Konigsberg Bridge problem (konigsberg).
(10-Mar-2015) Mario Carneiro has written a paper, "Arithmetic in Metamath, Case Study: Bertrand's Postulate," for CICM 2015. A preprint is available at arXiv:1503.02349.
(23-Feb-2015) Scott Fenton has created a Metamath formalization of NF set theory: https://github.com/sctfn/metamath-nf/. For more information, see the Metamath Google Group posting.
(28-Jan-2015) Mario Carneiro added Wilson's Theorem (wilth), Ascending or Descending Sequences (erdsze, erdsze2), and Derangements Formula (derangfmla, subfaclim), bringing the Metamath total for
Formalizing 100 Theorems to 44.
(19-Jan-2015) Mario Carneiro added Sylow's Theorem (sylow1, sylow2, sylow2b, sylow3), bringing the Metamath total for Formalizing 100 Theorems to 41.
(9-Jan-2015) The hypothesis order of mpbi*an* was changed. See the Notes entry of 9-Jan-2015.
(1-Jan-2015) Mario Carneiro has written a paper, "Conversion of HOL Light proofs into Metamath," that has been submitted to the Journal of Formalized Reasoning. A preprint is available on arxiv.org.
(22-Nov-2014) Stefan O'Rear added the Solutions to Pell's Equation (rmxycomplete) and Liouville's Theorem and the Construction of Transcendental Numbers (aaliou), bringing the Metamath total for
Formalizing 100 Theorems to 40.
(22-Nov-2014) The metamath program has been updated with version 0.111. (1) Label wildcards now have a label range indicator "~" so that e.g. you can show or search all of the statements in a
mathbox. See 'help search'. (Stefan O'Rear added this to the program.) (2) A qualifier was added to 'minimize_with' to prevent the use of any axioms not already used in the proof e.g. 'minimize_with
* /no_new_axioms_from ax-*' will prevent the use of ax-ac if the proof doesn't already use it. See 'help minimize_with'.
(10-Oct-2014) Mario Carneiro has encoded the axiomatic basis for the HOL theorem prover into a Metamath source file, hol.mm.
(24-Sep-2014) Mario Carneiro added the Sum of the Angles of a Triangle (ang180), bringing the Metamath total for Formalizing 100 Theorems to 38.
(15-Sep-2014) Mario Carneiro added the Fundamental Theorem of Algebra (fta), bringing the Metamath total for Formalizing 100 Theorems to 37.
(3-Sep-2014) Mario Carneiro added the Fundamental Theorem of Integral Calculus (ftc1, ftc2). This brings the Metamath total for Formalizing 100 Theorems to 35. (added 14-Sep-2014) Along the way, he
added the Mean Value Theorem (mvth), bringing the total to 36.
(16-Aug-2014) Mario Carneiro started a Metamath blog at http://metamath-blog.blogspot.com/.
(10-Aug-2014) Mario Carneiro added Erdős's proof of the divergence of the inverse prime series (prmrec). This brings the Metamath total for Formalizing 100 Theorems to 34.
(31-Jul-2014) Mario Carneiro added proofs for Euler's Summation of 1 + (1/2)^2 + (1/3)^2 + .... (basel) and The Factor and Remainder Theorems (facth, plyrem). This brings the Metamath total for
Formalizing 100 Theorems to 33.
(16-Jul-2014) Mario Carneiro added proofs for Four Squares Theorem (4sq), Formula for the Number of Combinations (hashbc), and Divisibility by 3 Rule (3dvds). This brings the Metamath total for
Formalizing 100 Theorems to 31.
(11-Jul-2014) Mario Carneiro added proofs for Divergence of the Harmonic Series (harmonic), Order of a Subgroup (lagsubg), and Lebesgue Measure and Integration (itgcl). This brings the Metamath total
for Formalizing 100 Theorems to 28.
(7-Jul-2014) Mario Carneiro presented a talk, "Natural Deduction in the Metamath Proof Language," at the 6PCM conference. Slides Audio
(25-Jun-2014) In version 0.108 of the metamath program, the 'minimize_with' command is now more automated. It now considers compressed proof length; it scans the statements in forward and reverse
order and chooses the best; and it avoids $d conflicts. The '/no_distinct', '/brief', and '/reverse' qualifiers are obsolete, and '/verbose' no longer lists all statements scanned but gives more
details about decision criteria.
(12-Jun-2014) To improve naming uniformity, theorems about operation values now use the abbreviation "ov". For example, df-opr, opreq1, oprabval5, and oprvres are now called df-ov, oveq1, ov5, and
ovres respectively.
(11-Jun-2014) Mario Carneiro finished a major revision of set.mm. His notes are under the 11-Jun-2014 entry in the Notes
(4-Jun-2014) Mario Carneiro provided instructions and screenshots for syntax highlighting for the jEdit editor for use with Metamath and mmj2 source files.
(19-May-2014) Mario Carneiro added a feature to mmj2, in the build at https://github.com/digama0/mmj2/raw/dev-build/mmj2jar/mmj2.jar, which tests all but 5 definitions in set.mm for soundness. You
can turn on the test by adding
to your RunParms.txt file.
(17-May-2014) A number of labels were changed in set.mm, listed at the top of set.mm as usual. Note in particular that the heavily-used visset, elisseti, syl11anc, syl111anc were changed respectively
to vex, elexi, syl2anc, syl3anc.
(16-May-2014) Scott Fenton formalized a proof for "Sum of kth powers": fsumkthpow. This brings the Metamath total for Formalizing 100 Theorems to 25.
(9-May-2014) I (Norm Megill) presented an overview of Metamath at the "Formalization of mathematics in proof assistants" workshop at the Institut Henri Poincaré in Paris. The slides for this talk are
(22-Jun-2014) Version 0.107 of the metamath program adds a "PART" indention level to the Statement List table of contents, adds 'show proof ... /size' to show source file bytes used, and adds 'show
elapsed_time'. The last one is helpful for measuring the run time of long commands. See 'help write theorem_list', 'help show proof', and 'help show elapsed_time' for more information.
(2-May-2014) Scott Fenton formalized a proof of Sum of the Reciprocals of the Triangular Numbers: trirecip. This brings the Metamath total for Formalizing 100 Theorems to 24.
(19-Apr-2014) Scott Fenton formalized a proof of the Formula for Pythagorean Triples: pythagtrip. This brings the Metamath total for Formalizing 100 Theorems to 23.
(11-Apr-2014) David A. Wheeler produced a much-needed and well-done video for mmj2, called "Introduction to Metamath & mmj2". Thanks, David!
(15-Mar-2014) Mario Carneiro formalized a proof of Bertrand's postulate: bpos. This brings the Metamath total for Formalizing 100 Theorems to 22.
(18-Feb-2014) Mario Carneiro proved that complex number axiom ax-cnex is redundant (theorem cnex). See also Real and Complex Numbers.
(11-Feb-2014) David A. Wheeler has created a theorem compilation that tracks those theorems in Freek Wiedijk's Formalizing 100 Theorems list that have been proved in set.mm. If you find an error or omission in this list, let me know so it can be corrected. (Update 1-Mar-2014: Mario has added eulerth and bezout to the list.)
(4-Feb-2014) Mario Carneiro writes:
The latest commit on the mmj2 development branch introduced an exciting new feature, namely syntax highlighting for mmp files in the main window. (You can pick up the latest mmj2.jar at https://
github.com/digama0/mmj2/blob/develop/mmj2jar/mmj2.jar .) The reason I am asking for your help at this stage is to help with design for the syntax tokenizer, which is responsible for breaking down
the input into various tokens with names like "comment", "set", and "stephypref", which are then colored according to the user's preference. As users of mmj2 and metamath, what types of
highlighting would be useful to you?
One limitation of the tokenizer is that since (for performance reasons) it can be started at any line in the file, highly contextual coloring, like highlighting step references that don't exist
previously in the file, is difficult to do. Similarly, true parsing of the formulas using the grammar is possible but likely to be unmanageably slow. But things like checking theorem labels
against the database is quite simple to do under the current setup.
That said, how can this new feature be optimized to help you when writing proofs?
(13-Jan-2014) Mathbox users: the *19.21a*, *19.23a* series of theorems have been renamed to *alrim*, *exlim*. You can update your mathbox with a global replacement of string '19.21a' with 'alrim' and
'19.23a' with 'exlim'.
(5-Jan-2014) If you downloaded mmj2 in the past 3 days, please update it with the current version, which fixes a bug introduced by the recent changes that made it unable to read in most of the proofs
in the textarea properly.
(4-Jan-2014) I added a list of "Allowed substitutions" under the "Distinct variable groups" list on the theorem web pages, for example axsep. This is an experimental feature and comments are welcome.
(3-Jan-2014) Version 0.102 of the metamath program produces more space-efficient compressed proofs (still compatible with the specification in Appendix B of the Metamath book) using an algorithm
suggested by Mario Carneiro. See 'help save proof' in the program. Also, mmj2 now generates proofs in the new format. The new mmj2 also has a mandatory update that fixes a bug related to the new
format; you must update your mmj2 copy to use it with the latest set.mm.
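As background to "still compatible with the specification in Appendix B": the new algorithm changes how a proof's label list is arranged, not the encoding itself. That encoding can be sketched as follows (a minimal reimplementation of the Appendix B number scheme, not code from the metamath program):

```python
def encode_number(n):
    """Encode a 1-based compressed-proof number per Appendix B of the
    Metamath book: the final letter is a base-20 digit 'A'-'T'; any
    preceding letters are base-5 digits 'U'-'Y', most significant first."""
    assert n >= 1
    s = chr(ord('A') + (n - 1) % 20)   # least-significant base-20 digit
    n = (n - 1) // 20
    while n > 0:                        # remaining digits in base 5
        s = chr(ord('U') + (n - 1) % 5) + s
        n = (n - 1) // 5
    return s
```

For example, number 21 is written "UA". The letter 'Z' is not an encoding digit; it marks a proof step for later reuse.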
(23-Dec-2013) Mario Carneiro has updated many older definitions to use the maps-to notation. If you have difficulty updating your local mathbox, contact him or me for assistance.
(1-Nov-2013) 'undo' and 'redo' commands were added to the Proof Assistant in metamath program version 0.07.99. See 'help undo' in the program.
(8-Oct-2013) Today's Notes entry describes some proof repair techniques.
(5-Oct-2013) Today's Notes entry explains some recent extensible structure improvements.
(8-Sep-2013) Mario Carneiro has revised the square root and sequence generator definitions. See today's Notes entry.
(3-Aug-2013) Mario Carneiro writes: "I finally found enough time to create a GitHub repository for development at https://github.com/digama0/mmj2. A permalink to the latest version plus source (akin
to mmj2.zip) is https://github.com/digama0/mmj2/zipball/, and the jar file on its own (mmj2.jar) is at https://github.com/digama0/mmj2/blob/master/mmj2jar/mmj2.jar?raw=true. Unfortunately there is no
easy way to automatically generate mmj2jar.zip, but this is available as part of the zip distribution for mmj2.zip. History tracking will be handled by the repository now. Do you have old versions of
the mmj2 directory? I could add them as historical commits if you do."
(18-Jun-2013) Mario Carneiro has done a major revision and cleanup of the construction of real and complex numbers. In particular, rather than using equivalence classes as is customary for the
construction of the temporary rationals, he used only "reduced fractions", so that the use of the axiom of infinity is avoided until it becomes necessary for the construction of the temporary reals.
(18-May-2013) Mario Carneiro has added the ability to produce compressed proofs to mmj2. This is not an official release but can be downloaded here if you want to try it: mmj2.jar. If you have any
feedback, send it to me (NM), and I will forward it to Mario. (Disclaimer: this release has not been endorsed by Mel O'Cat. If anyone has been in contact with him, please let me know.)
(29-Mar-2013) Charles Greathouse reduced the size of our PNG symbol images using the pngout program.
(8-Mar-2013) Wolf Lammen has reorganized the theorems in the "Logical negation" section of set.mm into a more orderly, less scattered arrangement.
(27-Feb-2013) Scott Fenton has done a large cleanup of set.mm, eliminating *OLD references in 144 proofs. See the Notes entry for 27-Feb-2013.
(21-Feb-2013) *ATTENTION MATHBOX USERS* The order of hypotheses of many syl* theorems were changed, per a suggestion of Mario Carneiro. You need to update your local mathbox copy for compatibility
with the new set.mm, or I can do it for you if you wish. See the Notes entry for 21-Feb-2013.
(16-Feb-2013) Scott Fenton shortened the direct-from-axiom proofs of *3.1, *3.43, *4.4, *4.41, *4.5, *4.76, *4.83, *5.33, *5.35, *5.36, and meredith in the "Shortest known proofs of the propositional
calculus theorems from Principia Mathematica" (pmproofs.txt).
(27-Jan-2013) Scott Fenton writes, "I've updated Ralph Levien's mmverify.py. It's now a Python 3 program, and supports compressed proofs and file inclusion statements. This adds about fifty lines to
the original program. Enjoy!"
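A verifier along these lines can start from the Metamath spec's very simple lexical layer: tokens are whitespace-separated, and comments run from "$(" to "$)". A minimal sketch of that layer (illustrative only, not code taken from mmverify.py):

```python
def read_tokens(mm_text):
    """Split Metamath source into tokens, dropping $( ... $) comments.
    Per the spec, comment delimiters are themselves whitespace-separated
    tokens, so a plain split suffices for this sketch (nested comments
    and $[ $] file inclusions are not handled here)."""
    tokens = []
    in_comment = False
    for tok in mm_text.split():
        if in_comment:
            if tok == '$)':
                in_comment = False
        elif tok == '$(':
            in_comment = True
        else:
            tokens.append(tok)
    return tokens
```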
(10-Jan-2013) A new mathbox was added for Mario Carneiro, who has contributed a number of cardinality theorems without invoking the Axiom of Choice. This is nice work, and I will be using some of
these (those suffixed with "NEW") to replace the existing ones in the main part of set.mm that currently invoke AC unnecessarily.
(4-Jan-2013) As mentioned in the 19-Jun-2012 item below, Eric Schmidt discovered that the complex number axioms axaddcom (now addcom) and ax0id (now addid1) are redundant (schmidt-cnaxioms.pdf, .tex
). In addition, ax1id (now mulid1) can be weakened to ax1rid. Scott Fenton has now formalized this work, so that now there are 23 instead of 25 axioms for real and complex numbers in set.mm. The
Axioms for Complex Numbers page has been updated with these results. An interesting part of the proof, showing how commutativity of addition follows from other laws, is in addcomi.
(27-Nov-2012) The frequently-used theorems "an1s", "an1rs", "ancom13s", "ancom31s" were renamed to "an12s", "an32s", "an13s", "an31s" to conform to the convention for an12 etc.
(4-Nov-2012) The changes proposed in the Notes, renaming Grp to GrpOp etc., have been incorporated into set.mm. See the list of changes at the top of set.mm. If you want me to update your mathbox
with these changes, send it to me along with the version of set.mm that it works with.
(20-Sep-2012) Mel O'Cat updated https://us.metamath.org/ocat/mmj2/TESTmmj2jar.zip. See the README.TXT for a description of the new features.
(21-Aug-2012) Mel O'Cat has uploaded SearchOptionsMockup9.zip, a mockup for the new search screen in mmj2. See the README.txt file for instructions. He will welcome feedback via x178g243 at
(19-Jun-2012) Eric Schmidt has discovered that in our axioms for complex numbers, axaddcom and ax0id are redundant. (At some point these need to be formalized for set.mm.) He has written up these and
some other nice results, including some independence results for the axioms, in schmidt-cnaxioms.pdf (schmidt-cnaxioms.tex).
(23-Apr-2012) Frédéric Liné sent me a PDF (LaTeX source) developed with Lamport's pf2 package. He wrote: "I think it works well with Metamath since the proofs are in a tree form. I use it to have a
sketch of a proof. I get this way a better understanding of the proof and I can cut down its size. For instance, inpreima5 was reduced by 50% when I wrote the corresponding proof with pf2."
(5-Mar-2012) I added links to Wikiproofs and its recent changes in the "Wikis" list at the top of this page.
(12-Jan-2012) Thanks to William Hoza who sent me a ZFC T-shirt, and thanks to the ZFC models (courtesy of the Inaccessible Cardinals agency).
(24-Nov-2011) In metamath program version 0.07.71, the 'minimize_with' command by default now scans from bottom to top instead of top to bottom, since empirically this often (although not always) results in a shorter proof. A top to bottom scan can be specified with a new qualifier '/reverse'. You can try both methods (starting from the same original proof, of course) and pick the shorter proof.
(15-Oct-2011) From Mel O'Cat:
I just uploaded mmj2.zip containing the 1-Nov-2011 (20111101) release: https://us.metamath.org/ocat/mmj2/mmj2.zip https://us.metamath.org/ocat/mmj2/mmj2.md5
A few last minute tweaks:
1. I now bless double-click starting of mmj2.bat (MacMMJ2.command in Mac OS-X)! See mmj2\QuickStart.html
2. Much improved support of Mac OS-X systems. See mmj2\QuickStart.html
3. I tweaked the Command Line Argument Options report to
a) print every time;
b) print as much as possible even if there are errors in the command line arguments -- and the last line printed corresponds to the argument in error;
c) removed Y/N argument on the command line to enable/disable the report. This simplifies things.
4) Documentation revised, including the PATutorial.
See CHGLOG.TXT for list of all changes. Good luck. And thanks for all of your help!
(15-Sep-2011) MATHBOX USERS: I made a large number of label name changes to set.mm to improve naming consistency. There is a script at the top of the current set.mm that you can use to update your
mathbox or older set.mm. Or if you wish, I can do the update on your next mathbox submission - in that case, please include a .zip of the set.mm version you used.
(30-Aug-2011) Scott Fenton shortened the direct-from-axiom proofs of *3.33, *3.45, *4.36, and meredith in the "Shortest known proofs of the propositional calculus theorems from Principia Mathematica"
(21-Aug-2011) A post on reddit generated 60,000 hits (and a TOS violation notice from my provider...).
(18-Aug-2011) The Metamath Google Group has a discussion of my canonical conjunctions proposal. Any feedback directly to me (Norm Megill) is also welcome.
(4-Jul-2011) John Baker has provided (metamath_kindle.zip) "a modified version of [the] metamath.tex [Metamath] book source that is formatted for the Kindle. If you compile the document the resulting PDF can be loaded into a Kindle and easily read." (Update: the PDF file is now included also.)
(3-Jul-2011) Nested 'submit' calls are now allowed, in metamath program version 0.07.68. Thus you can create or modify a command file (script) from within a command file then 'submit' it. While
'submit' cannot pass arguments (nor are there plans to add this feature), you can 'substitute' strings in the 'submit' target file before calling it in order to emulate this.
(28-Jun-2011) The metamath program version 0.07.64 adds the '/include_mathboxes' qualifier to 'minimize_with'; by default, 'minimize_with *' will now skip checking user mathboxes. Since mathboxes
should be independent from each other, this will help prevent accidental cross-"contamination". Also, '/rewrap' was added to 'write source' to automatically wrap $a and $p comments so as to conform
to the current formatting conventions used in set.mm. This means you no longer have to be concerned about line length < 80 etc.
(19-Jun-2011) ATTENTION MATHBOX USERS: The wff variables et, ze, si, and rh are now global. This change was made primarily to resolve some conflicts between mathboxes, but it will also let you avoid
having to constantly redeclare these locally in the future. Unfortunately, this change can affect the $f hypothesis order, which can cause proofs referencing theorems that use these variables to
fail. All mathbox proofs currently in set.mm have been corrected for this, and you should refresh your local copy for further development of your mathbox. You can correct your proofs that are not in
set.mm as follows. Only the proofs that fail under the current set.mm (using version 0.07.62 or later of the metamath program) need to be modified.
To fix a proof that references earlier theorems using et, ze, si, and rh, do the following (using a hypothetical theorem 'abc' as an example): 'prove abc' (ignore error messages), 'delete floating',
'initialize all', 'unify all/interactive', 'improve all', 'save new_proof/compressed'. If your proof uses dummy variables, these must be reassigned manually.
To fix a proof that uses et, ze, si, and rh as local variables, make sure the proof is saved in 'compressed' format. Then delete the local declarations ($v and $f statements) and follow the same
steps above to correct the proof.
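Collected into one session, the repair recipe above for the hypothetical theorem 'abc' reads:

```
MM> prove abc                    ! ignore error messages here
MM-PA> delete floating
MM-PA> initialize all
MM-PA> unify all/interactive
MM-PA> improve all
MM-PA> save new_proof/compressed
MM-PA> exit
! if the proof used dummy variables, reassign them manually
```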
I apologize for the inconvenience. If you have trouble fixing your proofs, you can contact me for assistance.
Note: Versions of the metamath program before 0.07.62 did not flag an error when global variables were redeclared locally, as it should have according to the spec. This caused these spec violations
to go unnoticed in some older set.mm versions. The new error messages are in fact just informational and can be ignored when working with older set.mm versions.
(7-Jun-2011) The metamath program version 0.07.60 fixes a bug with the 'minimize_with' command found by Andrew Salmon.
(12-May-2010) Andrew Salmon shortened many proofs, shown above. For comparison, I have temporarily kept the old version, which is suffixed with OLD, such as oridmOLD for oridm.
(9-Dec-2010) Eric Schmidt has written a Metamath proof verifier in C++, called checkmm.cpp.
(3-Oct-2010) The following changes were made to the tokens in set.mm. The subset and proper subset symbol changes to C_ and C. were made to prevent defeating the parenthesis matching in Emacs. Other
changes were made so that all letters a-z and A-Z are now available for variable names. One-letter constants such as _V, _e, and _i are now shown on the web pages with Roman instead of italic font,
to disambiguate italic variable names. The new convention is that a prefix of _ indicates Roman font and a prefix of ~ indicates a script (curly) font. Thanks to Stefan Allan and Frédéric Liné for
discussions leading to this change.
│Old│New│ Description │
│C. │_C │binomial coefficient │
│E │_E │epsilon relation │
│e │_e │Euler's constant │
│I │_I │identity relation │
│i │_i │imaginary unit │
│V │_V │universal class │
│(_ │C_ │subset │
│(. │C. │proper subset │
│P~ │~P │power class │
│H~ │~H │Hilbert space │
(25-Sep-2010) The metamath program (version 0.07.54) now implements the current Metamath spec, so footnote 2 on p. 92 of the Metamath book can be ignored.
(24-Sep-2010) The metamath program (version 0.07.53) fixes bug 2106, reported by Michal Burger.
(14-Sep-2010) The metamath program (version 0.07.52) has a revamped LaTeX output with 'show statement xxx /tex', which produces the combined statement, description, and proof similar to the web page
generation. Also, 'show proof xxx /lemmon/renumber' now matches the web page step numbers. ('show proof xxx/renumber' still has the indented form conforming to the actual RPN proof, with slightly
different numbering.)
(9-Sep-2010) The metamath program (version 0.07.51) was updated with a modification by Stefan Allan that adds hyperlinks to the Ref column of proofs.
(12-Jun-2010) Scott Fenton contributed a D-proof (directly from axioms) of Meredith's single axiom (see the end of pmproofs.txt). A description of Meredith's axiom can be found in theorem meredith.
(11-Jun-2010) A new Metamath mirror was added in Austria, courtesy of Kinder-Enduro.
(28-Feb-2010) Raph Levien's Ghilbert project now has a new Ghilbert site and a Google Group.
(26-Jan-2010) Dmitri Vlasov writes, "I admire the simplicity and power of the metamath language, but still I see its great disadvantage - the proofs in metamath are completely non-manageable by
humans without proof assistants. Therefore I decided to develop another language, which would be a higher-level superstructure language towards metamath, and which will support human-readable/
writable proofs directly, without proof assistants. I call this language mdl (acronym for 'mathematics development language')." The latest version of Dmitri's translators from metamath to mdl and
back can be downloaded from http://mathdevlanguage.sourceforge.net/. Currently only Linux is supported, but Dmitri says it should not be difficult to port it to other platforms that have a g++ compiler.
(11-Sep-2009) The metamath program (version 0.07.48) has been updated to enforce the whitespace requirement of the current spec.
(10-Sep-2009) Matthew Leitch has written a nice article, "How to write mathematics clearly", that briefly mentions Metamath. Overall it makes some excellent points. (I have written to him about a
few things I disagree with.)
(28-May-2009) AsteroidMeta is back on-line. Note the URL change.
(12-May-2009) Charles Greathouse wrote a Greasemonkey script to reformat the axiom list on Metamath web site proof pages. This is a beta version; he will appreciate feedback.
(11-May-2009) Stefan Allan modified the metamath program to add the command "show statement xxx /mnemonics", which produces the output file Mnemosyne.txt for use with the Mnemosyne project. The
current Metamath program download incorporates this command. Instructions: Create the file mnemosyne.txt with e.g. "show statement ax-* /mnemonics". In the Mnemosyne program, load the file by
choosing File->Import then file format "Q and A on separate lines". Notes: (1) Don't try to load all of set.mm, it will crash the program due to a bug in Mnemosyne. (2) On my computer, the arrows in
ax-1 don't display. Stefan reports that they do on his computer. (Both are Windows XP.)
(3-May-2009) Steven Baldasty wrote a Metamath syntax highlighting file for the gedit editor. Screenshot.
(1-May-2009) Users on a gaming forum discuss our 2+2=4 proof. Notable comments include "Ew math!" and "Whoever wrote this has absolutely no life."
(12-Mar-2009) Chris Capel has created a Javascript theorem viewer demo that (1) shows substitutions and (2) allows expanding and collapsing proof steps. You are invited to take a look and give him
feedback at his Metablog.
(28-Feb-2009) Chris Capel has written a Metamath proof verifier in C#, available at http://pdf23ds.net/bzr/MathEditor/Verifier/Verifier.cs and weighing in at 550 lines. Also, that same URL without
the file on it is a Bazaar repository.
(2-Dec-2008) A new section was added to the Deduction Theorem page, called Logic, Metalogic, Metametalogic, and Metametametalogic.
(24-Aug-2008) (From ocat): The 1-Aug-2008 version of mmj2 is ready (mmj2.zip), size = 1,534,041 bytes. This version contains the Theorem Loader enhancement which provides a "sandboxing" capability
for user theorems and dynamic update of new theorems to the Metamath database already loaded in memory by mmj2. Also, the new "mmj2 Service" feature enables calling mmj2 as a subroutine, or having
mmj2 call your program, and provides access to the mmj2 data structures and objects loaded in memory (i.e. get started writing those Jython programs!) See also mmj2 on AsteroidMeta.
(23-May-2008) Gérard Lang pointed me to Bob Solovay's note on AC and strongly inaccessible cardinals. One of the eventual goals for set.mm is to prove the Axiom of Choice from Grothendieck's axiom,
like Mizar does, and this note may be helpful for anyone wanting to attempt that. Separately, I also came across a history of the size reduction of grothprim (viewable in Firefox and some versions of
Internet Explorer).
(14-Apr-2008) A "/join" qualifier was added to the "search" command in the metamath program (version 0.07.37). This qualifier will join the $e hypotheses to the $a or $p for searching, so that math
tokens in the $e's can be matched as well. For example, "search *com* +v" produces no results, but "search *com* +v /join" yields commutative laws involving vector addition. Thanks to Stefan Allan
for suggesting this idea.
(8-Apr-2008) The 8,000th theorem, hlrel, was added to the Metamath Proof Explorer part of the database.
(2-Mar-2008) I added a small section to the end of the Deduction Theorem page.
(17-Feb-2008) ocat has uploaded the "1-Mar-2008" mmj2: mmj2.zip. See the description.
(16-Jan-2008) O'Cat has written mmj2 Proof Assistant Quick Tips.
(30-Dec-2007) "How to build a library of formalized mathematics".
(22-Dec-2007) The Metamath Proof Explorer was included in the top 30 science resources for 2007 by the University at Albany Science Library.
(17-Dec-2007) Metamath's Wikipedia entry says, "This article may require cleanup to meet Wikipedia's quality standards" (see its discussion page). Volunteers are welcome. :) (In the interest of
objectivity, I don't edit this entry.)
(20-Nov-2007) Jeff Hoffman created nicod.mm and posted it to the Google Metamath Group.
(19-Nov-2007) Reinder Verlinde suggested adding tooltips to the hyperlinks on the proof pages, which I did for proof step hyperlinks. Discussion.
(5-Nov-2007) A Usenet challenge. :)
(4-Aug-2007) I added a "Request for comments on proposed 'maps to' notation" at the bottom of the AsteroidMeta set.mm discussion page.
(21-Jun-2007) A preprint (PDF file) describing Kurt Maes' axiom of choice with 5 quantifiers, proved in set.mm as ackm.
(20-Jun-2007) The 7,000th theorem, ifpr, was added to the Metamath Proof Explorer part of the database.
(29-Apr-2007) Blog mentions of Metamath: here and here.
(21-Mar-2007) Paul Chapman is working on a new proof browser, which has highlighting that allows you to see the referenced theorem before and after the substitution was made. Here is a screenshot of
theorem 0nn0 and a screenshot of theorem 2p2e4.
(15-Mar-2007) A picture of Penny the cat guarding the us.metamath.org server and making the rounds.
(16-Feb-2007) For convenience, the program "drule.c" (pronounced "D-rule", not "drool") mentioned in pmproofs.txt can now be downloaded (drule.c) without having to ask me for it. The same disclaimer
applies: even though this program works and has no known bugs, it was not intended for general release. Read the comments at the top of the program for instructions.
(28-Jan-2007) Jason Orendorff set up a new mailing list for Metamath: http://groups.google.com/group/metamath.
(20-Jan-2007) Bob Solovay provided a revised version of his Metamath database for Peano arithmetic, peano.mm.
(2-Jan-2007) Raph Levien has set up a wiki called Barghest for the Ghilbert language and software.
(26-Dec-2006) I posted an explanation of theorem ecoprass on Usenet.
(2-Dec-2006) Berislav Žarnić translated the Metamath Solitaire applet to Croatian.
(26-Nov-2006) Dan Getz has created an RSS feed for new theorems as they appear on this page.
(6-Nov-2006) The first 3 paragraphs in Appendix 2: Note on the Axioms were rewritten to clarify the connection between Tarski's axiom system and Metamath.
(31-Oct-2006) ocat asked for a do-over due to a bug in mmj2 -- if you downloaded the mmj2.zip version dated 10/28/2006, then download the new version dated 10/30.
(29-Oct-2006) ocat has announced that the long-awaited 1-Nov-2006 release of mmj2 is available now.
The new "Unify+Get Hints" is quite useful, and any proof can be generated as follows. With "?" in the Hyp field and Ref field blank, select "Unify+Get Hints". Select a hint from the list and put it
in the Ref field. Edit any $n dummy variables to become the desired wffs. Rinse and repeat for the new proof steps generated, until the proof is done.
The new tutorial, mmj2PATutorial.bat, explains this in detail. One way to reduce or avoid dummy $n's is to fill in the Hyp field with a comma-separated list of any known hypothesis matches to earlier
proof steps, keeping a "?" in the list to indicate that the remaining hypotheses are unknown. Then "Unify+Get Hints" can be applied. The tutorial page \mmj2\data\mmp\PATutorial\Page405.mmp has an example.
Don't forget that the eimm export/import program lets you go back and forth between the mmj2 and the metamath program proof assistants, without exiting from either one, to exploit the best features
of each as required.
(21-Oct-2006) Martin Kiselkov has written a Metamath proof verifier in the Lua scripting language, called verify.lua. While it is not practical as an everyday verifier - he writes that it takes about
40 minutes to verify set.mm on a Pentium 4 - it could be useful to someone learning Lua or Metamath, and importantly it provides another independent way of verifying the correctness of Metamath
proofs. His code looks like it is nicely structured and very readable. He is currently working on a faster version in C++.
(19-Oct-2006) New AsteroidMeta page by Raph, Distinctors_vs_binders.
(13-Oct-2006) I put a simple Metamath browser on my PDA (Palm Tungsten E) so that I don't have to lug around my laptop. Here is a screenshot. It isn't polished, but I'll provide the file +
instructions if anyone wants it.
(3-Oct-2006) A blog entry, Principia for Reverse Mathematics.
(28-Sep-2006) A blog entry, Metamath responds.
(26-Sep-2006) A blog entry, Metamath isn't hygienic.
(11-Aug-2006) A blog entry, Metamath and the Peano Induction Axiom.
(26-Jul-2006) A new open problem in predicate calculus was added.
(18-Jun-2006) The 6,000th theorem, mt4d, was added to the Metamath Proof Explorer part of the database.
(9-May-2006) Luca Ciciriello has upgraded the t2mf program, which is a C program used to create the MIDI files on the Metamath Music Page, so that it works on MacOS X. This is a nice accomplishment,
since the original program was written before C was standardized by ANSI and will not compile on modern compilers.
Unfortunately, the original program source states no copyright terms. The main author, Tim Thompson, has kindly agreed to release his code to public domain, but two other authors have also
contributed to the code, and so far I have been unable to contact them for copyright clearance. Therefore I cannot offer the MacOS X version for public download on this site until this is resolved.
Update 10-May-2006: Another author, M. Czeiszperger, has released his contribution to public domain.
If you are interested in Luca's modified source code, please contact me directly.
(18-Apr-2006) Incomplete proofs in progress can now be interchanged between the Metamath program's CLI Proof Assistant and mmj2's GUI Proof Assistant, using a new export-import program called eimm.
This can be done without exiting either proof assistant, so that the strengths of each approach can be exploited during proof development. See "Use Case 5a" and "Use Case 5b" at
(28-Mar-2006) Scott Fenton updated his second version of Metamath Solitaire (the one that uses external axioms). He writes: "I've switched to making it a standalone program, as it seems silly to have
an applet that can't be run in a web browser. Check the README file for further info." The download is mmsol-0.5.tar.gz.
(27-Mar-2006) Scott Fenton has updated the Metamath Solitaire Java applet to Java 1.5: (1) QSort has been stripped out: its functionality is in the Collections class that Sun ships; (2) all Vectors
have been replaced by ArrayLists; (3) generic types have been tossed in wherever they fit: this cuts back drastically on casting; and (4) any warnings Eclipse spouted out have been dealt with. I
haven't yet updated it officially, because I don't know if it will work with Microsoft's JVM in older versions of Internet Explorer. The current official version is compiled with Java 1.3, because it
won't work with Microsoft's JVM if it is compiled with Java 1.4. (As distasteful as that seems, I will get complaints from users if it doesn't work with Microsoft's JVM.) If anyone can verify that
Scott's new version runs on Microsoft's JVM, I would be grateful. Scott's new version is mm.java-1.5.gz; after uncompressing it, rename it to mm.java, use it to replace the existing mm.java file in
the Metamath Solitaire download, and recompile according to instructions in the mm.java comments.
Scott has also created a second version, mmsol-0.2.tar.gz, that reads the axioms from ASCII files, instead of having the axioms hard-coded in the program. This can be very useful if you want to play
with custom axioms, and you can also add a collection of starting theorems as "axioms" to work from. However, it must be run from the local directory with appletviewer, since the default Java
security model doesn't allow reading files from a browser. It works with the JDK 5 Update 6 Java download.
To compile (from Windows Command Prompt): C:\Program Files\Java\jdk1.5.0_06\bin\javac.exe mm.java
To run (from Windows Command Prompt): C:\Program Files\Java\jdk1.5.0_06\bin\appletviewer.exe mms.html
(21-Jan-2006) Juha Arpiainen proved the independence of axiom ax-11 from the others. This was published as an open problem in my 1995 paper (Remark 9.5 on PDF page 17). See Item 9a on the Workshop
Miscellany for his seven-line proof. See also the Asteroid Meta metamathMathQuestions page under the heading "Axiom of variable substitution: ax-11". Congratulations, Juha!
(20-Oct-2005) Juha Arpiainen is working on a proof verifier in Common Lisp called Bourbaki. Its proof language has its roots in Metamath, with the goal of providing a more powerful syntax and
definitional soundness checking. See its documentation and related discussion.
(17-Oct-2005) Marnix Klooster has written a Metamath proof verifier in Haskell, called Hmm. Also see his Announcement. The complete program (Hmm.hs, HmmImpl.hs, and HmmVerify.hs) has only 444 lines
of code, excluding comments and blank lines. It verifies compressed as well as regular proofs; moreover, it transparently verifies both per-spec compressed proofs and the flawed format he uncovered
(see comment below of 16-Oct-05).
(16-Oct-2005) Marnix Klooster noticed that for large proofs, the compressed proof format did not match the spec in the book. His algorithm to correct the problem has been put into the Metamath
program (version 0.07.6). The program still verifies older proofs with the incorrect format, but the user will be nagged to update them with 'save proof *'. In set.mm, 285 out of 6376 proofs are
affected. (The incorrect format did not affect proof correctness or verification, since the compression and decompression algorithms matched each other.)
(13-Sep-2005) Scott Fenton found an interesting axiom, ax46, which could be used to replace both ax-4 and ax-6.
(29-Jul-2005) Metamath was selected as site of the week by American Scientist Online.
(8-Jul-2005) Roy Longton has contributed 53 new theorems to the Quantum Logic Explorer. You can see them in the Theorem List starting at lem3.3.3lem1. He writes, "If you want, you can post an open
challenge to see if anyone can find shorter proofs of the theorems I submitted."
(10-May-2005) A Usenet post I posted about the infinite prime proof; another one about indexed unions.
(3-May-2005) The theorem divexpt is the 5,000th theorem added to the Metamath Proof Explorer database.
(12-Apr-2005) Raph Levien solved the open problem in item 16 on the Workshop Miscellany page and as a corollary proved that axiom ax-9 is independent from the other axioms of predicate calculus and
equality. This is the first such independence proof so far; a goal is to prove all of them independent (or to derive any redundant ones from the others).
(8-Mar-2005) I added a paragraph above our complex number axioms table, summarizing the construction and indicating where Dedekind cuts are defined. Thanks to Andrew Buhr for comments on this.
(16-Feb-2005) The Metamath Music Page is mentioned as a reference or resource for a university course called Math, Mind, and Music.
(28-Jan-2005) Steven Cullinane parodied the Metamath Music Page in his blog.
(18-Jan-2005) Waldek Hebisch upgraded the Metamath program to run on the AMD64 64-bit processor.
(17-Jan-2005) A symbol list summary was added to the beginning of the Hilbert Space Explorer Home Page. Thanks to Mladen Pavicic for suggesting this.
(6-Jan-2005) Someone assembled an amazon.com list of some of the books in the Metamath Proof Explorer Bibliography.
(4-Jan-2005) The definition of ordinal exponentiation was decided on after this Usenet discussion.
(19-Dec-2004) A bit of trivia: my Erdős number is 2, as you can see from this list.
(20-Oct-2004) I started this Usenet discussion about the "reals are uncountable" proof (127 comments; last one on Nov. 12).
(12-Oct-2004) gch-kn shows the equivalence of the Generalized Continuum Hypothesis and Prof. Nambiar's Axiom of Combinatorial Sets. This proof answers his Open Problem 2 (PDF file).
(5-Aug-2004) I gave a talk on "Hilbert Lattice Equations" at the Argonne workshop.
(25-Jul-2004) The theorem nthruz is the 4,000th theorem added to the Metamath Proof Explorer database.
(27-May-2004) Josiah Burroughs contributed the proofs u1lemn1b, u1lem3var1, oi3oa3lem1, and oi3oa3 to the Quantum Logic Explorer database ql.mm.
(23-May-2004) Some minor typos found by Josh Purinton were corrected in the Metamath book. In addition, Josh simplified the definition of the closure of a pre-statement of a formal system in Appendix
(5-May-2004) Gregory Bush has found shorter proofs for 67 of the 193 propositional calculus theorems listed in Principia Mathematica, thus establishing 67 new records. (This was challenge #4 on the
open problems page.)
Copyright terms: Public domain
Comparison of AC losses, magnetic field/current distributions and critical currents of superconducting circular pancake coils and infinitely long stacks using coated conductors
A model is presented for calculating the AC losses, magnetic field/current density distribution and critical currents of a circular superconducting pancake coil. The assumption is that the magnetic
flux lines will lie parallel to the wide faces of tapes in the unpenetrated area of the coil. Instead of using an infinitely long stack to approximate the circular coil, this paper gives an exact
circular coil model using elliptic integrals. A new efficient numerical method is introduced to yield more accurate and fast computation. The computation results are in good agreement with the
assumptions. For a small value of the coil radius, there is an asymmetry along the coil radius direction. As the coil radius increases, this asymmetry will gradually decrease, and the AC losses and
penetration depth will increase, but the critical current will decrease. We find that if the internal radius is equal to the winding thickness, the infinitely long stack approximation overestimates
the loss by 10% and even if the internal radius is reduced to zero, the error is still only 60%. The infinitely long stack approximation is therefore adequate for most practical purposes. In
addition, the comparison result shows that the infinitely long stack approximation saves computation time significantly.
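The paper itself is not reproduced here, but the elliptic-integral machinery it refers to is standard: the field of a single circular current loop, the building block of a circular pancake-coil model, can be written with the complete elliptic integrals K and E. A minimal sketch (the loop radius, current, and field point are arbitrary illustrative values, not taken from the paper; scipy's ellipk/ellipe take the parameter m = k**2):

```python
import numpy as np
from scipy.special import ellipk, ellipe

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def loop_bz(a, I, rho, z):
    """Axial field component B_z (tesla) of a circular loop of radius a
    carrying current I, evaluated at radial distance rho and height z
    (standard textbook elliptic-integral form)."""
    m = 4 * a * rho / ((a + rho) ** 2 + z ** 2)           # m = k**2
    pre = MU0 * I / (2 * np.pi * np.sqrt((a + rho) ** 2 + z ** 2))
    return pre * (ellipk(m)
                  + (a ** 2 - rho ** 2 - z ** 2)
                  / ((a - rho) ** 2 + z ** 2) * ellipe(m))

# sanity check: at the loop centre the field reduces to mu0*I/(2a)
a, I = 0.1, 1.0
print(loop_bz(a, I, rho=0.0, z=0.0))   # ~6.283e-06 T
```

A full coil model sums such loop contributions over the winding turns; the paper's point is that this exact circular treatment can be compared against the much cheaper infinitely-long-stack approximation.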
Bulletin Updates: Articles | Mathematics | Allegheny College
The Frederick and Marion Steen Mathematics Scholarship was established in honor of Frederick and Marion Steen by their children.
This prestigious, merit-based scholarship is awarded annually to a full-time third-year student majoring in the natural sciences at Allegheny College. The purpose of this fund is to partially cover senior-year tuition expenses for one selected student per year.
According to the terms of the Scholarship, “the recipient should demonstrate the following character:
• strong understanding of and skills in the application of the principles of mathematics
• ability to communicate and enthuse others with the beauty of mathematics
• commitment to put mathematics to purpose in a teaching, engineering, or scientific profession.”
A natural sciences major who anticipates graduating in 2023 and wishes to be considered for this scholarship should submit their application via email to Professor Ellers (hellers@allegheny.edu), the
Chair of the Mathematics Department. Applications should contain the following:
1. a letter of interest that addresses the three criteria listed above, and
2. a pdf copy of your transcript. (Unofficial is OK.)
The deadline for completed applications is April 24, 2022. Applications will be reviewed by the Mathematics department faculty shortly thereafter.
Charles A. (Chuck) Cable
January 15, 1932 – September 16, 2021
Charles A. (Chuck) Cable, a mathematics educator and Professor Emeritus from Allegheny College, Meadville, Pennsylvania, died on September 16, 2021 following a 24 year battle with prostate cancer.
He was born on January 15, 1932 in Akeley, PA and is a son of Elton and Margaret (Fox) Cable. In 1955 he married Mabel E. Yeck.
Chuck graduated from Edinboro University of Pennsylvania in 1954 with a BS Degree in mathematics and he began teaching high school mathematics and physics in Interlaken, New York in the fall of
1954. His teaching was interrupted in January of 1955 when he was drafted into the U.S. Army where he served for two years at Fort Chaffee, Arkansas. After being honorably discharged in January
1957, Chuck taught at Tidioute High School in Tidioute, PA. He enjoyed teaching mathematics there for 1½ years. At this point he needed to obtain more college credits to become a permanently
certified teacher in the State of Pennsylvania. Normally the GI Bill would have provided the tuition for these credits but the GI Bill had ended in 1956 and was not reinstated until 1968.
Scholarships were being offered by National Science Foundation (NSF) and also General Electric (GE) and he applied. Chuck was surprised when he received two telegrams announcing that he was awarded
both the 1958 Summer GE Fellowship at Rensselaer Polytechnic Institute and a full-year NSF fellowship at the University of North Carolina (UNC) starting in the fall of 1958. He earned an M.Ed. degree in
mathematics from UNC in 1959 and then held a position as Assistant Professor of Mathematics at Juniata College in Huntingdon, PA from 1959 until 1967.
At the beginning of a long but important journey in his life he attended the NSF Summer Fellowship at Bowdoin College in Brunswick, ME in 1962. Upon returning to Juniata College in the fall of 1962
he continued to enjoy teaching full time while attending evening classes at Penn State University. He received another NSF Summer Fellowship in 1964 and while on sabbatical leave in 1965 he passed
the written exams for the Ph.D. After receiving a one year NSF Faculty Fellowship in 1967 and a National Defense Act Fellowship in 1968 he graduated from Penn State University with a Ph.D. in
mathematics in 1969. His Ph.D. dissertation was titled “The Decomposition of Certain Group Rings” and was written under the supervision of Dr. Raymond Ayoub. In 1969 Chuck was appointed as an
Associate Professor of Mathematics at Allegheny College in Meadville, PA. He was promoted to the rank of Full Professor in 1975.
Shortly after starting at Allegheny College Chuck became chair of a young math department with the goal of making it one of the best in the college. He skillfully and successfully led the department
for 20 years. He was especially proud of his efforts to institute a Math Department Speakers Series in 1972 which invited nationally known mathematicians to give a week-long series of talks at
Allegheny College each Fall Term and Spring Term. These speakers also interacted with students in small group discussions and at mealtimes both on campus and at the Cable home. He also began an
annual publication of Math News which was distributed to all Math Alumni of the College.
Chuck was held in high esteem by his colleagues on the faculty of Allegheny College. He was elected to serve five three year terms on Faculty Council. He served as Chair of the Science Division and
a two year term as President of the local Chapter of the American Association of University Professors.
Dr. Cable was a member of the Mathematical Association of America (MAA) for sixty years at both the sectional and national level. He was Chairman of the Allegheny Mountain Section from 1973 to 1975.
He instituted a session for undergraduate students to present research talks at the annual Allegheny Mountain Section Meeting. The Allegheny Section was the first section to have student sessions.
In 1982 Chuck was elected Governor of the Allegheny Mountain Section. The idea of instituting student chapters was strongly supported by Chuck and he worked hard to convince the other Governors of
its value. The Student Chapter Program passed in 1984, the final year that he was a Governor. He served on the Committee for Student Chapters for the first six years of its existence. He is quoted
in the February 2005 Focus saying, “When my initial efforts to form Student Chapters were unsuccessful, I was quite disappointed and I gave up. However, several months later Paul Halmos urged me to
try again saying that sometimes it takes a while to get used to new ideas. I followed his suggestion and found that he was correct. The persistence eventually paid off.”
He took a sabbatical in 1986 spending a year at University of Colorado Denver where he joined a graph theory research group. He became involved in a project where he and his new colleagues
introduced niche graphs, an extension of competition graphs which arose from the study of food webs. Fortuitously it was presented by a coauthor as part of a talk at a special workshop at the
University of Minnesota on applications of graph theory to the biological and social sciences. As a result of this exposure and Chuck’s first paper in 1989, several mathematicians became interested
in niche graphs and wrote a variety of papers. Working with various combinations of five coauthors, Chuck published seven papers on the topic from 1989 to 2001. A particularly nice paper was a
complete characterization of niche graphs and mixed pair graphs of tournaments found by Chuck and his coauthors. Chuck retired in 1996, but he continued his graph theory research after the niche
graph papers and published three papers on king sets and quasi-kernels from 2005 – 2012.
He was honored to serve as one of the first group of Associate Editors of the MAA’s Focus. His outstanding service to the Allegheny Mountain Section was recognized in 2003 with the Section Service
Award. In January 2005 Chuck received a Meritorious Service Award in Mathematics from the MAA at a ceremony in Atlanta, GA for his service to the MAA at both the sectional and national levels.
Chuck enjoyed competing in various sports activities throughout his life. While he was a faculty member at Juniata College he played handball, basketball and tennis. During the years when he was a
professor at Allegheny College he engaged in tennis, racket ball, sailing and downhill skiing. After he retired he continued downhill skiing, sailing and playing tennis. He was admired for his
great sense of humor. Music played a very important part in his life. He played the trumpet for many years, sang in the Meadville, PA First Presbyterian Church Choir and in the barbershop chorus
at the Green Mountain Presbyterian Church of Lakewood, CO.
In addition to his loving wife, Mabel, of 65 years, he is survived by two children, Christopher Cable and his wife Nancy of Dallas, TX and Carolyn Blinsmon and her husband Brad of Denver CO; three
grandchildren Ryan Cable and his wife Isabel, Dr. Tracy Cable and her husband Barrett Davis, and Heather Blinsmon; one great grandchild David Cable. He is also survived by a twin brother Clair Cable
and his wife Monchaya, who reside in Russell, PA and Bangkok, Thailand and a niece and three nephews. He was preceded in death by his parents.
Memorial contributions may be made to the Charles and Mabel Cable Fund for Visiting Scholars and Speakers in support of the Department of Mathematics at Allegheny College, Institutional Advancement,
520 Main Street, Meadville, PA 16335.
This obituary was written by his wife, Mabel E. Cable and his colleague, Dr. Richard Lundgren, Professor Emeritus, University of Colorado Denver
1. News Release, The Mathematical Association of America, Washington, D.C., January 6, 2005, Five Mathematicians Honored with Meritorious Service Awards.
2. Joint Mathematics Meetings, Atlanta, GA, January 5-8, 2005, Program Booklet, Certificates of Meritorious Service, p.27-28.
3. MAA Focus, Vol.25, No.2, February 2005. Fernando Gouvea and Joe Gallian, Prizes and Awards at the Atlanta Joint Mathematics Meetings, Section Certificates of Meritorious Service, Charles Cable,
4. MAA Focus, Vol.4, No.61/November-December 1984, Charles Cable, U.S. Students Rank Below Students from Other Countries in International Study, p.1.
5. MAA Focus, Vol.36, No.6, December 2015/January 2016, Kenneth A. Ross, Interview: Aparna Higgins, p. 19-20.
6. Personal Communication
The Frederick and Marion Steen Mathematics Scholarship was established in honor of Frederick and Marion Steen by their children.
This prestigious, merit-based scholarship is awarded annually to a full-time student in their junior year who is majoring in the natural sciences at Allegheny College; it partially covers senior
year tuition expenses.
According to the terms of the Scholarship, “the recipient should demonstrate the following character:
• strong understanding of and skills in the application of the principles of mathematics
• ability to communicate and enthuse others with the beauty of mathematics
• commitment to put mathematics to purpose in a teaching, engineering, or scientific profession.”
A junior natural sciences major who wishes to be considered for this scholarship should submit their application by email to Professor Lakins (tlakins@allegheny.edu), Chair of the Mathematics
Department. Applications should contain the following:
1. A letter of interest that addresses the three criteria listed above, and
2. A pdf copy of your WebAdvisor transcript.
The deadline for completed applications is April 21, 2021. Applications will be reviewed by the Mathematics department faculty shortly thereafter. Please contact Professor Lakins
(tlakins@allegheny.edu) if you have questions.
The Math department congratulates Math major and Economics minor Mica Hanish ’21, who was named NCAC Student-Athlete of the Week. As noted in the announcement:
Hanish, a mathematics major with a 4.00 cumulative GPA, was named a CoSIDA Academic All-America first-team selection last season, only the third individual in Allegheny women’s track & field and
cross country history to earn that honor. She is also a distinguished Allegheny Alden Scholar (GPA above 3.80) and is a member of Chi Alpha Sigma, the national student-athlete honor society, and
Pi Mu Epsilon, the national mathematics honor society, where she serves as treasurer of Allegheny’s chapter. In her cross country career, Hanish has garnered All-NCAC selections three times,
including two first-team honors and one second-team honor, while also earning three All-Region honors. On the track, she has earned All-NCAC accolades in the 3000-meter run, DMR, one-mile and
5000-meter run. In March, she was named All-Region for her performances in the mile and 3,000-meter run.
Allegheny College Mathematics Professor Dr. Tamara Lakins has received the 2020 Meritorious Service Award from the Mathematical Association of America (MAA).
Allegheny Professor of Mathematics Tamara Lakins.
The Certificate of Meritorious Service, announced in a video ceremony in July, was presented for service at the national level or for service to a section of the Mathematical Association of America.
There were five award recipients honored nationwide.
Lakins has been active in the Mathematical Association of America since arriving at Allegheny in 1995. During her first year at Allegheny, she participated in the national MAA professional
development program for new Ph.D.s, called Project NExT (New Experiences in Teaching).
“Many aspects of Project NExT greatly informed my teaching as a new professor at Allegheny,” said Lakins. “My continued relationship with MAA’s Project NExT, both nationally and in the local Section
NExT I helped cofound in 1999, has enabled me to stay connected with current pedagogical conversations. Participating, and eventually becoming a leader, in the local MAA section gave me a valuable
connection to the local mathematics community in western Pennsylvania and West Virginia. I have greatly valued and benefited from those friendships and professional relationships, and I was greatly
honored to receive a 2020 MAA Certificate of Meritorious Service.”
“The individuals we honor represent the spirit of community that is at the heart of the Mathematical Association of America,” said Michael Pearson, executive director of the MAA. “Their willingness
to give of their time, energy, and expertise to benefit their Sections and beyond serves as a fresh reminder of why the MAA remains so close to my heart.”
The Mathematical Association of America is the world’s largest community of mathematicians, students and enthusiasts. Its mission is to accelerate the understanding of the world through mathematics
because mathematics drives society and shapes lives. Learn more at maa.org.
The faculty and staff of the Mathematics department congratulate Math major Ryan Clydesdale and Math minor Olivia Krieger on being named co-valedictorians of the class of 2020. Ryan and Olivia,
although we weren’t able to celebrate you in person this year, we are very proud of your achievements.
Ryan Clydesdale, a Math major with a double minor in Chemistry and Economics, was also awarded the Frederick H. Steen Prize for Excellence in Mathematics and was a previous recipient of a Cornerstone
Research summer internship. A member of the men’s soccer team, Ryan also received the William Crawford Academic Merit Award in Athletics – Male, recognizing him as the male scholar-athlete with the
highest grade-point average.
Olivia Krieger, a Physics major with a double minor in Mathematics and Philosophy, was also awarded the Richard L. Brown Physics Prize. Olivia presented her research at the 2019 Conference for
Undergraduate Women in Physics, where she also won best poster in her category.
Will Crosby ’21, Mica Hanish ’21, and Megan Powell ’21, students in Professor Tamara Lakins’ spring 2020 Mathematics Junior Seminar, chose the May 2020 winner of The College Mathematics Journal Next
Generation Prize. The prize was awarded to the journal article in the May 2020 issue that the students selected as the best for undergraduate mathematics students nationwide to read. The prize was
created to promote undergraduate students’ reading of The College Mathematics Journal (published by the Mathematical Association of America), and to encourage the writing of expository mathematics
that is student-accessible. The Allegheny College students were thanked in the May 2020 issue.
MATH 205 – Foundations of Mathematics
Instructor: Professor Lakins
An introduction to concepts encountered in the study of abstract mathematics. Topics covered include logic, mathematical proofs, set theory, relations, functions, mathematical induction, and
introductory number theory. The concepts of injectivity, surjectivity, and inverses are discussed as well as elementary computational tools such as the Division Algorithm and Euclid’s algorithm for
the greatest common divisor. Additional topics may include cardinality, combinatorics, graph theory, algebraic structure, the real number system, and concepts of mathematical analysis.
Prerequisite: MATH 152 or MATH 160 with a grade of C or better.
Distribution Requirements: ME, SP.
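For readers curious about the computational tools named in the MATH 205 description, here is a short illustrative sketch in C (the helper name `gcd` is ours, not part of the course materials) of Euclid's algorithm for the greatest common divisor:

```c
#include <assert.h>

/* Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
   until the remainder is zero; the last nonzero value is gcd(a, b).  */
static unsigned int
gcd (unsigned int a, unsigned int b)
{
  while (b != 0)
    {
      /* The Division Algorithm: a = qb + r with 0 <= r < b.  */
      unsigned int r = a % b;
      a = b;
      b = r;
    }
  return a;
}
```

For example, gcd(48, 18) runs through the remainders 12 and 6 before reaching 0, and returns 6.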
MATH 211 – Vector Calculus and Several Variable Integration
Instructor: Professor Carswell
A study of integration of functions of several variables, including the use of polar, cylindrical, and spherical coordinate systems; and vector calculus, including vector fields, line and surface
integrals, and the theorems of Green and Stokes.
Prerequisite: MATH 152 with a grade of C or better.
Distribution Requirements: QR.
May not be taken for credit if a grade of C or better in MATH 210 has already been received.
MATH 270 – Optimization and Approximation
Instructor: Professor Ellers
A study of optimization of functions of one variable and of several variables, including the Extreme Value Theorem and Lagrange multipliers; sequences and series; and Taylor approximation of
functions.
Prerequisite: MATH 152 with a grade of C or better.
Distribution Requirements: QR.
May not be taken for credit if a grade of C or better in MATH 170 has already been received.
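As a small illustration of the Taylor approximation topic in the MATH 270 description (an editorial sketch in C, not course material; the helper name `taylor_exp` is ours), the degree-n Taylor polynomial of e^x at 0 can be evaluated by building each term from the previous one:

```c
#include <assert.h>
#include <math.h>

/* Approximate exp(x) by its degree-n Taylor polynomial at 0:
   1 + x + x^2/2! + ... + x^n/n!.  Each term x^k/k! is obtained from
   the previous term by multiplying by x/k.  */
static double
taylor_exp (double x, int n)
{
  double term = 1.0, sum = 1.0;
  int k;
  for (k = 1; k <= n; k++)
    {
      term *= x / k;
      sum += term;
    }
  return sum;
}
```

With n = 12 the approximation of e = taylor_exp(1.0, 12) already agrees with the true value to better than six decimal places, since the omitted tail is bounded by roughly 1/13!.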
MATH 280 – Ordinary Differential Equations
Instructor: Professor Carswell
An examination of methods of solving ordinary differential equations with emphasis on the existence and uniqueness of solutions of first order equations and second order linear equations. Topics may
include Laplace transforms, systems of linear differential equations, power series solutions, successive approximations, linear differential equations, and oscillation theory with applications to
chemistry and physics.
Prerequisite: MATH 152 or MATH 210 with a grade of C or better.
Distribution Requirements: SP.
MATH 325 – Algebraic Structures I
Instructor: Professor Werner
An introduction to the notion of an algebraic structure concentrating on the simplest such structure, that of a group. Rings and fields are also discussed.
Prerequisite: MATH 205 and MATH 320, each with a grade of C or better.
Distribution Requirements: SP.
MATH 340 – Introduction to Analysis
Instructor: Professor Weir
An examination of the theory of calculus of a single variable. Topics include properties of the real numbers, topology of the real line, and a rigorous treatment of sequences, functions, limits,
continuity, differentiation and integration.
Prerequisite: MATH 205 with a grade of C or better, and a grade of C or better in one of the following courses: MATH 210, MATH 211, MATH 270, MATH 280.
Distribution Requirements: SP.
MATH 345 – Probability and Statistical Inference I
Instructor: Professor Lo Bello
A study of mathematical models, sample space probabilities, random variables, expectation, empirical and theoretical frequency distributions, moment generating functions, sampling theory, correlation
and regression.
Prerequisite: MATH 152 or MATH 210 with a grade of C or better.
Distribution Requirements: SP.
This is one of the possible mathematics courses that may be substituted for one of the required 300-level CMPSC courses in the Computer Science major.
MATH 370 – Graph Theory and Combinatorics **New Course**
Instructor: Professor Dodge
A study of finite graphs and combinatorics, covering enumeration of combinatorial structures, directed and undirected graphs, and recursive algorithms. Topics include trees, planarity, graph
coloring, Eulerian and Hamiltonian graphs, shortest path algorithms, the pigeonhole principle, permutations and combinations of finite sets and multisets, binomial and multinomial coefficients, and
the inclusion-exclusion principle.
Prerequisites: MATH 205 with a grade of C or better
Distribution Requirements: SP
This is one of the mathematics courses that may be substituted for one of the required 300-level CMPSC courses in the Computer Science major.
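As an illustration of the enumeration topics in the MATH 370 description (an editorial sketch in C; the function name `binomial` is ours, not course material), binomial coefficients can be computed with Pascal's rule, C(n, k) = C(n-1, k-1) + C(n-1, k), rather than with factorials, which keeps intermediate values small:

```c
#include <assert.h>

/* Binomial coefficient C(n, k) via Pascal's rule, computed row by
   row in a single array.  Updating right-to-left lets row i be built
   in place from row i-1.  Assumes n < 64.  */
static unsigned long
binomial (unsigned int n, unsigned int k)
{
  unsigned long row[64] = { 1 };   /* row 0 of Pascal's triangle */
  unsigned int i, j;

  for (i = 1; i <= n; i++)
    for (j = i; j > 0; j--)
      row[j] += row[j - 1];

  return (k <= n) ? row[k] : 0;
}
```

For instance, binomial(5, 2) returns 10, matching the count of 2-element subsets of a 5-element set.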
llvm-gcc-4.0/gcc/tree-ssa-dom.c
/* SSA Dominator optimizations for trees
Copyright (C) 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
Contributed by Diego Novillo <dnovillo@redhat.com>
This file is part of GCC.
GCC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GCC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING. If not, write to
the Free Software Foundation, 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA. */
#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "tree.h"
#include "flags.h"
#include "rtl.h"
#include "tm_p.h"
#include "ggc.h"
#include "basic-block.h"
/* APPLE LOCAL 4538899 mainline */
#include "cfgloop.h"
#include "output.h"
#include "errors.h"
#include "expr.h"
#include "function.h"
#include "diagnostic.h"
#include "timevar.h"
/* APPLE LOCAL lno */
#include "cfgloop.h"
#include "tree-dump.h"
#include "tree-flow.h"
#include "domwalk.h"
#include "real.h"
#include "tree-pass.h"
#include "tree-ssa-propagate.h"
#include "langhooks.h"
/* This file implements optimizations on the dominator tree. */
/* Structure for recording edge equivalences as well as any pending
edge redirections during the dominator optimizer.
Computing and storing the edge equivalences instead of creating
them on-demand can save significant amounts of time, particularly
for pathological cases involving switch statements.
These structures live for a single iteration of the dominator
optimizer in the edge's AUX field. At the end of an iteration we
free each of these structures and update the AUX field to point
to any requested redirection target (the code for updating the
CFG and SSA graph for edge redirection expects redirection edge
targets to be in the AUX field for each edge. */
struct edge_info
{
  /* If this edge creates a simple equivalence, the LHS and RHS of
     the equivalence will be stored here.  */
  tree lhs;
  tree rhs;

  /* Traversing an edge may also indicate one or more particular conditions
     are true or false.  The number of recorded conditions can vary, but
     can be determined by the condition's code.  So we have an array
     and its maximum index rather than use a varray.  */
  tree *cond_equivalences;
  unsigned int max_cond_equivalences;

  /* If we can thread this edge this field records the new target.  */
  edge redirection_target;
};
/* Hash table with expressions made available during the renaming process.
When an assignment of the form X_i = EXPR is found, the statement is
stored in this table. If the same expression EXPR is later found on the
RHS of another statement, it is replaced with X_i (thus performing
global redundancy elimination). Similarly as we pass through conditionals
we record the conditional itself as having either a true or false value
in this table. */
static htab_t avail_exprs;
/* Stack of available expressions in AVAIL_EXPRs. Each block pushes any
expressions it enters into the hash table along with a marker entry
(null). When we finish processing the block, we pop off entries and
remove the expressions from the global hash table until we hit the
marker. */
static VEC(tree_on_heap) *avail_exprs_stack;
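The marker-based unwinding described in the comment above is a general pattern, not specific to this pass. The following standalone sketch (a hypothetical fixed-size stack of strings, not GCC's actual VEC/htab machinery) shows the push-with-marker and pop-to-marker steps in isolation:

```c
#include <assert.h>
#include <stddef.h>

/* Standalone sketch of the marker-based unwind pattern: on block
   entry a NULL marker is pushed, then each entry made available in
   the block; on block exit we pop and undo entries until the marker.
   Types here are illustrative stand-ins, not GCC's.  */
#define STACK_MAX 128

static const char *stack[STACK_MAX];
static size_t stack_top;

static void
push_entry (const char *expr)   /* NULL acts as the block marker */
{
  assert (stack_top < STACK_MAX);
  stack[stack_top++] = expr;
}

/* Pop entries until the marker (or an empty stack); return how many
   real entries were undone.  In the real pass, "undoing" means
   removing the expression from the global hash table.  */
static int
unwind_to_marker (void)
{
  int undone = 0;
  while (stack_top > 0 && stack[--stack_top] != NULL)
    undone++;
  return undone;
}
```

A block that pushed a marker and two expressions would see unwind_to_marker() report 2 and leave the stack exactly as it was before the block was entered.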
/* Stack of trees used to restore the global currdefs to its original
state after completing optimization of a block and its dominator children.
An SSA_NAME indicates that the current definition of the underlying
variable should be set to the given SSA_NAME.
A _DECL node indicates that the underlying variable has no current definition.
A NULL node is used to mark the last node associated with the
current block. */
static VEC(tree_on_heap) *block_defs_stack;
/* Stack of statements we need to rescan during finalization for newly
exposed variables.
Statement rescanning must occur after the current block's available
expressions are removed from AVAIL_EXPRS. Else we may change the
hash code for an expression and be unable to find/remove it from
AVAIL_EXPRS. */
static VEC(tree_on_heap) *stmts_to_rescan;
/* Structure for entries in the expression hash table.
This requires more memory for the hash table entries, but allows us
to avoid creating silly tree nodes and annotations for conditionals,
eliminates 2 global hash tables and two block local varrays.
It also allows us to reduce the number of hash table lookups we
have to perform in lookup_avail_expr and finally it allows us to
significantly reduce the number of calls into the hashing routine
itself. */
struct expr_hash_elt
{
  /* The value (lhs) of this expression.  */
  tree lhs;

  /* The expression (rhs) we want to record.  */
  tree rhs;

  /* The annotation if this element corresponds to a statement.  */
  stmt_ann_t ann;

  /* The hash value for RHS/ann.  */
  hashval_t hash;
};
/* Stack of dest,src pairs that need to be restored during finalization.
A NULL entry is used to mark the end of pairs which need to be
restored during finalization of this block. */
static VEC(tree_on_heap) *const_and_copies_stack;
/* Bitmap of SSA_NAMEs known to have a nonzero value, even if we do not
know their exact value. */
static bitmap nonzero_vars;
/* Stack of SSA_NAMEs which need their NONZERO_VARS property cleared
when the current block is finalized.
A NULL entry is used to mark the end of names needing their
entry in NONZERO_VARS cleared during finalization of this block. */
static VEC(tree_on_heap) *nonzero_vars_stack;
/* Track whether or not we have changed the control flow graph. */
static bool cfg_altered;
/* Bitmap of blocks that have had EH statements cleaned. We should
remove their dead edges eventually. */
static bitmap need_eh_cleanup;
/* Statistics for dominator optimizations. */
struct opt_stats_d
{
  long num_stmts;
  long num_exprs_considered;
  long num_re;
};

static struct opt_stats_d opt_stats;
/* Value range propagation record. Each time we encounter a conditional
of the form SSA_NAME COND CONST we create a new vrp_element to record
how the condition affects the possible values SSA_NAME may have.
Each record contains the condition tested (COND), and the range of
values the variable may legitimately have if COND is true. Note the
range of values may be a smaller range than COND specifies if we have
recorded other ranges for this variable. Each record also contains the
block in which the range was recorded for invalidation purposes.
Note that the current known range is computed lazily. This allows us
to avoid the overhead of computing ranges which are never queried.
When we encounter a conditional, we look for records which constrain
the SSA_NAME used in the condition. In some cases those records allow
us to determine the condition's result at compile time. In other cases
they may allow us to simplify the condition.
We also use value ranges to do things like transform signed div/mod
operations into unsigned div/mod or to simplify ABS_EXPRs.
Simple experiments have shown these optimizations to not be all that
useful on switch statements (much to my surprise). So switch statement
optimizations are not performed.
Note carefully we do not propagate information through each statement
in the block. i.e., if we know variable X has a value defined of
[0, 25] and we encounter Y = X + 1, we do not track a value range
for Y (which would be [1, 26] if we cared). Similarly we do not
constrain values as we encounter narrowing typecasts, etc. */
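Because the ranges recorded for a variable only ever narrow, combining two records amounts to intersecting closed intervals. A minimal standalone sketch (plain ints standing in for tree nodes; not the pass's actual code) makes this concrete:

```c
#include <assert.h>

/* Intersect two closed integer ranges [lo1, hi1] and [lo2, hi2].
   Returns 0 if the intersection is empty (the combination of
   conditions is unsatisfiable), 1 otherwise, with the narrowed
   range stored through *lo and *hi.  Plain ints stand in for the
   tree nodes the real pass manipulates.  */
static int
intersect_range (int lo1, int hi1, int lo2, int hi2, int *lo, int *hi)
{
  *lo = lo1 > lo2 ? lo1 : lo2;   /* larger lower bound wins */
  *hi = hi1 < hi2 ? hi1 : hi2;   /* smaller upper bound wins */
  return *lo <= *hi;
}
```

For example, intersecting [0, 25] with [10, 40] yields [10, 25], while intersecting [0, 5] with [10, 20] reports an empty (unsatisfiable) range.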
struct vrp_element
{
  /* The highest and lowest values the variable in COND may contain when
     COND is true.  Note this may not necessarily be the same values
     tested by COND if the same variable was used in earlier conditionals.

     Note this is computed lazily and thus can be NULL indicating that
     the values have not been computed yet.  */
  tree low;
  tree high;

  /* The actual conditional we recorded.  This is needed since we compute
     ranges lazily.  */
  tree cond;

  /* The basic block where this record was created.  We use this to determine
     when to remove records.  */
  basic_block bb;
};
/* A hash table holding value range records (VRP_ELEMENTs) for a given
SSA_NAME. We used to use a varray indexed by SSA_NAME_VERSION, but
that gets awful wasteful, particularly since the density objects
with useful information is very low. */
static htab_t vrp_data;
/* An entry in the VRP_DATA hash table. We record the variable and a
varray of VRP_ELEMENT records associated with that variable. */
struct vrp_hash_elt
{
  tree var;
  varray_type records;
};
/* Array of variables which have their values constrained by operations
in this basic block. We use this during finalization to know
which variables need their VRP data updated. */
/* Stack of SSA_NAMEs which had their values constrained by operations
in this basic block. During finalization of this block we use this
list to determine which variables need their VRP data updated.
A NULL entry marks the end of the SSA_NAMEs associated with this block. */
static VEC(tree_on_heap) *vrp_variables_stack;
struct eq_expr_value
{
  tree src;
  tree dst;
};
/* Local functions. */
static void optimize_stmt (struct dom_walk_data *,
                           basic_block bb,
                           block_stmt_iterator);
static tree lookup_avail_expr (tree, bool);
static hashval_t vrp_hash (const void *);
static int vrp_eq (const void *, const void *);
static hashval_t avail_expr_hash (const void *);
static hashval_t real_avail_expr_hash (const void *);
static int avail_expr_eq (const void *, const void *);
static void htab_statistics (FILE *, htab_t);
static void record_cond (tree, tree);
static void record_const_or_copy (tree, tree);
static void record_equality (tree, tree);
static tree update_rhs_and_lookup_avail_expr (tree, tree, bool);
static tree simplify_rhs_and_lookup_avail_expr (struct dom_walk_data *,
tree, int);
static tree simplify_cond_and_lookup_avail_expr (tree, stmt_ann_t, int);
static tree simplify_switch_and_lookup_avail_expr (tree, int);
static tree find_equivalent_equality_comparison (tree);
static void record_range (tree, basic_block);
static bool extract_range_from_cond (tree, tree *, tree *, int *);
static void record_equivalences_from_phis (basic_block);
static void record_equivalences_from_incoming_edge (basic_block);
static bool eliminate_redundant_computations (struct dom_walk_data *,
tree, stmt_ann_t);
static void record_equivalences_from_stmt (tree, int, stmt_ann_t);
static void thread_across_edge (struct dom_walk_data *, edge);
static void dom_opt_finalize_block (struct dom_walk_data *, basic_block);
static void dom_opt_initialize_block (struct dom_walk_data *, basic_block);
static void propagate_to_outgoing_edges (struct dom_walk_data *, basic_block);
static void remove_local_expressions_from_table (void);
static void restore_vars_to_original_value (void);
static void restore_currdefs_to_original_value (void);
static void register_definitions_for_stmt (tree);
static edge single_incoming_edge_ignoring_loop_edges (basic_block);
static void restore_nonzero_vars_to_original_value (void);
static inline bool unsafe_associative_fp_binop (tree);
/* Local version of fold that doesn't introduce cruft. */
static tree
local_fold (tree t)
{
  t = fold (t);

  /* Strip away useless type conversions.  Both the NON_LVALUE_EXPR that
     may have been added by fold, and "useless" type conversions that might
     now be apparent due to propagation.  */
  STRIP_USELESS_TYPE_CONVERSION (t);

  return t;
}
/* Allocate an EDGE_INFO for edge E and attach it to E.
Return the new EDGE_INFO structure. */
static struct edge_info *
allocate_edge_info (edge e)
{
  struct edge_info *edge_info;

  edge_info = xcalloc (1, sizeof (struct edge_info));

  e->aux = edge_info;
  return edge_info;
}
/* Free all EDGE_INFO structures associated with edges in the CFG.
If a particular edge can be threaded, copy the redirection
target from the EDGE_INFO structure into the edge's AUX field
as required by code to update the CFG and SSA graph for
jump threading. */
static void
free_all_edge_infos (void)
{
  basic_block bb;
  edge_iterator ei;
  edge e;

  FOR_EACH_BB (bb)
    {
      FOR_EACH_EDGE (e, ei, bb->preds)
        {
          struct edge_info *edge_info = e->aux;

          if (edge_info)
            {
              e->aux = edge_info->redirection_target;
              if (edge_info->cond_equivalences)
                free (edge_info->cond_equivalences);
              free (edge_info);
            }
        }
    }
}
/* Jump threading, redundancy elimination and const/copy propagation.
This pass may expose new symbols that need to be renamed into SSA. For
every new symbol exposed, its corresponding bit will be set in
VARS_TO_RENAME. */
static void
tree_ssa_dominator_optimize (void)
{
  struct dom_walk_data walk_data;
  /* APPLE LOCAL lno */
  struct loops *loops;
  unsigned int i;

  /* APPLE LOCAL begin lno */
  /* Compute the natural loops.  */
  loops = loop_optimizer_init (NULL);
  /* APPLE LOCAL end lno */

  memset (&opt_stats, 0, sizeof (opt_stats));

  for (i = 0; i < num_referenced_vars; i++)
    var_ann (referenced_var (i))->current_def = NULL;

  /* Create our hash tables.  */
  avail_exprs = htab_create (1024, real_avail_expr_hash, avail_expr_eq, free);
  vrp_data = htab_create (ceil_log2 (num_ssa_names), vrp_hash, vrp_eq, free);
  avail_exprs_stack = VEC_alloc (tree_on_heap, 20);
  block_defs_stack = VEC_alloc (tree_on_heap, 20);
  const_and_copies_stack = VEC_alloc (tree_on_heap, 20);
  nonzero_vars_stack = VEC_alloc (tree_on_heap, 20);
  vrp_variables_stack = VEC_alloc (tree_on_heap, 20);
  stmts_to_rescan = VEC_alloc (tree_on_heap, 20);
  nonzero_vars = BITMAP_ALLOC (NULL);
  need_eh_cleanup = BITMAP_ALLOC (NULL);

  /* Setup callbacks for the generic dominator tree walker.  */
  walk_data.walk_stmts_backward = false;
  walk_data.dom_direction = CDI_DOMINATORS;
  walk_data.initialize_block_local_data = NULL;
  walk_data.before_dom_children_before_stmts = dom_opt_initialize_block;
  walk_data.before_dom_children_walk_stmts = optimize_stmt;
  walk_data.before_dom_children_after_stmts = propagate_to_outgoing_edges;
  walk_data.after_dom_children_before_stmts = NULL;
  walk_data.after_dom_children_walk_stmts = NULL;
  walk_data.after_dom_children_after_stmts = dom_opt_finalize_block;
  /* Right now we only attach a dummy COND_EXPR to the global data pointer.
     When we attach more stuff we'll need to fill this out with a real
     structure.  */
  walk_data.global_data = NULL;
  walk_data.block_local_data_size = 0;

  /* Now initialize the dominator walker.  */
  init_walk_dominator_tree (&walk_data);

  calculate_dominance_info (CDI_DOMINATORS);

  /* APPLE LOCAL begin 4538899 mainline */
  /* We need to know which edges exit loops so that we can
     aggressively thread through loop headers to an exit
     edge.  */
  mark_loop_exit_edges ();

  /* Clean up the CFG so that any forwarder blocks created by loop
     canonicalization are removed.  */
  cleanup_tree_cfg ();
  /* APPLE LOCAL end 4538899 mainline */

  /* If we prove certain blocks are unreachable, then we want to
     repeat the dominator optimization process as PHI nodes may
     have turned into copies which allows better propagation of
     values.  So we repeat until we do not identify any new unreachable
     blocks.  */
  do
    {
      /* Optimize the dominator tree.  */
      cfg_altered = false;

      /* APPLE LOCAL begin 4538899 mainline */
      calculate_dominance_info (CDI_DOMINATORS);

      /* We need accurate information regarding back edges in the CFG
         for jump threading.  */
      mark_dfs_back_edges ();
      /* APPLE LOCAL end 4538899 mainline */

      /* Recursively walk the dominator tree optimizing statements.  */
      walk_dominator_tree (&walk_data, ENTRY_BLOCK_PTR);

      /* If we exposed any new variables, go ahead and put them into
         SSA form now, before we handle jump threading.  This simplifies
         interactions between rewriting of _DECL nodes into SSA form
         and rewriting SSA_NAME nodes into SSA form after block
         duplication and CFG manipulation.  */
      if (!bitmap_empty_p (vars_to_rename))
        {
          rewrite_into_ssa (false);
          bitmap_clear (vars_to_rename);
        }

      free_all_edge_infos ();

      /* Thread jumps, creating duplicate blocks as needed.  */
      cfg_altered = thread_through_all_blocks ();

      /* Removal of statements may make some EH edges dead.  Purge
         such edges from the CFG as needed.  */
      if (!bitmap_empty_p (need_eh_cleanup))
        {
          cfg_altered |= tree_purge_all_dead_eh_edges (need_eh_cleanup);
          bitmap_zero (need_eh_cleanup);
        }

      /* APPLE LOCAL begin mainline 4538899 */
      if (cfg_altered)
        free_dominance_info (CDI_DOMINATORS);
      /* APPLE LOCAL end mainline 4538899 */

      cfg_altered = cleanup_tree_cfg ();

      /* APPLE LOCAL begin mainline 4538899 */
      if (rediscover_loops_after_threading)
        {
          /* Rerun basic loop analysis to discover any newly
             created loops and update the set of exit edges.  */
          rediscover_loops_after_threading = false;
          mark_loop_exit_edges ();

          /* Remove any forwarder blocks inserted by loop
             header canonicalization.  */
          cleanup_tree_cfg ();
        }
      /* APPLE LOCAL end mainline 4538899 */

      calculate_dominance_info (CDI_DOMINATORS);

      rewrite_ssa_into_ssa ();

      /* Reinitialize the various tables.  */
      bitmap_clear (nonzero_vars);
      htab_empty (avail_exprs);
      htab_empty (vrp_data);

      for (i = 0; i < num_referenced_vars; i++)
        var_ann (referenced_var (i))->current_def = NULL;

      /* Finally, remove everything except invariants in SSA_NAME_VALUE.

         This must be done before we iterate as we might have a
         reference to an SSA_NAME which was removed by the call to
         rewrite_ssa_into_ssa.

         Long term we will be able to let everything in SSA_NAME_VALUE
         persist.  However, for now, we know this is the safe thing to do.  */
      for (i = 0; i < num_ssa_names; i++)
        {
          tree name = ssa_name (i);
          tree value;

          if (!name)
            continue;

          value = SSA_NAME_VALUE (name);
          if (value && !is_gimple_min_invariant (value))
            SSA_NAME_VALUE (name) = NULL;
        }
    }
  while (optimize > 1 && cfg_altered);

  /* APPLE LOCAL begin lno */
  loop_optimizer_finalize (loops, NULL);
  /* APPLE LOCAL end lno */

  /* Debugging dumps.  */
  if (dump_file && (dump_flags & TDF_STATS))
    dump_dominator_optimization_stats (dump_file);

  /* We emptied the hash table earlier, now delete it completely.  */
  htab_delete (avail_exprs);
  htab_delete (vrp_data);

  /* It is not necessary to clear CURRDEFS, REDIRECTION_EDGES, VRP_DATA,
     CONST_AND_COPIES, and NONZERO_VARS as they all get cleared at the bottom
     of the do-while loop above.  */

  /* And finalize the dominator walker.  */
  fini_walk_dominator_tree (&walk_data);

  /* Free nonzero_vars.  */
  BITMAP_FREE (nonzero_vars);
  BITMAP_FREE (need_eh_cleanup);

  VEC_free (tree_on_heap, block_defs_stack);
  VEC_free (tree_on_heap, avail_exprs_stack);
  VEC_free (tree_on_heap, const_and_copies_stack);
  VEC_free (tree_on_heap, nonzero_vars_stack);
  VEC_free (tree_on_heap, vrp_variables_stack);
  VEC_free (tree_on_heap, stmts_to_rescan);
}
static bool
gate_dominator (void)
{
  return flag_tree_dom != 0;
}
struct tree_opt_pass pass_dominator =
{
  "dom",				/* name */
  gate_dominator,			/* gate */
  tree_ssa_dominator_optimize,		/* execute */
  NULL,					/* sub */
  NULL,					/* next */
  0,					/* static_pass_number */
  TV_TREE_SSA_DOMINATOR_OPTS,		/* tv_id */
  PROP_cfg | PROP_ssa | PROP_alias,	/* properties_required */
  0,					/* properties_provided */
  0,					/* properties_destroyed */
  0,					/* todo_flags_start */
  TODO_dump_func | TODO_rename_vars
    | TODO_verify_ssa,			/* todo_flags_finish */
  0					/* letter */
};
/* We are exiting BB, see if the target block begins with a conditional
   jump which has a known value when reached via BB.  */

static void
thread_across_edge (struct dom_walk_data *walk_data, edge e)
{
  block_stmt_iterator bsi;
  tree stmt = NULL;
  tree phi;

  /* Each PHI creates a temporary equivalence, record them.  */
  for (phi = phi_nodes (e->dest); phi; phi = PHI_CHAIN (phi))
    {
      tree src = PHI_ARG_DEF_FROM_EDGE (phi, e);
      tree dst = PHI_RESULT (phi);

      /* If the desired argument is not the same as this PHI's result
         and it is set by a PHI in this block, then we can not thread
         through this block.  */
      if (src != dst
          && TREE_CODE (src) == SSA_NAME
          && TREE_CODE (SSA_NAME_DEF_STMT (src)) == PHI_NODE
          && bb_for_stmt (SSA_NAME_DEF_STMT (src)) == e->dest)
        return;

      record_const_or_copy (dst, src);
      register_new_def (dst, &block_defs_stack);
    }

  for (bsi = bsi_start (e->dest); ! bsi_end_p (bsi); bsi_next (&bsi))
    {
      tree lhs, cached_lhs;

      stmt = bsi_stmt (bsi);

      /* Ignore empty statements and labels.  */
      if (IS_EMPTY_STMT (stmt) || TREE_CODE (stmt) == LABEL_EXPR)
        continue;

      /* If this is not a MODIFY_EXPR which sets an SSA_NAME to a new
         value, then stop our search here.  Ideally when we stop a
         search we stop on a COND_EXPR or SWITCH_EXPR.  */
      if (TREE_CODE (stmt) != MODIFY_EXPR
          || TREE_CODE (TREE_OPERAND (stmt, 0)) != SSA_NAME)
        break;

      /* At this point we have a statement which assigns an RHS to an
         SSA_VAR on the LHS.  We want to prove that the RHS is already
         available and that its value is held in the current definition
         of the LHS -- meaning that this assignment is a NOP when
         reached via edge E.  */
      if (TREE_CODE (TREE_OPERAND (stmt, 1)) == SSA_NAME)
        cached_lhs = TREE_OPERAND (stmt, 1);
      else
        cached_lhs = lookup_avail_expr (stmt, false);

      lhs = TREE_OPERAND (stmt, 0);

      /* This can happen if we thread around to the start of a loop.  */
      if (lhs == cached_lhs)
        break;

      /* If we did not find RHS in the hash table, then try again after
         temporarily const/copy propagating the operands.  */
      if (!cached_lhs)
        {
          /* Copy the operands.  */
          stmt_ann_t ann = stmt_ann (stmt);
          use_optype uses = USE_OPS (ann);
          vuse_optype vuses = VUSE_OPS (ann);
          tree *uses_copy = xmalloc (NUM_USES (uses) * sizeof (tree));
          tree *vuses_copy = xmalloc (NUM_VUSES (vuses) * sizeof (tree));
          unsigned int i;

          /* Make a copy of the uses into USES_COPY, then cprop into
             the use operands.  */
          for (i = 0; i < NUM_USES (uses); i++)
            {
              tree tmp = NULL;

              uses_copy[i] = USE_OP (uses, i);
              if (TREE_CODE (USE_OP (uses, i)) == SSA_NAME)
                tmp = SSA_NAME_VALUE (USE_OP (uses, i));
              if (tmp && TREE_CODE (tmp) != VALUE_HANDLE)
                SET_USE_OP (uses, i, tmp);
            }

          /* Similarly for virtual uses.  */
          for (i = 0; i < NUM_VUSES (vuses); i++)
            {
              tree tmp = NULL;

              vuses_copy[i] = VUSE_OP (vuses, i);
              if (TREE_CODE (VUSE_OP (vuses, i)) == SSA_NAME)
                tmp = SSA_NAME_VALUE (VUSE_OP (vuses, i));
              if (tmp && TREE_CODE (tmp) != VALUE_HANDLE)
                SET_VUSE_OP (vuses, i, tmp);
            }

          /* Try to lookup the new expression.  */
          cached_lhs = lookup_avail_expr (stmt, false);

          /* Restore the statement's original uses/defs.  */
          for (i = 0; i < NUM_USES (uses); i++)
            SET_USE_OP (uses, i, uses_copy[i]);

          for (i = 0; i < NUM_VUSES (vuses); i++)
            SET_VUSE_OP (vuses, i, vuses_copy[i]);

          free (uses_copy);
          free (vuses_copy);
        }

      /* If we still did not find the expression in the hash table,
         then we can not ignore this statement.  */
      if (! cached_lhs)
        break;

      /* If the expression in the hash table was not assigned to an
         SSA_NAME, then we can not ignore this statement.  */
      if (TREE_CODE (cached_lhs) != SSA_NAME)
        break;

      /* If we have different underlying variables, then we can not
         ignore this statement.  */
      if (SSA_NAME_VAR (cached_lhs) != SSA_NAME_VAR (lhs))
        break;

      /* If CACHED_LHS does not represent the current value of the underlying
         variable in CACHED_LHS/LHS, then we can not ignore this statement.  */
      if (var_ann (SSA_NAME_VAR (lhs))->current_def != cached_lhs)
        break;

      /* If we got here, then we can ignore this statement and continue
         walking through the statements in the block looking for a threadable
         COND_EXPR.

         We want to record an equivalence lhs = cache_lhs so that if
         the result of this statement is used later we can copy propagate
         suitably.  */
      record_const_or_copy (lhs, cached_lhs);
      register_new_def (lhs, &block_defs_stack);
    }

  /* If we stopped at a COND_EXPR or SWITCH_EXPR, then see if we know which
     arm will be taken.  */
  if (stmt
      && (TREE_CODE (stmt) == COND_EXPR
          || TREE_CODE (stmt) == SWITCH_EXPR))
    {
      tree cond, cached_lhs;

      /* Now temporarily cprop the operands and try to find the resulting
         expression in the hash tables.  */
      if (TREE_CODE (stmt) == COND_EXPR)
        cond = COND_EXPR_COND (stmt);
      else
        cond = SWITCH_COND (stmt);

      if (COMPARISON_CLASS_P (cond))
        {
          tree dummy_cond, op0, op1;
          enum tree_code cond_code;

          op0 = TREE_OPERAND (cond, 0);
          op1 = TREE_OPERAND (cond, 1);
          cond_code = TREE_CODE (cond);

          /* Get the current value of both operands.  */
          if (TREE_CODE (op0) == SSA_NAME)
            {
              tree tmp = SSA_NAME_VALUE (op0);
              if (tmp && TREE_CODE (tmp) != VALUE_HANDLE)
                op0 = tmp;
            }

          if (TREE_CODE (op1) == SSA_NAME)
            {
              tree tmp = SSA_NAME_VALUE (op1);
              if (tmp && TREE_CODE (tmp) != VALUE_HANDLE)
                op1 = tmp;
            }

          /* Stuff the operator and operands into our dummy conditional
             expression, creating the dummy conditional if necessary.  */
          dummy_cond = walk_data->global_data;
          if (! dummy_cond)
            {
              dummy_cond = build (cond_code, boolean_type_node, op0, op1);
              dummy_cond = build (COND_EXPR, void_type_node,
                                  dummy_cond, NULL, NULL);
              walk_data->global_data = dummy_cond;
            }
          else
            {
              TREE_SET_CODE (COND_EXPR_COND (dummy_cond), cond_code);
              TREE_OPERAND (COND_EXPR_COND (dummy_cond), 0) = op0;
              TREE_OPERAND (COND_EXPR_COND (dummy_cond), 1) = op1;
            }

          /* If the conditional folds to an invariant, then we are done,
             otherwise look it up in the hash tables.  */
          cached_lhs = local_fold (COND_EXPR_COND (dummy_cond));
          if (! is_gimple_min_invariant (cached_lhs))
            {
              cached_lhs = lookup_avail_expr (dummy_cond, false);
              if (!cached_lhs || ! is_gimple_min_invariant (cached_lhs))
                cached_lhs = simplify_cond_and_lookup_avail_expr (dummy_cond,
                                                                  NULL,
                                                                  false);
            }
        }
      /* We can have conditionals which just test the state of a
         variable rather than use a relational operator.  These are
         simpler to handle.  */
      else if (TREE_CODE (cond) == SSA_NAME)
        {
          cached_lhs = cond;
          cached_lhs = SSA_NAME_VALUE (cached_lhs);
          if (cached_lhs && ! is_gimple_min_invariant (cached_lhs))
            cached_lhs = 0;
        }
      else
        cached_lhs = lookup_avail_expr (stmt, false);

      if (cached_lhs)
        {
          edge taken_edge = find_taken_edge (e->dest, cached_lhs);
          basic_block dest = (taken_edge ? taken_edge->dest : NULL);

          if (dest == e->dest)
            return;

          /* If we have a known destination for the conditional, then
             we can perform this optimization, which saves at least one
             conditional jump each time it applies since we get to
             bypass the conditional at our original destination.  */
          if (dest)
            {
              struct edge_info *edge_info;

              update_bb_profile_for_threading (e->dest, EDGE_FREQUENCY (e),
                                               e->count, taken_edge);
              if (e->aux)
                edge_info = e->aux;
              else
                edge_info = allocate_edge_info (e);
              edge_info->redirection_target = taken_edge;
              bb_ann (e->dest)->incoming_edge_threaded = true;
            }
        }
    }
}
/* Initialize local stacks for this optimizer and record equivalences
   upon entry to BB.  Equivalences can come from the edge traversed to
   reach BB or they may come from PHI nodes at the start of BB.  */

static void
dom_opt_initialize_block (struct dom_walk_data *walk_data ATTRIBUTE_UNUSED,
                          basic_block bb)
{
  if (dump_file && (dump_flags & TDF_DETAILS))
    fprintf (dump_file, "\n\nOptimizing block #%d\n\n", bb->index);

  /* Push a marker on the stacks of local information so that we know how
     far to unwind when we finalize this block.  */
  VEC_safe_push (tree_on_heap, avail_exprs_stack, NULL_TREE);
  VEC_safe_push (tree_on_heap, block_defs_stack, NULL_TREE);
  VEC_safe_push (tree_on_heap, const_and_copies_stack, NULL_TREE);
  VEC_safe_push (tree_on_heap, nonzero_vars_stack, NULL_TREE);
  VEC_safe_push (tree_on_heap, vrp_variables_stack, NULL_TREE);

  record_equivalences_from_incoming_edge (bb);

  /* PHI nodes can create equivalences too.  */
  record_equivalences_from_phis (bb);
}
/* Given an expression EXPR (a relational expression or a statement),
   initialize the hash table element pointed to by ELEMENT.  */

static void
initialize_hash_element (tree expr, tree lhs, struct expr_hash_elt *element)
{
  /* Hash table elements may be based on conditional expressions or statements.

     For the former case, we have no annotation and we want to hash the
     conditional expression.  In the latter case we have an annotation and
     we want to record the expression the statement evaluates.  */
  if (COMPARISON_CLASS_P (expr) || TREE_CODE (expr) == TRUTH_NOT_EXPR)
    {
      element->ann = NULL;
      element->rhs = expr;
    }
  else if (TREE_CODE (expr) == COND_EXPR)
    {
      element->ann = stmt_ann (expr);
      element->rhs = COND_EXPR_COND (expr);
    }
  else if (TREE_CODE (expr) == SWITCH_EXPR)
    {
      element->ann = stmt_ann (expr);
      element->rhs = SWITCH_COND (expr);
    }
  else if (TREE_CODE (expr) == RETURN_EXPR && TREE_OPERAND (expr, 0))
    {
      element->ann = stmt_ann (expr);
      element->rhs = TREE_OPERAND (TREE_OPERAND (expr, 0), 1);
    }
  else
    {
      element->ann = stmt_ann (expr);
      element->rhs = TREE_OPERAND (expr, 1);
    }

  element->lhs = lhs;
  element->hash = avail_expr_hash (element);
}
/* Remove all the expressions in LOCALS from TABLE, stopping when there are
   LIMIT entries left in LOCALs.  */

static void
remove_local_expressions_from_table (void)
{
  /* Remove all the expressions made available in this block.  */
  while (VEC_length (tree_on_heap, avail_exprs_stack) > 0)
    {
      struct expr_hash_elt element;
      tree expr = VEC_pop (tree_on_heap, avail_exprs_stack);

      if (expr == NULL_TREE)
        break;

      initialize_hash_element (expr, NULL, &element);
      htab_remove_elt_with_hash (avail_exprs, &element, element.hash);
    }
}
/* Use the SSA_NAMES in LOCALS to restore TABLE to its original
   state, stopping when there are LIMIT entries left in LOCALs.  */

static void
restore_nonzero_vars_to_original_value (void)
{
  while (VEC_length (tree_on_heap, nonzero_vars_stack) > 0)
    {
      tree name = VEC_pop (tree_on_heap, nonzero_vars_stack);

      if (name == NULL)
        break;

      bitmap_clear_bit (nonzero_vars, SSA_NAME_VERSION (name));
    }
}
/* Use the source/dest pairs in CONST_AND_COPIES_STACK to restore
   CONST_AND_COPIES to its original state, stopping when we hit a
   NULL marker.  */

static void
restore_vars_to_original_value (void)
{
  while (VEC_length (tree_on_heap, const_and_copies_stack) > 0)
    {
      tree prev_value, dest;

      dest = VEC_pop (tree_on_heap, const_and_copies_stack);

      if (dest == NULL)
        break;

      prev_value = VEC_pop (tree_on_heap, const_and_copies_stack);
      SSA_NAME_VALUE (dest) = prev_value;
    }
}
/* Similar to restore_vars_to_original_value, except that it restores
   CURRDEFS to its original value.  */

static void
restore_currdefs_to_original_value (void)
{
  /* Restore CURRDEFS to its original state.  */
  while (VEC_length (tree_on_heap, block_defs_stack) > 0)
    {
      tree tmp = VEC_pop (tree_on_heap, block_defs_stack);
      tree saved_def, var;

      if (tmp == NULL_TREE)
        break;

      /* If we recorded an SSA_NAME, then make the SSA_NAME the current
         definition of its underlying variable.  If we recorded anything
         else, it must have been an _DECL node and its current reaching
         definition must have been NULL.  */
      if (TREE_CODE (tmp) == SSA_NAME)
        {
          saved_def = tmp;
          var = SSA_NAME_VAR (saved_def);
        }
      else
        {
          saved_def = NULL;
          var = tmp;
        }

      var_ann (var)->current_def = saved_def;
    }
}
/* We have finished processing the dominator children of BB, perform
   any finalization actions in preparation for leaving this node in
   the dominator tree.  */

static void
dom_opt_finalize_block (struct dom_walk_data *walk_data, basic_block bb)
{
  tree last;

  /* If we are at a leaf node in the dominator tree, see if we can thread
     the edge from BB through its successor.

     Do this before we remove entries from our equivalence tables.  */
  if (EDGE_COUNT (bb->succs) == 1
      && (EDGE_SUCC (bb, 0)->flags & EDGE_ABNORMAL) == 0
      && (get_immediate_dominator (CDI_DOMINATORS, EDGE_SUCC (bb, 0)->dest) != bb
          || phi_nodes (EDGE_SUCC (bb, 0)->dest)))
    {
      thread_across_edge (walk_data, EDGE_SUCC (bb, 0));
    }
  else if ((last = last_stmt (bb))
           && TREE_CODE (last) == COND_EXPR
           && (COMPARISON_CLASS_P (COND_EXPR_COND (last))
               || TREE_CODE (COND_EXPR_COND (last)) == SSA_NAME)
           && EDGE_COUNT (bb->succs) == 2
           && (EDGE_SUCC (bb, 0)->flags & EDGE_ABNORMAL) == 0
           && (EDGE_SUCC (bb, 1)->flags & EDGE_ABNORMAL) == 0)
    {
      edge true_edge, false_edge;

      extract_true_false_edges_from_block (bb, &true_edge, &false_edge);

      /* If the THEN arm is the end of a dominator tree or has PHI nodes,
         then try to thread through its edge.  */
      if (get_immediate_dominator (CDI_DOMINATORS, true_edge->dest) != bb
          || phi_nodes (true_edge->dest))
        {
          struct edge_info *edge_info;
          unsigned int i;

          /* Push a marker onto the available expression stack so that we
             unwind any expressions related to the TRUE arm before processing
             the false arm below.  */
          VEC_safe_push (tree_on_heap, avail_exprs_stack, NULL_TREE);
          VEC_safe_push (tree_on_heap, block_defs_stack, NULL_TREE);
          VEC_safe_push (tree_on_heap, const_and_copies_stack, NULL_TREE);

          edge_info = true_edge->aux;

          /* If we have info associated with this edge, record it into
             our equivalency tables.  */
          if (edge_info)
            {
              tree *cond_equivalences = edge_info->cond_equivalences;
              tree lhs = edge_info->lhs;
              tree rhs = edge_info->rhs;

              /* If we have a simple NAME = VALUE equivalency record it.
                 Until the jump threading selection code improves, only
                 do this if both the name and value are SSA_NAMEs with
                 the same underlying variable to avoid missing threading
                 opportunities.  */
              if (lhs
                  && TREE_CODE (COND_EXPR_COND (last)) == SSA_NAME
                  && TREE_CODE (edge_info->rhs) == SSA_NAME
                  && SSA_NAME_VAR (lhs) == SSA_NAME_VAR (rhs))
                record_const_or_copy (lhs, rhs);

              /* If we have 0 = COND or 1 = COND equivalences, record them
                 into our expression hash tables.  */
              if (cond_equivalences)
                for (i = 0; i < edge_info->max_cond_equivalences; i += 2)
                  {
                    tree expr = cond_equivalences[i];
                    tree value = cond_equivalences[i + 1];

                    record_cond (expr, value);
                  }
            }

          /* Now thread the edge.  */
          thread_across_edge (walk_data, true_edge);

          /* And restore the various tables to their state before
             we threaded this edge.  */
          remove_local_expressions_from_table ();
          restore_vars_to_original_value ();
          restore_currdefs_to_original_value ();
        }

      /* Similarly for the ELSE arm.  */
      if (get_immediate_dominator (CDI_DOMINATORS, false_edge->dest) != bb
          || phi_nodes (false_edge->dest))
        {
          struct edge_info *edge_info;
          unsigned int i;

          edge_info = false_edge->aux;

          /* If we have info associated with this edge, record it into
             our equivalency tables.  */
          if (edge_info)
            {
              tree *cond_equivalences = edge_info->cond_equivalences;
              tree lhs = edge_info->lhs;
              tree rhs = edge_info->rhs;

              /* If we have a simple NAME = VALUE equivalency record it.
                 Until the jump threading selection code improves, only
                 do this if both the name and value are SSA_NAMEs with
                 the same underlying variable to avoid missing threading
                 opportunities.  */
              if (lhs
                  && TREE_CODE (COND_EXPR_COND (last)) == SSA_NAME)
                record_const_or_copy (lhs, rhs);

              /* If we have 0 = COND or 1 = COND equivalences, record them
                 into our expression hash tables.  */
              if (cond_equivalences)
                for (i = 0; i < edge_info->max_cond_equivalences; i += 2)
                  {
                    tree expr = cond_equivalences[i];
                    tree value = cond_equivalences[i + 1];

                    record_cond (expr, value);
                  }
            }

          thread_across_edge (walk_data, false_edge);

          /* No need to remove local expressions from our tables
             or restore vars to their original value as that will
             be done immediately below.  */
        }
    }

  remove_local_expressions_from_table ();
  restore_nonzero_vars_to_original_value ();
  restore_vars_to_original_value ();
  restore_currdefs_to_original_value ();

  /* Remove VRP records associated with this basic block.  They are no
     longer valid.

     To be efficient, we note which variables have had their values
     constrained in this block.  So walk over each variable in the
     VRP_VARIABLEs array.  */
  while (VEC_length (tree_on_heap, vrp_variables_stack) > 0)
    {
      tree var = VEC_pop (tree_on_heap, vrp_variables_stack);
      struct vrp_hash_elt vrp_hash_elt, *vrp_hash_elt_p;
      void **slot;

      /* Each variable has a stack of value range records.  We want to
         invalidate those associated with our basic block.  So we walk
         the array backwards popping off records associated with our
         block.  Once we hit a record not associated with our block
         we are done.  */
      varray_type var_vrp_records;

      if (var == NULL)
        break;

      vrp_hash_elt.var = var;
      vrp_hash_elt.records = NULL;

      slot = htab_find_slot (vrp_data, &vrp_hash_elt, NO_INSERT);

      vrp_hash_elt_p = (struct vrp_hash_elt *) *slot;
      var_vrp_records = vrp_hash_elt_p->records;

      while (VARRAY_ACTIVE_SIZE (var_vrp_records) > 0)
        {
          struct vrp_element *element
            = (struct vrp_element *)VARRAY_TOP_GENERIC_PTR (var_vrp_records);

          if (element->bb != bb)
            break;

          VARRAY_POP (var_vrp_records);
        }
    }

  /* If we queued any statements to rescan in this block, then
     go ahead and rescan them now.  */
  while (VEC_length (tree_on_heap, stmts_to_rescan) > 0)
    {
      tree stmt = VEC_last (tree_on_heap, stmts_to_rescan);
      basic_block stmt_bb = bb_for_stmt (stmt);

      if (stmt_bb != bb)
        break;

      VEC_pop (tree_on_heap, stmts_to_rescan);
      mark_new_vars_to_rename (stmt, vars_to_rename);
    }
}
/* PHI nodes can create equivalences too.

   Ignoring any alternatives which are the same as the result, if
   all the alternatives are equal, then the PHI node creates an
   equivalence.

   Additionally, if all the PHI alternatives are known to have a nonzero
   value, then the result of this PHI is known to have a nonzero value,
   even if we do not know its exact value.  */

static void
record_equivalences_from_phis (basic_block bb)
{
  tree phi;

  for (phi = phi_nodes (bb); phi; phi = PHI_CHAIN (phi))
    {
      tree lhs = PHI_RESULT (phi);
      tree rhs = NULL;
      int i;

      for (i = 0; i < PHI_NUM_ARGS (phi); i++)
        {
          tree t = PHI_ARG_DEF (phi, i);

          /* Ignore alternatives which are the same as our LHS.  Since
             LHS is a PHI_RESULT, it is known to be a SSA_NAME, so we
             can simply compare pointers.  */
          if (lhs == t)
            continue;

          /* If we have not processed an alternative yet, then set
             RHS to this alternative.  */
          if (rhs == NULL)
            rhs = t;
          /* If we have processed an alternative (stored in RHS), then
             see if it is equal to this one.  If it isn't, then stop
             the search.  */
          else if (! operand_equal_for_phi_arg_p (rhs, t))
            break;
        }

      /* If we had no interesting alternatives, then all the RHS alternatives
         must have been the same as LHS.  */
      if (!rhs)
        rhs = lhs;

      /* If we managed to iterate through each PHI alternative without
         breaking out of the loop, then we have a PHI which may create
         a useful equivalence.  We do not need to record unwind data for
         this, since this is a true assignment and not an equivalence
         inferred from a comparison.  All uses of this ssa name are dominated
         by this assignment, so unwinding just costs time and space.  */
      if (i == PHI_NUM_ARGS (phi)
          && may_propagate_copy (lhs, rhs))
        SSA_NAME_VALUE (lhs) = rhs;

      /* Now see if we know anything about the nonzero property for the
         result of this PHI.  */
      for (i = 0; i < PHI_NUM_ARGS (phi); i++)
        {
          if (!PHI_ARG_NONZERO (phi, i))
            break;
        }

      if (i == PHI_NUM_ARGS (phi))
        bitmap_set_bit (nonzero_vars, SSA_NAME_VERSION (PHI_RESULT (phi)));

      register_new_def (lhs, &block_defs_stack);
    }
}
/* Ignoring loop backedges, if BB has precisely one incoming edge then
   return that edge.  Otherwise return NULL.  */

static edge
single_incoming_edge_ignoring_loop_edges (basic_block bb)
{
  edge retval = NULL;
  edge e;
  edge_iterator ei;

  FOR_EACH_EDGE (e, ei, bb->preds)
    {
      /* A loop back edge can be identified by the destination of
         the edge dominating the source of the edge.  */
      if (dominated_by_p (CDI_DOMINATORS, e->src, e->dest))
        continue;

      /* If we have already seen a non-loop edge, then we must have
         multiple incoming non-loop edges and thus we return NULL.  */
      if (retval)
        return NULL;

      /* This is the first non-loop incoming edge we have found.  Record
         it.  */
      retval = e;
    }

  return retval;
}
/* Record any equivalences created by the incoming edge to BB.  If BB
   has more than one incoming edge, then no equivalence is created.  */

static void
record_equivalences_from_incoming_edge (basic_block bb)
{
  edge e;
  basic_block parent;
  struct edge_info *edge_info;

  /* If our parent block ended with a control statement, then we may be
     able to record some equivalences based on which outgoing edge from
     the parent was followed.  */
  parent = get_immediate_dominator (CDI_DOMINATORS, bb);

  e = single_incoming_edge_ignoring_loop_edges (bb);

  /* If we had a single incoming edge from our parent block, then enter
     any data associated with the edge into our tables.  */
  if (e && e->src == parent)
    {
      unsigned int i;

      edge_info = e->aux;

      if (edge_info)
        {
          tree lhs = edge_info->lhs;
          tree rhs = edge_info->rhs;
          tree *cond_equivalences = edge_info->cond_equivalences;

          if (lhs)
            record_equality (lhs, rhs);

          if (cond_equivalences)
            {
              bool recorded_range = false;

              for (i = 0; i < edge_info->max_cond_equivalences; i += 2)
                {
                  tree expr = cond_equivalences[i];
                  tree value = cond_equivalences[i + 1];

                  record_cond (expr, value);

                  /* For the first true equivalence, record range
                     information.  We only do this for the first
                     true equivalence as it should dominate any
                     later true equivalences.  */
                  if (! recorded_range
                      && COMPARISON_CLASS_P (expr)
                      && value == boolean_true_node
                      && TREE_CONSTANT (TREE_OPERAND (expr, 1)))
                    {
                      record_range (expr, bb);
                      recorded_range = true;
                    }
                }
            }
        }
    }
}
/* Dump SSA statistics on FILE.  */

void
dump_dominator_optimization_stats (FILE *file)
{
  long n_exprs;

  fprintf (file, "Total number of statements:                   %6ld\n\n",
           opt_stats.num_stmts);
  fprintf (file, "Exprs considered for dominator optimizations: %6ld\n",
           opt_stats.num_exprs_considered);

  n_exprs = opt_stats.num_exprs_considered;
  if (n_exprs == 0)
    n_exprs = 1;

  fprintf (file, "    Redundant expressions eliminated:         %6ld (%.0f%%)\n",
           opt_stats.num_re, PERCENT (opt_stats.num_re,
                                      n_exprs));

  fprintf (file, "\nHash table statistics:\n");

  fprintf (file, "    avail_exprs: ");
  htab_statistics (file, avail_exprs);
}

/* Dump SSA statistics on stderr.  */

void
debug_dominator_optimization_stats (void)
{
  dump_dominator_optimization_stats (stderr);
}
/* Dump statistics for the hash table HTAB.  */

static void
htab_statistics (FILE *file, htab_t htab)
{
  fprintf (file, "size %ld, %ld elements, %f collision/search ratio\n",
           (long) htab_size (htab),
           (long) htab_elements (htab),
           htab_collisions (htab));
}
/* Record the fact that VAR has a nonzero value, though we may not know
   its exact value.  Note that if VAR is already known to have a nonzero
   value, then we do nothing.  */

static void
record_var_is_nonzero (tree var)
{
  int indx = SSA_NAME_VERSION (var);

  if (bitmap_bit_p (nonzero_vars, indx))
    return;

  /* Mark it in the global table.  */
  bitmap_set_bit (nonzero_vars, indx);

  /* Record this SSA_NAME so that we can reset the global table
     when we leave this block.  */
  VEC_safe_push (tree_on_heap, nonzero_vars_stack, var);
}
/* Enter a statement into the true/false expression hash table indicating
   that the condition COND has the value VALUE.  */

static void
record_cond (tree cond, tree value)
{
  struct expr_hash_elt *element = xmalloc (sizeof (struct expr_hash_elt));
  void **slot;

  initialize_hash_element (cond, value, element);

  slot = htab_find_slot_with_hash (avail_exprs, (void *)element,
                                   element->hash, INSERT);
  if (*slot == NULL)
    {
      *slot = (void *) element;
      VEC_safe_push (tree_on_heap, avail_exprs_stack, cond);
    }
  else
    free (element);
}
/* Build a new conditional using NEW_CODE, OP0 and OP1 and store
   the new conditional into *p, then store a boolean_true_node
   into *(p + 1).  */

static void
build_and_record_new_cond (enum tree_code new_code, tree op0, tree op1, tree *p)
{
  *p = build2 (new_code, boolean_type_node, op0, op1);
  p++;
  *p = boolean_true_node;
}
/* Record that COND is true and INVERTED is false into the edge information
   structure.  Also record that any conditions dominated by COND are true
   as well.

   For example, if a < b is true, then a <= b must also be true.  */

static void
record_conditions (struct edge_info *edge_info, tree cond, tree inverted)
{
  tree op0, op1;

  if (!COMPARISON_CLASS_P (cond))
    return;

  op0 = TREE_OPERAND (cond, 0);
  op1 = TREE_OPERAND (cond, 1);

  switch (TREE_CODE (cond))
    {
    case LT_EXPR:
    case GT_EXPR:
      edge_info->max_cond_equivalences = 12;
      edge_info->cond_equivalences = xmalloc (12 * sizeof (tree));
      build_and_record_new_cond ((TREE_CODE (cond) == LT_EXPR
                                  ? LE_EXPR : GE_EXPR),
                                 op0, op1, &edge_info->cond_equivalences[4]);
      build_and_record_new_cond (ORDERED_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[6]);
      build_and_record_new_cond (NE_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[8]);
      build_and_record_new_cond (LTGT_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[10]);
      break;

    case GE_EXPR:
    case LE_EXPR:
      edge_info->max_cond_equivalences = 6;
      edge_info->cond_equivalences = xmalloc (6 * sizeof (tree));
      build_and_record_new_cond (ORDERED_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[4]);
      break;

    case EQ_EXPR:
      edge_info->max_cond_equivalences = 10;
      edge_info->cond_equivalences = xmalloc (10 * sizeof (tree));
      build_and_record_new_cond (ORDERED_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[4]);
      build_and_record_new_cond (LE_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[6]);
      build_and_record_new_cond (GE_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[8]);
      break;

    case UNORDERED_EXPR:
      edge_info->max_cond_equivalences = 16;
      edge_info->cond_equivalences = xmalloc (16 * sizeof (tree));
      build_and_record_new_cond (NE_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[4]);
      build_and_record_new_cond (UNLE_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[6]);
      build_and_record_new_cond (UNGE_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[8]);
      build_and_record_new_cond (UNEQ_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[10]);
      build_and_record_new_cond (UNLT_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[12]);
      build_and_record_new_cond (UNGT_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[14]);
      break;

    case UNLT_EXPR:
    case UNGT_EXPR:
      edge_info->max_cond_equivalences = 8;
      edge_info->cond_equivalences = xmalloc (8 * sizeof (tree));
      build_and_record_new_cond ((TREE_CODE (cond) == UNLT_EXPR
                                  ? UNLE_EXPR : UNGE_EXPR),
                                 op0, op1, &edge_info->cond_equivalences[4]);
      build_and_record_new_cond (NE_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[6]);
      break;

    case UNEQ_EXPR:
      edge_info->max_cond_equivalences = 8;
      edge_info->cond_equivalences = xmalloc (8 * sizeof (tree));
      build_and_record_new_cond (UNLE_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[4]);
      build_and_record_new_cond (UNGE_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[6]);
      break;

    case LTGT_EXPR:
      edge_info->max_cond_equivalences = 8;
      edge_info->cond_equivalences = xmalloc (8 * sizeof (tree));
      build_and_record_new_cond (NE_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[4]);
      build_and_record_new_cond (ORDERED_EXPR, op0, op1,
                                 &edge_info->cond_equivalences[6]);
      break;

    default:
      edge_info->max_cond_equivalences = 4;
      edge_info->cond_equivalences = xmalloc (4 * sizeof (tree));
      break;
    }

  /* Now store the original true and false conditions into the first
     two slots.  */
  edge_info->cond_equivalences[0] = cond;
  edge_info->cond_equivalences[1] = boolean_true_node;
  edge_info->cond_equivalences[2] = inverted;
  edge_info->cond_equivalences[3] = boolean_false_node;
}
/* A helper function for record_const_or_copy and record_equality.
   Do the work of recording the value and undo info.  */

static void
record_const_or_copy_1 (tree x, tree y, tree prev_x)
{
  SSA_NAME_VALUE (x) = y;

  VEC_safe_push (tree_on_heap, const_and_copies_stack, prev_x);
  VEC_safe_push (tree_on_heap, const_and_copies_stack, x);
}
/* Return the loop depth of the basic block of the defining statement of X.
   This number should not be treated as absolutely correct because the loop
   information may not be completely up-to-date when dom runs.  However, it
   will be relatively correct, and as more passes are taught to keep loop info
   up to date, the result will become more and more accurate.  */

static int
loop_depth_of_name (tree x)
{
  tree defstmt;
  basic_block defbb;

  /* If it's not an SSA_NAME, we have no clue where the definition is.  */
  if (TREE_CODE (x) != SSA_NAME)
    return 0;

  /* Otherwise return the loop depth of the defining statement's bb.
     Note that there may not actually be a bb for this statement, if the
     ssa_name is live on entry.  */
  defstmt = SSA_NAME_DEF_STMT (x);
  defbb = bb_for_stmt (defstmt);
  if (!defbb)
    return 0;

  return defbb->loop_depth;
}
/* Record that X is equal to Y in const_and_copies.  Record undo
   information in the block-local vector.  */

static void
record_const_or_copy (tree x, tree y)
{
  tree prev_x = SSA_NAME_VALUE (x);

  if (TREE_CODE (y) == SSA_NAME)
    {
      tree tmp = SSA_NAME_VALUE (y);
      if (tmp)
        y = tmp;
    }

  record_const_or_copy_1 (x, y, prev_x);
}
/* Similarly, but assume that X and Y are the two operands of an EQ_EXPR.
   This constrains the cases in which we may treat this as assignment.  */

static void
record_equality (tree x, tree y)
{
  tree prev_x = NULL, prev_y = NULL;

  if (TREE_CODE (x) == SSA_NAME)
    prev_x = SSA_NAME_VALUE (x);
  if (TREE_CODE (y) == SSA_NAME)
    prev_y = SSA_NAME_VALUE (y);

  /* If one of the previous values is invariant, or invariant in more loops
     (by depth), then use that.
     Otherwise it doesn't matter which value we choose, just so
     long as we canonicalize on one value.  */
  if (TREE_INVARIANT (y))
    ;
  else if (TREE_INVARIANT (x) || (loop_depth_of_name (x) <= loop_depth_of_name (y)))
    prev_x = x, x = y, y = prev_x, prev_x = prev_y;
  else if (prev_x && TREE_INVARIANT (prev_x))
    x = y, y = prev_x, prev_x = prev_y;
  else if (prev_y && TREE_CODE (prev_y) != VALUE_HANDLE)
    y = prev_y;

  /* After the swapping, we must have one SSA_NAME.  */
  if (TREE_CODE (x) != SSA_NAME)
    return;

  /* For IEEE, -0.0 == 0.0, so we don't necessarily know the sign of a
     variable compared against zero.  If we're honoring signed zeros,
     then we cannot record this value unless we know that the value is
     nonzero.  */
  if (HONOR_SIGNED_ZEROS (TYPE_MODE (TREE_TYPE (x)))
      && (TREE_CODE (y) != REAL_CST
          || REAL_VALUES_EQUAL (dconst0, TREE_REAL_CST (y))))
    return;

  record_const_or_copy_1 (x, y, prev_x);
}
/* Return true, if it is ok to do folding of an associative expression.
   EXP is the tree for the associative expression.  */

static inline bool
unsafe_associative_fp_binop (tree exp)
{
  enum tree_code code = TREE_CODE (exp);
  return !(!flag_unsafe_math_optimizations
           && (code == MULT_EXPR || code == PLUS_EXPR
               || code == MINUS_EXPR)
           && FLOAT_TYPE_P (TREE_TYPE (exp)));
}
/* STMT is a MODIFY_EXPR for which we were unable to find RHS in the
hash tables. Try to simplify the RHS using whatever equivalences
we may have recorded.
If we are able to simplify the RHS, then lookup the simplified form in
the hash table and return the result. Otherwise return NULL. */
static tree
simplify_rhs_and_lookup_avail_expr (struct dom_walk_data *walk_data,
tree stmt, int insert)
tree rhs = TREE_OPERAND (stmt, 1);
enum tree_code rhs_code = TREE_CODE (rhs);
tree result = NULL;
/* If we have lhs = ~x, look and see if we earlier had x = ~y.
In which case we can change this statement to be lhs = y.
Which can then be copy propagated.
Similarly for negation. */
if ((rhs_code == BIT_NOT_EXPR || rhs_code == NEGATE_EXPR)
&& TREE_CODE (TREE_OPERAND (rhs, 0)) == SSA_NAME)
/* Get the definition statement for our RHS. */
tree rhs_def_stmt = SSA_NAME_DEF_STMT (TREE_OPERAND (rhs, 0));
/* See if the RHS_DEF_STMT has the same form as our statement. */
/* APPLE LOCAL begin lno */
if (TREE_CODE (rhs_def_stmt) == MODIFY_EXPR
&& TREE_CODE (TREE_OPERAND (rhs_def_stmt, 1)) == rhs_code
&& loop_containing_stmt (rhs_def_stmt) == loop_containing_stmt (stmt))
/* APPLE LOCAL end lno */
tree rhs_def_operand;
rhs_def_operand = TREE_OPERAND (TREE_OPERAND (rhs_def_stmt, 1), 0);
/* Verify that RHS_DEF_OPERAND is a suitable SSA variable. */
if (TREE_CODE (rhs_def_operand) == SSA_NAME
&& ! SSA_NAME_OCCURS_IN_ABNORMAL_PHI (rhs_def_operand))
result = update_rhs_and_lookup_avail_expr (stmt,
/* If we have z = (x OP C1), see if we earlier had x = y OP C2.
If OP is associative, create and fold (y OP C2) OP C1 which
should result in (y OP C3), use that as the RHS for the
assignment. Add minus to this, as we handle it specially below. */
if ((associative_tree_code (rhs_code) || rhs_code == MINUS_EXPR)
&& TREE_CODE (TREE_OPERAND (rhs, 0)) == SSA_NAME
&& is_gimple_min_invariant (TREE_OPERAND (rhs, 1)))
tree rhs_def_stmt = SSA_NAME_DEF_STMT (TREE_OPERAND (rhs, 0));
/* See if the RHS_DEF_STMT has the same form as our statement. */
/* APPLE LOCAL begin lno */
if (TREE_CODE (rhs_def_stmt) == MODIFY_EXPR
&& TREE_CODE (TREE_OPERAND (rhs_def_stmt, 1)) == rhs_code
&& loop_containing_stmt (rhs_def_stmt) == loop_containing_stmt (stmt))
/* APPLE LOCAL end lno */
tree rhs_def_rhs = TREE_OPERAND (rhs_def_stmt, 1);
enum tree_code rhs_def_code = TREE_CODE (rhs_def_rhs);
if ((rhs_code == rhs_def_code && unsafe_associative_fp_binop (rhs))
|| (rhs_code == PLUS_EXPR && rhs_def_code == MINUS_EXPR)
|| (rhs_code == MINUS_EXPR && rhs_def_code == PLUS_EXPR))
tree def_stmt_op0 = TREE_OPERAND (rhs_def_rhs, 0);
tree def_stmt_op1 = TREE_OPERAND (rhs_def_rhs, 1);
if (TREE_CODE (def_stmt_op0) == SSA_NAME
&& ! SSA_NAME_OCCURS_IN_ABNORMAL_PHI (def_stmt_op0)
&& is_gimple_min_invariant (def_stmt_op1))
tree outer_const = TREE_OPERAND (rhs, 1);
tree type = TREE_TYPE (TREE_OPERAND (stmt, 0));
tree t;
/* If we care about correct floating point results, then
don't fold x + c1 - c2. Note that we need to take both
the codes and the signs to figure this out. */
if (FLOAT_TYPE_P (type)
&& !flag_unsafe_math_optimizations
&& (rhs_def_code == PLUS_EXPR
|| rhs_def_code == MINUS_EXPR))
bool neg = false;
neg ^= (rhs_code == MINUS_EXPR);
neg ^= (rhs_def_code == MINUS_EXPR);
neg ^= real_isneg (TREE_REAL_CST_PTR (outer_const));
neg ^= real_isneg (TREE_REAL_CST_PTR (def_stmt_op1));
if (neg)
goto dont_fold_assoc;
/* Ho hum. So fold will only operate on the outermost
thingy that we give it, so we have to build the new
expression in two pieces. This requires that we handle
combinations of plus and minus. */
if (rhs_def_code != rhs_code)
if (rhs_def_code == MINUS_EXPR)
t = build (MINUS_EXPR, type, outer_const, def_stmt_op1);
t = build (MINUS_EXPR, type, def_stmt_op1, outer_const);
rhs_code = PLUS_EXPR;
else if (rhs_def_code == MINUS_EXPR)
t = build (PLUS_EXPR, type, def_stmt_op1, outer_const);
t = build (rhs_def_code, type, def_stmt_op1, outer_const);
t = local_fold (t);
t = build (rhs_code, type, def_stmt_op0, t);
t = local_fold (t);
/* If the result is a suitable looking gimple expression,
then use it instead of the original for STMT. */
if (TREE_CODE (t) == SSA_NAME
|| (UNARY_CLASS_P (t)
&& TREE_CODE (TREE_OPERAND (t, 0)) == SSA_NAME)
|| ((BINARY_CLASS_P (t) || COMPARISON_CLASS_P (t))
&& TREE_CODE (TREE_OPERAND (t, 0)) == SSA_NAME
&& is_gimple_val (TREE_OPERAND (t, 1))))
result = update_rhs_and_lookup_avail_expr (stmt, t, insert);
/* Transform TRUNC_DIV_EXPR and TRUNC_MOD_EXPR into RSHIFT_EXPR
and BIT_AND_EXPR respectively if the first operand is greater
than zero and the second operand is an exact power of two. */
if ((rhs_code == TRUNC_DIV_EXPR || rhs_code == TRUNC_MOD_EXPR)
&& INTEGRAL_TYPE_P (TREE_TYPE (TREE_OPERAND (rhs, 0)))
&& integer_pow2p (TREE_OPERAND (rhs, 1)))
tree val;
tree op = TREE_OPERAND (rhs, 0);
      if (TYPE_UNSIGNED (TREE_TYPE (op)))
	val = integer_one_node;
      else
	{
	  tree dummy_cond = walk_data->global_data;

	  if (! dummy_cond)
	    {
	      dummy_cond = build (GT_EXPR, boolean_type_node,
				  op, integer_zero_node);
	      dummy_cond = build (COND_EXPR, void_type_node,
				  dummy_cond, NULL, NULL);
	      walk_data->global_data = dummy_cond;
	    }
	  else
	    {
	      TREE_SET_CODE (COND_EXPR_COND (dummy_cond), GT_EXPR);
	      TREE_OPERAND (COND_EXPR_COND (dummy_cond), 0) = op;
	      TREE_OPERAND (COND_EXPR_COND (dummy_cond), 1)
		= integer_zero_node;
	    }

	  val = simplify_cond_and_lookup_avail_expr (dummy_cond, NULL, false);
	}
      if (val && integer_onep (val))
	{
	  tree t;
	  tree op0 = TREE_OPERAND (rhs, 0);
	  tree op1 = TREE_OPERAND (rhs, 1);

	  if (rhs_code == TRUNC_DIV_EXPR)
	    t = build (RSHIFT_EXPR, TREE_TYPE (op0), op0,
		       build_int_cst (NULL_TREE, tree_log2 (op1)));
	  else
	    t = build (BIT_AND_EXPR, TREE_TYPE (op0), op0,
		       local_fold (build (MINUS_EXPR, TREE_TYPE (op1),
					  op1, integer_one_node)));

	  result = update_rhs_and_lookup_avail_expr (stmt, t, insert);
	}
/* Transform ABS (X) into X or -X as appropriate. */
if (rhs_code == ABS_EXPR
&& INTEGRAL_TYPE_P (TREE_TYPE (TREE_OPERAND (rhs, 0))))
tree val;
tree op = TREE_OPERAND (rhs, 0);
tree type = TREE_TYPE (op);
      if (TYPE_UNSIGNED (type))
	val = integer_zero_node;
      else
	{
	  tree dummy_cond = walk_data->global_data;

	  if (! dummy_cond)
	    {
	      dummy_cond = build (LE_EXPR, boolean_type_node,
				  op, integer_zero_node);
	      dummy_cond = build (COND_EXPR, void_type_node,
				  dummy_cond, NULL, NULL);
	      walk_data->global_data = dummy_cond;
	    }
	  else
	    {
	      TREE_SET_CODE (COND_EXPR_COND (dummy_cond), LE_EXPR);
	      TREE_OPERAND (COND_EXPR_COND (dummy_cond), 0) = op;
	      TREE_OPERAND (COND_EXPR_COND (dummy_cond), 1)
		= build_int_cst (type, 0);
	    }

	  val = simplify_cond_and_lookup_avail_expr (dummy_cond, NULL, false);

	  if (!val)
	    {
	      TREE_SET_CODE (COND_EXPR_COND (dummy_cond), GE_EXPR);
	      TREE_OPERAND (COND_EXPR_COND (dummy_cond), 0) = op;
	      TREE_OPERAND (COND_EXPR_COND (dummy_cond), 1)
		= build_int_cst (type, 0);

	      val = simplify_cond_and_lookup_avail_expr (dummy_cond,
							 NULL, false);

	      if (val)
		{
		  if (integer_zerop (val))
		    val = integer_one_node;
		  else if (integer_onep (val))
		    val = integer_zero_node;
		}
	    }
	}
      if (val
	  && (integer_onep (val) || integer_zerop (val)))
	{
	  tree t;

	  if (integer_onep (val))
	    t = build1 (NEGATE_EXPR, TREE_TYPE (op), op);
	  else
	    t = op;

	  result = update_rhs_and_lookup_avail_expr (stmt, t, insert);
	}
/* Optimize *"foo" into 'f'. This is done here rather than
in fold to avoid problems with stuff like &*"foo". */
  if (TREE_CODE (rhs) == INDIRECT_REF || TREE_CODE (rhs) == ARRAY_REF)
    {
      tree t = fold_read_from_constant_string (rhs);

      if (t)
	result = update_rhs_and_lookup_avail_expr (stmt, t, insert);
    }
return result;
/* COND is a condition of the form:
x == const or x != const
Look back to x's defining statement and see if x is defined as
x = (type) y;
If const is unchanged if we convert it to type, then we can build
the equivalent expression:
y == const or y != const
Which may allow further optimizations.
Return the equivalent comparison or NULL if no such equivalent comparison
was found. */
static tree
find_equivalent_equality_comparison (tree cond)
tree op0 = TREE_OPERAND (cond, 0);
tree op1 = TREE_OPERAND (cond, 1);
tree def_stmt = SSA_NAME_DEF_STMT (op0);
/* OP0 might have been a parameter, so first make sure it
was defined by a MODIFY_EXPR. */
if (def_stmt && TREE_CODE (def_stmt) == MODIFY_EXPR)
tree def_rhs = TREE_OPERAND (def_stmt, 1);
/* If either operand to the comparison is a pointer to
a function, then we can not apply this optimization
as some targets require function pointers to be
canonicalized and in this case this optimization would
eliminate a necessary canonicalization. */
if ((POINTER_TYPE_P (TREE_TYPE (op0))
&& TREE_CODE (TREE_TYPE (TREE_TYPE (op0))) == FUNCTION_TYPE)
|| (POINTER_TYPE_P (TREE_TYPE (op1))
&& TREE_CODE (TREE_TYPE (TREE_TYPE (op1))) == FUNCTION_TYPE))
return NULL;
/* Now make sure the RHS of the MODIFY_EXPR is a typecast. */
      /* Now make sure the RHS of the MODIFY_EXPR is a typecast.  */
      if ((TREE_CODE (def_rhs) == NOP_EXPR
	   || TREE_CODE (def_rhs) == CONVERT_EXPR)
	  && TREE_CODE (TREE_OPERAND (def_rhs, 0)) == SSA_NAME)
	{
	  tree def_rhs_inner = TREE_OPERAND (def_rhs, 0);
	  tree def_rhs_inner_type = TREE_TYPE (def_rhs_inner);
	  tree new;

	  if (TYPE_PRECISION (def_rhs_inner_type)
	      > TYPE_PRECISION (TREE_TYPE (def_rhs)))
	    return NULL;

	  /* If the inner type of the conversion is a pointer to
	     a function, then we can not apply this optimization
	     as some targets require function pointers to be
	     canonicalized.  This optimization would result in
	     canonicalization of the pointer when it was not originally
	     needed/intended.  */
	  if (POINTER_TYPE_P (def_rhs_inner_type)
	      && TREE_CODE (TREE_TYPE (def_rhs_inner_type)) == FUNCTION_TYPE)
	    return NULL;

	  /* What we want to prove is that if we convert OP1 to
	     the type of the object inside the NOP_EXPR that the
	     result is still equivalent to SRC.

	     If that is true, then build and return a new equivalent
	     condition which uses the source of the typecast and the
	     new constant (which has only changed its type).  */
	  new = build1 (TREE_CODE (def_rhs), def_rhs_inner_type, op1);
	  new = local_fold (new);
	  if (is_gimple_val (new) && tree_int_cst_equal (new, op1))
	    return build (TREE_CODE (cond), TREE_TYPE (cond),
			  def_rhs_inner, new);
	}
return NULL;
/* STMT is a COND_EXPR for which we could not trivially determine its
result. This routine attempts to find equivalent forms of the
condition which we may be able to optimize better. It also
uses simple value range propagation to optimize conditionals. */
static tree
simplify_cond_and_lookup_avail_expr (tree stmt,
stmt_ann_t ann,
int insert)
tree cond = COND_EXPR_COND (stmt);
if (COMPARISON_CLASS_P (cond))
tree op0 = TREE_OPERAND (cond, 0);
tree op1 = TREE_OPERAND (cond, 1);
if (TREE_CODE (op0) == SSA_NAME && is_gimple_min_invariant (op1))
int limit;
tree low, high, cond_low, cond_high;
int lowequal, highequal, swapped, no_overlap, subset, cond_inverted;
varray_type vrp_records;
struct vrp_element *element;
struct vrp_hash_elt vrp_hash_elt, *vrp_hash_elt_p;
void **slot;
/* First see if we have test of an SSA_NAME against a constant
where the SSA_NAME is defined by an earlier typecast which
is irrelevant when performing tests against the given
constant. */
      if (TREE_CODE (cond) == EQ_EXPR || TREE_CODE (cond) == NE_EXPR)
	{
	  tree new_cond = find_equivalent_equality_comparison (cond);

	  if (new_cond)
	    {
	      /* Update the statement to use the new equivalent
		 condition.  */
	      COND_EXPR_COND (stmt) = new_cond;

	      /* If this is not a real stmt, ann will be NULL and we
		 avoid processing the operands.  */
	      if (ann)
		modify_stmt (stmt);

	      /* Lookup the condition and return its known value if it
		 exists.  */
	      new_cond = lookup_avail_expr (stmt, insert);
	      if (new_cond)
		return new_cond;

	      /* The operands have changed, so update op0 and op1.  */
	      op0 = TREE_OPERAND (cond, 0);
	      op1 = TREE_OPERAND (cond, 1);
	    }
	}
/* Consult the value range records for this variable (if they exist)
to see if we can eliminate or simplify this conditional.
Note two tests are necessary to determine no records exist.
First we have to see if the virtual array exists, if it
exists, then we have to check its active size.
Also note the vast majority of conditionals are not testing
a variable which has had its range constrained by an earlier
conditional. So this filter avoids a lot of unnecessary work. */
vrp_hash_elt.var = op0;
vrp_hash_elt.records = NULL;
slot = htab_find_slot (vrp_data, &vrp_hash_elt, NO_INSERT);
if (slot == NULL)
return NULL;
vrp_hash_elt_p = (struct vrp_hash_elt *) *slot;
vrp_records = vrp_hash_elt_p->records;
if (vrp_records == NULL)
return NULL;
limit = VARRAY_ACTIVE_SIZE (vrp_records);
/* If we have no value range records for this variable, or we are
unable to extract a range for this condition, then there is
nothing to do. */
if (limit == 0
|| ! extract_range_from_cond (cond, &cond_high,
&cond_low, &cond_inverted))
return NULL;
/* We really want to avoid unnecessary computations of range
info. So all ranges are computed lazily; this avoids a
lot of unnecessary work. i.e., we record the conditional,
but do not process how it constrains the variable's
potential values until we know that processing the condition
could be helpful.
However, we do not want to have to walk a potentially long
list of ranges, nor do we want to compute a variable's
range more than once for a given path.
Luckily, each time we encounter a conditional that can not
be otherwise optimized we will end up here and we will
compute the necessary range information for the variable
used in this condition.
Thus you can conclude that there will never be more than one
conditional associated with a variable which has not been
processed. So we never need to merge more than one new
conditional into the current range.
These properties also help us avoid unnecessary work. */
      element
	= (struct vrp_element *)VARRAY_GENERIC_PTR (vrp_records, limit - 1);
      if (element->high && element->low)
	{
	  /* The last element has been processed, so there is no range
	     merging to do, we can simply use the high/low values
	     recorded in the last element.  */
	  low = element->low;
	  high = element->high;
	}
      else
	{
	  tree tmp_high, tmp_low;
	  int dummy;

	  /* The last element has not been processed.  Process it now.
	     record_range should ensure that cond_inverted is not set.
	     This call can only fail if cond is x < min or x > max,
	     which fold should have optimized into false.
	     If that doesn't happen, just pretend all values are
	     in the range.  */
	  if (! extract_range_from_cond (element->cond, &tmp_high,
					 &tmp_low, &dummy))
	    gcc_unreachable ();
	  gcc_assert (dummy == 0);

	  /* If this is the only element, then no merging is necessary,
	     the high/low values from extract_range_from_cond are all
	     we need.  */
	  if (limit == 1)
	    {
	      low = tmp_low;
	      high = tmp_high;
	    }
	  else
	    {
	      /* Get the high/low value from the previous element.  */
	      struct vrp_element *prev
		= (struct vrp_element *)VARRAY_GENERIC_PTR (vrp_records,
							    limit - 2);
	      low = prev->low;
	      high = prev->high;

	      /* Merge in this element's range with the range from the
		 previous element.

		 The low value for the merged range is the maximum of
		 the previous low value and the low value of this record.

		 Similarly the high value for the merged range is the
		 minimum of the previous high value and the high value of
		 this record.  */
	      low = (tree_int_cst_compare (low, tmp_low) == 1
		     ? low : tmp_low);
	      high = (tree_int_cst_compare (high, tmp_high) == -1
		      ? high : tmp_high);
	    }

	  /* And record the computed range.  */
	  element->low = low;
	  element->high = high;
	}
/* After we have constrained this variable's potential values,
we try to determine the result of the given conditional.
To simplify later tests, first determine if the current
low value is the same low value as the conditional.
Similarly for the current high value and the high value
for the conditional. */
lowequal = tree_int_cst_equal (low, cond_low);
highequal = tree_int_cst_equal (high, cond_high);
if (lowequal && highequal)
return (cond_inverted ? boolean_false_node : boolean_true_node);
/* To simplify the overlap/subset tests below we may want
to swap the two ranges so that the larger of the two
ranges occurs "first". */
      swapped = 0;
      if (tree_int_cst_compare (low, cond_low) == 1
	  || (lowequal
	      && tree_int_cst_compare (cond_high, high) == 1))
	{
	  tree temp;

	  swapped = 1;
	  temp = low;
	  low = cond_low;
	  cond_low = temp;
	  temp = high;
	  high = cond_high;
	  cond_high = temp;
	}
/* Now determine if there is no overlap in the ranges
or if the second range is a subset of the first range. */
no_overlap = tree_int_cst_lt (high, cond_low);
subset = tree_int_cst_compare (cond_high, high) != 1;
/* If there was no overlap in the ranges, then this conditional
always has a false value (unless we had to invert this
conditional, in which case it always has a true value). */
if (no_overlap)
return (cond_inverted ? boolean_true_node : boolean_false_node);
/* If the current range is a subset of the condition's range,
then this conditional always has a true value (unless we
had to invert this conditional, in which case it always
has a true value). */
if (subset && swapped)
return (cond_inverted ? boolean_false_node : boolean_true_node);
/* We were unable to determine the result of the conditional.
However, we may be able to simplify the conditional. First
merge the ranges in the same manner as range merging above. */
low = tree_int_cst_compare (low, cond_low) == 1 ? low : cond_low;
high = tree_int_cst_compare (high, cond_high) == -1 ? high : cond_high;
/* If the range has converged to a single point, then turn this
into an equality comparison. */
      if (TREE_CODE (cond) != EQ_EXPR
	  && TREE_CODE (cond) != NE_EXPR
	  && tree_int_cst_equal (low, high))
	{
	  TREE_SET_CODE (cond, EQ_EXPR);
	  TREE_OPERAND (cond, 1) = high;
	}
return 0;
/* STMT is a SWITCH_EXPR for which we could not trivially determine its
result. This routine attempts to find equivalent forms of the
condition which we may be able to optimize better. */
static tree
simplify_switch_and_lookup_avail_expr (tree stmt, int insert)
tree cond = SWITCH_COND (stmt);
tree def, to, ti;
/* The optimization that we really care about is removing unnecessary
casts. That will let us do much better in propagating the inferred
constant at the switch target. */
if (TREE_CODE (cond) == SSA_NAME)
def = SSA_NAME_DEF_STMT (cond);
if (TREE_CODE (def) == MODIFY_EXPR)
def = TREE_OPERAND (def, 1);
if (TREE_CODE (def) == NOP_EXPR)
int need_precision;
bool fail;
def = TREE_OPERAND (def, 0);
#ifdef ENABLE_CHECKING
	      /* ??? Why was Jeff testing this?  We are gimple...  */
	      gcc_assert (is_gimple_val (def));
#endif
to = TREE_TYPE (cond);
ti = TREE_TYPE (def);
/* If we have an extension that preserves value, then we
can copy the source value into the switch. */
need_precision = TYPE_PRECISION (ti);
fail = false;
if (TYPE_UNSIGNED (to) && !TYPE_UNSIGNED (ti))
fail = true;
else if (!TYPE_UNSIGNED (to) && TYPE_UNSIGNED (ti))
need_precision += 1;
if (TYPE_PRECISION (to) < need_precision)
fail = true;
	      if (!fail)
		{
		  SWITCH_COND (stmt) = def;
		  modify_stmt (stmt);

		  return lookup_avail_expr (stmt, insert);
		}
return 0;
/* CONST_AND_COPIES is a table which maps an SSA_NAME to the current
known value for that SSA_NAME (or NULL if no value is known).
NONZERO_VARS is the set SSA_NAMES known to have a nonzero value,
even if we don't know their precise value.
Propagate values from CONST_AND_COPIES and NONZERO_VARS into the PHI
nodes of the successors of BB. */
static void
cprop_into_successor_phis (basic_block bb, bitmap nonzero_vars)
edge e;
edge_iterator ei;
/* This can get rather expensive if the implementation is naive in
how it finds the phi alternative associated with a particular edge. */
  FOR_EACH_EDGE (e, ei, bb->succs)
    {
      tree phi;
      int indx;

      /* If this is an abnormal edge, then we do not want to copy propagate
	 into the PHI alternative associated with this edge.  */
      if (e->flags & EDGE_ABNORMAL)
	continue;

      phi = phi_nodes (e->dest);
      if (! phi)
	continue;

      indx = e->dest_idx;
      for ( ; phi; phi = PHI_CHAIN (phi))
	{
	  tree new;
	  use_operand_p orig_p;
	  tree orig;

	  /* The alternative may be associated with a constant, so verify
	     it is an SSA_NAME before doing anything with it.  */
	  orig_p = PHI_ARG_DEF_PTR (phi, indx);
	  orig = USE_FROM_PTR (orig_p);
	  if (TREE_CODE (orig) != SSA_NAME)
	    continue;

	  /* If the alternative is known to have a nonzero value, record
	     that fact in the PHI node itself for future use.  */
	  if (bitmap_bit_p (nonzero_vars, SSA_NAME_VERSION (orig)))
	    PHI_ARG_NONZERO (phi, indx) = true;

	  /* If we have *ORIG_P in our constant/copy table, then replace
	     ORIG_P with its value in our constant/copy table.  */
	  new = SSA_NAME_VALUE (orig);
	  if (new
	      && (TREE_CODE (new) == SSA_NAME
		  || is_gimple_min_invariant (new))
	      && may_propagate_copy (orig, new))
	    propagate_value (orig_p, new);
	}
    }
/* We have finished optimizing BB, record any information implied by
taking a specific outgoing edge from BB. */
static void
record_edge_info (basic_block bb)
block_stmt_iterator bsi = bsi_last (bb);
struct edge_info *edge_info;
if (! bsi_end_p (bsi))
tree stmt = bsi_stmt (bsi);
      if (stmt && TREE_CODE (stmt) == SWITCH_EXPR)
	{
	  tree cond = SWITCH_COND (stmt);

	  if (TREE_CODE (cond) == SSA_NAME)
	    {
	      tree labels = SWITCH_LABELS (stmt);
	      int i, n_labels = TREE_VEC_LENGTH (labels);
	      tree *info = xcalloc (n_basic_blocks, sizeof (tree));
	      edge e;
	      edge_iterator ei;

	      for (i = 0; i < n_labels; i++)
		{
		  tree label = TREE_VEC_ELT (labels, i);
		  basic_block target_bb = label_to_block (CASE_LABEL (label));

		  if (CASE_HIGH (label)
		      || !CASE_LOW (label)
		      || info[target_bb->index])
		    info[target_bb->index] = error_mark_node;
		  else
		    info[target_bb->index] = label;
		}

	      FOR_EACH_EDGE (e, ei, bb->succs)
		{
		  basic_block target_bb = e->dest;
		  tree node = info[target_bb->index];

		  if (node != NULL && node != error_mark_node)
		    {
		      tree x = fold_convert (TREE_TYPE (cond),
					     CASE_LOW (node));
		      edge_info = allocate_edge_info (e);
		      edge_info->lhs = cond;
		      edge_info->rhs = x;
		    }
		}

	      free (info);
	    }
	}
/* A COND_EXPR may create equivalences too. */
if (stmt && TREE_CODE (stmt) == COND_EXPR)
tree cond = COND_EXPR_COND (stmt);
edge true_edge;
edge false_edge;
extract_true_false_edges_from_block (bb, &true_edge, &false_edge);
/* If the conditional is a single variable 'X', record 'X = 1'
for the true edge and 'X = 0' on the false edge. */
	  if (SSA_VAR_P (cond))
	    {
	      struct edge_info *edge_info;

	      edge_info = allocate_edge_info (true_edge);
	      edge_info->lhs = cond;
	      edge_info->rhs = constant_boolean_node (1, TREE_TYPE (cond));

	      edge_info = allocate_edge_info (false_edge);
	      edge_info->lhs = cond;
	      edge_info->rhs = constant_boolean_node (0, TREE_TYPE (cond));
	    }
/* Equality tests may create one or two equivalences. */
else if (COMPARISON_CLASS_P (cond))
tree op0 = TREE_OPERAND (cond, 0);
tree op1 = TREE_OPERAND (cond, 1);
/* Special case comparing booleans against a constant as we
know the value of OP0 on both arms of the branch. i.e., we
can record an equivalence for OP0 rather than COND. */
	      if ((TREE_CODE (cond) == EQ_EXPR || TREE_CODE (cond) == NE_EXPR)
		  && TREE_CODE (op0) == SSA_NAME
		  && TREE_CODE (TREE_TYPE (op0)) == BOOLEAN_TYPE
		  && is_gimple_min_invariant (op1))
		{
		  if (TREE_CODE (cond) == EQ_EXPR)
		    {
		      edge_info = allocate_edge_info (true_edge);
		      edge_info->lhs = op0;
		      edge_info->rhs = (integer_zerop (op1)
					? boolean_false_node
					: boolean_true_node);

		      edge_info = allocate_edge_info (false_edge);
		      edge_info->lhs = op0;
		      edge_info->rhs = (integer_zerop (op1)
					? boolean_true_node
					: boolean_false_node);
		    }
		  else
		    {
		      edge_info = allocate_edge_info (true_edge);
		      edge_info->lhs = op0;
		      edge_info->rhs = (integer_zerop (op1)
					? boolean_true_node
					: boolean_false_node);

		      edge_info = allocate_edge_info (false_edge);
		      edge_info->lhs = op0;
		      edge_info->rhs = (integer_zerop (op1)
					? boolean_false_node
					: boolean_true_node);
		    }
		}
	      else if (is_gimple_min_invariant (op0)
		       && (TREE_CODE (op1) == SSA_NAME
			   || is_gimple_min_invariant (op1)))
		{
		  tree inverted = invert_truthvalue (cond);
		  struct edge_info *edge_info;

		  edge_info = allocate_edge_info (true_edge);
		  record_conditions (edge_info, cond, inverted);

		  if (TREE_CODE (cond) == EQ_EXPR)
		    {
		      edge_info->lhs = op1;
		      edge_info->rhs = op0;
		    }

		  edge_info = allocate_edge_info (false_edge);
		  record_conditions (edge_info, inverted, cond);

		  if (TREE_CODE (cond) == NE_EXPR)
		    {
		      edge_info->lhs = op1;
		      edge_info->rhs = op0;
		    }
		}
	      else if (TREE_CODE (op0) == SSA_NAME
		       && (is_gimple_min_invariant (op1)
			   || TREE_CODE (op1) == SSA_NAME))
		{
		  tree inverted = invert_truthvalue (cond);
		  struct edge_info *edge_info;

		  edge_info = allocate_edge_info (true_edge);
		  record_conditions (edge_info, cond, inverted);

		  if (TREE_CODE (cond) == EQ_EXPR)
		    {
		      edge_info->lhs = op0;
		      edge_info->rhs = op1;
		    }

		  edge_info = allocate_edge_info (false_edge);
		  record_conditions (edge_info, inverted, cond);

		  if (TREE_CODE (cond) == NE_EXPR)
		    {
		      edge_info->lhs = op0;
		      edge_info->rhs = op1;
		    }
		}
/* ??? TRUTH_NOT_EXPR can create an equivalence too. */
/* Propagate information from BB to its outgoing edges.
This can include equivalency information implied by control statements
at the end of BB and const/copy propagation into PHIs in BB's
successor blocks. */
static void
propagate_to_outgoing_edges (struct dom_walk_data *walk_data ATTRIBUTE_UNUSED,
basic_block bb)
record_edge_info (bb);
cprop_into_successor_phis (bb, nonzero_vars);
/* Search for redundant computations in STMT. If any are found, then
replace them with the variable holding the result of the computation.
If safe, record this expression into the available expression hash
table. */
static bool
eliminate_redundant_computations (struct dom_walk_data *walk_data,
tree stmt, stmt_ann_t ann)
v_may_def_optype v_may_defs = V_MAY_DEF_OPS (ann);
tree *expr_p, def = NULL_TREE;
bool insert = true;
tree cached_lhs;
bool retval = false;
if (TREE_CODE (stmt) == MODIFY_EXPR)
def = TREE_OPERAND (stmt, 0);
/* Certain expressions on the RHS can be optimized away, but can not
themselves be entered into the hash tables. */
if (ann->makes_aliased_stores
|| ! def
|| TREE_CODE (def) != SSA_NAME
|| SSA_NAME_OCCURS_IN_ABNORMAL_PHI (def)
|| NUM_V_MAY_DEFS (v_may_defs) != 0)
insert = false;
/* Check if the expression has been computed before. */
cached_lhs = lookup_avail_expr (stmt, insert);
/* If this is an assignment and the RHS was not in the hash table,
then try to simplify the RHS and lookup the new RHS in the
hash table. */
if (! cached_lhs && TREE_CODE (stmt) == MODIFY_EXPR)
cached_lhs = simplify_rhs_and_lookup_avail_expr (walk_data, stmt, insert);
/* Similarly if this is a COND_EXPR and we did not find its
expression in the hash table, simplify the condition and
try again. */
else if (! cached_lhs && TREE_CODE (stmt) == COND_EXPR)
cached_lhs = simplify_cond_and_lookup_avail_expr (stmt, ann, insert);
/* Similarly for a SWITCH_EXPR. */
else if (!cached_lhs && TREE_CODE (stmt) == SWITCH_EXPR)
cached_lhs = simplify_switch_and_lookup_avail_expr (stmt, insert);
/* Get a pointer to the expression we are trying to optimize. */
if (TREE_CODE (stmt) == COND_EXPR)
expr_p = &COND_EXPR_COND (stmt);
else if (TREE_CODE (stmt) == SWITCH_EXPR)
expr_p = &SWITCH_COND (stmt);
  else if (TREE_CODE (stmt) == RETURN_EXPR && TREE_OPERAND (stmt, 0))
    expr_p = &TREE_OPERAND (TREE_OPERAND (stmt, 0), 1);
  else
    expr_p = &TREE_OPERAND (stmt, 1);
/* It is safe to ignore types here since we have already done
type checking in the hashing and equality routines. In fact
type checking here merely gets in the way of constant
propagation. Also, make sure that it is safe to propagate
CACHED_LHS into *EXPR_P. */
if (cached_lhs
&& (TREE_CODE (cached_lhs) != SSA_NAME
|| may_propagate_copy (*expr_p, cached_lhs)))
      if (dump_file && (dump_flags & TDF_DETAILS))
	{
	  fprintf (dump_file, "  Replaced redundant expr '");
	  print_generic_expr (dump_file, *expr_p, dump_flags);
	  fprintf (dump_file, "' with '");
	  print_generic_expr (dump_file, cached_lhs, dump_flags);
	  fprintf (dump_file, "'\n");
	}
#if defined ENABLE_CHECKING
      gcc_assert (TREE_CODE (cached_lhs) == SSA_NAME
		  || is_gimple_min_invariant (cached_lhs));
#endif
if (TREE_CODE (cached_lhs) == ADDR_EXPR
|| (POINTER_TYPE_P (TREE_TYPE (*expr_p))
&& is_gimple_min_invariant (cached_lhs)))
retval = true;
propagate_tree_value (expr_p, cached_lhs);
modify_stmt (stmt);
return retval;
/* STMT, a MODIFY_EXPR, may create certain equivalences, in either
the available expressions table or the const_and_copies table.
Detect and record those equivalences. */
static void
record_equivalences_from_stmt (tree stmt,
int may_optimize_p,
stmt_ann_t ann)
tree lhs = TREE_OPERAND (stmt, 0);
enum tree_code lhs_code = TREE_CODE (lhs);
int i;
if (lhs_code == SSA_NAME)
tree rhs = TREE_OPERAND (stmt, 1);
/* Strip away any useless type conversions. */
STRIP_USELESS_TYPE_CONVERSION (rhs);
/* If the RHS of the assignment is a constant or another variable that
may be propagated, register it in the CONST_AND_COPIES table. We
do not need to record unwind data for this, since this is a true
assignment and not an equivalence inferred from a comparison. All
uses of this ssa name are dominated by this assignment, so unwinding
just costs time and space. */
if (may_optimize_p
&& (TREE_CODE (rhs) == SSA_NAME
|| is_gimple_min_invariant (rhs)))
SSA_NAME_VALUE (lhs) = rhs;
/* alloca never returns zero and the address of a non-weak symbol
is never zero. NOP_EXPRs and CONVERT_EXPRs can be completely
stripped as they do not affect this equivalence. */
while (TREE_CODE (rhs) == NOP_EXPR
|| TREE_CODE (rhs) == CONVERT_EXPR)
rhs = TREE_OPERAND (rhs, 0);
if (alloca_call_p (rhs)
|| (TREE_CODE (rhs) == ADDR_EXPR
&& DECL_P (TREE_OPERAND (rhs, 0))
&& ! DECL_WEAK (TREE_OPERAND (rhs, 0))))
record_var_is_nonzero (lhs);
/* IOR of any value with a nonzero value will result in a nonzero
value. Even if we do not know the exact result recording that
the result is nonzero is worth the effort. */
if (TREE_CODE (rhs) == BIT_IOR_EXPR
&& integer_nonzerop (TREE_OPERAND (rhs, 1)))
record_var_is_nonzero (lhs);
/* Look at both sides for pointer dereferences. If we find one, then
the pointer must be nonnull and we can enter that equivalence into
the hash tables. */
      if (flag_delete_null_pointer_checks)
	for (i = 0; i < 2; i++)
	  {
	    tree t = TREE_OPERAND (stmt, i);

	    /* Strip away any COMPONENT_REFs.  */
	    while (TREE_CODE (t) == COMPONENT_REF)
	      t = TREE_OPERAND (t, 0);

	    /* Now see if this is a pointer dereference.  */
	    if (INDIRECT_REF_P (t))
	      {
		tree op = TREE_OPERAND (t, 0);

		/* If the pointer is a SSA variable, then enter new
		   equivalences into the hash table.  */
		while (TREE_CODE (op) == SSA_NAME)
		  {
		    tree def = SSA_NAME_DEF_STMT (op);

		    record_var_is_nonzero (op);

		    /* And walk up the USE-DEF chains noting other SSA_NAMEs
		       which are known to have a nonzero value.  */
		    if (def
			&& TREE_CODE (def) == MODIFY_EXPR
			&& TREE_CODE (TREE_OPERAND (def, 1)) == NOP_EXPR)
		      op = TREE_OPERAND (TREE_OPERAND (def, 1), 0);
		    else
		      break;
		  }
	      }
	  }
/* A memory store, even an aliased store, creates a useful
equivalence. By exchanging the LHS and RHS, creating suitable
vops and recording the result in the available expression table,
we may be able to expose more redundant loads. */
if (!ann->has_volatile_ops
&& (TREE_CODE (TREE_OPERAND (stmt, 1)) == SSA_NAME
|| is_gimple_min_invariant (TREE_OPERAND (stmt, 1)))
&& !is_gimple_reg (lhs))
tree rhs = TREE_OPERAND (stmt, 1);
tree new;
/* FIXME: If the LHS of the assignment is a bitfield and the RHS
is a constant, we need to adjust the constant to fit into the
type of the LHS. If the LHS is a bitfield and the RHS is not
a constant, then we can not record any equivalences for this
statement since we would need to represent the widening or
narrowing of RHS. This fixes gcc.c-torture/execute/921016-1.c
and should not be necessary if GCC represented bitfields
properly. */
      if (lhs_code == COMPONENT_REF
	  && DECL_BIT_FIELD (TREE_OPERAND (lhs, 1)))
	{
	  if (TREE_CONSTANT (rhs))
	    rhs = widen_bitfield (rhs, TREE_OPERAND (lhs, 1), lhs);
	  else
	    rhs = NULL;
	}
/* If the value overflowed, then we can not use this equivalence. */
if (rhs && ! is_gimple_min_invariant (rhs))
rhs = NULL;
      if (rhs)
	{
	  /* Build a new statement with the RHS and LHS exchanged.  */
	  new = build (MODIFY_EXPR, TREE_TYPE (stmt), rhs, lhs);

	  create_ssa_artficial_load_stmt (&(ann->operands), new);

	  /* Finally enter the statement into the available expression
	     table.  */
	  lookup_avail_expr (new, true);
	}
/* Replace *OP_P in STMT with any known equivalent value for *OP_P from
CONST_AND_COPIES. */
static bool
cprop_operand (tree stmt, use_operand_p op_p)
bool may_have_exposed_new_symbols = false;
tree val;
tree op = USE_FROM_PTR (op_p);
/* If the operand has a known constant value or it is known to be a
copy of some other variable, use the value or copy stored in
CONST_AND_COPIES. */
val = SSA_NAME_VALUE (op);
if (val && TREE_CODE (val) != VALUE_HANDLE)
tree op_type, val_type;
/* Do not change the base variable in the virtual operand
tables. That would make it impossible to reconstruct
the renamed virtual operand if we later modify this
statement. Also only allow the new value to be an SSA_NAME
for propagation into virtual operands. */
if (!is_gimple_reg (op)
&& (get_virtual_var (val) != get_virtual_var (op)
|| TREE_CODE (val) != SSA_NAME))
return false;
/* Do not replace hard register operands in asm statements. */
if (TREE_CODE (stmt) == ASM_EXPR
&& !may_propagate_copy_into_asm (op))
return false;
/* Get the toplevel type of each operand. */
op_type = TREE_TYPE (op);
val_type = TREE_TYPE (val);
/* While both types are pointers, get the type of the object
pointed to. */
      while (POINTER_TYPE_P (op_type) && POINTER_TYPE_P (val_type))
	{
	  op_type = TREE_TYPE (op_type);
	  val_type = TREE_TYPE (val_type);
	}
/* Make sure underlying types match before propagating a constant by
converting the constant to the proper type. Note that convert may
return a non-gimple expression, in which case we ignore this
propagation opportunity. */
      if (TREE_CODE (val) != SSA_NAME)
	{
	  if (!lang_hooks.types_compatible_p (op_type, val_type))
	    {
	      val = fold_convert (TREE_TYPE (op), val);
	      if (!is_gimple_min_invariant (val))
		return false;
	    }
	}
      /* Certain operands are not allowed to be copy propagated due
	 to their interaction with exception handling and some GCC
	 extensions.  */
      else if (!may_propagate_copy (op, val))
	return false;
/* Do not propagate copies if the propagated value is at a deeper loop
depth than the propagatee. Otherwise, this may move loop variant
variables outside of their loops and prevent coalescing
opportunities. If the value was loop invariant, it will be hoisted
by LICM and exposed for copy propagation. */
if (loop_depth_of_name (val) > loop_depth_of_name (op))
return false;
/* Dump details. */
      if (dump_file && (dump_flags & TDF_DETAILS))
	{
	  fprintf (dump_file, "  Replaced '");
	  print_generic_expr (dump_file, op, dump_flags);
	  fprintf (dump_file, "' with %s '",
		   (TREE_CODE (val) != SSA_NAME ? "constant" : "variable"));
	  print_generic_expr (dump_file, val, dump_flags);
	  fprintf (dump_file, "'\n");
	}
/* If VAL is an ADDR_EXPR or a constant of pointer type, note
that we may have exposed a new symbol for SSA renaming. */
if (TREE_CODE (val) == ADDR_EXPR
|| (POINTER_TYPE_P (TREE_TYPE (op))
&& is_gimple_min_invariant (val)))
may_have_exposed_new_symbols = true;
propagate_value (op_p, val);
/* And note that we modified this statement. This is now
safe, even if we changed virtual operands since we will
rescan the statement and rewrite its operands again. */
modify_stmt (stmt);
return may_have_exposed_new_symbols;
/* CONST_AND_COPIES is a table which maps an SSA_NAME to the current
known value for that SSA_NAME (or NULL if no value is known).
Propagate values from CONST_AND_COPIES into the uses, vuses and
v_may_def_ops of STMT. */
static bool
cprop_into_stmt (tree stmt)
bool may_have_exposed_new_symbols = false;
use_operand_p op_p;
ssa_op_iter iter;
tree rhs;
FOR_EACH_SSA_USE_OPERAND (op_p, stmt, iter, SSA_OP_ALL_USES)
if (TREE_CODE (USE_FROM_PTR (op_p)) == SSA_NAME)
may_have_exposed_new_symbols |= cprop_operand (stmt, op_p);
  if (may_have_exposed_new_symbols)
    {
      rhs = get_rhs (stmt);
      if (rhs && TREE_CODE (rhs) == ADDR_EXPR)
	recompute_tree_invarant_for_addr_expr (rhs);
    }
return may_have_exposed_new_symbols;
/* Optimize the statement pointed by iterator SI.

   We try to perform some simplistic global redundancy elimination and
   constant propagation:

   1- To detect global redundancy, we keep track of expressions that have
      been computed in this block and its dominators.  If we find that the
      same expression is computed more than once, we eliminate repeated
      computations by using the target of the first one.

   2- Constant values and copy assignments.  This is used to do very
      simplistic constant and copy propagation.  When a constant or copy
      assignment is found, we map the value on the RHS of the assignment to
      the variable in the LHS in the CONST_AND_COPIES table.  */

static void
optimize_stmt (struct dom_walk_data *walk_data, basic_block bb,
               block_stmt_iterator si)
{
  stmt_ann_t ann;
  tree stmt;
  bool may_optimize_p;
  bool may_have_exposed_new_symbols = false;

  stmt = bsi_stmt (si);

  get_stmt_operands (stmt);
  ann = stmt_ann (stmt);
  may_have_exposed_new_symbols = false;

  if (dump_file && (dump_flags & TDF_DETAILS))
    {
      fprintf (dump_file, "Optimizing statement ");
      print_generic_stmt (dump_file, stmt, TDF_SLIM);
    }

  /* Const/copy propagate into USES, VUSES and the RHS of V_MAY_DEFs.  */
  may_have_exposed_new_symbols = cprop_into_stmt (stmt);

  /* If the statement has been modified with constant replacements,
     fold its RHS before checking for redundant computations.  */
  if (ann->modified)
    {
      /* Try to fold the statement making sure that STMT is kept
         up to date.  */
      if (fold_stmt (bsi_stmt_ptr (si)))
        {
          stmt = bsi_stmt (si);
          ann = stmt_ann (stmt);

          if (dump_file && (dump_flags & TDF_DETAILS))
            {
              fprintf (dump_file, "  Folded to: ");
              print_generic_stmt (dump_file, stmt, TDF_SLIM);
            }
        }

      /* Constant/copy propagation above may change the set of
         virtual operands associated with this statement.  Folding
         may remove the need for some virtual operands.

         Indicate we will need to rescan and rewrite the statement.  */
      may_have_exposed_new_symbols = true;
    }

  /* Check for redundant computations.  Do this optimization only
     for assignments that have no volatile ops and conditionals.  */
  may_optimize_p = (!ann->has_volatile_ops
                    && ((TREE_CODE (stmt) == RETURN_EXPR
                         && TREE_OPERAND (stmt, 0)
                         && TREE_CODE (TREE_OPERAND (stmt, 0)) == MODIFY_EXPR
                         && ! (TREE_SIDE_EFFECTS
                               (TREE_OPERAND (TREE_OPERAND (stmt, 0), 1))))
                        || (TREE_CODE (stmt) == MODIFY_EXPR
                            && ! TREE_SIDE_EFFECTS (TREE_OPERAND (stmt, 1)))
                        || TREE_CODE (stmt) == COND_EXPR
                        || TREE_CODE (stmt) == SWITCH_EXPR));

  if (may_optimize_p)
    may_have_exposed_new_symbols
      |= eliminate_redundant_computations (walk_data, stmt, ann);

  /* Record any additional equivalences created by this statement.  */
  if (TREE_CODE (stmt) == MODIFY_EXPR)
    record_equivalences_from_stmt (stmt, may_optimize_p, ann);

  register_definitions_for_stmt (stmt);

  /* If STMT is a COND_EXPR and it was modified, then we may know
     where it goes.  If that is the case, then mark the CFG as altered.

     This will cause us to later call remove_unreachable_blocks and
     cleanup_tree_cfg when it is safe to do so.  It is not safe to
     clean things up here since removal of edges and such can trigger
     the removal of PHI nodes, which in turn can release SSA_NAMEs to
     the manager.

     That's all fine and good, except that once SSA_NAMEs are released
     to the manager, we must not call create_ssa_name until all references
     to released SSA_NAMEs have been eliminated.

     All references to the deleted SSA_NAMEs can not be eliminated until
     we remove unreachable blocks.

     We can not remove unreachable blocks until after we have completed
     any queued jump threading.

     We can not complete any queued jump threads until we have taken
     appropriate variables out of SSA form.  Taking variables out of
     SSA form can call create_ssa_name and thus we lose.

     Ultimately I suspect we're going to need to change the interface
     into the SSA_NAME manager.  */
  if (ann->modified)
    {
      tree val = NULL;

      if (TREE_CODE (stmt) == COND_EXPR)
        val = COND_EXPR_COND (stmt);
      else if (TREE_CODE (stmt) == SWITCH_EXPR)
        val = SWITCH_COND (stmt);

      if (val && TREE_CODE (val) == INTEGER_CST && find_taken_edge (bb, val))
        cfg_altered = true;

      /* If we simplified a statement in such a way as to be shown that it
         cannot trap, update the eh information and the cfg to match.  */
      if (maybe_clean_eh_stmt (stmt))
        {
          bitmap_set_bit (need_eh_cleanup, bb->index);

          if (dump_file && (dump_flags & TDF_DETAILS))
            fprintf (dump_file, "  Flagged to clear EH edges.\n");
        }
    }

  if (may_have_exposed_new_symbols)
    VEC_safe_push (tree_on_heap, stmts_to_rescan, bsi_stmt (si));
}
/* Replace the RHS of STMT with NEW_RHS.  If RHS can be found in the
   available expression hashtable, then return the LHS from the hash
   table.

   If INSERT is true, then we also update the available expression
   hash table to account for the changes made to STMT.  */

static tree
update_rhs_and_lookup_avail_expr (tree stmt, tree new_rhs, bool insert)
{
  tree cached_lhs = NULL;

  /* Remove the old entry from the hash table.  */
  if (insert)
    {
      struct expr_hash_elt element;

      initialize_hash_element (stmt, NULL, &element);
      htab_remove_elt_with_hash (avail_exprs, &element, element.hash);
    }

  /* Now update the RHS of the assignment.  */
  TREE_OPERAND (stmt, 1) = new_rhs;

  /* Now lookup the updated statement in the hash table.  */
  cached_lhs = lookup_avail_expr (stmt, insert);

  /* We have now called lookup_avail_expr twice with two different
     versions of this same statement, once in optimize_stmt, once here.

     We know the call in optimize_stmt did not find an existing entry
     in the hash table, so a new entry was created.  At the same time
     this statement was pushed onto the AVAIL_EXPRS_STACK vector.

     If this call failed to find an existing entry on the hash table,
     then the new version of this statement was entered into the
     hash table.  And this statement was pushed onto BLOCK_AVAIL_EXPR
     for the second time.  So there are two copies on BLOCK_AVAIL_EXPRs.

     If this call succeeded, we still have one copy of this statement
     on the BLOCK_AVAIL_EXPRs vector.

     For both cases, we need to pop the most recent entry off the
     BLOCK_AVAIL_EXPRs vector.  For the case where we never found this
     statement in the hash tables, that will leave precisely one
     copy of this statement on BLOCK_AVAIL_EXPRs.  For the case where
     we found a copy of this statement in the second hash table lookup
     we want _no_ copies of this statement in BLOCK_AVAIL_EXPRs.  */
  if (insert)
    VEC_pop (tree_on_heap, avail_exprs_stack);

  /* And make sure we record the fact that we modified this
     statement.  */
  modify_stmt (stmt);

  return cached_lhs;
}
/* Search for an existing instance of STMT in the AVAIL_EXPRS table.  If
   found, return its LHS.  Otherwise insert STMT in the table and return
   NULL_TREE.

   Also, when an expression is first inserted in the AVAIL_EXPRS table, it
   is also added to the stack pointed by BLOCK_AVAIL_EXPRS_P, so that they
   can be removed when we finish processing this block and its children.

   NOTE: This function assumes that STMT is a MODIFY_EXPR node that
   contains no CALL_EXPR on its RHS and makes no volatile nor
   aliased references.  */

static tree
lookup_avail_expr (tree stmt, bool insert)
{
  void **slot;
  tree lhs;
  tree temp;
  struct expr_hash_elt *element = xmalloc (sizeof (struct expr_hash_elt));

  lhs = TREE_CODE (stmt) == MODIFY_EXPR ? TREE_OPERAND (stmt, 0) : NULL;

  initialize_hash_element (stmt, lhs, element);

  /* Don't bother remembering constant assignments and copy operations.
     Constants and copy operations are handled by the constant/copy propagator
     in optimize_stmt.  */
  if (TREE_CODE (element->rhs) == SSA_NAME
      || is_gimple_min_invariant (element->rhs))
    {
      free (element);
      return NULL_TREE;
    }

  /* If this is an equality test against zero, see if we have recorded a
     nonzero value for the variable in question.  */
  if ((TREE_CODE (element->rhs) == EQ_EXPR
       || TREE_CODE (element->rhs) == NE_EXPR)
      && TREE_CODE (TREE_OPERAND (element->rhs, 0)) == SSA_NAME
      && integer_zerop (TREE_OPERAND (element->rhs, 1)))
    {
      int indx = SSA_NAME_VERSION (TREE_OPERAND (element->rhs, 0));

      if (bitmap_bit_p (nonzero_vars, indx))
        {
          tree t = element->rhs;
          free (element);

          if (TREE_CODE (t) == EQ_EXPR)
            return boolean_false_node;
          else
            return boolean_true_node;
        }
    }

  /* Finally try to find the expression in the main expression hash table.  */
  slot = htab_find_slot_with_hash (avail_exprs, element, element->hash,
                                   (insert ? INSERT : NO_INSERT));
  if (slot == NULL)
    {
      free (element);
      return NULL_TREE;
    }

  if (*slot == NULL)
    {
      *slot = (void *) element;
      VEC_safe_push (tree_on_heap, avail_exprs_stack,
                     stmt ? stmt : element->rhs);
      return NULL_TREE;
    }

  /* Extract the LHS of the assignment so that it can be used as the current
     definition of another variable.  */
  lhs = ((struct expr_hash_elt *)*slot)->lhs;

  /* See if the LHS appears in the CONST_AND_COPIES table.  If it does, then
     use the value from the const_and_copies table.  */
  if (TREE_CODE (lhs) == SSA_NAME)
    {
      temp = SSA_NAME_VALUE (lhs);
      if (temp && TREE_CODE (temp) != VALUE_HANDLE)
        lhs = temp;
    }

  free (element);
  return lhs;
}
/* Given a condition COND, record into HI_P, LO_P and INVERTED_P the
   range of values that result in the conditional having a true value.

   Return true if we are successful in extracting a range from COND and
   false if we are unsuccessful.  */

static bool
extract_range_from_cond (tree cond, tree *hi_p, tree *lo_p, int *inverted_p)
{
  tree op1 = TREE_OPERAND (cond, 1);
  tree high, low, type;
  int inverted;

  type = TREE_TYPE (op1);

  /* Experiments have shown that it's rarely, if ever useful to
     record ranges for enumerations.  Presumably this is due to
     the fact that they're rarely used directly.  They are typically
     cast into an integer type and used that way.  */
  if (TREE_CODE (type) != INTEGER_TYPE
      /* We don't know how to deal with types with variable bounds.  */
      || TREE_CODE (TYPE_MIN_VALUE (type)) != INTEGER_CST
      || TREE_CODE (TYPE_MAX_VALUE (type)) != INTEGER_CST)
    return 0;

  switch (TREE_CODE (cond))
    {
    case EQ_EXPR:
Median for aggregating grades
Is there any easy way to use the median rather than the mean/average as the aggregation for a group of grades?
Median is not even one of the functions available if you add a formula item to the gradebook, so I cannot add a formula or calculated column to use.
I just want to get to the point where I can say: here are X homework assignments; find the median homework grade and use that in the final grade calculation.
• Hello @Eugene.S.817 ,
Thank you for your question. One way to get the median of a grade item is to export the Grade Statistics and add a column to calculate the median, using a formula such as =MEDIAN(x1:x5).
Here is some more information and tips on how to utilize Grade Statistics:
Please let me know if this is helpful!
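For graders who do adopt this export-and-compute workaround, the calculation can also be scripted instead of done in a spreadsheet. A minimal sketch (the CSV layout and the "Grade" column name are assumptions about the export format, not Brightspace specifics):

```python
import csv
import io
from statistics import median

def median_of_column(csv_text, column="Grade"):
    """Return the median of one numeric column from an exported grades CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    # Skip blank cells so ungraded students don't break the conversion.
    grades = [float(row[column]) for row in reader if row[column].strip()]
    return median(grades)

# Hypothetical export: one row per student.
sample = "Student,Grade\nA,70\nB,95\nC,80\nD,100\nE,85\n"
print(median_of_column(sample))  # -> 85.0
```

The result would still have to be entered back into a gradebook field by hand, which is exactly the limitation discussed below.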
• No, I don't think this was helpful.
(1) The article you link is for quiz statistics and is not related to the gradebook which was the focus of my question.
(2) Your proposed solution involves exporting the data and using external software such as Microsoft Excel or Google Sheets to find the median; my question was about having D2L Brightspace perform this computation natively.
This described solution is particularly unhelpful if
(a) students are using the final grade calculation as an estimate of their overall performance in the course. In that case only a mean can be used to determine final grades; graders cannot use the median score for grading and still give students a realistic estimate, not without the grader constantly downloading their grades, performing their own calculations externally, and updating a different field elsewhere in the gradebook. This means the final grade calculation will almost never be an accurate reflection of student performance, as it will lag behind the instructor's ability to manually update a field.
(b) the institution exports final grades from the final grade calculation of the gradebook to record/report elsewhere, for reasons similar to those outlined in (a).
We all learn in (likely elementary) school about different measures of central tendency: mean, median, and mode. Each has different strengths and weaknesses. The mean is not always the best tool
for assessment. This is further outlined in Joe Feldman's Grading for Equity. There are instances where grading with the median is the most equitable and helps avoid systemic bias in our
evaluation of students. It seems like an odd omission that the median should be impossible for educators to use on this platform.
Top Linear Algebra for Machine Learning Courses Online [2024] | Coursera
Explore the Linear Algebra for Machine Learning Course Catalog
Skills you'll gain: Algebra, Linear Algebra, Machine Learning
Skills you'll gain: Algebra, Linear Algebra, Mathematics, Python Programming, Computer Programming, Machine Learning Algorithms, Problem Solving, Algorithms, Computer Programming Tools, Machine
Skills you'll gain: Machine Learning, Calculus, Differential Equations, Mathematics, Machine Learning Algorithms, Regression, Algebra, Algorithms, Artificial Neural Networks, General Statistics,
Linear Algebra, Probability & Statistics, Statistical Analysis
Skills you'll gain: Algebra, Linear Algebra, Mathematics, Machine Learning, Mathematical Theory & Analysis, Computer Programming, Python Programming, Machine Learning Algorithms, Calculus,
Computational Logic, Algorithms, Differential Equations, Applied Mathematics, Problem Solving, Statistical Analysis, Data Visualization, Dimensionality Reduction, Computer Programming Tools,
Skills you'll gain: Machine Learning, Calculus, Differential Equations, Mathematics, Machine Learning Algorithms, Regression, Algebra, Algorithms, Artificial Neural Networks
• Skills you'll gain: Algorithms, Human Learning, Machine Learning, Machine Learning Algorithms, Applied Machine Learning, Machine Learning Software, Statistical Machine Learning, General
Statistics, Probability & Statistics, Regression
• Skills you'll gain: Algebra, Calculus, Linear Algebra, Mathematics, Differential Equations, Machine Learning, Machine Learning Algorithms, Mathematical Theory & Analysis, Computational Logic,
Data Visualization, Regression
• Skills you'll gain: Machine Learning, Machine Learning Algorithms, Applied Machine Learning, Algorithms, Deep Learning, Machine Learning Software, Artificial Neural Networks, Human Learning,
Python Programming, Regression, Statistical Machine Learning, Mathematics, Tensorflow, Critical Thinking, Network Model, Training, Reinforcement Learning
• Skills you'll gain: Machine Learning, Applied Machine Learning, Leadership and Management
• Skills you'll gain: Machine Learning
Searches related to linear algebra for machine learning
In summary, here are 10 of our most popular linear algebra for machine learning courses
Circle Graph Template

Don't waste time with complicated software: professionally designed circle templates can be customized and shared easily from Canva, circular diagrams in the form of mind maps can be created using Venngage's smart diagram editor, and circle diagrams for PowerPoint presentations can be downloaded with a range of styles and effects. These templates can be completely customized using Venngage's infographic maker. A circle graph is also the graph of an equation which forms a circle; this lesson will define a circle.
A circle graph (pie chart) is a circular chart divided into sections that each represent a percentage of the total. Circle graphs are popular because they provide a visual presentation of the whole and its parts, and pie chart practice worksheets use them to show the relative sizes of different categories in a population; however, pie graphs can be tricky to make by hand when the sections aren't easy fractions of the pie. As an equation, a circle with radius r and centre (0, 0) is described by x² + y² = r². Circle diagram templates are available for PowerPoint and Google Slides, and remixable graph templates from Adobe Express let you create your own design online in minutes.
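The arithmetic behind a circle graph is simple: each section's central angle is its share of 360°. As a sketch (the category values here are made up for illustration):

```python
def pie_slices(values):
    """Convert raw category values into (percent, degrees) pairs.

    Each slice's central angle is its fraction of the total times 360.
    """
    total = sum(values)
    return [(100.0 * v / total, 360.0 * v / total) for v in values]

# Hypothetical data: three categories worth 30, 45 and 25 units.
slices = pie_slices([30, 45, 25])
# -> [(30.0, 108.0), (45.0, 162.0), (25.0, 90.0)]
```

The angles always sum to 360°, which is a quick sanity check when drawing a pie chart by hand.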
Numbers With Repeated Digits
Given a positive integer N, find the number of positive integers less than or equal to N that have at least one repeated digit.
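One possible approach, sketched as a brute force (fine for small N; an efficient solution would instead count numbers with all-distinct digits via combinatorics and subtract from N):

```python
def num_dup_digits_at_most_n(n):
    """Count positive integers <= n that contain at least one repeated digit."""
    def has_repeated_digit(k):
        digits = str(k)
        # A repeated digit collapses the set, making it shorter than the string.
        return len(set(digits)) < len(digits)

    return sum(1 for k in range(1, n + 1) if has_repeated_digit(k))

print(num_dup_digits_at_most_n(20))  # -> 1 (only 11 qualifies)
```

This runs in O(N · d) time for d-digit numbers, so for large N the digit-counting approach is the intended solution.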
Applies a tactic to an interval of terms from a term obtained by repeated application of Category.comp.
slice is a conv tactic; if the current focus is a composition of several morphisms, slice a b reassociates as needed, and zooms in on the a-th through b-th morphisms. Thus if the current focus is (a
≫ b) ≫ ((c ≫ d) ≫ e), then slice 2 3 zooms to b ≫ c.
• rewrites the target expression using Category.assoc.
• uses congr to split off the first a-1 terms and rotates to a-th (last) term
• counts the number k of rewrites as it uses ←Category.assoc to bring the target to left associated form; from the first step this is the total number of remaining terms from C
• it now splits off b-a terms from target using congr leaving the desired subterm
• finally, it rewrites it once more using Category.assoc to bring it to right-associated normal form
slice is implemented by evalSlice.
slice_lhs a b => tac zooms to the left hand side, uses associativity for categorical composition as needed, zooms in on the a-th through b-th morphisms, and invokes tac.
slice_rhs a b => tac zooms to the right hand side, uses associativity for categorical composition as needed, zooms in on the a-th through b-th morphisms, and invokes tac.
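As a usage sketch (the morphism names and objects below are hypothetical, and assume Mathlib's category theory library is in scope), slice_lhs lets you rewrite an inner composite without reassociating by hand:

```lean
import Mathlib.Tactic.CategoryTheory.Slice

open CategoryTheory

example {C : Type*} [Category C] {X₁ X₂ X₃ X₄ X₅ : C}
    (a : X₁ ⟶ X₂) (b : X₂ ⟶ X₃) (c : X₃ ⟶ X₄) (e : X₄ ⟶ X₅)
    (d : X₂ ⟶ X₄) (h : b ≫ c = d) :
    (a ≫ b) ≫ (c ≫ e) = a ≫ d ≫ e := by
  -- Zoom in on the 2nd through 3rd morphisms of the LHS (b ≫ c)
  -- and rewrite that slice with h; associativity is handled for us.
  slice_lhs 2 3 => rw [h]
```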
Need a little help with fixing AAC tactics
I have a QED blow-up issue with AAC-tactics. As far as I understand, it is in maintenance mode (community maintained), so I wanted to fix it myself. From what I have learned writing several reflexive tactics, what is needed is a lazy eval in the type of decision_thm defined here:
and in Coq:
with a handful of symbols in the delta list (all symbols used in the eval function defined - see:
This eval should make the unification task trivial during both tactic application and QED time, since the type of decision_thm would then match the goal exactly.
I guess it would take me some time to figure out how to do the lazy eval in the type of the theorem in OCaml, so if someone could just tell me the magic line, I would appreciate it.
P.S.: I know how to introduce symbols from Coq into OCaml, but I would need an example of how such an external symbol is fed into the lazy delta list.
formally, I'm the maintainer of AAC Tactics, but I see the scope of my maintenance work as:
• making sure it works with Coq master
• doing releases compatible with new Coq releases
• ensuring the existing functionality is preserved
Enhancements like the ones Michael is proposing are welcome as PRs and will be merged if they are compatible with the above goals.
not sure I understand, isn't that what the convert_concl does?
ie by_aac_reflexivity does something like vm_cast (r (eval t) (eval t')); apply (decision_thm ...); vm_compute; reflexivity
what do you want it to do instead?
@Gaëtan Gilbert : the cast goes wild on the arguments of the relation (see the example I posted in the issue). What I do in all my reflexive tactics is an explicit lazy eval to make the term structurally identical to the goal, so that the unification is trivial. If one unifies arbitrary terms, there is always a chance it goes wild. As far as I can tell an explicit evaluation (with a delta list) is not done, which can be seen from the fact that e.g. eval_aux is not declared external, and thus not available at the OCaml level.
@Karl Palmskog : this is perfectly fine. I understand "community maintained" as "if you see a chance to fix an issue as user of a development, at least try it".
You have the goal, why do you need to evaluate something to get it?
The statement of derives is eval u == eval v. Here u and v are reified versions of the original goal, and eval converts this reified form back to the original Coq term. The eval function is not trivial, and the unification can simply blow up. So what I tend to do in such cases is to explicitly evaluate the eval function, so that the type of the correctness lemma is identical to the goal, not just unifiable.
See e.g. my Rosetta stone example:
If I leave out this cbv (I would use lazy now), this tactic very easily blows up at QED time. With the cbv it is safe.
(In this case modulo the fact that I evaluate some Z functions; as said in the example, to do it properly one would have to copy these. But the AAC tactics look clean in this respect.)
that does cbv in hyp, I thought that didn't change the proof term, how does it impact Qed?
I am not so sure what happens if one does the evaluation before the instantiation, but this tactic does work. I need to double check how exactly I do it in production tactics. Possibly one has to do
something slightly more contrived, like first generalize / revert the posed theorem, then evaluate in the goal and then introduce it again.
@Gaëtan Gilbert : you are right, the example is bad. In production code I do this (Ltac2):
let theorem_id:=fresh_name_not_in_local_hyps @__solve_goal in
let theorem := constr:(ExprProp_check_Implies_ExprProp_interpret $ast $ctx $ctx_proof) in
let theorem_type := Constr.type theorem in
let theorem_type_eval := Std.eval_cbv (redflags_delta_only_all (interpretation_symbols())) theorem_type in
pose ($theorem_id:=$theorem);
let thereom_inst:=constr_of_ident theorem_id in
my_repleace_refl theorem_type theorem_type_eval theorem_id;
apply $thereom_inst;
Ltac2 my_repleace_refl (a : constr) (b : constr) (hyp : ident):=
(* ltac1:( a b id |- replace a with b in id by (idtac "POS1" b; time (abstract reflexivity))) (Ltac1.of_constr a) (Ltac1.of_constr b) (Ltac1.of_ident hyp). *)
ltac1:( a b id |- replace a with b in id by reflexivity) (Ltac1.of_constr a) (Ltac1.of_constr b) (Ltac1.of_ident hyp).
(Maybe there is meanwhile a better way of doing this in Ltac2)
Thinking about this, the odd thing is that I prove the equivalence of the evaluated proof statement and the unevaluated one also with reflexivity - this should be the same as what happens if I don't do this at all. Possibly the order (what is left and right in unification) matters here.
What I can say is that I had QED problems with my tactic as well and the QED issues are gone with the above code. My tactic works in a very similar way.
I wonder if pose ($theorem_id:=$theorem:$theorem_type_eval) would work ...
@Gaëtan Gilbert : do you have some hints / pointers on how to write the eval and replace from the above Ltac2 code in an OCaml tactic?
OK, I guess it is time to go through Yves' OCaml plugin tutorial then ...
replace a with b in id by tac is Equality.replace_in_clause_maybe_by a b cl (Some tac) where cl is a https://github.com/coq/coq/blob/deaf8493b2bb10de01101423188c655c397a09f2/pretyping/locus.mli#
L41-L56 saying in id
(something like { concl_occs = NoOccurrences; onhyps = Some [(AllOccurrences, id), InHyp] })
since you know the terms involved you could also try to directly generate the terms involved, ie doing something like
pose proof (@eq_ind_r _ a ?P id b eq_refl) as id'; clear id; rename id' into id
where ?P is the generalization of the type of id over a (and you can decide if you prefer to use @eq_refl _ a or @eq_refl _ b)
@Gaëtan Gilbert : thanks for the pointers - just in time!! I meanwhile figured out how to collect the delta list for the eval lazy and got this working. It is a bit tricky because aac-tactics does
quite a few let defines in various places and I need to collect these let symbols (and then convert them to the correct type). I am learning ...
I think I meanwhile learned enough about writing Tactics in OCaml to fix AAC tactics. I have one question, though. The original code has these lines (see):
let convert_to = mkApp (r, [| mkApp (eval,[| t |]); mkApp (eval, [|t'|])|]) in
let convert = Tactics.convert_concl ~cast:true ~check:true convert_to Constr.VMcast in
let apply_tac = Tactics.apply decision_thm in
Here t and t' are the reified terms. As far as I understand this tries to convert the original (unreified) goal to the uninterpreted reified goal r (eval t) (eval t') - where r is usually = - using a
VMcast. The problem is that one needs to (partially) reduce convert_to to get to the goal. Doesn't this try to (fully) reduce the goal to get to convert_to? As far as I can see, this would be quite a
disaster and leave a much harder unification task to the apply that follows next. Of course this depends on the terms on which the AAC operator is applied, but even an (a+b)%Z doesn't look very nice
after full evaluation.
It’s unclear what kind of evars could appear here, from the comment above. But I suppose that the conversion R (eval (quoted_t)) (eval quoted_u) to the goal R t u should be cheap
It’s unclear what kind of evars could appear here
most definitions involved have some type class arguments, but I don't see why these should not be instantiated at this point. I am not aware that there are other evars around.
But I suppose that the conversion R (eval (quoted_t)) (eval quoted_u) to the goal R t u should be cheap
Well it is what QED dies on in rather trivial examples. Somehow the unification can get lost in the inner structure of the arguments to the AAC operator. See the issue example, which is essentially
x+0=x where + is replaced with some VST memory operator and x with a not too complicated VST memory predicate. QED cannot go wild on the premise, because this fully reduces to Eq = Eq, and there is
nothing else in the proof term.
But my main question is what the Tactics.convert_concl ~cast:true ~check:true convert_to Constr.VMcast does here. Does it do a full VM evaluation of the goal? I would think I can find not too complicated
examples where a full evaluation of an equality leads to astronomic (or worse) run times.
convert_concl uses Reductionops.infer_conv (kernel conversion as an evar-aware API) regardless of which cast kind it produces
@Gaëtan Gilbert : sorry, but this doesn't help me much - I don't know what Reductionops.infer_conv does exactly either and the documentation mostly mentions universe constraints, which I don't think
are a problem here.
I expect that Tactics.convert_concl ~cast:true ~check:true convert_to Constr.VMcast converts the goal to convert_to in some way and leaves some hints for the kernel. I would like to understand how
exactly it does the conversion and what hint it leaves for the kernel. There is not that much documentation for the details (or I can't find it). Especially I wonder if the Constr.VMcast means either
at tactic execution time or at QED time a VM compute is done, and if so, on which term and when.
A vm_compute (or any form of full compute) on anything which involves user supplied terms, would for sure lead to computational blow up in many cases.
IMHO the proper way of doing this is to do a full compute on the Eq = Eq premise of decision_thm, do a reduction of the conclusion of decision_thm with only the symbols used in the eval function in the
delta list, and then apply it to the goal.
it does something like match goal with |- ?g => refine (_ :? g); change ty end (where :? is whatever cast kind was given)
so there is vm computation on both sides at Qed but not at tactic time
I would like to understand how exactly it does the conversion
it's the standard lazy based kernel conversion
Thanks - this was clear!
To recap (please correct me when wrong):
• AAC tactics handles AAC operators applied to arbitrary terms
• these arbitrary terms are contained in the original goal and in the evaluated reified terms
• the Tactics.convert_concl .... Constr.VMcast does a vm compute at QED time on the original goal and the evaluated reified terms
• as a result this blows up at QED time if a vm_compute on any of the involved terms blows up
I see, agreed Michael. Isn't there a "safe" way to avoid the conversions of the arbitrary terms using set/pattern and clearbody's? I suppose other reflexive tactics might do that
@Matthieu Sozeau : there are a few problems with abstracting the user terms e.g. hiding them behind opaque definitions or, as you suggested, using clearbody in relevant subgoals. One problem is that
this is slowish - not really slow but it can easily take longer than the rest of the reflexive tactic. The other problem is that this also depends on the involved operators. Some operators fully
reduce to quite a mess even with opaque variables as arguments. Of course one could also abstract the operators ...
As I said, my preferred solution is to use lazy with an explicit delta list which contains only the symbols used in the eval function (or interpretation function as I tend to call it) and this way
make the statement of the decision theorem and the goal identical.
Last updated: Oct 13 2024 at 01:02 UTC
Numerical Literacy
This page was originally authored by Denise Flick (2010).
This page has been revised by Janet Barker (2011).
The stop-motion animation video was added by Meril Rasmussen (2016).
Numerical literacy, or numeracy, is currently a topic of great educational concern. Provincial curriculum developers across the country, and throughout the world, are working to address the issue of
numeracy. In general, the term numeracy can be used to describe two different, but related, areas of ability:
1. The ability to use basic math skills and interpret data in daily life and work.
2. The ability to engage in mathematical discussion and knowledge building at a higher level.
The first issue is of immediate concern to educators while the second issue has a more limited scope within the mathematical community.
What, might you ask, is the difference between numerical literacy (numeracy) and mathematics? Dave Vanbergyk, 2009-2010 president of the British Columbia Association of Mathematics Teachers (BCAMT),
spoke to the difference: "Mathematics is a body of knowledge that if not used can be lost. Numeracy is a set of proficiencies that once gained are forever with you". (D. Vanbergyk, personal
communication, October 1, 2009)
The United Kingdom's Department for Children, Schools and Families defines numeracy in their National Strategy documents as follows:
"Numeracy is a proficiency which is developed mainly in mathematics, but also in other subjects. It is more than an ability to do basic arithmetic. It involves developing confidence and
competence with numbers and measures. It requires understanding of the number system, a repertoire of mathematical techniques, and an inclination and ability to solve quantitative or spatial
problems in a range of contexts. Numeracy also demands understanding of the ways in which data are gathered by counting and measuring, and presented in graphs, diagrams, charts and tables."
Department for Education and Skills (UK)
Just as literacy is much more than decoding, spelling, and grammar, so too is numeracy much more than math facts and computation. Just as we wish to immerse our children in a world of literacy in
which they can develop skills and attitudes that enable them to understand, learn, appreciate, and communicate, so too do we want them to have the attitudes and skills that enable them to make sense of,
feel successful in, see the beauty in, and pursue opportunities in the world of numeracy. - D. Flick ^[1]
Why is Numeracy Important?
• Proficiency in numeracy is related to successful completion of high school and successful transition to post-secondary education and the work force.^[2]^[3]
• Numeracy is necessary to be able to interpret and verify information in the media: Results from political polls, survey data, and statistical "facts" cited in commercial advertisements are just a
few examples.
• Numeracy is important in making educated choices in health care - for example, understanding the probabilities behind false positive or false negative test results. A study published in Annals of
Internal Medicine reported that "Numeracy was strongly related to gauging the benefit of mammography" and that "Higher numeracy scores were associated with greater accuracy in applying risk
reduction information" as it applied to mammography.^[4]
• Numeracy is important for understanding mortgage rates, investments and loans.
• Numeracy is found in the skills needed to make measurements for home repair or renovation, such as determining how much paint to buy to paint a room.
• Numeracy is a requirement in a wide variety of employment opportunities.
Attention to numeracy is well-documented throughout curriculum in Canada and globally. In British Columbia, the Premier’s Technology Council addresses numeracy in its publication A Vision for 21st
Century Education. Queensland, Australia, addresses numeracy in Numeracy – Lifelong Confidence with Mathematics. Many other provinces and jurisdictions around the world have similar documents
available online to indicate how they are dealing with the issues surrounding numeracy in their educational systems. In their plan for improving literacy and numeracy in Ireland, the Department of
Education and Skills states that "The skills of literacy and numeracy equip young people to make the most of learning opportunities, to take up satisfying careers and to contribute to and participate
fully in all aspects of our culture and society". ^[5]
Innumeracy is a term developed to convey a person's inability to make sense of numbers; basically, innumeracy refers to a lack of numeracy. The term was coined by cognitive scientist Douglas R.
Hofstadter in the early nineteen-eighties. Dr. Hofstadter wrote several Metamagical Themas columns for Scientific American.
Later that decade mathematician John Allen Paulos published his book Innumeracy: Mathematical Illiteracy and Its Consequences.^[6] In this book, many numeracy concepts were explored and explained in
an entertaining and meaningful manner. The book was written with the hope of drawing attention and creating action to address the issue. David Letterman, who has self-confessed difficulty with
numbers, interviewed John Allen Paulos. Interestingly, Innumeracy was a national best seller.
Causes of Innumeracy
Innumeracy, like illiteracy, has many causes. As with all types of learning, culture plays a pivotal role in innumeracy. In his book, Mindstorms, Seymour Papert refers to the concept of
“mathophobia”. Papert states that, “The mathophobia endemic in contemporary culture blocks many people from learning anything they recognize as “math,” although they have no trouble with mathematical
knowledge they do not perceive as such."^[7] He goes on to state that difficulty with mathematics in school is often the first step in perceiving oneself as either mathematically capable or not. What
may have started as a lack of understanding, opportunity or experience may become internalized as a personality characteristic and become a block to future learning. In essence, students have
difficulty learning math simply because they think they cannot.
In his article, Counting Past 10: Numeracy versus Literacy, Drew Cayman sums up North American society's attitude towards mathematics. He states that:
"Numeracy, at heart, is a cultural value. As Americans, we do not typically pride ourselves on understanding our mortgage contracts or multiplying in our heads, and this is not a source of shame.
We readily admit “I’m not very good at math,” but would be silent if we had similar troubles with reading. There are late night infomercials for “Hooked on Phonics,” but not “Hooked on
Fractions.” We feed children alphabet soup."^[8]
Math Anxiety
A concept related to mathophobia is “math anxiety”. Math anxiety differs from mathophobia in that it is not a result of inability with numbers but is a result of emotional distress when working with
numbers. The anxiety may then lead to innumeracy. The problem of math anxiety "becomes acute when the person most afraid of numbers and equations is standing in front of the classroom trying to teach
the subject."^[9](Campbell, 2006) If a teacher suffers from math anxiety, the teacher's fears affect how math is approached in the classroom, which in turn affects the students in the classroom.
This issue is addressed in the article, How Does the Notion of Numeracy Affect Teaching?^[10]
A High School Student's Math Anxiety Success Story: "Hi. My name is Carly. And I have math anxiety."
Improving Numeracy from an Educational Perspective
Many of the proposed new curricula guidelines and online resources dedicated to improving numeracy share two common themes. First, in order for numeracy to improve, mathematics education needs to
move away from isolated, rote learning to problem-based learning where problems are situated in realistic scenarios. That is not to say that there is no place for mastery or rote learning of certain
topics to facilitate the problem solving, just that mastery of individual topics out of context leads to concepts that are learned and then lost or that students are unable to transfer to real life
situations. In The Common Curriculum Framework for Grades 10 – 12 Mathematics, problem-solving based learning requires students to encounter problems that they do not know how to solve. Valuable
problem-solving “requires students to use prior learnings in new ways and contexts.”^[11] These problems must also be presented so that they are culturally relevant and of inherent interest to the students.
Along with the other Western provinces, Alberta has adopted the Western and Northern Canadian Protocol curriculum for mathematics. Alberta Education states that, “Students ...come to classrooms with
varying knowledge, life experiences and backgrounds. A key component in successfully developing numeracy is making connections to these backgrounds and experiences.”^[12]
The second common theme appearing in curriculum around the world is that numeracy is an issue that crosses all curriculum areas. The Premier’s Technology Council states that, “In some jurisdictions,
numeracy discussions have previously focused on computational skills, but it is increasingly believed that if numeracy is to improve student’s use of mathematics in life, numeracy education cannot be
restricted to math classes but should be implemented across the overall curriculum.” ^[13]
Here are just a few examples of how numeracy is important and can be integrated throughout many branches of the curriculum:
│ Subject │ Examples of Numeracy Skills │
│Social Studies │Reading graphs and maps, understanding population trends │
│Science │Interpreting graphs, manipulating formulae │
│Home Economics │Budgets, altering recipes, altering patterns, dress designing │
│Trades │Fractions in measurement, angles for building stairs │
│Accounting │Creating spreadsheets, manipulating data, balancing accounts │
│Physical Education│Laying out playing fields (measurements), using angles of incidence and reflection for making shots in hockey and billiards, laying out play patterns in football, keeping score│
When designing educational media related to numeracy, designers should consider how aspects of problem-based learning and cross-curriculum activities could be incorporated into their designs.
Problem-based learning generally requires collaboration and working in groups which would create a need for specific interaction affordances within the design medium.
Numeracy and Upper Level Mathematics
While numeracy is important for every member of society across all jobs and lifestyles, numeracy is particularly important for that much smaller segment of the population that goes on to study
formal, post-secondary mathematics. The Western and Northern Canadian Protocol states that there are two types of problem solving in mathematics – contextual problems and mathematical problems. A
contextual problem is based on a real-life problem, such as determining the volume of paint required to paint a room. A mathematical problem is purely mathematical, for example proving the
Pythagorean Theorem. While the techniques arising from a mathematical problem may be used in a contextual problem, the goals are quite different. A contextual problem needs a practical answer; a
mathematical problem is purely theoretical, and these types of problems form the basis of much of higher mathematics.
Great Math Reads
Burns, M. (1998). Math: Facing an American Phobia. Sausalito CA: Math Solutions Publications.
Ma, L. (1999). Knowing and Teaching Elementary Mathematics. Mahwah NJ: Lawrence Erlbaum Associates.
Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.
Sousa, D. A. (2008). How the Brain Learns Mathematics. Thousand Oaks CA: Corwin Press.
Tobias, S. (1993). Overcoming Math Anxiety. New York NY: Norton Publications.
Lockhart, P. (date unknown). A Mathematician's Lament
Additional Information and Related Sites
BBC - Schools Ages 4-11 - Numeracy
BBC - Schools Ages 11-16 - Maths
Numeracy - a list of sites from Grande Prairie School District
Trends in International Mathematics and Science Study
British Columbia Association of Mathematics Teachers - Numeracy
Sir Ken Robinson on Education (TED Video)
Problem-Based Learning and Math Education - ETEC 510 Design Wiki
Stop Motion Animation Video
Number Sense (2016) by Meril Rasmussen. Duration: 2:26. On the importance of number sense at the elementary level.
Improved precision of hydraulic conductance measurements using a Cochard rotor in two different centrifuges
Received: 15 August 2014; Published: 20 October 2014
An improved way of calculating hydraulic conductance (K) in a Cochard cavitron is described. Usually K is determined by measuring how fast water levels equilibrate between two reservoirs while a stem
spins in a centrifuge. A regression of the log of meniscus position versus time was used to calculate K, and this regression method was compared to the old technique, which takes the average of discrete
values. Results from a hybrid Populus 84K clone show that the relative error of the new approach is 4~5 times lower than that of the old technique. The new computational method yields a
relative error of less than 0.5% or 0.3% from 8 or 12 points of measurement, respectively. The improved precision of K measurement also requires accurate assessment of stem temperature, because
temperature changes K by 2.4% °C^-1. A computational algorithm for estimating stem temperature stability in a cavitron rotor was derived. This algorithm provides information on how long it takes for
stem temperature to be known to within an error of ±0.1 °C.
The cavitron technique has been used for many years to measure vulnerability curves (VCs) of woody stems (Cochard 2002, Cochard, Damour et al. 2005). VCs have been widely viewed as a good measure of
the drought resistance of woody plants (Cochard et al. 2013). Increasing drought will induce cavitation of water held in stem conduits (vessels or tracheids). A cavitation event occurs when water
columns break under tension (=minus the pressure of water in xylem). A cavitated vessel first fills with water vapor then eventually fills with air-bubbles at atmospheric pressure because of Henry’s
law of gas solubility at water/air interfaces. The time required for equilibrium of air-pressure depends on the rate of air penetration into the recently-cavitated vessel lumen via diffusion in the
liquid phase (Fick’s Law).
We have been conducting experiments that address the tempo of air bubble formation in recently cavitated vessels. The equilibrium for air bubble formation is defined by Henry’s law, from which we can
predict that eventually the air pressure inside a cavitated vessel must equal ambient atmospheric pressure. While doing initial experiments we realized we needed to improve the precision of hydraulic
conductance (K) measurement in stems spinning in a Cochard cavitron in order to study how long it takes air bubbles to form. Current measurements are reproducible to about 2% (see for example Cai et
al 2014) over short periods of time (minutes). Over longer periods of time (1-2 h) K can also be influenced by the change of temperature in the centrifuge while the rotor is spinning due to
non-stable temperature control in some centrifuges. If a spinning stem experiences a temperature change of ±1 °C then this will cause a change in measured K of ±2.4% because of the effect of
temperature on the viscosity of water near room temperature.
Materials and Methods
Hydraulic conductance, K, can be measured in a cavitron at both high negative pressure and low (sub-atmospheric) pressure. A multi-point measurement of K can be completed in 20 to 60 seconds and
hence we can assume K is constant during the measurement. In the traditional method, K is measured several times over small time and volume increments by observing two menisci: one that is stationary
and one that moves towards it from a distance x away. During the measurement the moving meniscus travels a distance Δx in a cuvette in a time interval Δt, and the flow rate is evaluated from
F = ρA[w]Δx/Δt (kg s^-1), where ρ is the density of water and A[w] is the cross-sectional area of water in the cuvette. At the same time the pressure drop causing flow is computed from
ΔP = 0.5ρω^2(2Rx̄ - x̄^2), where x̄ is the value of x at the midpoint of Δx, ω is the angular velocity of the spinning rotor, and R is the maximum radius from the axis of rotation to the lower
stationary meniscus. In Cochard's original equations a lower case 'r' is used in place of x. The value of K is calculated from

K = F/ΔP = ρA[w](Δx/Δt) / [0.5ρω^2(2Rx̄ - x̄^2)]    (1)

The problem with Eq. (1) is that the time and distance intervals are rather small and hence the standard deviation of repeated measurements of Δx/Δt is rather large (around 10%).
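As a concrete illustration of the traditional calculation, here is a minimal Python sketch. The rotor speed, radius, and cuvette cross-section below are placeholder values chosen only to make the example run; they are not the constants of the apparatus used in this study.

```python
import math

# Placeholder apparatus constants (illustrative only, not from this study)
RHO = 998.0                        # density of water, kg m^-3
A_W = 1.0e-6                       # cross-sectional area of water in the cuvette, m^2
OMEGA = 2 * math.pi * 5000 / 60    # angular velocity at 5000 rpm, rad s^-1
R = 0.127                          # radius to the lower stationary meniscus, m (2R = 254 mm)

def k_traditional(x0, x1, dt):
    """One discrete estimate of K from a meniscus moving from x0 to x1 (m) in dt (s).

    Flow rate:     F  = rho * A_w * dx/dt                          (kg s^-1)
    Pressure drop: dP = 0.5 * rho * omega^2 * (2*R*xbar - xbar^2)  (Pa)
    Conductance:   K  = F / dP                                     (kg s^-1 Pa^-1)
    """
    dx = x0 - x1               # the meniscus moves toward the stationary one
    xbar = 0.5 * (x0 + x1)     # value of x at the midpoint of the interval
    flow = RHO * A_W * dx / dt
    dp = 0.5 * RHO * OMEGA**2 * (2 * R * xbar - xbar**2)
    return flow / dp
```

Averaging several such discrete estimates reproduces the traditional mean and SE; the roughly 10% scatter quoted above comes from the small Δx and Δt intervals, not from this arithmetic.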
One of us (YW) noticed during experiments that x (the distance of the moving meniscus from the stationary meniscus) declines exponentially with time towards 0 at constant ω, i.e., a plot of ln(x)
declines linearly with t. This behavior follows from Eq. (1) written in differential form (dx/dt = Δx/Δt) when we neglect x^2, which is small compared to 2Rx (typically x < 3 mm versus 2R = 254 mm).
So we tried measuring the slope, m, of a plot of ln(x) versus t and computed K from the solution of differential Eq. (1) ignoring x^2. The solution of Eq. (1) ignoring x^2 is

x = x[0] exp(-Kω^2Rt/A[w])    (2)

Hence a plot of ln(x/x[0]) versus t will yield a (negative) slope, m, from which we can compute K

K = -mA[w]/(ω^2R)    (2B)
The linear regression provides only one value for the slope, but a standard regression analysis can be used to compute the standard error of the slope, m. Preliminary experiments demonstrated to us
that the computation of K from Equation (2) had a much smaller SE than repeated measurements with Eq. (1). So we started looking for the exact solution of the differential equation resulting from Eq. (1).
The solution provided by (RB) is:

ln(x[0]/x) + ln((2R - x)/(2R - x[0])) = (Kω^2R/A[w])(t - t[0])    (4)
where t[0] is equated to zero, i.e., the time when we begin to record the movement of the upper meniscus (at x[0]), and t is the time when we record the passage of the meniscus at position x. The
second natural log term in Eq. (4) contributes 1 to 2% to the slope depending on the range of x. Hence our initial use of just ln(x[0]/x) = constant - ln(x) was approximately correct, but the exact
solution is preferable. Theoretically t should be proportional to y = ln(x[0]/x) + ln((2R - x)/(2R - x[0])), so we compute y versus t, and from the regression we obtain the slope, m, to calculate K as
shown in Eq. (2B) above.
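The regression computation can be sketched the same way. From Eq. (1), the quantity y = ln(x[0]/x) + ln((2R - x)/(2R - x[0])) is exactly linear in t with slope m = Kω^2R/A[w], so K follows from an ordinary least-squares slope. The constants are again placeholders, not the apparatus values.

```python
import math

# Placeholder apparatus constants (illustrative only, not from this study)
A_W = 1.0e-6                       # cross-sectional area of water in the cuvette, m^2
OMEGA = 2 * math.pi * 5000 / 60    # angular velocity at 5000 rpm, rad s^-1
R = 0.127                          # radius to the lower stationary meniscus, m

def k_regression(times, xs, x0=None):
    """K from the least-squares slope of y versus t, where
    y = ln(x0/x) + ln((2R - x)/(2R - x0)) and slope m = K * omega^2 * R / A_w."""
    if x0 is None:
        x0 = xs[0]
    ys = [math.log(x0 / x) + math.log((2 * R - x) / (2 * R - x0)) for x in xs]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    # Ordinary least-squares slope of y on t
    m = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys)) \
        / sum((t - tbar) ** 2 for t in times)
    return m * A_W / (OMEGA**2 * R)
```

A standard regression analysis on the same (t, y) pairs also yields the standard error of the slope, and hence of K, which is the quantity compared between methods below.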
Experiments for computation of K
Test experiments were conducted on shoots of the clonal hybrid Populus 84K (Populus alba × Populus glandulosa). Branches were cut from trees growing on the campus of Northwest A&F University. Segments
0.274 m long and 6-7 mm in basal diameter were cut from the harvested branches (> 0.5 m long) under water, and were fully flushed with 10 mM KCl under 200 kPa pressure for 30 min.
A Cochard cavitron was used to test our improved method, in which we used a 20X or 40X microscope to observe the water level changes. The difference between two holes on the two reservoirs was 6.5 mm
or 3.8 mm depending on the microscope used. Some measurements were performed on a cavitron rotor in a Beckman Coulter model Allegra X-22R centrifuge and the temperature-dependent experiments were
done in a Xiang Yi model H2100R centrifuge because it had better temperature regulation.
We measured vulnerability curves to see whether hydraulic conductance obtained by two methods (regression and traditional) were the same at different tensions. We collected typically 11 points of x
versus t and obtained the means and SE of the slope by the regression method and the mean and SE by the traditional method using 10 values of Δx and Δt and equation (1) to compute K and then computed
the traditional mean and SE.
Since the two methods manipulate the same data set, it is essential to recognize that the methods being compared differ by a purely computational step, except in one way. In the old
method Δx and Δt (Eq. 1) are usually determined 6 to 11 times by refilling the cuvette 2 or 3 times during the sequence of measurements and applying Eq. (1) 6 to 11 times. In this study the cuvette
is filled only once and 8 to 11 measurements of Δx and Δt are made, which provides data sets that can be evaluated by use of Eq. (1) 8 to 11 times and this is compared to a regression using Eq. (4).
Since the time required for the meniscus to move a fixed distance, Δx, increases as the experiment progresses the value of Δt also increases, so successive measurements of K using Eq. (1) become more
accurate as the measurements progress. This should make the ratio of (standard error)/(mean K) smaller hence giving a more accurate mean K. Despite this trend we will demonstrate that the ratio
decreases faster by the regression-computational method.
Temperature tests: an algorithm to estimate T[stem]
Enhanced precision of measurement of K is of little value if the stem temperature, T[stem], is unknown and variable, because K is proportional to 1/η where η is the viscosity of water. The value of
1/η changes by about 2.4% °C^-1 near 20 °C. The temperature dependence of 1/η ranges from 3.3 to 1.95% °C^-1 for temperatures from 0 to 40 °C, respectively. Hence T[stem] must be known or controlled
to within ±0.1 °C to achieve ±0.24% accuracy in the computation of percent loss of conductance (PLC), which needs to be determined by repeated measurements of K at the same T[stem] but differing
tensions during the construction of a vulnerability curve.
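For a sense of scale, the temperature sensitivity of 1/η can be checked against the Vogel (Andrade-type) empirical fit for the viscosity of water. The constants below are one commonly used parameterization, an outside assumption rather than anything taken from this paper.

```python
def water_viscosity(t_celsius):
    """Dynamic viscosity of water (Pa s) from the Vogel equation,
    eta = A * 10**(B / (T - C)) with T in kelvin.
    A, B, C are one common empirical fit (an assumption, not from this paper)."""
    T = t_celsius + 273.15
    A, B, C = 2.414e-5, 247.8, 140.0
    return A * 10 ** (B / (T - C))

def fluidity_sensitivity(t_celsius, dt=0.01):
    """Fractional change of 1/eta per degree C around t_celsius (central difference)."""
    f_lo = 1 / water_viscosity(t_celsius - dt / 2)
    f_hi = 1 / water_viscosity(t_celsius + dt / 2)
    return (f_hi - f_lo) / ((f_lo + f_hi) / 2) / dt
```

Evaluated near 20 °C this fit gives roughly 2.4% °C^-1, consistent with the figure used above.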
The approach we took was to make use of the fact that a stem is a de facto thermometer when repeated measurements of K are made at low tension, i.e., when only temperature affects K. Most centrifuges
have refrigeration (or a heat pump that can both heat and cool) and a thermostatic way to set and control temperature. But thermostatic control of temperature is never completely precise. In order to
assess the impact of fluctuations of temperature inside a centrifuge on T[stem], we programmed large changes in temperature and monitored changes in K vs time while measuring the air temperature
inside the centrifuge with a temperature sensor near the rotor. Many types of temperature sensors are capable of ±0.1 ^oC resolution after calibration; these include thermistors, thermocouples and
LM335 chips; we used the latter. But air temperature will not reflect the likely temperature of the spinning stem. Hence we tried to devise a computational algorithm to compute T[stem] adequately to
predict the observed changes in K. Then this algorithm was used to assess the ability of different centrifuges to control T[stem].
We tested two kinds of computational algorithms: The first was a running mean of the air-temperature in the centrifuge. We monitored air-temperature every 5 seconds, and we computed the running mean
air temperature for various lengths of time. The second algorithm was a first-order rate equation in which the change in stem temperature over any time interval is given by

ΔT[stem] = αΔt(T[air,i] - T[stem,i]),    (5)
where α = the ‘heat transfer’ rate constant and Δt is the time step. Hence if we know T[stem] at time t = 0 (T[stem,0]) then the stem temperature at a later time (T[stem,t]) after n measuring intervals
is given by

T[stem,t] = T[stem,0] + Σ(i=0 to n-1) αΔt(T[air,i] - T[stem,i])    (6)
In the first algorithm nothing is said about the initial stem temperature (T[stem,0]) but it is assumed that if we hold air temperature constant long enough then a running mean of some time-period
can be found to predict T[stem] within ±0.1 ^oC and this running mean length can be determined experimentally. In the second algorithm we do not need to know T[stem,0] but if we hold the temperature
constant long enough then the sum in Eq.(6) will make the value of T[stem,t] converge on the real T[stem] within ±0.1 ^oC. Experiments were done to see if the first or second algorithm or a
combination of the two algorithms can be used to predict T[stem] when T[air] is dynamically changing.
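The first-order update rule above amounts to a simple exponential-smoothing iteration. Here is a minimal Python sketch, where α, the logging interval, and the starting temperature are placeholder values, not the calibrated constants of either centrifuge.

```python
def estimate_stem_temperature(t_air_series, dt, alpha, t_stem0):
    """Iterate dT_stem = alpha * dt * (T_air - T_stem) over logged air temperatures.

    alpha is the 'heat transfer' rate constant (s^-1) and dt the logging
    interval (s). With T_air held constant, the estimate converges to T_air
    regardless of the (possibly wrong) initial guess t_stem0.
    """
    t_stem = t_stem0
    history = []
    for t_air in t_air_series:
        t_stem += alpha * dt * (t_air - t_stem)   # first-order relaxation step
        history.append(t_stem)
    return history
```

This convergence property is why the initial T[stem,0] need not be known: after the temperature has been held constant long enough, the running estimate is within ±0.1 °C of the true stem temperature.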
Results and Discussion
Figure 1 shows the values of K versus tension in Populus clone 84K which is typical of a vulnerability curve measured in July because in July the stem contains current-year and previous-year vessels
which differ in ‘P[50]’ because of frost fatigue (unpublished results). Similar data have been obtained on >10 branches. Data for K were computed by both methods: regression (Eq. 4) and traditional
(mean of discrete measurement in Eq. (1)). We concluded that hydraulic conductance acquired by the two methods were the same but the regression method was more precise. The residuals were randomly
distributed indicating that the regression approach can give unbiased values that were independent of hydraulic conductance. The difference between K measured by the regression versus traditional
methods was less than 0.3% but the precision of the regression method was superior. The error (SE/Mean) of the regression method was smaller than that of the traditional method by a factor of 4~5, as
shown in Fig. 1C, which means that the regression method was 4 to 5 times more ‘precise’.
Figure 2 shows four examples of the regression fitting of y versus t. r^2 for the four fits was > 0.999 in all cases and typically > 0.9999 except at the lowest K-values; the residuals were random
in Figure 2B and slightly shifted in Figure 2D. Residuals in Figure 2D were not quite random because of the very low hydraulic conductance at the end of the vulnerability curve and because new
embolism may have been occurring while the measurement was in progress.
Figure 1. This shows values of K* obtained from the regression method (Eq. 4) versus values of K obtained by the traditional method (Eq. 1). Panel A: a plot of K (closed circles) and K* during the measurement of a vulnerability curve. Panel B: the linear regression of K and K*; the slope was nearly one (SE = 1.408E-3 and p = 1.8E-47) and the intercept was not significantly different from zero (SE = 1.5E-7 and p = 0.9215). Panel C: a comparison of the relative error (SE/mean) of the regression versus the traditional method. Panel D: the residuals of the linear regression in panel B.
Figure 2. This shows examples of linear regressions used to predict hydraulic conductance. Panel A: two examples at tension 0.088 and 0.127 MPa at the beginning of a vulnerability curve, and panel B:
the residuals of the linear regressions in panel A. Panel C: two examples at 2.207 and 2.387 MPa at the end of a vulnerability curve, and panel D: the residuals of the linear regression in panel C.
In panels A and C, x[0] is the distance between the two menisci at the beginning of the measurement, x is the distance between the water levels at time t, and R is the distance from the rotational axis to the lowest water level in the cuvette with the lowest hole.
Figure 3 shows how the error changed with the number of points used in a regression. To evaluate the impact of the number of points on SE/mean K (SE/K̄), we chose 5 to 11 points from every original curve in the vulnerability curve and compared how SE/K̄ changed with N. At a given number of points, N, the regression means and errors were calculated by least squares, and the SE/K̄ of the traditional method was computed from the discrete values (N was the number of recorded times or distances, so the total number of discrete hydraulic conductances used in the averaging approach was N−1). Furthermore, we compared the averages of the SE/means over the first, second and third parts of the vulnerability curve, i.e. its beginning, middle and end.
Figure 3. This shows a comparison of the error relative to the mean (SE/mean) of the regression and traditional methods. Panel A: relative SE/mean versus the number of measurements; the diamonds refer to the average SE/mean computed from all the points in the VC in Fig. 1A, and the circles represent the average SE/mean of the regression method (averaged over the entire VC). Panel B: the ratio of the two relative errors versus the number of measurements. Open circles represent the ratio of the regression and traditional SE/means over the whole curve; open squares, open triangles and crosses represent the ratio of the two means in the first, second and third parts of the vulnerability curve, respectively. The smooth line is a polynomial fit to the open-circle values.
Average SE/K̄ (traditional method) and SE/K̄* (regression method) are plotted in Figure 3A. Average SE/K̄* was always lower than SE/K̄, and the standard errors of SE/K̄* were always lower than those of SE/K̄. Figure 3A shows that as N increased both methods gained precision and the deviation of SE/mean fell, but the regression approach yielded an average SE/mean that converged to lower values faster than the traditional method, as shown by the ratio in Figure 3B. Notably, when we collected ≥11 points for a regression the error was less than 0.3%; moreover, collecting 11 points took <30 s longer than collecting 5 points.
Temperature regulation
Without refrigeration, the air temperature inside a centrifuge experienced by the rotor will increase with rotor speed because of the heat dissipation of the rotor motor. The proper physical design of the heating-cooling systems of centrifuges is essential to achieve accurate temperature control. Critical design factors include the type of refrigeration system used and the location of the thermostatic temperature sensor. Temperature control by refrigeration is better if the system can both heat and cool (a reversible heat pump). If the refrigeration system can only be turned on and off (standard refrigeration), then the responsiveness of the system is compromised in the heating phase because heat is derived only from passive heat flow from the rotor motor and from the ambient lab temperature. The placement of the thermostatic temperature sensor is also critical to good temperature control: if the sensor used by the thermostat is placed remotely from the rotor, the rotor can be at quite a different temperature than the sensor location. Improper location of the thermostat sensor will degrade the quality of temperature control even if refrigeration control is of the highest quality.
We have recently taken delivery of a new cavitron system from Xiang Yi Instrument Co. (a modified model H2100R centrifuge), which is capable of controlling and setting air temperature to the nearest 0.1 ^oC. The heat pump in the Xiang Yi centrifuge has a 'reversing valve', so it actively heats and cools, whereas the Beckman Coulter centrifuge (Allegra X-22R) only cools, making the air temperature fluctuate by ±2 ^oC even when the thermostatic setting is unchanged at constant RPM (see Fig. 4A). There was also often a large temperature gradient in the Beckman Coulter centrifuge between the centrifuge's temperature sensor and the temperature independently measured near the rotor, and the gradient increased with RPM (see Fig. 4B). We compensated for this temperature gradient by progressively decreasing the thermostat setting as RPM increased, according to the actual air temperature measured with our own temperature sensor placed near the rotor (see Fig. 4C). The temperature gradient within the Xiang Yi is negligible because of the better placement of its thermostat temperature sensor. Hence accurate control of temperature, and therefore of viscosity, is achieved by the more precise temperature control of the Xiang Yi cavitron.
Figure 4. This figure compares the ability of two centrifuges to control the air temperature near the rotor. Allegra = the Beckman-Coulter model Allegra X-22R and H2100R = the XiangYi model H2100R.
A. This shows the air temperature near the rotor while spinning at a constant 1000 RPM but changing the temperature set-point of the centrifuge. B. Air temperature near the rotor while keeping the
temperature set point at 25 ^oC while increasing RPM. C. Air temperature near the rotor in the Allegra while adjusting the temperature setting to compensate for heating due to RPM increase.
How T[stem] depends on the air temperature.
Because the value of T[stem] cannot be measured directly, and the uncertainty of the stem temperature has a large impact on values of K, we performed an experiment to find an algorithm for computing T[stem] even when T[air] inside the centrifuge changes dynamically with time. The experiment is illustrated in Fig. 5.
Figure 5 shows the tempo of change in centrifuge air temperature, 5-min running mean air temperature and K measured on a Populus 84K clone. The solid line is the experimentally imposed change in air
temperature. The solid circles show the response of the stem temperature.
The Xiang Yi centrifuge was adjusted to three different constant temperature values: 25, 15 and 5 ^oC. The thermostat of the centrifuge could maintain a constant air temperature with an SD of ±0.11, 0.17 and 0.29 ^oC at the thermostat settings of 25, 15 and 5 ^oC, respectively. The 5 min running mean of that air temperature was a smooth line with a small SE of < ±0.04 ^oC at all set-points. The stem conductance, K, was measured repeatedly and declined slowly compared with the air temperature, indicating that T[stem] declined much more slowly than the air temperature. The tempo of the decline of K is primarily a function of T[stem], but it is not exactly exponential: the last half of the decline has a half-time of about 17 min whereas the first half was closer to 10 min.
Fig. 6 shows the correlation between several running-mean air temperatures and K. The best correlation was obtained with a running-mean air temperature between 30 and 35 minutes, but a much stronger correlation was found using the first-order rate model (Eq. 6, see Methods) with a 5 min running-mean air temperature as input. The correlation was nearly as good using the actual air temperature read at 5 s intervals (data not shown). The combined model (Fig. 6D) worked best because the stem was deep inside the rotor, so there would be a small time delay (a few minutes) between a temperature change in the air of the centrifuge and the corresponding change in the metal next to the stem; the 5 min running-mean air temperature simulated this delay adequately. The curve in Fig. 6D was fitted using the Solver function in Excel, which adjusted two parameters, α and the initial T[stem] in Eq. (6), to obtain the best fit. The best-fit values were α = 9.942E-4 s^-1 and T[stem,0] = 25.74 ^oC.
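A minimal sketch of this first-order rate model can make the fitting step concrete. This is an illustration under stated assumptions, not the paper's actual Solver worksheet: the stem temperature is assumed to relax toward the (smoothed) air temperature at rate α via an explicit Euler step, and a coarse grid search stands in for Excel's Solver. The α and T[stem,0] values reuse the best-fit numbers quoted above purely for illustration.

```python
# First-order rate model (Eq. 6 style): dT_stem/dt = alpha * (T_air - T_stem),
# stepped forward with an explicit Euler rule from an initial temperature t0.

def stem_temp(t_air, dt, alpha, t0):
    """Integrate the model over a sequence of air-temperature readings."""
    temps, ts = [], t0
    for ta in t_air:
        ts += alpha * dt * (ta - ts)
        temps.append(ts)
    return temps

dt = 5.0                              # 5 s sampling interval, as in the text
t_air = [15.0] * 2000                 # a step change from 25.74 down to 15 oC
alpha_true, t0 = 9.942e-4, 25.74      # the paper's fitted values, reused here
obs = stem_temp(t_air, dt, alpha_true, t0)

# Recover alpha with a coarse grid search (a stand-in for Excel's Solver)
def sse(alpha):
    return sum((p - o) ** 2 for p, o in zip(stem_temp(t_air, dt, alpha, t0), obs))

best_alpha = min((5e-4, 7e-4, 9e-4, 9.942e-4, 1.1e-3), key=sse)
print(best_alpha)                       # recovers the generating value
half_time = 0.693 / best_alpha / 60.0   # ~11.6 min, between the 10-17 min tempos
```

The implied half-time ln(2)/α of roughly 11 to 12 minutes sits between the ~10 and ~17 min tempos noted for the two halves of the decline, which is why a single first-order rate constant fits only approximately.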
Figure 6 shows how models of T[stem] correlate with K. The three best running mean models (Eq. 5, see methods) are shown in A, B, and C for running means of 25, 30 and 35 min, respectively. Graph D
shows the fit for Eq. 6 (see methods), which was a first-order rate reaction model using a 5 min running mean air temperature as the input.
Figure 7. This shows our estimate of the error in the estimation of T[stem], computed from the residuals in Fig. 6D. See text for details. A: the temperature error as a function of T[stem]; B: the temperature error as a function of the speed of temperature change in the original experiment (Fig. 5). T[stem] is probably more stable than the y-axis values indicate because some of the error can be accounted for by the error in measuring K, whereas the error values shown assume all errors are due to changes in 1/η.
It was surprising to us that T[stem] could be fitted by first-order thermal rate theory, but it is common practice to 'lump' temperature changes in this way when the thermal diffusivity of the substances surrounding an object is greater than the thermal diffusivity of the object. Thermal diffusivity is defined as D = h/(ρC[p]), where h is the thermal conductivity, ρ is the density of the material and C[p] is the heat capacity. When D is used to compute thermal equilibration, equations analogous to Fick's law can be used to compute changes in T in place of concentration. The values of D (m^2 s^-1) for the aluminum rotor, air and stem were ~6x10^-5, 1.9x10^-5 and 0.014x10^-5, respectively (assuming the stem D equals the value for water). Hence the condition for the first-order thermal rate approximation was met.
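As a check on the magnitudes quoted above, the definition D = h/(ρC[p]) can be evaluated with rough textbook property values. These inputs are our assumptions, not the paper's data; a pure-aluminum value is used, whereas an alloy rotor would come out somewhat lower, closer to the ~6x10^-5 quoted.

```python
# D = h / (rho * Cp): h = thermal conductivity (W m^-1 K^-1),
# rho = density (kg m^-3), Cp = heat capacity (J kg^-1 K^-1).
# Property values are rough textbook numbers, used only for the magnitudes.
materials = {
    "aluminum": (237.0, 2700.0, 897.0),
    "air":      (0.026, 1.2, 1005.0),
    "water":    (0.60, 1000.0, 4184.0),   # stand-in for the stem
}
D = {name: h / (rho * cp) for name, (h, rho, cp) in materials.items()}
for name, d in D.items():
    print(f"{name:8s} D = {d:.2e} m^2 s^-1")
# rotor >> air >> stem by orders of magnitude, so lumping the stem
# temperature into a single first-order rate constant is justified.
```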
In order to achieve a K-measurement accuracy of ±0.3%, we need to show that the temperature changes by less than ±0.1 ^oC over the period of measurement of one K-value. However, if we want to measure a highly accurate vulnerability curve, then we need to compute many values of percent loss of hydraulic conductivity, PLC = 100%(1−K/K[max]); in this case the temperature has to be constant to ±0.1 ^oC for K[max] as well as for all other K values measured over a period of 1 h or more. The residuals in Fig. 7 show that if the air temperature inside the centrifuge changes dynamically, T[stem] is known to only ±0.4 ^oC, which results in a PLC error of about ±1%. The control of T[stem] is better if the air temperature does not change dynamically.
If highly accurate vulnerability curves are desired, then the temperature of the stem must be controlled to <±0.1 ^oC. If a Cochard rotor is used, then this highly accurate temperature control has to be maintained inside the centrifuge that spins the rotor. If a Sperry rotor is used, then the temperature has to be controlled outside the centrifuge because K is measured in a conductivity apparatus. Since the temperature of an air-conditioned room usually fluctuates by ±1 or 2 ^oC, we recommend keeping stems immersed in a constant-temperature bath accurate to ±0.05 ^oC and waiting for thermal equilibrium between the bath and the stem before commencing the determination of K. If a Cochard rotor is used, then all K-measurements are made while the rotor is spinning; in this case the selection of a centrifuge with good air-temperature control is essential.
As illustrated in Fig. 4 the temperature regulation of the H2100R is superior to the Allegra X-22R. The Allegra X-22R has a 5-min running mean air temperature fluctuation of ±2 ^oC, as measured with
an independent LM335 temperature sensor mounted near the rotor. In addition a thermal gradient between the rotor and internal temperature sensor, used by the Allegra X-22R to control the
refrigeration, caused large deviations in mean air temperature as rotor RPM increased due to the heat generated by the rotor-motor. In contrast, the H2100R controlled the 5-min running mean
air-temperature within ±0.04 ^oC (except when there is a large step-increase in the thermostat setting), hence we are confident that T[stem] can be stabilized to within <±0.1 ^oC. To achieve this
level of temperature stability we recommend setting the H2100R equal to the mean lab air temperature and spinning the stem for at least 1 h prior to measuring the vulnerability curve to be sure that
the T[stem] has approached a stable temperature. If there is reason to believe the initial T[stem] differed from the mean lab temperature by more than 1 ^oC prior to placement in the centrifuge then
a 1.5 to 2 h equilibration period might be warranted prior to commencing the measurement of a vulnerability curve.
Having addressed the two main sources of error in the computation of K and PLC, we need to review briefly how other factors lead to a propagation of errors. For PLC the situation is simple because PLC/100 = 1 − K/K[max] = 1 − (m ω[max]^2)/(m[max] ω^2).
In the PLC computation the slopes m and m[max] have the biggest errors, whereas in modern centrifuges the angular velocity, ω, has a relative error <0.04% for most of the vulnerability curve. Since the errors of m and m[max] are about an order of magnitude larger and roughly equal, the overall error on PLC ≅ √2 × (the error of m or m[max]).
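The √2 factor follows from standard propagation of independent relative errors through the ratio m/m[max]. A quick numerical check, with an assumed 0.3% slope error (an illustrative value, not from the paper):

```python
# If m and m_max carry equal, independent relative errors e, the relative
# error of the ratio m/m_max is sqrt(e^2 + e^2) = sqrt(2) * e; the small
# angular-velocity error (<0.04%) is neglected here.
import math

e_slope = 0.003                       # assumed 0.3% relative error per slope
e_ratio = math.hypot(e_slope, e_slope)
print(f"{e_ratio:.4%}")               # ~0.42%, i.e. sqrt(2) x 0.3%
```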
In contrast, if the objective is to measure K in a cavitron rotor, then the errors to be considered are based on the math of Eq. (2), and for K[h] (conductivity = K × L) there is the uncertainty of L to be added. We leave it to the reader to estimate the likely uncertainty of A[w], R, and L, but we would estimate the combined uncertainty to be about 1.2% or more.
In conclusion, the method we developed increased the precision of hydraulic conductance measurement by a factor of 4 to 5, as measured by SE/mean, when a regression was done on 10 to 12 points, which usually required less than 1 min per regression. Further improvement of the absolute precision of K measurement below ±1% would require improved methods of measuring and/or cutting L, R, and stem diameter. Paying attention to temperature stability and to the precision of K measurement will improve the precision of vulnerability curves. In our opinion, minor deviations in the shape of vulnerability curves may in the future be correlated with xylem anatomy, so more precise measurement techniques are needed. We recommend the methods used in this paper to anyone using a Cochard cavitron to measure vulnerability curves.
Acknowledgements: this research was made possible by a 5-year 1000-talents research grant awarded to MTT.
• Cai J, Li S, Zhang H, Zhang S, Tyree MT 2014. Recalcitrant vulnerability curves: methods of analysis and the concept of fibre bridges for enhanced cavitation resistance. Plant, Cell & Environment
37: 35-44 doi:10.1111/pce.12120
• Cochard H. 2002. A technique for measuring xylem hydraulic conductance under high negative pressures. Plant, Cell & Environment 25: 815-819 doi:10.1046/j.1365-3040.2002.00863.x
• Cochard H, Badel E, Herbette S, Delzon S, Choat B, Jansen S. 2013. Methods for measuring plant vulnerability to cavitation: a critical review. Journal of Experimental Botany 64: 4779-4791
• Cochard H, Damour G, Bodet C, Tharwat I, Poirier M, Améglio T. 2005. Evaluation of a new centrifuge technique for rapid generation of xylem vulnerability curves. Physiologia Plantarum 124:
410-418 doi:10.1111/j.1399-3054.2005.00526.x
• Tyree MT, Yang S. 1992. Hydraulic conductivity recovery versus water pressure in xylem of Acer saccharum. Plant Physiology 100: 669-676 doi:10.1104/pp.100.2.669
• Wang R, Zhang L, Zhang S, Cai J, Tyree MT. 2014. Water relations of Robinia pseudoacacia L.: do vessels cavitate and refill diurnally or are R-shaped curves invalid in Robinia? Plant, Cell &
Environment 37:2667-78 doi:10.1111/pce.12315
• Yang S, Tyree MT. 1992. A theoretical model of hydraulic conductivity recovery from embolism with comparison to experimental data on Acer saccharum. Plant, Cell & Environment 15: 633-643
The dependence of torsion functors on their supporting ideals is investigated, especially in the case of monomial ideals of certain subrings of polynomial algebras over not necessarily Noetherian
rings. As an application it is shown how flatness of quasicoherent sheaves on toric schemes is related to graded local cohomology.Comment: updated reference
Let $k$ be a field. We consider triples $(V,U,T)$, where $V$ is a finite dimensional $k$-space, $U$ a subspace of $V$ and $T \:V \to V$ a linear operator with $T^n = 0$ for some $n$, and such that $T
(U) \subseteq U$. Thus, $T$ is a nilpotent operator on $V$, and $U$ is an invariant subspace with respect to $T$. We will discuss the question whether it is possible to classify these triples. These
triples $(V,U,T)$ are the objects of a category with the Krull-Remak-Schmidt property, thus it will be sufficient to deal with indecomposable triples. Obviously, the classification problem depends on
$n$, and it will turn out that the decisive case is $n=6.$ For $n < 6$, there are only finitely many isomorphism classes of indecomposables triples, whereas for $n > 6$ we deal with what is called
``wild'' representation type, so no complete classification can be expected. For $n=6$, we will exhibit a complete description of all the indecomposable triples.Comment: 55 pages, minor modification
in (0.1.3), to appear in: Journal fuer die reine und angewandte Mathematik
This paper is a review of results on generalized Harish-Chandra modules in the framework of cohomological induction. The main results, obtained during the last 10 years, concern the structure of the
fundamental series of $(\mathfrak{g},\mathfrak{k})-$modules, where $\mathfrak{g}$ is a semisimple Lie algebra and $\mathfrak{k}$ is an arbitrary algebraic reductive in $\mathfrak{g}$ subalgebra.
These results lead to a classification of simple $(\mathfrak{g},\mathfrak{k})-$modules of finite type with generic minimal $\mathfrak{k}-$types, which we state. We establish a new result about the
Fernando-Kac subalgebra of a fundamental series module. In addition, we pay special attention to the case when $\mathfrak{k}$ is an eligible $r-$subalgebra (see the definition in section 4) in which
we prove stronger versions of our main results. If $\mathfrak{k}$ is eligible, the fundamental series of $(\mathfrak{g},\mathfrak{k})-$modules yields a natural algebraic generalization of
Harish-Chandra's discrete series modules.Comment: Keywords : generalized Harish-Chandra module, (g,k)-module of finite type, minimal k-type, Fernando-Kac subalgebra, eligible subalgebra; Pages no. :
13; Bibliography : 21 item
We give a combinatorial interpretation of a Pieri formula for double Grothendieck polynomials in terms of an interval of the Bruhat order. Another description had been given by Lenart and Postnikov
in terms of chain enumerations. We use Lascoux's interpretation of a product of Grothendieck polynomials as a product of two kinds of generators of the 0-Hecke algebra, or sorting operators. In this
way we obtain a direct proof of the result of Lenart and Postnikov and then prove that the set of permutations occuring in the result is actually an interval of the Bruhat order.Comment: 27 page
It is shown that every commutative arithmetic ring $R$ has $\lambda$-dimension $\leq 3$. An example of a commutative Kaplansky ring with $\lambda$-dimension 3 is given. If $R$ satisfies an additional condition then $\lambda$-dim($R$
We prove that Nichols algebras of irreducible Yetter-Drinfeld modules over classical Weyl groups $A \rtimes \mathbb S_n$ supported by $\mathbb S_n$ are infinite dimensional, except in three cases. We
give necessary and sufficient conditions for Nichols algebras of Yetter-Drinfeld modules over classical Weyl groups $A \rtimes \mathbb S_n$ supported by $A$ to be finite dimensional.Comment: Combined
with arXiv:0902.4748 plus substantial changes. To appear in International Journal of Mathematics
Vafa-Witten theory is a twisted N=4 supersymmetric gauge theory whose partition functions are the generating functions of the Euler number of instanton moduli spaces. In this paper, we recall quantum
gauge theory with discrete electric and magnetic fluxes and review the main results of Vafa-Witten theory when the gauge group is simply laced. Based on the transformations of theta functions and
their appearance in the blow-up formulae, we propose explicit transformations of the partition functions under the Hecke group when the gauge group is non-simply laced. We provide various evidences
and consistency checks.Comment: 14 page
The row (resp. column) rank profile of a matrix describes the staircase shape of its row (resp. column) echelon form. In an ISSAC'13 paper, we proposed a recursive Gaussian elimination that can
compute simultaneously the row and column rank profiles of a matrix as well as those of all of its leading sub-matrices, in the same time as state of the art Gaussian elimination algorithms. Here we
first study the conditions making a Gaus-sian elimination algorithm reveal this information. Therefore, we propose the definition of a new matrix invariant, the rank profile matrix, summarizing all
information on the row and column rank profiles of all the leading sub-matrices. We also explore the conditions for a Gaussian elimination algorithm to compute all or part of this invariant, through
the corresponding PLUQ decomposition. As a consequence, we show that the classical iterative CUP decomposition algorithm can actually be adapted to compute the rank profile matrix. Used, in a Crout
variant, as a base-case to our ISSAC'13 implementation, it delivers a significant improvement in efficiency. Second, the row (resp. column) echelon form of a matrix are usually computed via different
dedicated triangular decompositions. We show here that, from some PLUQ decompositions, it is possible to recover the row and column echelon forms of a matrix and of any of its leading sub-matrices
thanks to an elementary post-processing algorithm
In the present paper we generalize the coproduct structure on nil Hecke rings introduced and studied by Kostant-Kumar to the context of an arbitrary algebraic oriented cohomology theory and its
associated formal group law. We then construct an algebraic model of the T-equivariant oriented cohomology of the variety of complete flags.Comment: 28 pages; minor revision of the previous version
With the aid of the $6j$-symbol, we classify all uniserial modules of $\mathfrak{sl}(2)\ltimes \mathfrak{h}_{n}$, where $\mathfrak{h}_{n}$ is the Heisenberg Lie algebra of dimension $2n+1$.Comment:
Some references added, introduction expanded, title change
131π/90 radians
What is 131π/90 radians in degrees?
131π/90 radians equals 262°
Convert radians to degrees
131π/90 rad = 262 degrees
Step-by-Step Solution
Given that π rad is equal to 180°, we can write the following radians-to-degrees conversion formula:
α in degrees = α in radians × 180/π, OR
α° = α rad × 180/π
Plugging the given angle value, in radians, into the previous formula, we get:
α° = (131π/90 × 180/π) = 262 degrees.
Using our 'radians to degrees converter' above, you can find the exact value of 131π/90 radians in degrees or the value of any angle in degrees with solution steps.
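The same conversion can be written as a tiny function (the function name here is ours, for illustration):

```python
import math

def rad_to_deg(angle_rad):
    """Convert an angle from radians to degrees: multiply by 180/pi."""
    return angle_rad * 180.0 / math.pi

deg = rad_to_deg(131 * math.pi / 90)
print(round(deg, 6))   # 262.0
```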
5 Tips On How To Teach Fractions
Teaching fractions is not an easy task! We know that it’s a topic that year after year, students struggle with. It’s like teaching about the distributive property or elapsed time … We dread when
those units come up in our curriculum guides. So what is it about the teaching of fractions that we can do differently so our students don’t ride the struggle bus? Let’s start by understanding…
Why are fractions so difficult?
When kids are in kindergarten they learn how to count 1, 2, 3. They also learn that 1 means 1 of something like… 1 apple or 1 block. They learn that each whole number represents a certain number of
objects. They also learn that as they count on, numbers get bigger. Basically, they learn a set of number rules.
Fractions, however, follow a different set of rules. For example, fractions don’t always mean the same thing… 1/2 of a pizza is not the same as 1/2 of 4 pizzas. Also, when the denominator in a
fraction increases, its value decreases.
NO WONDER OUR KIDDOS ARE CONFUSED! They’re trying to use what they know about whole numbers to solve problems involving fractions.
How can we help our students when teaching fractions?
By making a few simple changes in how we introduce fractions and helping our kiddos truly understand what a fraction represents, we can set them up for future success.
Hands on Practice
One of the best things we can do while teaching fractions is to give our students plenty of time to experiment with manipulatives or visual models. Fractions are such an abstract concept. For some
kids, visualizing parts of a whole can be difficult. When we use manipulatives, we make the concept more concrete.
I personally love to start my fraction unit by having students create their very own fraction bars. Or you can use plastic fraction bars similar to these. If you have a magnetic whiteboard, these
magnetic fraction bars and fraction circles would be perfect (found on Amazon). If you have a little more time or have students that struggle with fine motor skills, you can make these pool noodle
fraction bars.
Regardless of the manipulatives you choose, make sure you give your kiddos plenty of time to experiment. Also, many of us tend to stick to fraction bars because that’s what we’re most comfortable
with. But don’t forget to let them experiment with fraction circles, pattern blocks, and even cuisenaire rods.
Don’t have a set of fraction bars right now or have students working remotely? No problem, I’ve got you covered with printable fraction bars. Just sign up down below.
Understand the Parts of a Fraction
When teaching fractions, I like to begin with the denominator first. I tell my students that the bottom number tells us 2 things. It tells us how many equal parts our whole is divided into or how
many groups in a set. It also tells us what to call these parts. So a 6 in the denominator means that the whole has been broken into 6 equal parts. It also means that the parts are called sixths.
The top number, or numerator, is how many of those parts we have. So when we see the fraction 2/6 that means that we have 2 out of 6 parts or 2 out of 6 groups.
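The same reading of numerator and denominator can be checked with Python's built-in Fraction type. One thing to note (an aside, not part of the lesson above): Fraction automatically reduces to lowest terms, which previews the idea of equivalent fractions.

```python
from fractions import Fraction

f = Fraction(2, 6)                        # "2 out of 6 equal parts"
print(f)                                  # 1/3 -- reduced to lowest terms
print(f.numerator, f.denominator)         # 1 3
print(Fraction(2, 6) == Fraction(1, 3))   # True: both name the same amount
```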
Equal Parts
When learning about fractions, it is imperative that students understand that the parts of a whole must be equal. This is especially important to keep in mind when they start comparing fractions.
I tell my students to think about a time when they asked a friend to share a cookie or a candy bar. What would they think if their friend gave them a tiny piece and didn’t cut the cookie into equal
shares? They probably wouldn’t like that! They’d probably even say that it wasn’t fair. Well, we need our students to realize that the same thing happens with fractions. In order to partition a
cookie, a shape, a pizza, or some other object or group, all the parts have to be equal.
When it comes to fractions, fairness is key!
Parts of a Whole
It’s very important that students understand that a fraction is used to represent parts of a whole or part of something such as a circle, a rectangle, or a pizza.
A whole can also refer to several items or a set. It’s common practice to teach these two topics separately which usually leads to confusion. However, if we start off from the beginning showing that
“a whole” can mean 1 thing or several things grouped together to make up a set, our students will develop a stronger conceptual understanding and be better prepared for when we move on to fractions
on a number line.
We shouldn’t be afraid to expose our kids to several ways of modeling fractions for fear that they might not understand.
Make Real Life Connections
Fractions are all around us… We just have to teach our children to look for them. For example, we use fractions to know how much of an ingredient to use when we bake or cook foods. Fractions are also
used when telling time. (30 min. is 1/2 an hour. Lunch is at a half past 11:00) Fractions are also used to determine discounts when things go on sale. (A store advertising a sale in which their items
are 1/2 off their original price.) Music involves fractions. (whole notes, quarter notes, eighth notes, etc…) Fractions are everywhere!
It’s important that we establish good fraction habits right from the moment we begin teaching fractions. If you’re looking for a few more ideas on teaching fractions check out my other post Fun with
Fractions. I use these fraction intervention booklets in my classroom for small groups. They help me assess and provide remediation for those needing a little extra support. I’ve even sent them home
with absent students so they don’t miss a beat.
These fraction task cards are also great for early finishers or those kiddos that need to be challenged.
Well, there you have it. A few tips and resources for teaching students to love and understand fractions. It doesn’t have to be too scary!
If you’d like to receive more freebies like the printable fraction bars up above, simply join our newsletter by entering your home email address down below.
Round Off: Learn Definition, Facts and Examples
Introduction to Round Off
Rounding is the process of replacing a number with an approximation that has a shorter, simpler, or more explicit representation. For instance, changing 23.4476 to 23.45, 312/937 to 1/3, or √2 to 1.414.
Rounding is frequently done to get a value that is simpler to report and explain than the original. For example, a quantity that was computed as 123,456 but is known to be accurate to within a few
hundred units is usually better stated as "about 123,500." On the other hand, rounding exact numbers will result in a small amount of round-off error being reported. When reporting many computations,
rounding is essentially unavoidable, especially when dividing two numbers in integer or fixed-point arithmetic, computing mathematical functions like sines, square roots, and logarithms, or using a
floating-point representation with a fixed number of significant digits. In a series of calculations, these rounding errors typically add up, and in some unfortunate circumstances they can render the result meaningless.
Rules of Rounding Numbers
Round Off
When a number is rounded off, its value stays close to the original but takes a simpler form. Rounding is done for whole numbers as well as for decimals at different places such as
hundreds, tens, tenths, etc. Numbers are rounded off to maintain significant figures. Simply put, the number of significant figures in a result is the number of figures that are known
with some degree of reliability.
For example, the number 13.2 contains three significant figures. Nonzero digits are always significant. There are six significant digits in the number 3.14159 (all of the digits give
you useful information). As a result, 67 has two significant digits, while 67.3 has three.
How to Round off Decimal Numbers?
Rounding rules for decimal numbers are as follows:
• Determine the rounding digit and look at its right-hand side.
• If the digits on the right-hand side are less than 5, consider them equal to zero.
• If the digits on the right-hand side are greater than or equal to 5, then add +1 to that digit and consider all other digits as zero.
Decimal Round off
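The decimal rule above can be sketched in Python; `round_decimal` is an illustrative helper name chosen here, not part of the original article:

```python
def round_decimal(x, places):
    """Round x to the given number of decimal places using the rules
    above: look at the first dropped digit to the right of the rounding
    digit; below 5 rounds down, 5 or more rounds up."""
    shifted = x * 10 ** places
    whole = int(shifted)                      # digits we keep
    next_digit = int(abs(shifted) * 10) % 10  # first dropped digit
    if next_digit >= 5:
        whole += 1 if x >= 0 else -1          # round away from zero
    return whole / 10 ** places

print(round_decimal(23.4476, 2))  # 23.45
print(round_decimal(0.64, 1))     # 0.6
```

Note that this sketch manipulates floating-point values directly; for exact decimal work, a decimal arithmetic library would be the safer choice.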
How to Round off Whole Numbers?
Rounding rules for whole numbers are as follows:
• Identify the place value to which the number is being rounded; the smaller the place value, the more accurate the final result.
• Look at the next smaller place, which is to the right of the digit being rounded off. For example, if you are rounding off a digit in the tens place, look at the digit in the ones place.
• If the digit in the smallest place is less than 5, then the digit is left untouched. Any number of digits after that number becomes zero and this is known as rounding down.
• If the digit in the smallest place is greater than or equal to 5, then the digit is added with +1. Any digits after that number become zero and this is known as rounding up.
Whole Number Round Off
Rounding off to the Nearest Ten
• A good way of explaining this is to use a number line.
• If the unit of the number is less than five, the number needs to be rounded down.
• If the unit of the number is 5 or above, the number needs to be rounded up.
• So 32 would be rounded down to 30, 35 would be rounded up to 40 and 38 would also be rounded up to 40:
Number Line
Rounding off to the Nearest Hundred
• If the tens digit is less than 50 the number is rounded down.
• If the tens digit is 50 or more, the number is rounded up. (The unit digit can be ignored when rounding a three-digit number to the nearest 100.)
• So 834 would be rounded down to 800, 851 would be rounded up to 900 and 876 would be rounded up to 900:
Number Line From 800 to 900
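Both the nearest-ten and nearest-hundred rules can be captured in one short Python helper (`round_to` is a name chosen here for illustration):

```python
def round_to(n, base):
    """Round a whole number n to the nearest multiple of base
    (10, 100, ...), rounding up on the halfway case as described above."""
    quotient, remainder = divmod(n, base)
    if remainder * 2 >= base:   # 5 (or 50) and above rounds up
        quotient += 1
    return quotient * base

print(round_to(32, 10))    # 30
print(round_to(35, 10))    # 40
print(round_to(834, 100))  # 800
print(round_to(851, 100))  # 900
```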
Round-off errors of a tiny amount will be reported when accurate figures are rounded. Rounding is essentially inevitable when summarising a large number of calculations, particularly when dividing
two numbers in fixed-point or integer arithmetic, computing mathematical functions like sines, square roots, and logarithms, or using a floating-point representation with a fixed number of
significant digits. These rounding errors usually mount up in a succession of calculations and, in some terrible cases, they could render the result meaningless.
FAQs on Round Off - A Fun Learning
1. Explain the process of rounding to the nearest tenth.
Let’s consider the number 0.64. To round off to the nearest tenth, consider the tenths place and follow the steps given below:
• Identify the digit present in the tenth place: 6
• Identify the next smallest place in the number: 4
If the digit in the next smaller place is greater than or equal to 5, the rounding digit is rounded up; otherwise it is rounded down.
As the digit in the next smaller place (4) is less than 5, the tenths digit is rounded down.
So the final number is 0.6.
2. How does rounding off work for whole numbers?
Rounding rules for whole numbers are as follows:
• Identify the place value to which the number is being rounded; the smaller the place value, the more accurate the final result.
• Look for the next smaller place, which is towards the right of the digit that is being rounded off. For example, if you are rounding off a digit in the tens place, look at the digit in the ones place.
• If the digit in the smallest place is less than 5, then the digit is left untouched. Any number of digits after that number becomes zero, and this is known as rounding down.
• If the digit in the smallest place is greater than or equal to 5, then the digit is added with +1. Any digits after that number become zero, and this is known as rounding up.
|
{"url":"https://www.vedantu.com/maths/round-off-numbers","timestamp":"2024-11-07T03:02:20Z","content_type":"text/html","content_length":"216458","record_id":"<urn:uuid:840b9c7c-3532-49b2-83e5-9ac98726be0a>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00717.warc.gz"}
|
IOQM 2024 Donut Mock Two
This is the second IOQM Mock test from MOMC Season 3! The problems for this test are taken from HMMT Feb 2024 and BMT 2012. As always, all credit and authorship of the problems belong to their
respective sources.
The MOMC application will be available soon in a separate blog post. So stay tuned for that! Also, please remember to fill out this very short form about the mock test! You can find its results here.
The results of the previous mock test form have been added to that blog now.
Before discussing the mock test, I want to talk about a few things. I've been frequently asked about providing solutions for the mock tests, and I would like to clarify that the sources mentioned in
the first paragraph of each blog do have the solutions. I prefer not to post the solutions along with the answer key because a) it encourages you to explore new resources, and b) I don't own the
problems, so directing you to their official sources is the right thing to do.
This might be a bit out of place for my blog, but I want to emphasise the importance of checking your work at the end of your mock test. I am convinced that the main reason capable aspirants fail is
due to silly errors. Many MOMC members solve 45-50 marks worth of problems but score only about 30. While 30 is probably enough to clear IOQM, there is no guarantee you will perform at least as well
on the actual exam day. I wouldn't have cleared IOQM had I not carefully checked my work. So I urge you to follow this advice as a rule.
Regarding the mock's difficulty, I asked for feedback on Donut 1 and learned that it was slightly easier compared to last year's IOQM. So, this mock is a bit harder. All the 2-markers are
non-routine, and all the 3 and 5-marker problems require considerable effort to solve. Most terrifying of all, the mock is filled to the brim with geometry. Nevertheless, the mock features a diverse
set of problems with interesting elements. Problem 9 is an IOQM classic, featuring all three main stars of geometry: altitude, angle bisector, and median. Similarly, I hope that solving the 5-marker
geometry problems will help you appreciate the beauty of HMMT Feb Geo!
Meanwhile, Problems 10 and 15, while still combinatorics, provide a break from counting. Problem 14, on the other hand, is a classic error-prone cube counting problem. Problem 29 is one of those
Olympiad-like combinatorial problems where proving the answer correct is much harder than reaching it. Regarding number theory, Problem 19 features the Euler totient function. While it's usually not
considered part of the IOQM syllabus, MTAI likes to throw surprises, so it's not a bad idea to include it in the mock. An IOQM mock is incomplete without lengthy bash problems. Thankfully, P11 and
P25 deliver just that!
These are the CODS difficulty ratings for the 5 marks problems:
P21 P22 P23 P24 P25 P26 P27 P28 P29 P30
Before closing the post, I want to thank each one of you for the incredible response to this year's first MOMC IOQM mock exam! Your enthusiasm is truly motivating. I am thrilled that you all are
enjoying them. Looking forward to continuing this journey together, I'll catch you in the next one!
|
{"url":"https://www.agamjeet.com/ioqm-mock-donut-2-blog","timestamp":"2024-11-05T06:09:49Z","content_type":"text/html","content_length":"34666","record_id":"<urn:uuid:8c297569-be3d-4a63-bb05-6896a1ccecf3>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00325.warc.gz"}
|
Solving Linear Parabolic Pde Systems With Coupled Ode Systems
This tutorial is automatically generated from TestSolvingLinearParabolicPdeSystemsWithCoupledOdeSystemsTutorial.hpp at revision 8422a2c1f0b1. Note that the code is given in full at the bottom of the page.
Examples showing how to solve a system of coupled linear parabolic PDEs and ODEs
In this tutorial we show how Chaste can be used to solve a system of coupled linear parabolic PDEs and ODEs. This test uses the LinearParabolicPdeSystemWithCoupledOdeSystemSolver.
The following header files need to be included. First we include the header needed to define this class as a test suite.
#include <cxxtest/TestSuite.h>
On some systems there is a clash between Boost Ublas includes and PETSc. This can be resolved by making sure that Chaste’s interface to the Boost libraries are included as early as possible.
#include "UblasIncludes.hpp"
This is the class that is needed to solve a system of coupled linear parabolic PDEs and ODEs.
#include "LinearParabolicPdeSystemWithCoupledOdeSystemSolver.hpp"
The next header file defines the Schnackenberg system, which comprises two reaction-diffusion PDEs that are coupled through their reaction terms.
#include "SchnackenbergCoupledPdeSystem.hpp"
The next header file will allow us to specify a random initial condition.
#include "RandomNumberGenerator.hpp"
We then include header files that allow us to specify boundary conditions for the PDEs, deal with meshes and output files, and use PETSc. As noted before, PetscSetupAndFinalize.hpp must be included
in every test that uses PETSc.
#include "BoundaryConditionsContainer.hpp"
#include "ConstBoundaryCondition.hpp"
#include "OutputFileHandler.hpp"
#include "TrianglesMeshReader.hpp"
#include "PetscSetupAndFinalize.hpp"
Test 1: Solving the Schnackenberg system
Here, we solve the Schnackenberg system of PDEs, given by
$$ \begin{align*} u_t &= \nabla. (D_1 \nabla u) + k_1 - k_{-1}u + k_3 u^2 v,\\\ v_t &= \nabla. (D_2 \nabla v) + k_2 -k_3 u^2 v, \end{align*} $$
on a 2d butterfly-shaped domain. We impose non-zero Dirichlet boundary conditions and an initial condition that is a random perturbation of the spatially uniform steady state of the system.
To do this we define the test suite (a class). It is sensible to name it the same as the filename. The class should inherit from CxxTest::TestSuite.
class TestSolvingLinearParabolicPdeSystemsWithCoupledOdeSystemsTutorial : public CxxTest::TestSuite
All individual tests defined in this test suite must be declared as public.
Define a particular test.
void TestSchnackenbergSystemOnButterflyMesh()
As usual, we first create a mesh. Here we are using a 2d mesh of a butterfly-shaped domain.
TrianglesMeshReader<2,2> mesh_reader("mesh/test/data/butterfly");
TetrahedralMesh<2,2> mesh;
We scale the mesh to an appropriate size.
Next, we instantiate the PDE system to be solved. We pass the parameter values into the constructor. (The order is $D_1, D_2, k_1, k_{-1}, k_2, k_3$)
SchnackenbergCoupledPdeSystem<2> pde(1e-4, 1e-2, 0.1, 0.2, 0.3, 0.1);
Then we have to define the boundary conditions. As we are in 2d, SPACE_DIM=2 and ELEMENT_DIM=2. We also have two unknowns u and v, so in this case PROBLEM_DIM=2. The value of each boundary condition
is given by the spatially uniform steady state solution of the Schnackenberg system, given by $u = (k_1 + k_2)/k_{-1}$, $v = k_2 k_{-1}^2 / k_3(k_1 + k_2)^2$.
BoundaryConditionsContainer<2,2,2> bcc;
ConstBoundaryCondition<2>* p_bc_for_u = new ConstBoundaryCondition<2>(2.0);
ConstBoundaryCondition<2>* p_bc_for_v = new ConstBoundaryCondition<2>(0.75);
for (TetrahedralMesh<2,2>::BoundaryNodeIterator node_iter = mesh.GetBoundaryNodeIteratorBegin();
     node_iter != mesh.GetBoundaryNodeIteratorEnd();
     ++node_iter)
{
    bcc.AddDirichletBoundaryCondition(*node_iter, p_bc_for_u, 0);
    bcc.AddDirichletBoundaryCondition(*node_iter, p_bc_for_v, 1);
}
This is the solver for solving coupled systems of linear parabolic PDEs and ODEs, which takes in the mesh, the PDE system, the boundary conditions and optionally a vector of ODE systems (one for each
node in the mesh). Since in this example we are solving a system of coupled PDEs only, we do not supply this last argument.
LinearParabolicPdeSystemWithCoupledOdeSystemSolver<2,2,2> solver(&mesh, &pde, &bcc);
Then we set the end time and time step and the output directory to which results will be written.
double t_end = 10;
solver.SetTimes(0, t_end);
We create a vector of initial conditions for u and v that are random perturbations of the spatially uniform steady state and pass this to the solver.
std::vector<double> init_conds(2*mesh.GetNumNodes());
for (unsigned i=0; i<mesh.GetNumNodes(); i++)
init_conds[2*i] = fabs(2.0 + RandomNumberGenerator::Instance()->ranf());
init_conds[2*i + 1] = fabs(0.75 + RandomNumberGenerator::Instance()->ranf());
Vec initial_condition = PetscTools::CreateVec(init_conds);
We now solve the PDE system and write results to VTK files, for visualization using Paraview. Results will be written to $CHASTE_TEST_OUTPUT/TestSchnackenbergSystemOnButterflyMesh as a results.pvd
file and several results_[time].vtu files.
All PETSc Vecs should be destroyed when they are no longer needed.
Full code
#include <cxxtest/TestSuite.h>
#include "UblasIncludes.hpp"
#include "LinearParabolicPdeSystemWithCoupledOdeSystemSolver.hpp"
#include "SchnackenbergCoupledPdeSystem.hpp"
#include "RandomNumberGenerator.hpp"
#include "BoundaryConditionsContainer.hpp"
#include "ConstBoundaryCondition.hpp"
#include "OutputFileHandler.hpp"
#include "TrianglesMeshReader.hpp"
#include "PetscSetupAndFinalize.hpp"
class TestSolvingLinearParabolicPdeSystemsWithCoupledOdeSystemsTutorial : public CxxTest::TestSuite
void TestSchnackenbergSystemOnButterflyMesh()
TrianglesMeshReader<2,2> mesh_reader("mesh/test/data/butterfly");
TetrahedralMesh<2,2> mesh;
mesh.Scale(0.2, 0.2);
SchnackenbergCoupledPdeSystem<2> pde(1e-4, 1e-2, 0.1, 0.2, 0.3, 0.1);
BoundaryConditionsContainer<2,2,2> bcc;
ConstBoundaryCondition<2>* p_bc_for_u = new ConstBoundaryCondition<2>(2.0);
ConstBoundaryCondition<2>* p_bc_for_v = new ConstBoundaryCondition<2>(0.75);
for (TetrahedralMesh<2,2>::BoundaryNodeIterator node_iter = mesh.GetBoundaryNodeIteratorBegin();
     node_iter != mesh.GetBoundaryNodeIteratorEnd();
     ++node_iter)
{
    bcc.AddDirichletBoundaryCondition(*node_iter, p_bc_for_u, 0);
    bcc.AddDirichletBoundaryCondition(*node_iter, p_bc_for_v, 1);
}
LinearParabolicPdeSystemWithCoupledOdeSystemSolver<2,2,2> solver(&mesh, &pde, &bcc);
double t_end = 10;
solver.SetTimes(0, t_end);
std::vector<double> init_conds(2*mesh.GetNumNodes());
for (unsigned i=0; i<mesh.GetNumNodes(); i++)
init_conds[2*i] = fabs(2.0 + RandomNumberGenerator::Instance()->ranf());
init_conds[2*i + 1] = fabs(0.75 + RandomNumberGenerator::Instance()->ranf());
Vec initial_condition = PetscTools::CreateVec(init_conds);
|
{"url":"https://chaste.github.io/releases/2024.1/user-tutorials/solvinglinearparabolicpdesystemswithcoupledodesystems/","timestamp":"2024-11-05T15:54:14Z","content_type":"text/html","content_length":"56452","record_id":"<urn:uuid:879ae4ef-b9d1-4213-8466-0668302f3f43>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00055.warc.gz"}
|
Integers Class 6 Notes | StudyTution - StudyTution
Integers Class 6 Notes | StudyTution
Main Concept And Result
• The collection of numbers 0, +1, –1, +2, –2, +3, –3, …… is called integers.
• The numbers +1, +2, +3, +4, ….. are referred to as positive integers.
• The numbers –1, –2, –3, –4, ……. are referred to as negative integers.
• The numbers 0, +1, +2, +3, …… are called non-negative integers.
Chapter 6: Integers
• All the positive integers lie to the right of 0 and the negative integers to the left of 0 on the number line.
• All non negative integers are the same as whole numbers and hence all the opertations on them are done as in the case of whole numbers.
• To add two negative integers, we add the corresponding positive integers and retain the negative sign with the sum.
• To add a positive integer and a negative integer, we ignore the signs and subtract integer with smaller numerical value from the integer with larger numerical value and take the sign of the
larger one.
• Two integers whose sum is zero are called additive inverses of each other. They are also called the negatives of each other.
• Additive inverse of an integer is obtained by changing the sign of the integer.
• For example, the additive inverse of +5 is –5 and the additive inverse of –3 is +3.
• To subtract an integer from a given integer, we add the additive inverse of the integer to the given integer.
• To compare two integers on the number line, we locate their positions on the number line and the integer lying to the right of the other is always greater.
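The addition and subtraction rules above can be sketched in Python; `add_integers` and `subtract_integers` are illustrative helper names, not part of the notes:

```python
def add_integers(a, b):
    """Add two integers following the sign rules above:
    same signs -> add the magnitudes and keep the common sign;
    opposite signs -> subtract the smaller magnitude from the
    larger and take the sign of the larger."""
    if (a >= 0) == (b >= 0):                 # same sign
        magnitude = abs(a) + abs(b)
        return magnitude if a >= 0 else -magnitude
    larger, smaller = max(abs(a), abs(b)), min(abs(a), abs(b))
    sign = 1 if max(a, b, key=abs) >= 0 else -1
    return sign * (larger - smaller)

def subtract_integers(a, b):
    """Subtracting b is the same as adding the additive inverse of b."""
    return add_integers(a, -b)

print(add_integers(-3, -4))     # -7
print(add_integers(7, -10))     # -3
print(subtract_integers(5, 8))  # -3
```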
|
{"url":"https://studytution.com/integers-class-6-notes-studytution/","timestamp":"2024-11-13T12:01:21Z","content_type":"text/html","content_length":"41692","record_id":"<urn:uuid:cda9637e-d790-4470-9181-953435bce882>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00563.warc.gz"}
|
Performing spatial joins with ST_Intersects
The ST_Intersects function determines if two GEOMETRY objects intersect or touch at a single point.
Use ST_Intersects when you want to identify if a small set of geometries in a column intersect with a given geometry.
The following example uses ST_Intersects to compare a column of point geometries to a single polygon. The table that contains the points has 1 million rows.
ST_Intersects returns only the points that intersect with the polygon. Those points represent about 0.01% of the points in the table:
=> CREATE TABLE points_1m(gid IDENTITY, g GEOMETRY(100)) ORDER BY g;
=> COPY points_1m(wkt FILLER LONG VARCHAR(100), g AS ST_GeomFromText(wkt))
FROM LOCAL '/data/points.dat';
Rows Loaded
(1 row)
=> SELECT ST_AsText(g) FROM points_1m WHERE ST_Intersects(g,
   ST_GeomFromText('POLYGON((-71 42, -70.9 42, -70.9 42.1, -71 42.1, -71 42))'));
POINT (-70.97532 42.03538)
POINT (-70.97421 42.0376)
POINT (-70.99004 42.07538)
POINT (-70.99477 42.08454)
POINT (-70.99088 42.08177)
POINT (-70.98643 42.07593)
POINT (-70.98032 42.07982)
POINT (-70.95921 42.00982)
POINT (-70.95115 42.02177)
(116 rows)
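For intuition, the point-in-polygon test that ST_Intersects performs for point/polygon pairs can be sketched in plain Python with the standard ray-casting algorithm. This is an illustration of the semantics only, not how Vertica implements the function:

```python
def point_in_polygon(px, py, vertices):
    """Ray-casting test: count how many polygon edges a horizontal ray
    from (px, py) crosses; an odd count means the point is inside.
    vertices is a list of (x, y) pairs without the closing repeat."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > py) != (y2 > py):  # this edge straddles the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(-71.0, 42.0), (-70.9, 42.0), (-70.9, 42.1), (-71.0, 42.1)]
print(point_in_polygon(-70.97532, 42.03538, square))  # True
print(point_in_polygon(-70.5, 42.05, square))         # False
```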
Vertica recommends that you test the intersections of two columns of geometries by creating a spatial index. Use one of the STV_Intersect functions as described in STV_Intersect: scalar function vs.
transform function.
|
{"url":"https://docs.vertica.com/24.4.x/en/data-analysis/geospatial-analytics/working-with-spatial-objects-tables/spatial-joins-with-st-intersects-and-stv-intersect/when-to-use-st-intersects-vs-stv-intersect/performing-spatial-joins-with-st-intersects/","timestamp":"2024-11-11T11:25:25Z","content_type":"text/html","content_length":"48363","record_id":"<urn:uuid:a54cd024-e2aa-4f8e-ad5d-aff4bf5fde22>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00732.warc.gz"}
|
Yee Lattice
Yee Lattice#
In order to discretize Maxwell's equations with second-order accuracy for homogeneous regions where there are no discontinuous material boundaries, FDTD methods store different field components for
different grid locations. This discretization is known as a Yee lattice.
The form of the Yee lattice in 3d is shown in the schematic above for a single cubic grid voxel with dimensions $\Delta x \times \Delta x \times \Delta x$. The three components of $\mathbf{E}$ are
stored on the edges of the cube in the corresponding directions, while the components of $\mathbf{H}$ are stored on the cube faces.
More precisely, let a coordinate $(i,j,k)$ in the grid correspond to:

$$\mathbf{x} = (i\,\hat{\mathbf{e}}_1 + j\,\hat{\mathbf{e}}_2 + k\,\hat{\mathbf{e}}_3)\,\Delta x$$

where $\hat{\mathbf{e}}_k$ denotes the unit vector in the k-th coordinate direction. Then, the $\ell$^th component of $\mathbf{E}$ or $\mathbf{D}$ (or $\mathbf{P}$) is stored for the locations:

$$(i,j,k) + 0.5\,\hat{\mathbf{e}}_\ell$$

The $\ell$^th component of $\mathbf{H}$, on the other hand, is stored for the locations:

$$(i,j,k) + 0.5\,(\hat{\mathbf{e}}_1 + \hat{\mathbf{e}}_2 + \hat{\mathbf{e}}_3) - 0.5\,\hat{\mathbf{e}}_\ell$$
In two dimensions, the arrangement is similar except that we set $\hat{\mathbf{e}}_3=0$. The 2d Yee lattice for the $H_z$-polarization ($\mathbf{E}$ in the $xy$ plane and $\mathbf{H}$ in the $z$
direction) is shown in the figure below.
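Assuming a unit-step grid, the storage layout described above (E components on the edges along their own direction, H components on the faces) can be sketched in Python; `yee_E_position` and `yee_H_position` are illustrative helpers, not part of Meep:

```python
def yee_E_position(i, j, k, axis, dx=1.0):
    """Grid position of the `axis`-th E component: offset half a step
    along its own direction (the centers of the voxel edges)."""
    pos = [i * dx, j * dx, k * dx]
    pos[axis] += 0.5 * dx
    return tuple(pos)

def yee_H_position(i, j, k, axis, dx=1.0):
    """Grid position of the `axis`-th H component: offset half a step
    along both *other* directions (the centers of the voxel faces)."""
    pos = [i * dx, j * dx, k * dx]
    for a in range(3):
        if a != axis:
            pos[a] += 0.5 * dx
    return tuple(pos)

print(yee_E_position(0, 0, 0, axis=0))  # (0.5, 0.0, 0.0)
print(yee_H_position(0, 0, 0, axis=0))  # (0.0, 0.5, 0.5)
```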
The consequence of the Yee lattice is that, whenever you need to access field components, e.g. to find the energy density $(\mathbf{E}^* \cdot \mathbf{D} + |\mathbf{H}|^2)/2$ or the flux $\textrm{Re}
\, \mathbf{E}^* \times \mathbf{H}$, then the components need to be interpolated to some common point in order to remain second-order accurate. Meep automatically does this interpolation for you
wherever necessary — in particular, whenever you compute energy density or Poynting flux, or whenever you output a field to a file, it is stored at the centers of each grid voxel: $(i+0.5,\,j+0.5,\,k+0.5)$.
In a Meep simulation, the coordinates of the Yee lattice can be obtained using a field function.
|
{"url":"https://meep.readthedocs.io/en/latest/Yee_Lattice/","timestamp":"2024-11-05T21:56:53Z","content_type":"text/html","content_length":"20237","record_id":"<urn:uuid:3c0391c6-19f2-431a-b92a-b37a59dec020>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00763.warc.gz"}
|
public abstract class Spring
extends Object
An instance of the Spring class holds three properties that characterize its behavior: the minimum, preferred, and maximum values. Each of these properties may be involved in defining its fourth, value, property based on a series of rules.
An instance of the Spring class can be visualized as a mechanical spring that provides a corrective force as the spring is compressed or stretched away from its preferred value. This force is
modelled as linear function of the distance from the preferred value, but with two different constants -- one for the compressional force and one for the tensional one. Those constants are specified
by the minimum and maximum values of the spring such that a spring at its minimum value produces an equal and opposite force to that which is created when it is at its maximum value. The difference
between the preferred and minimum values, therefore, represents the ease with which the spring can be compressed, and the difference between its maximum and preferred values indicates the ease with
which the Spring can be extended. See the sum(javax.swing.Spring, javax.swing.Spring) method for details.
By defining simple arithmetic operations on Springs, the behavior of a collection of Springs can be reduced to that of an ordinary (non-compound) Spring. We define the "+", "-", max, and min
operators on Springs so that, in each case, the result is a Spring whose characteristics bear a useful mathematical relationship to its constituent springs.
A Spring can be treated as a pair of intervals with a single common point: the preferred value. The following rules define some of the arithmetic operators that can be applied to intervals ([a, b]
refers to the interval from a to b, where a <= b).
[a1, b1] + [a2, b2] = [a1 + a2, b1 + b2]
-[a, b] = [-b, -a]
max([a1, b1], [a2, b2]) = [max(a1, a2), max(b1, b2)]
If we denote Springs as [a, b, c], where a <= b <= c, we can define the same arithmetic operators on Springs:
[a1, b1, c1] + [a2, b2, c2] = [a1 + a2, b1 + b2, c1 + c2]
-[a, b, c] = [-c, -b, -a]
max([a1, b1, c1], [a2, b2, c2]) = [max(a1, a2), max(b1, b2), max(c1, c2)]
With both intervals and Springs we can define "-" and min in terms of negation:
X - Y = X + (-Y)
min(X, Y) = -max(-X, -Y)
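The triple arithmetic above can be sketched in Python, treating a Spring as an `(a, b, c)` tuple with `a <= b <= c`. These helpers are illustrative only, not the Swing implementation:

```python
def spring_sum(s1, s2):
    """[a1,b1,c1] + [a2,b2,c2] = [a1+a2, b1+b2, c1+c2]"""
    return tuple(x + y for x, y in zip(s1, s2))

def spring_neg(s):
    """-[a,b,c] = [-c,-b,-a] (interval negation reverses the endpoints)"""
    a, b, c = s
    return (-c, -b, -a)

def spring_max(s1, s2):
    """max([a1,b1,c1],[a2,b2,c2]) = [max(a1,a2), max(b1,b2), max(c1,c2)]"""
    return tuple(max(x, y) for x, y in zip(s1, s2))

def spring_min(s1, s2):
    """min(X, Y) = -max(-X, -Y), as in the identities above."""
    return spring_neg(spring_max(spring_neg(s1), spring_neg(s2)))

print(spring_sum((1, 2, 3), (10, 20, 30)))  # (11, 22, 33)
print(spring_min((1, 5, 9), (2, 4, 8)))     # (1, 4, 8)
```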
For the static methods in this class that embody the arithmetic operators, we do not actually perform the operation in question as that would snapshot the values of the properties of the method's
arguments at the time the static method is called. Instead, the static methods create a new Spring instance containing references to the method's arguments so that the characteristics of the new
spring track the potentially changing characteristics of the springs from which it was made. This is a little like the idea of a lazy value in a functional language.
If you are implementing a SpringLayout you can find further information and examples in How to Use SpringLayout, a section in The Java Tutorial.
Warning: Serialized objects of this class will not be compatible with future Swing releases. The current serialization support is appropriate for short term storage or RMI between applications
running the same version of Swing. As of 1.4, support for long term storage of all JavaBeans has been added to the java.beans package. Please see XMLEncoder.
See Also:
SpringLayout, SpringLayout.Constraints
• Field Summary
Modifier and Type Field Description
static int UNSET An integer value signifying that a property value has not yet been calculated.
• Constructor Summary
Modifier Constructor Description
protected Spring() Used by factory methods to create a Spring.
• Method Summary
Modifier and Method Description
static Spring constant(int pref) Returns a strut -- a spring whose minimum, preferred, and maximum values each have the value pref.
static Spring constant(int min, int pref, int max) Returns a spring whose minimum, preferred, and maximum values have the values: min, pref, and max respectively.
abstract int getMaximumValue() Returns the maximum value of this Spring.
abstract int getMinimumValue() Returns the minimum value of this Spring.
abstract int getPreferredValue() Returns the preferred value of this Spring.
abstract int getValue() Returns the current value of this Spring.
static Spring height(Component c) Returns a spring whose minimum, preferred, maximum and value properties are defined by the heights of the minimumSize, preferredSize, maximumSize and size properties of the supplied component.
static Spring max(Spring s1, Spring s2) Returns max(s1, s2): a spring whose value is always greater than (or equal to) the values of both s1 and s2.
static Spring minus(Spring s) Returns -s: a spring running in the opposite direction to s.
static Spring scale(Spring s, float factor) Returns a spring whose minimum, preferred, maximum and value properties are each multiples of the properties of the argument spring, s.
abstract void setValue(int value) Sets the current value of this Spring to value.
static Spring sum(Spring s1, Spring s2) Returns s1+s2: a spring representing s1 and s2 in series.
static Spring width(Component c) Returns a spring whose minimum, preferred, maximum and value properties are defined by the widths of the minimumSize, preferredSize, maximumSize and size properties of the supplied component.
Methods declared in class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
• Field Details
□ UNSET
public static final int UNSET
An integer value signifying that a property value has not yet been calculated.
See Also:
• Method Details
□ getMinimumValue
public abstract int getMinimumValue()
Returns the minimum value of this Spring.
the minimumValue property of this Spring
□ getPreferredValue
public abstract int getPreferredValue()
Returns the preferred value of this Spring.
the preferredValue of this Spring
□ getMaximumValue
public abstract int getMaximumValue()
Returns the maximum value of this Spring.
the maximumValue property of this Spring
□ getValue
public abstract int getValue()
Returns the current value of this Spring.
the value property of this Spring
See Also:
□ setValue
public abstract void setValue(int value)
Sets the current value of this Spring to value.
value - the new setting of the value property
See Also:
□ constant
public static Spring constant(int pref)
Returns a strut -- a spring whose minimum, preferred, and maximum values each have the value pref.
pref - the minimum, preferred, and maximum values of the new spring
a spring whose minimum, preferred, and maximum values each have the value pref
See Also:
□ constant
public static Spring constant(int min, int pref, int max)
Returns a spring whose minimum, preferred, and maximum values have the values: min, pref, and max respectively.
min - the minimum value of the new spring
pref - the preferred value of the new spring
max - the maximum value of the new spring
a spring whose minimum, preferred, and maximum values have the values: min, pref, and max respectively
See Also:
□ minus
Returns -s: a spring running in the opposite direction to s.
s - a Spring object
-s: a spring running in the opposite direction to s
See Also:
□ sum
Returns s1+s2: a spring representing s1 and s2 in series. In a sum, s3, of two springs, s1 and s2, the strains of s1, s2, and s3 are maintained at the same level (to within the precision implied by their integer values). The strain of a spring in compression is:

(value - pref) / (pref - min)

and the strain of a spring in tension is:

(value - pref) / (max - pref)

When setValue is called on the sum spring, s3, the strain in s3 is calculated using one of the formulas above. Once the strain of the sum is known, the values of s1 and s2 are then set so that they have a strain equal to that of the sum. The formulas are evaluated so as to take rounding errors into account and ensure that the sum of the values of s1 and s2 is exactly equal to the value of s3.
s1 - a Spring object
s2 - a Spring object
s1+s2: a spring representing s1 and s2 in series
See Also:
□ max
Returns max(s1, s2): a spring whose value is always greater than (or equal to) the values of both s1 and s2.
s1 - a Spring object
s2 - a Spring object
max(s1, s2): a spring whose value is always greater than (or equal to) the values of both s1 and s2
See Also:
□ scale
Returns a spring whose minimum, preferred, maximum and value properties are each multiples of the properties of the argument spring, s. Minimum and maximum properties are swapped when factor is negative (in accordance with the rules of interval arithmetic).
When factor is, for example, 0.5f the result represents 'the mid-point' of its input - an operation that is useful for centering components in a container.
s - the spring to scale
factor - amount to scale by.
a spring whose properties are those of the input spring s multiplied by factor
NullPointerException - if s is null
□ width
Returns a spring whose minimum, preferred, maximum and value properties are defined by the widths of the minimumSize, preferredSize, maximumSize and size properties of the supplied component.
The returned spring is a 'wrapper' implementation whose methods call the appropriate size methods of the supplied component. The minimum, preferred, maximum and value properties of the
returned spring therefore report the current state of the appropriate properties in the component and track them as they change.
c - Component used for calculating size
a spring whose properties are defined by the horizontal component of the component's size methods.
NullPointerException - if c is null
□ height
Returns a spring whose minimum, preferred, maximum and value properties are defined by the heights of the minimumSize, preferredSize, maximumSize and size properties of the supplied
component. The returned spring is a 'wrapper' implementation whose methods call the appropriate size methods of the supplied component. The minimum, preferred, maximum and value properties of
the returned spring therefore report the current state of the appropriate properties in the component and track them as they change.
c - Component used for calculating size
a spring whose properties are defined by the vertical component of the component's size methods.
NullPointerException - if c is null
PuzzlersWorld.com - Mind Blowing Logical Puzzles, Interview Puzzles and Brain Teasers
Alex, Betty, Carol, Dan, Earl, Fay, George and Harry are eight employees of an organization. They work in three departments: Personnel, Administration and Marketing with not more than three of them
in any department. Each of them has a different … [Continue reading]
We've got whole milk (with 4% fat) and low-fat milk (with 1% fat). How much low-fat milk should be added to 50 liters of whole milk to get milk with 3% fat? Check your answer:- … [Continue reading]
Guess the number. … [Continue reading]
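The milk-mixing teaser above can be checked with a short fat-balance calculation (the function and parameter names here are my own, not from the puzzle site):

```python
def lowfat_to_add(whole_litres=50, whole_fat=0.04, low_fat=0.01, target=0.03):
    # Fat balance: whole_litres*whole_fat + x*low_fat = (whole_litres + x)*target.
    # Solving for x gives the amount of low-fat milk to add.
    return whole_litres * (whole_fat - target) / (target - low_fat)

print(lowfat_to_add())  # about 25 litres
```

Mixing 25 litres of 1% milk into 50 litres of 4% milk gives 75 litres containing 2.25 litres of fat, which is exactly 3%.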
Build an AMM with Proteus | Shell Protocol Developer Documentation
Build an AMM with Proteus
A brief summary of how to construct an AMM using Proteus
Quick Links
Derive a Bonding Curve for Proteus
Constructing an AMM using Proteus is much simpler than building one from scratch. Here is a brief summary of how to derive a Proteus bonding curve with concentrated liquidity.
Gather price data by examining the historical prices for the pair over the last year. You can use Dune Analytics, CoinMarketCap, CoinGecko, or any resource of your liking for this.
Use the historical prices to decide where to concentrate liquidity. Stable-to-stable pools are designed so that when the pool's liquidity is split 50-50 across both tokens, the swap rate is the
median rate over the last year. Volatile-to-stable pools (like USD+ETH) are designed so that when the pool's liquidity is split 50-50 across both tokens, the swap rate uses the median dollar
value of the volatile token over the last year (note, this creates visually unusual bonding curves—basically it looks like a vertical line).
Assign price buckets to fit the historical price distribution. You can use as many as you like. The first deployed pools used 8-12 for stable-to-stable pairs and fewer (~4) for volatile-to-stable
pairs. In this case, fewer buckets means larger buckets. This is necessary because their prices (of ETH or BTC, for example) are so variable. Therefore these volatile-to-stable curves are less
concentrated than the stable-to-stable pools, although they are still significantly more concentrated than constant product curves.
Next, you assign approximate liquidity concentration values to each price bucket. The concentration of liquidity determines how much to magnify that portion of the curve (as detailed in the
section "Deriving segment parameters"). In other words, assign multipliers to each price bucket so that a higher proportion of the pool's liquidity will always be allocated to the more popular price buckets.
Finally, you plug all this information into a gigantic system of equations to help create a composite function that takes each magnified curve section and stitches it together, resulting in one
big curve.
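As a rough illustration of steps 2-4, the sketch below derives equal-width price buckets from historical prices and assigns each a liquidity multiplier proportional to how often the price traded there. The function name and the equal-width bucketing scheme are illustrative assumptions only; the actual Proteus derivation uses the system of equations described above.

```python
def derive_buckets(prices, n_buckets=8):
    """Equal-width price buckets plus liquidity multipliers (illustrative only)."""
    lo, hi = min(prices), max(prices)
    width = (hi - lo) / n_buckets
    edges = [lo + i * width for i in range(n_buckets + 1)]
    counts = [0] * n_buckets
    for p in prices:
        # clamp the top edge into the last bucket
        i = min(int((p - lo) / width), n_buckets - 1)
        counts[i] += 1
    # a more popular price range gets a larger share of liquidity
    multipliers = [c / len(prices) for c in counts]
    return edges, multipliers
```

For a stable-to-stable pair most observations cluster near the peg, so the central buckets end up with the largest multipliers, mirroring the concentration described above.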
Measurement in Year 6 (age 10–11) - Oxford Owl for Home
Measurement in Year 6 (Ages 10–11)
In Year 6, your child will solve measurement problems using up to three decimal places. They will use simple formulae to calculate area and volume, and will convert between many different units.
The key words for this section are capacity, rectilinear, standard unit, and volume.
What your child will learn
Take a look at the National Curriculum expectations for measurement in Year 6 (age 10–11):
Use, read, write, and convert between standard units of measurement
Your child will use, read, write, and convert between standard units of measurement for:
They will be able to convert smaller units of measure into larger units and vice versa. For example, they will be able to convert 1570g to 1.57kg.
Your child will know approximate conversions between metric and imperial units. For example, they will know that there are approximately 28 grams in an ounce and 16 ounces in a pound.
Solve problems involving units of measure, using up to 3 decimal places
Your child will solve measurement problems using addition, subtraction, multiplication, and division.
They will solve problems that need them to convert units. For example:
2.3m + 110cm = 340cm
2.3m + 110cm = 3.4m
Your child will use decimal notation up to three decimal places, where appropriate. They should be able to tell if an answer is sensible by estimating their answer before working out the problem.
Convert between miles and kilometres
Your child will know that there are approximately 1.6 kilometres in a mile. They will use this to convert between kilometres and miles.
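Using the 1.6 kilometres-per-mile approximation, the conversion is a single multiplication or division. A quick sketch for checking answers (function names are my own):

```python
KM_PER_MILE = 1.6  # the approximation used in Year 6

def miles_to_km(miles):
    return miles * KM_PER_MILE

def km_to_miles(km):
    return km / KM_PER_MILE

print(miles_to_km(5))   # 8.0
print(km_to_miles(8))   # 5.0
```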
Recognise that shapes with the same areas can have different perimeters and vice versa
Your child will recognise that shapes with the same size areas can have different perimeters, and that shapes with the same length perimeters can have different areas.
Recognise when to use formulae for the area and volume of shapes
Your child will use formulae to find the area of rectilinear shapes by multiplying length by height.
Calculate the area of parallelograms and triangles
Your child will use what they know about finding the area of a rectangle to calculate the area of parallelograms and triangles.
They will do this by splitting big shapes into different smaller shapes. For example, they could find the area of a triangle by finding the area of a rectangle with the same base and height and
dividing it by two.
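The half-a-rectangle idea translates directly into simple formulae (a small sketch for checking answers):

```python
def rectangle_area(length, width):
    return length * width

def triangle_area(base, height):
    # a triangle is half of a rectangle with the same base and height
    return rectangle_area(base, height) / 2

print(triangle_area(6, 4))  # 12.0
```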
Calculate, estimate, and compare the volume of cubes and cuboids using standard units
Your child will calculate, estimate, and compare the volume of cubes and cuboids.
They will record the volume of cubes and cuboids using standard units such as cubic centimetres (cm³), cubic metres (m³), cubic millimetres (mm³), and cubic kilometres (km³).
How to help at home
There are lots of everyday ways you can help your child to understand measurement. Here are just a few ideas.
1. Practise converting between units of measurement
As well as metric units such as metres, grams, and litres, your child will have to use common imperial units such as inches, pounds, and pints. The most common metric units your child will need to
convert between are:
• kilometres and metres
• centimetres and metres
• centimetres and millimetres
• grams and kilograms
• litres and millilitres.
You can explore the approximate equivalences between metric and imperial units using a table like this:
• 1 inch ≈ 2.5cm
• 1 foot ≈ 30cm
• 1 mile ≈ 1.6km
• 1 kilogram ≈ 2.2 pounds
• 1 litre ≈ 1.75 pints
You can help your child practise conversions by making the most of real-life opportunities. For example, if you are travelling abroad and the distances are measured in kilometres, you could convert
these to miles.
Closer to home, think about how many miles it is to the shops and try to convert that into metres, or vice versa.
2. Bake!
If you are cooking with your child, discuss which ingredients are measured in metric units and which are measured in imperial units. Can they convert between the two? For example:
The recipe requires half a pound of butter. Can you convert this into grams?
If 1 pound is equivalent to 450g, then half a pound will be equivalent to 225g.
Activity: Shortbread fingers
Convert units and make a tasty treat!
3. Explore area and perimeter
Your child will learn to measure and calculate the perimeter and area of rectangles and composite rectilinear shapes. Composite shapes are shapes that can be divided into more than one simple shape.
Rectilinear shapes are shapes which only have straight sides and right angles.
Floor plans are great for showing how important measurement is in real-life situations. You could find floor plans for houses or flats online, and ask your child if they can work out the perimeter
and area of particular rooms. For example, why not ask your child to calculate the quantity of carpet needed to cover the floor in each bedroom?
Once they are comfortable with the idea of a floor plan, you could encourage them to make a floor plan of their dream bedroom! Remind them to think carefully about the room’s scale, and to use
squared paper if needed.
They should record their perimeters and areas using the most appropriate units, such as metres and square metres (m²).
4. Explore volume
The difference between volume and capacity is quite subtle, and can be difficult to understand:
• Volume means the amount of space occupied by a 3D shape. It is measured in cubed units like cm³.
• Capacity means the amount a container can hold. Capacity is measured in metric units such as litres or imperial units such as pints.
You can help your child get to grips with the different units used to measure volume by pointing them out in the real world. When you are at the swimming pool, ask your child whether its capacity is
most likely to be measured in cubic millimetres, centimetres, metres, or kilometres, and discuss why.
Another fun activity you can use to explore volume is making boxes. By drawing a net for an open box onto squared paper (a square with equal squares cut from each corner), your child could make a number of lidless boxes with different volumes.
Find out what happens to the base of each box when you use a smaller piece of paper, and measure the height of the sides of each of the boxes. Can your child work out how much each box can hold?
Encourage your child to record and talk about their findings.
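For the box-making activity, the amount each lidless box holds follows from the size of the paper and the corner squares cut from it. A hypothetical sketch, with measurements in centimetres:

```python
def open_box_volume(sheet_side, cut):
    # cutting squares of side `cut` from each corner and folding up
    # leaves a base of (sheet_side - 2*cut) and a height of `cut`
    base = sheet_side - 2 * cut
    return cut * base * base

print(open_box_volume(20, 2))  # 2 x 16 x 16 = 512 cm³
```

Trying different cut sizes on the same sheet shows why a smaller base can still give a larger volume, which makes a nice talking point for the activity.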
What is a Triplet Sum in an Array?
Arrays need no introduction. As a programmer, you may work with arrays almost every day. This linear data structure is extensively used in real life as well. This significant usage makes arrays an
important topic to prepare for interviews and competitive exams. So, what is a triplet sum in an array, and how do you find one?
But do you think you'll only be asked direct questions?
There is no guarantee of that, so it is always better to practise than to regret later.
Therefore, we have taken up a common question related to arrays: how to find a triplet sum in an array.
We are going to impart all the vital information regarding the triplet sum problem in this blog.
Understanding Problem Statement for Finding Triplet Sum in an Array
You will be provided with an array and you need to find a triplet in the array whose sum is equal to the target value.
For instance,
Given array: {1, 2, 3, 4}, given sum: 6
The output will be {1, 2, 3}
Approaches To Find Triplet Sum In An Array
The possible approaches to find a triplet sum in an array include:
• Naive approach
• Recursion approach
• Hashing-based approach
• Two-Pointer approach
Naive Approach
The very basic approach to find triplet sum in array is the naive approach. The simple logic behind this approach is to find all the possible triplets present in an array and then compare the sum of
each triplet to the given value.
To do this, you will have to use three nested loops.
Steps to Follow:
• You will be given a sum value and an array of size n.
• Declare three nested loops. The first loop runs a counter i from 0 to n-1.
• The second loop runs a counter j from i+1 to n-1.
• The third loop runs a counter k from j+1 to n-1.
• Calculate the sum of array[i], array[j] and array[k]. If this sum matches the given value, print the triplet.
• If no triplet has the required sum, return false.
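The three nested loops can be sketched in Python as follows (the function name is illustrative):

```python
def find_triplet_naive(arr, target):
    n = len(arr)
    # check every combination of three distinct indices
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if arr[i] + arr[j] + arr[k] == target:
                    return (arr[i], arr[j], arr[k])
    return None  # no triplet sums to target

print(find_triplet_naive([1, 2, 3, 4], 6))  # (1, 2, 3)
```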
Complexity Analysis
• Time Complexity: The time complexity of this method is calculated as O(n^3).
• Space Complexity: The space complexity of this method is calculated as O(1) because we are not using any additional space.
Recursion Method
The next method you can use to find a triplet sum in an array is the recursion method. For each number, you either include it in the triplet or leave it out.
You then repeat the same choice for all the other numbers present in the array. If including or excluding numbers produces three picks that add up to the given value, you return true.
Steps To Follow:
• To begin with, you will have to create a recursive function to determine if the required triplet exists in the array or not.
• This function will accept an array, its length, required sum and the present count.
• Now, we have to determine if the current triplet has achieved the given sum or not. In case it has the desired sum, you need to return true.
• Else, you need to return false in case the value of sum is negative.
• In the end, you need to return the recursion function with conditions.
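A minimal recursive sketch of the include/exclude idea (the negative-remainder pruning assumes non-negative array values; names are illustrative):

```python
def has_triplet(arr, target):
    def rec(i, count, remaining):
        if count == 3:
            return remaining == 0       # three picks used up the target exactly
        if i == len(arr) or remaining < 0:  # pruning assumes non-negative values
            return False
        # include arr[i] in the triplet, or exclude it
        return (rec(i + 1, count + 1, remaining - arr[i])
                or rec(i + 1, count, remaining))
    return rec(0, 0, target)

print(has_triplet([1, 2, 3, 4], 6))  # True
```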
Complexity Analysis to Find Triplet Sum in an Array
• Time Complexity: The time complexity of this method is O(2^n), since each element is either included or excluded.
• Space Complexity: The space complexity of this method is calculated as O(N).
Hashing-Based Approach
In this method, we will be using a HashSet to find a triplet sum in an array. Run the outer loop with counter i from the beginning to the end of the array. For each i, run an inner loop with
counter j from i+1 to n.
Declare a HashSet to store the elements seen between positions i+1 and j-1. For each pair (i, j), check whether the set contains a number equal to target - array[i] - array[j]; if it does, the
triplet is found.
Steps to follow:
• From the beginning to end, you need to check all the elements of the given array with the help of counter i.
• Now, declare a HashSet and save distinct pairs in it.
• You will then have to run the loop from i+1 till n.
• If there is a number present in the set which is equal to target - array[i] - array[j], you need to print the triplet and end the loop.
• You need to then add the element present at jth location in the HashSet.
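The steps above can be sketched in Python as (function name is illustrative):

```python
def find_triplet_hashing(arr, target):
    n = len(arr)
    for i in range(n - 2):
        seen = set()  # elements seen between positions i+1 and j-1
        for j in range(i + 1, n):
            need = target - arr[i] - arr[j]
            if need in seen:
                return (arr[i], need, arr[j])
            seen.add(arr[j])
    return None

print(find_triplet_hashing([1, 2, 3, 4], 6))  # (1, 2, 3)
```

Trading O(n) extra space for the set removes the innermost loop, which is where the O(n^2) time bound comes from.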
Complexity Analysis
• Time Complexity: In this method, the time complexity is calculated as O(n^2).
• Space Complexity: The space complexity is calculated as O(n).
Two-Pointers Approach
In the two-pointer approach, you first sort and traverse the given array, fixing the beginning element of the triplet. You are then required to determine a pair which has a sum equal to the target minus the fixed element.
Steps to Follow:
• Begin the algorithm with sorting the given array to find triplet sum in an array.
• After this, you need to fix the beginning number of the triplet after iterating the given array.
• You will then have to use two pointers: one at i+1 and the other at the last index of the array. Now simply find the sum of the three elements.
• In case the sum is smaller than the target value, you need to increase the value of the first pointer.
• Otherwise, you need to decrease the end pointer in case the sum achieved is higher.
• Lastly, if you have found the exact value, you are required to print that triplet and break the loop.
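In Python, the two-pointer walk can be sketched as (function name is illustrative):

```python
def find_triplet_two_pointers(arr, target):
    arr = sorted(arr)  # the approach requires a sorted array
    for i in range(len(arr) - 2):
        lo, hi = i + 1, len(arr) - 1  # pointers at i+1 and the last index
        while lo < hi:
            total = arr[i] + arr[lo] + arr[hi]
            if total == target:
                return (arr[i], arr[lo], arr[hi])
            if total < target:
                lo += 1   # sum too small: advance the left pointer
            else:
                hi -= 1   # sum too large: pull back the right pointer
    return None

print(find_triplet_two_pointers([1, 2, 3, 4], 6))  # (1, 2, 3)
```

Note that the O(1) space claim ignores the cost of sorting; sorting in place keeps it true.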
Complexity Analysis
• Time Complexity: In this method, the time complexity is calculated as O(n^2).
• Space Complexity: The space complexity is calculated as O(1).
Conclusion: Finding Triplet Sum in an Array
Finding a triplet sum in an array is one of the complex, advanced-level questions that you may encounter in an interview. Similarly, the sum tree problem is another of the most-asked questions in
coding interviews.
Here, we have discussed four methods that you can use to find a triplet sum in an array. You can choose the method that you find most optimal.
A very basic stock-flow diagram of simple interest, with table and graph output showing interest, bank balance and savings development per year. Initial deposit, interest rate, yearly deposit and
withdrawal, and initial bank account balance can all be modified.
I have developed a lesson plan in which students work on both simple and compound interest across both IM and Excel. I also wrote an article about this. Both are in Dutch, which you can translate
using for example Google Translate.
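The model's yearly update can be sketched as a short loop. Variable names and defaults below are illustrative; the key point is that simple interest is computed on the initial deposit only, never on the accumulated balance:

```python
def simple_interest_table(initial_deposit=1000.0, rate=0.05,
                          yearly_deposit=0.0, yearly_withdrawal=0.0, years=5):
    """Return (year, interest, balance) rows, like the model's table output."""
    balance = initial_deposit
    rows = []
    for year in range(1, years + 1):
        interest = initial_deposit * rate  # simple interest: flat amount each year
        balance += interest + yearly_deposit - yearly_withdrawal
        rows.append((year, interest, balance))
    return rows

print(simple_interest_table(1000, 0.05, years=3)[-1])  # (3, 50.0, 1150.0)
```

Changing the interest line to `balance * rate` turns the same loop into the compound-interest version of the exercise.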
Kossovsky, AE (2021)
On the Mistaken Use of the Chi-Square Test in Benford’s Law
Stats 4(2), pp. 419–453.
ISSN/ISBN: Not available at this time. DOI: 10.3390/stats4020027
Abstract: Benford’s Law predicts that the first significant digit on the leftmost side of numbers in real-life data is distributed between all possible 1 to 9 digits approximately as in LOG(1 + 1/
digit), so that low digits occur much more frequently than high digits in the first place. Typically researchers, data analysts, and statisticians, rush to apply the chi-square test in order to
verify compliance or deviation from this statistical law. In almost all cases of real-life data this approach is mistaken and without mathematical-statistics basis, yet it had become a dogma or
rather an impulsive ritual in the field of Benford’s Law to apply the chi-square test for whatever data set the researcher is considering, regardless of its true applicability. The mistaken use of
the chi-square test has led to much confusion and many errors, and has done a lot in general to undermine trust and confidence in the whole discipline of Benford’s Law. This article is an attempt to
correct course and bring rationality and order to a field which had demonstrated harmony and consistency in all of its results, manifestations, and explanations. The first research question of this
article demonstrates that real-life data sets typically do not arise from random and independent selections of data points from some larger universe of parental data as the chi-square approach
supposes, and this conclusion is arrived at by examining how several real-life data sets are formed and obtained. The second research question demonstrates that the chi-square approach is actually
all about the reasonableness of the random selection process and the Benford status of that parental universe of data and not solely about the Benford status of the data set under consideration,
since the focus of the chi-square test is exclusively on whether the entire process of data selection was probable or too rare. In addition, a comparison of the chi-square statistic with the Sum of
Squared Deviations (SSD) measure of distance from Benford is explored in this article, pitting one measure against the other, and concluding with a strong preference for the SSD measure.
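For reference, the Benford first-digit probabilities and an SSD-style distance can be computed directly. A sketch only: expressing both distributions in percent is an assumption about the SSD convention, not taken from the abstract:

```python
import math

# Benford's Law: P(first digit = d) = log10(1 + 1/d)
BENFORD = [math.log10(1 + 1 / d) for d in range(1, 10)]

def ssd(observed_pct):
    """Sum of squared deviations between observed and Benford first-digit
    frequencies, both expressed in percent."""
    return sum((obs - 100 * exp) ** 2 for obs, exp in zip(observed_pct, BENFORD))

print(round(100 * BENFORD[0], 1))  # digit 1 occurs about 30.1% of the time
```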
@Article{,
  AUTHOR = {Kossovsky, Alex Ely},
  TITLE = {On the Mistaken Use of the Chi-Square Test in Benford's Law},
  JOURNAL = {Stats},
  VOLUME = {4},
  YEAR = {2021},
  NUMBER = {2},
  PAGES = {419--453},
  URL = {https://www.mdpi.com/2571-905X/4/2/27},
  ISSN = {2571-905X},
  DOI = {10.3390/stats4020027}
}
Reference Type: Journal Article
Subject Area(s): Statistics
How Long To Charge 12v battery With 100 Watt Solar Panel?
An average size 12v 50Ah car battery discharged 20% will take 2 hours to charge using a 100 watt solar panel. A deep-cycle 12v 50Ah battery discharged 50%
Solar Panel Charge Time Calculator
1. Enter your battery voltage. For instance, if you're using a 12V battery, you'd enter the number 12. 2. Enter your battery capacity in amp hours. If you have a 50Ah battery, you'd enter the
number 50. (If
What Size Solar Panel To Charge 70ah Battery?
Lithium: 99%; Lead-acid: 85%; Solar power required to charge 70ah battery = 420 × 1.15 = 483wh. 4. Divide the battery capacity value by the desired number of charge peak sun hours. Let's suppose you
want to recharge your battery in 5 peak sun hours. Solar power required in peak sun hour = 483 ÷ 5 = 96 watts. 5.
12v Battery for Solar Panel (Best Charge for Each Amp)
For a 12v battery, you'll ideally need a panel of 200 watts to charge a 100ah battery — the most common 12v battery size. Given that a 200-watt panel can produce around 60 amp-hours per day — on a
sunny day under ideal conditions — you should be able to fully charge a 100ah battery with a 200-watt panel in 5–8 hours.
Renogy 100 Watt 12 Volt Portable Solar Panel with
Solar Charger with 2 USB Outputs for 12v Batteries/Power Station AGM LiFePo4 RV Camping Trailer Car Marine Renogy 100
Solar Panel Size Calculator
A 100-watt solar panel will charge a 100Ah 12V lithium battery in 10.8 peak sun hours (or, realistically, in little more than 2 days, if we presume an average of 5 peak sun hours per
What Size Solar Panel To Charge 150ah Battery? (Calculator)
You need a 210 watt solar panel to fully charge a 12v 150ah lead-acid battery from 50% depth of discharge in 6 peak sun hours using an MPPT charge controller. Read the below post to find out how fast
you can charge your battery. Related Post: Guide: Maximum Charging Current & Voltage For 12v Battery.
100 Watt 12 Volt Solar Starter kits | Renogy Solar
100W 12V Monocrystalline Solar Starter Kit w/Wanderer 30A Charge Controller. + 1490 Renogy Rays after purchase. -500Wh average daily output. 22.5% cell efficiency. 100% EL-tested solar panel.
-10-year material and workmanship warranty on solar panel. 2-year on charge controller. -UL61730-listed panel. 2400Pa wind, 5400Pa snow.
How Long Will A 100-Watt Solar Panel Take To Charge A 12V Battery
To calculate how long a 100-Watt solar panel will take to charge a 12V battery, we need to know the battery's capacity in amp-hours (Ah). Let's assume the battery capacity is X Ah. First, we'll
calculate the current output of the solar panel in amperes: Power (P) = Voltage (V) × Current (I), so 100 W = 12 V × I, giving I = 100 W / 12 V ≈ 8.33
How Many Watts of Solar Panel Do You Need to Charge a Deep Cycle Battery?
Wattage = (Battery Capacity x Depth of Discharge x 2) / Hours of Sunlight. For example, if you have a 12 V 100 Ah deep cycle battery and want to charge it within 5 hours of sunlight, the wattage
required would be: Wattage = (100 Ah x 0.8 x 2) / 5 = 64 Watts. Therefore, you would need a solar panel with a wattage of at least 64 watts to
How To Charge 12v 7ah Battery With Solar Panel | A Guide To Charging Batteries With Solar Panels
Before diving into the process, it's essential to gather the necessary materials. You will require: 12V 7Ah battery: Ensure you have a battery of the correct voltage and capacity for your specific
needs. Solar panel: Invest in a solar panel with sufficient wattage to generate the required power for charging the battery.
Can 100w Solar Panel Charge 100ah Battery?
A 100-watt solar panel generates approximately 8.33 amps when charging a 12V battery. This calculation is based on dividing the solar panel's wattage by the battery's voltage. The proper
battery size for a 100W solar panel should be chosen based on these measurements.
Solar Panel Charge Time Calculator
Enter the wattage of your solar panel or solar array. If you're using a 100W solar panel, you'd enter the number 100. If you're using a 400W solar array, you'd enter the number 400. 6. Select
your charge controller type. 7. Click "Calculate" to get your results. Your estimated charge time is given in peak sun hours.
What size solar panel do I need to charge a 12v battery?
Based on the earlier calculation, a 100 watt panel will produce an average of about 30 amp-hours per day (based on an average sunny day). This means you would need three 100 watt solar panels or
How Long Does It Take to Charge 12V Battery with
So, to charge a 100Ah 12V battery, it would take 100Ah/7.08A, which equals about 14 hours. While this may sound like a long time, it is important to remember that this is just a single solar panel
What Size Solar Panel to Charge 12V Battery?
What Size Solar Panel to Charge 12V Battery? For a 12V lithium-ion battery, a 150-watt solar panel can charge the device (100 Ah capacity) in 10 hours. But
What Size Solar Panel To Charge 12V Battery?
How long will a 100 watt solar panel take to charge a 12V battery? A 100 watt panel can generate up to 400 watt-hours of energy per day with a maximum output current of 6 amp. A 60Ah battery with 25%
What Size Solar Panel to Trickle Charge 12V Battery? (Answered)
Generally, a solar panel with a wattage of at least 15 watts can be used to charge a 12V battery. If the battery capacity is higher, a higher-wattage panel should be used. It is also important to
consider the amount of sunlight available and the charging time required when selecting the wattage of the solar panel.
10 Best 12 Volt Solar Battery Chargers for RVs, Cars & Boats
How Long Will a 100-Watt Solar Panel Take to Charge a 12V Battery Conclusion Top-rated 12 Volt Solar Battery Charger Reviews 1. ECO-WORTHY L02EP5BB18V-1 Solar Trickle Charger Specifications Size:
8.07 x 9.25″ Weight: 0.76 lbs Pmax: 5W This 12
100W 12V Monocrystalline Solar Starter Kit w/Wanderer 10A Charge Controller | Renogy Solar
Renogy 100W Monocrystalline Solar Panel Wanderer 10A PWM Charge Controller Maximum Power: 100W Nominal Voltage: 12V/24V Auto Recognition Maximum System Voltage: 600V DC (UL) Rated Charge Current: 10A
Open-Circuit Voltage (Voc): 22.3V Max
How Long to Charge 12V Battery with 100 Watt Solar
If you're wondering how long a 100 watt solar panel takes to charge a battery, the answer will largely depend on the battery's size. On average, it could vary between five and eight hours.
Hence, we
How to Charge LiFePO4 Batteries with Solar Panels
A 100 watt solar panel produces around 300-500 watt hours per day, so it usually takes about 3-4 sunny days for one to fully charge a 12V 100Ah LiFePO4 battery. Though the exact number will vary
quite a bit based on weather, location, and time of year.
Solar Panel Charge Time Calculator For 12V Batteries (100W
Calculated table of charging times for 12V batteries with 100W, 200W, 300W, 400W, and 500W solar panels. Alright, let's look at how to easily calculate battery charging time:
What Size Battery For 100w Solar Panel?
100W solar panels are compatible with 12V batteries. You can choose a 50 amp or 100 amp Lead-Acid or Lithium-ion battery for 100W solar panels. You will have to use a battery double the capacity of
your solar panel's output. Before everything else, you should also know that a 100W solar panel is compatible with 12V batteries.
Solar Panel Size To Charge A 12V Battery (50Ah, 80,
Estimate the time it takes for a 100-watt panel to charge a 12-volt battery by using this simple formula to calculate your charging time: (battery capacity in Ah) x voltage/panel wattage. In most
cases, when
How To Charge a 12V Battery with Solar Panels?
Components You Need to Charge a 12V Battery. How to Charge a 12V Battery with Solar Panels. Step 1: Connect the 12V Battery to Your Charge Controller. Step 2: Connect Your Solar Panels to the Charge
Controller. Step 3: Check the Connection. Step 4: Position the Solar Panels Under Direct Sunlight. Conclusion.
ExpertPower 100W 12V Solar Power Kit with Battery : 100W 12V Solar Panel + 10A Charge Controller + 21Ah Gel Battery
ECI Power 100W 12V Solar Power Kit | 12V 20Ah LiFePO4 Lithium Battery | 100W Mono Rigid Solar Panel, 10A PWM Solar Charge Controller | RV, Trailer, Camper, Marine, Off Grid, Solar Projects dummy
ECO-WORTHY 200 Watt 12V Complete Solar Panel Starter Kit for RV Off Grid with Battery and Inverter: 2pcs 100W Solar Panel
What Size Solar Panel to Charge 12V Battery?
Step 1: First, we need to convert the battery capacity from Ah into watt-hours. For this we can use the basic formula i.e. amp-hours x voltage = watt-hours. Here let us take the example of a 12V battery with
50Ah; it requires 50 x 12 = 600Wh. Step 2: Then we need to ascertain the output per hour by the solar panel on an average.
How Long Does It Take A 200W Solar Panel to
The short answer is that a 200-watt solar panel that generates 1 amp of current takes between 5 to 8 hours to completely charge a 12-volt car battery. However, it is a bit more complicated than
How To Charge A 12V Battery With Goal Zero Solar Panels
The Guardian 12V charge controller has an 8mm input so you can connect Goal Zero solar panels to it. You'll then use the alligator clips coming out of the output port on the Guardian to connect it
to your battery. Remember to connect the positive (red) clip first to the positive terminal on the battery, then the negative (black) clip to the
What Size Solar Panel To Charge 200Ah Battery? (Incl. Calculator)
Battery capacity: 200ah. Battery volts: 12v. Battery type: Lithium. Depth of discharge: 100%. Charge controller: MPPT. Desired charge time: 6 peak sun hours. "Enter CALCULATE button to get the
result." Result: You need about 500 watt solar panel to charge a 12v 200ah lithium battery in 6 peak sun hours using an MPPT charge
What Size Solar Panel Do I Need To Charge A 12V
Again, we use the same calculation to get amps by dividing power in watts by voltage in volts. A 100 amp-hour battery will take five hours to fully charge at 12 volts and 20 amps. We advise utilizing
a 300
Charging Multiple Batteries With One Solar Panel
Suppose you have a 100-Watt solar panel connected in parallel to two 12-volt batteries (100Ah each). As a result, you will notice an output voltage of 12 volts with an increased capacity of 200Ah. A
What Size Solar Panel To Charge 120ah Battery?
Charge controller efficiency: PWM: 80%; MPPT 98%. Let's suppose you're using an MPPT charge controller. Solar panel power required after charge controller = 118 × 1.02 = 120 watts. 6. Now add
What Size Solar Panel to Charge 12V Battery: Factors and
Key takeaways: Understanding battery capacity and amp hours is crucial. Calculate solar panel size based on watt-hours and charging time. Choose an appropriately sized
What Size Solar Panel To Charge 12V Battery?
Solar panel sizing chart for 12-volt battery charging. Use the chart below to figure out the size of solar panel you would need for your battery: [Chart: 12-volt lead-acid capacity in amp-hours/watt-hours (assumes 25% discharge, 8-hour charge time, 4.0 peak sun hours) versus required solar panel size in watts.]
How Long To Charge 12v battery With 100 Watt Solar Panel?
An average size 12v 50Ah car battery discharged 20% will take 2 hours to charge using a 100 watt solar panel. A deep-cycle 12v 50Ah battery discharged 50% will take 4 hours to charge with a 100 watt
solar panel. Both examples assume a current of 5.75 amps and MPPT controller.
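The two charge-time examples above follow from a simple ratio, sketched here in Python (assuming, as the article does, a steady 5.75 A charging current into the battery):

```python
def charge_hours(capacity_ah, discharged_fraction, charge_amps):
    # amp-hours that must be replaced, divided by the charging current
    return capacity_ah * discharged_fraction / charge_amps
```

Rounded to the nearest hour, a 20% discharge takes about 2 hours and a 50% discharge about 4 hours, matching the figures above.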
|
{"url":"https://ozeenergy.eu/01_07_23_11587.html","timestamp":"2024-11-11T21:15:06Z","content_type":"text/html","content_length":"33963","record_id":"<urn:uuid:13dc8dcb-001b-4e32-9641-37958e241a59>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00130.warc.gz"}
|
Johann Guilleminot : Stochastic Modeling and Simulations of Random Fields in Computational Nonlinear Mechanics
Accounting for system-parameter and model uncertainties in computational models is a highly topical issue at the interface of computational mechanics, materials science and probability theory. In
addition to the construction of efficient (e.g. Galerkin-type) stochastic solvers, the construction, calibration and validation of probabilistic representations are now widely recognized as key
ingredients for performing accurate and robust simulations. This talk is specifically focused on the modeling and simulation of spatially-dependent properties in both linear and nonlinear frameworks.
Information-theoretic models for matrix-valued random fields are first introduced. These representations are typically used, in solid mechanics, to define tensor-valued coefficients in elliptic
stochastic partial differential operators. The main concepts and tools are illustrated, throughout this part, by considering the modeling of elasticity tensors fluctuating over nonpolyhedral
geometries, as well as the modeling and identification of random interfaces in polymer nanocomposites. The latter application relies, in particular, on a statistical inverse problem coupling
large-scale Molecular Dynamics simulations and a homogenization procedure. We then address the probabilistic modeling of strain energy functions in nonlinear elasticity. Here, constraints related to
the polyconvexity of the potential are notably taken into account in order to ensure the existence of a stochastic solution. The proposed framework is finally exemplified by considering the modeling
of various soft biological tissues, such as human brain and liver tissues.
|
{"url":"https://www4.math.duke.edu/media/watch_video.php?v=fd83b1b41a72eb0881844941c30d64fc","timestamp":"2024-11-07T15:55:29Z","content_type":"text/html","content_length":"50120","record_id":"<urn:uuid:8bb696bb-2586-476c-b18b-35ae6f51dc0f>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00619.warc.gz"}
|
FV Function
Get the future value of an investment
• rate - The interest rate per period.
• nper - The total number of payment periods.
• pmt - The payment made each period. Must be entered as a negative number.
• pv - [optional] The present value of future payments. If omitted, assumed to be zero. Must be entered as a negative number.
• type - [optional] When payments are due. 0 = end of period, 1 = beginning of period. Default is 0.
How to use
The future value (FV) function calculates the future value of an investment assuming periodic, constant payments with a constant interest rate.
1. Units for rate and nper must be consistent. For example, if you make monthly payments on a four-year loan at 12 percent annual interest, use 12%/12 (annual rate/12 = monthly interest rate) for
rate and 4*12 (48 payments total) for nper. If you make annual payments on the same loan, use 12% (annual interest) for rate and 4 (4 payments total) for nper.
2. If pmt is for cash out (i.e., deposits to savings, etc.), the payment value must be negative; for cash received (income, dividends), the payment value must be positive.
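The calculation FV performs can be reproduced outside Excel. Below is a hedged Python sketch of the standard future-value formula (not Excel's internal code); the sign conventions mirror the notes above:

```python
def fv(rate, nper, pmt, pv=0.0, when=0):
    """Future value with Excel-style sign conventions.

    rate: interest rate per period; nper: number of periods;
    pmt: payment per period (negative for cash out);
    pv: present value (negative for cash out);
    when: 0 = payments at end of period, 1 = beginning.
    """
    if rate == 0:
        # no compounding: just the sum of principal and payments
        return -(pv + pmt * nper)
    growth = (1 + rate) ** nper
    annuity = pmt * (1 + rate * when) * (growth - 1) / rate
    return -(pv * growth + annuity)
```

For example, `fv(0.06/12, 10, -200, -500, 1)` returns about 2581.40: ten monthly $200 deposits at the start of each month plus a $500 initial deposit, at 6% annual interest.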
|
{"url":"https://exceljet.net/functions/fv-function","timestamp":"2024-11-09T09:21:38Z","content_type":"text/html","content_length":"66893","record_id":"<urn:uuid:ea2bb07d-9175-4eb0-abf0-9c2a4b416bc5>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00391.warc.gz"}
|
The motor of an engine is rotating about its axis (Class 11 Physics, JEE Main)
Hint: In the given question, we are given the angular velocity of the motor engine and the time for which it works. We can also see that there will be an angular deceleration, which is given as
constant in the problem. Now, to find the number of revolutions we will use the relation between all these properties.
Formula used: We will use formula for angular displacement $\theta = {\omega _0}t + \dfrac{1}{2}\alpha {t^2}$ and angular acceleration $\omega = {\omega _0} + \alpha t$
Complete step by step answer:
In the above question, we are given that
Initial angular velocity is $100\;\text{rev/min}$
Now, converting angular velocity to rad/s:
$\text{rad/s} = \dfrac{\text{rev/min}}{60\;\text{s/min}} \times 2\pi\;\text{rad/rev}$
Now, substituting the value:
$\Rightarrow \dfrac{100}{60} \times 2\pi\;\text{rad/s}$
$\Rightarrow \dfrac{10}{3}\pi\;\text{rad/s}$
Hence, the initial angular velocity is $\dfrac{10}{3}\pi\;\text{rad/s}$.
The total time interval is $15\;\text{s}$.
Now, we will use formula for angular acceleration,
That is $\omega = {\omega _0} + \alpha t$, where $\omega $ is the final velocity, ${\omega _0}$ is the initial velocity, $\alpha $ is the angular acceleration and $t$ is the time interval.
Now, substituting the values given in the problem,
$\omega = \omega_0 + \alpha t$
$\Rightarrow 0 = \dfrac{10}{3}\pi + 15\alpha$
$\Rightarrow \alpha = -\dfrac{2}{9}\pi$
Now, the angular acceleration is $-\dfrac{2}{9}\pi\;\text{rad/s}^2$.
Now, using the formula for angular displacement $\theta = {\omega _0}t + \dfrac{1}{2}\alpha {t^2}$
$\Rightarrow \theta = \omega_0 t + \dfrac{1}{2}\alpha t^2$
$\Rightarrow \theta = \dfrac{10}{3}\pi\left(15\right) - \dfrac{1}{2}\cdot\dfrac{2}{9}\pi\left(15\right)^2$
$\Rightarrow \theta = \pi\left(15\right)\left(\dfrac{30-15}{9}\right)$
$\Rightarrow \theta = 25\pi\;\text{rad} = 12.5\;\text{rev}$
Hence, the answer for the above problem is $12.5$ revolutions.
Note: In the given question, we know that when the engine is switched off the final velocity will be zero, as the motor comes to rest. We also know that the angular acceleration will be negative,
as the body is decelerating. We then use these formulas to find the number of revolutions made by the motor before coming to rest.
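The derivation above can be checked numerically; this short Python sketch mirrors each step:

```python
import math

omega0 = 100 / 60 * 2 * math.pi   # 100 rev/min converted to rad/s (= 10*pi/3)
t = 15.0                          # time taken to come to rest, in seconds
alpha = (0 - omega0) / t          # constant angular deceleration (= -2*pi/9)
theta = omega0 * t + 0.5 * alpha * t**2   # angular displacement in radians
revolutions = theta / (2 * math.pi)       # 25*pi rad corresponds to 12.5 rev
```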
|
{"url":"https://www.vedantu.com/jee-main/the-motor-of-an-engine-is-rotating-about-its-physics-question-answer","timestamp":"2024-11-10T07:30:41Z","content_type":"text/html","content_length":"148622","record_id":"<urn:uuid:028d2b70-a8c9-4d02-bb1a-9687bb1e134b>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00054.warc.gz"}
|
MAT128A: Project Four: Piecewise Chebyshev Expansions
1. Project Description
In this project, you will write C functions to construct and evaluate piecewise Chebyshev expansions. Recall that the Nth order Chebyshev expansion of the function f on the interval [a, b] is the polynomial

p(x) = \sum_{n=0}^{N} a_n T_n\!\left( \frac{2}{b-a}\, x - \frac{b+a}{b-a} \right),

where the coefficients \{a_n\} are defined via the formula

a_n = \frac{2}{N+1} \sum_{j=0}^{N} f\!\left( \frac{b-a}{2}\, x_j + \frac{b+a}{2} \right) T_n(x_j),

with x_0, x_1, \ldots, x_N given by

x_j = \cos\!\left( \pi\, \frac{j + \tfrac{1}{2}}{N+1} \right).
The Nth order piecewise Chebyshev expansion of the function f : [a, b] → R given on the partition

a = a_0 < a_1 < a_2 < \ldots < a_m = b    (1)

consists of m polynomials of degree N: p_0, p_1, \ldots, p_{m-1}. The ith polynomial p_i is the Nth order Chebyshev expansion of f on the interval [a_i, a_{i+1}]. It is defined by the formula

p_i(x) = \sum_{n=0}^{N} c^i_n T_n\!\left( \frac{2}{a_{i+1}-a_i}\, x - \frac{a_{i+1}+a_i}{a_{i+1}-a_i} \right),

where the coefficients c^i_n are

c^i_n = \frac{2}{N+1} \sum_{j=0}^{N} f\!\left( \frac{a_{i+1}-a_i}{2}\, x_j + \frac{a_{i+1}+a_i}{2} \right) T_n(x_j).

I have provided you with a function called "chebadap" which, given a user-specified function f : [a, b] → R and an integer N, attempts to determine a partition of [a, b] of the form (1) such that for each i = 0, 1, \ldots, m - 1, p_i approximates f on the interval [a_i, a_{i+1}] to a specified precision.

Your task is to implement two functions, "chebadap_coefs" and "chebadap_eval". The first of these computes the coefficients c^i_n in a piecewise Chebyshev expansion of a user-specified function f. The second uses the coefficients in the piecewise Chebyshev expansion of f to approximate f at a specified point. More explicitly, it finds the interval [a_i, a_{i+1}) containing the point x and then evaluates p_i(x) in order to approximate f(x). The file "chebadap.c" contains the "chebadap" routine and gives the calling syntax for "chebadap_coefs" and "chebadap_eval". Your task is to implement the functions as described there. Your code should rely on the "chebexp.c" code you wrote for Project 3 (or, if you wish, you can use the version of "chebexp.c" which I wrote and posted to the course website).

2. Testing and grading

A public test code is given in the file adaptest1.c. Another test code, called adaptest2.c, will be used to test your function as well. Half of the project grade will come from the first test file, and the other half will come from the second. The commands

gcc -o adaptest1 chebexp.c chebadap.c adaptest1.c -lm
./adaptest1

can be used to compile and execute your program. There are five tests of your function in "adaptest1.c", and the program will tell you your score out of 5. We will also test your code by compiling against adaptest2.c, which we will not release until after the projects are due. You will get a 0 on your project if it does not compile and run. Please start work on your project early and come see either myself or our TA, Karry Wong, if you are having difficulties getting it to compile.

3. Submitting your project

You will submit your project using Canvas. You should submit only your chebadap.c file. You must submit your file by 11:59 PM on the due date. Late assignments will not be accepted.
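For readers who want a quick language-agnostic reference, here is a hedged Python sketch of the single-interval construction and evaluation (the authoritative interfaces are the C routines described above; note that conventions differ on the weight of the n = 0 coefficient, and this sketch halves it so that the expansion interpolates f at the Chebyshev nodes):

```python
import math

def cheb_coefs(f, a, b, N):
    """Coefficients a_n of the Nth order Chebyshev expansion of f on [a, b],
    sampled at the nodes x_j = cos(pi*(j + 1/2)/(N + 1))."""
    xs = [math.cos(math.pi * (j + 0.5) / (N + 1)) for j in range(N + 1)]
    fs = [f(0.5 * (b - a) * x + 0.5 * (b + a)) for x in xs]
    return [2.0 / (N + 1) * sum(fs[j] * math.cos(n * math.acos(xs[j]))
                                for j in range(N + 1))
            for n in range(N + 1)]

def cheb_eval(coefs, a, b, x):
    """Evaluate the expansion at x, with the customary 1/2 weight on a_0."""
    t = max(-1.0, min(1.0, (2.0 * x - (b + a)) / (b - a)))
    total = 0.5 * coefs[0]
    for n in range(1, len(coefs)):
        total += coefs[n] * math.cos(n * math.acos(t))
    return total
```

A piecewise version simply applies cheb_coefs on each subinterval [a_i, a_{i+1}] and dispatches cheb_eval to the subinterval containing x.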
|
{"url":"https://codingprolab.com/answer/mat128a-project-four-piecewise-chebyshev-expansions/","timestamp":"2024-11-14T01:09:50Z","content_type":"text/html","content_length":"109229","record_id":"<urn:uuid:6811c861-427e-47b7-8337-d8b657a0fdce>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00459.warc.gz"}
|
Click on button by class
I want to click on a button but it doesn't have an id or a name :
<div class="Buttons">
<button class="Button Button18 RedButton" data-text-pin-it="Pin It" data-text-pinning="Pinning…" type="button">Pin It</button>
Can you please help me with that ?
Maybe something like this?
$oButtons = _IETagNameGetCollection($oIE, "button") ; the element is a <button>, so use "button" rather than "INPUT"
For $oButton In $oButtons
    If $oButton.className = "Button Button18 RedButton" Then
        _IEAction($oButton, "click")
    EndIf
Next
my signature allows xpath object retrieval...all matching objects will be returned in an array
Thank you all, but it still doesn't work : /
I have iE 8 do I need IE 9 to use this code :
$oButtons = _IETagNameGetCollection($oIE, "button") ; the element is a <button>, so use "button" rather than "INPUT"
For $oButton In $oButtons
    If $oButton.className = "Button Button18 RedButton" Then
        _IEAction($oButton, "click")
    EndIf
Next
Because all the codes like this one doesn't work for me : s
Thank you again
You dont need to get other version. Maybe my code is wrong. What error you get? Or maybe you could send me link of webpage?
It's okay I solved the thing by using imgclick ^^ thx
|
{"url":"https://www.autoitscript.com/forum/topic/142035-click-on-button-by-class/","timestamp":"2024-11-07T20:45:03Z","content_type":"text/html","content_length":"133032","record_id":"<urn:uuid:317ebe75-cbc4-4994-acd1-1bae02ce43d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00215.warc.gz"}
|
Excel Formula for Distance Between Two Points
In this article, we will explore how to calculate the distance between two points on the Earth's surface using an Excel formula. The formula applies the spherical law of cosines (a close cousin of the haversine formula) and provides the result in kilometers. By understanding this formula, you will be able to accurately calculate the distance between any two points specified by latitude and longitude.
To calculate the distance between two points, we need to convert the latitude and longitude values from degrees to radians. This is done using the RADIANS function in Excel. The formula then uses
trigonometric functions such as SIN and COS to calculate the central angle between the two points on the sphere.
The central angle is then passed to the ACOS function, which calculates the arc cosine of the central angle. This gives us the angular distance in radians between the two points on the sphere.
Finally, the angular distance is multiplied by the average radius of the Earth (6371 kilometers) to convert it to kilometers.
Let's consider an example to understand how this formula works. Suppose we have the latitude and longitude values for New York City (latA = 40.7128, longA = -74.0060) and London (latB = 51.5074,
longB = -0.1278). By substituting these values into the formula, we can calculate the distance between the two cities, which is approximately 5570.271 kilometers.
In conclusion, the Excel formula provided allows you to accurately calculate the distance between two points on the Earth's surface using latitude and longitude values. This can be useful in various
applications such as mapping, navigation, and geolocation. By understanding the underlying principles of this formula, you can customize it to suit your specific requirements and perform distance
calculations for any pair of points on the Earth's surface.
Excel Formula
=ACOS(SIN(RADIANS(latA)) * SIN(RADIANS(latB)) + COS(RADIANS(latA)) * COS(RADIANS(latB)) * COS(RADIANS(longB) - RADIANS(longA))) * 6371
Formula Explanation
This formula calculates the distance between two points (specified by latitude and longitude) on the surface of the Earth using the spherical law of cosines. The result is in kilometers.
Step-by-step explanation
1. RADIANS(latA), RADIANS(latB), RADIANS(longA), and RADIANS(longB) convert the latitudes and longitudes from degrees to radians, as the trigonometric functions in Excel require radians.
2. SIN(RADIANS(latA)) * SIN(RADIANS(latB)) + COS(RADIANS(latA)) * COS(RADIANS(latB)) * COS(RADIANS(longB) - RADIANS(longA)) calculates the central angle between the two points on the sphere.
3. ACOS(...) calculates the arc cosine of the central angle, which gives the angular distance in radians between the two points on the sphere.
4. ... * 6371 converts the angular distance from radians to kilometers, using the average radius of the Earth (6371 kilometers).
For example, if we have the following data:
latA = 40.7128 (New York City latitude)
longA = -74.0060 (New York City longitude)
latB = 51.5074 (London latitude)
longB = -0.1278 (London longitude)
The formula =ACOS(SIN(RADIANS(40.7128)) * SIN(RADIANS(51.5074)) + COS(RADIANS(40.7128)) * COS(RADIANS(51.5074)) * COS(RADIANS(-0.1278) - RADIANS(-74.0060))) * 6371 would return the value 5570.271,
which is the distance in kilometers between New York City and London.
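For comparison, the same computation can be written outside Excel; this Python sketch mirrors the formula term by term:

```python
import math

def distance_km(lat_a, lon_a, lat_b, lon_b):
    """Great-circle distance via the spherical law of cosines,
    using an average Earth radius of 6371 km."""
    la, lb = math.radians(lat_a), math.radians(lat_b)
    dlon = math.radians(lon_b - lon_a)
    central = math.acos(math.sin(la) * math.sin(lb)
                        + math.cos(la) * math.cos(lb) * math.cos(dlon))
    return central * 6371
```

Calling distance_km(40.7128, -74.0060, 51.5074, -0.1278) returns roughly 5570 km, matching the spreadsheet result above.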
|
{"url":"https://codepal.ai/excel-formula-generator/query/TRenssIe/excel-formula-distance-between-two-points","timestamp":"2024-11-09T16:49:35Z","content_type":"text/html","content_length":"93005","record_id":"<urn:uuid:4aaf30fb-5c12-4f5b-b6aa-a00ee060ecd1>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00179.warc.gz"}
|
operator types: simple
simple operator
an operator that takes no parameters
either a literal operator defined with respect to some ket
or a function with one parameter
or an operator in the operator library
depending on the exact definition of the operator, it sometimes corresponds to sparse matrix multiplication
see also:
|
{"url":"http://semantic-db.org/docs/usage3/operator_type/simple.html","timestamp":"2024-11-12T17:17:47Z","content_type":"text/html","content_length":"1381","record_id":"<urn:uuid:91165e31-a1a8-4cb4-9596-0c9af9941da1>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00563.warc.gz"}
|
Calculating a mean or standard deviation from a frequency table
Calculating a mean or standard deviation from a frequency table
Calculating a mean or a standard deviation is not something done all that often, given that you can only calculate such statistics with interval- or ratio-level variables, and most such variables have too many values to put into a frequency table that would be informative beyond what the raw data would look like. But there are times when this is appropriate and helpful, especially, for example, when one's professor wants to fit a question on an exam that has a large enough n to have the power to reach significance, but not enough space on the exam to write out 50 or 100 raw numbers (and what a waste of the precious little time students have to write an exam!).
So, in light of the Liberal minority government win here in Canada last night, here's why and how one goes about doing this. Let's say you were interested in knowing how the Liberals were polling on average over the month of October 2019, so you head on over to the CBC Poll Tracker and pull out the data for every poll listed for this month. You discover that in October, the Liberals were polling between 28% and 37% across the 61 polls you find. In other words, some polls suggested that 28% of the sample was planning to vote Liberal, some polls said 37%, and the other polls had the Liberals falling somewhere in between.
One way to calculate the mean and the standard deviation for this would be to just use the raw numbers, which would look something like this:
Personally, typing 61 numbers into my calculator feels quite time consuming and ripe for lots of errors. So, one way we could organize these data to make it a bit clearer for the reader would be to
put them into a frequency table as such:
Okay, so now these numbers are looking a bit more manageable, and you could just interpret this table if you wanted, as shown in my other post about frequency tables. You might say something like:
Over the course of October, Liberals were polling with between 28% and 37% in 61 different polls. The modal polling number was 32% of respondents favoring the liberals in 15 polls, although 11
other polls found 31% of respondents favoring the liberals, suggesting a bimodal distribution.
But, let’s say this is the only information given to you and you REALLY want to know want to calculate a mean and a standard deviation for how the Liberals were polling in October, and you want to do
this without punching in 61 different numbers, here’s whatcha do:
To calculate a mean from a frequency table:
First, how do you calculate a mean with raw numbers? It’s pretty simple, you just add up all of the scores and then divide by the total number of scores you have. This is presented in a more formal
way with the following equation:
Let’s say that there were only two 28’s in the raw data. Calculating the mean would be simple, as so:
But, in our example, we have 61 values. This will include a lot more addition. We *could* do it like this:
But, I had to make the values so small you can barely see them. Instead of digging through a junk drawer for a magnifying glass, we can just multiply each value by its frequency, since that tiny
equation above is the same if we were to write it out like this:
In other words, since there were 2 polls that said 28% of folks support the Liberals, that's the same as saying 28*2, and for the 5 polls that said 29%, we can multiply 5*29 instead of adding 29
five times. It's the same thing!
To simply this even more, we can use the set-up of the frequency table to make our work even easier to see as so:
What we’re doing is basically saying is: okay, we’ve got two polls at 28, five polls at 29, 3 polls at 30, and so on. Once we get those values, all we have to do is add them up (this is the sigma-x
part) to get a total of 1972, then divide by the number of polls (61) to get a mean polling value of 32.3%.
To interpret this, we can now say:
Among the 61 polls conducted in October 2019 about the 2019 Federal Election in Canada, the Liberals polled at 32.3% on average. In other words, the central tendency of political polls suggested
that the liberals would receive around 32% of the popular vote.
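The whole multiply-then-sum procedure amounts to a weighted average, which is compact to express in code (a Python sketch; the sample table below is illustrative, not the full polling table):

```python
def freq_mean(freq):
    """Mean from a frequency table: sum of f*x divided by total n."""
    n = sum(freq.values())
    return sum(x * f for x, f in freq.items()) / n

# illustrative: two polls at 28% and five polls at 29%
sample = {28: 2, 29: 5}
```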
(Interestingly, as a side note, in the election, the Liberals actually won with 33.1% of the popular vote. To which, I say “take that!” to those who suggest that the 2016 U.S. election surprise
win by Trump despite Clinton’s clear lead in the polls, was due to problems in the data rather than problems in the election…. but again, I digress.)
Also, I should note, that our mean here is a rather crude mean and does not take into account things like size of polls or margins of error. If one were to do that, as do the folks over at
FiveThirtyEight, you’d get a slightly different mean of the means that would better predict the outcomes (although, not to beat a dead horse, this too makes me wonder why Nate Silver isn’t using his
platform to show that there’s a problem in the US electoral system instead of throwing himself under the bus… but whatever, I haven’t read his book, just a Netflix show about it.
To calculate a standard deviation from a frequency table
Now let’s say that we want to know, not just the central tendency but also the degree of dispersion (i.e. variability or variation among the scores). In other words, were all the polls saying
basically the same thing or were they bouncing all over the place? To do this, we need to calculate the standard deviation.
Doing this is very similar to calculating a mean from a frequency table, but the tricky part is knowing when to multiply the scores by the frequency and when to add everything up, because if you
don't do it right, you'll get a wonky answer. Let's start with how to calculate a standard deviation in general:
(NOTE! Don’t get mixed up with the lower case “sigma” and the upper case Σ “sigma”! They are pronounced the same way but mean different things.)
Notice that this formula is VERY similar to the formula for the mean, in that we are taking some values and dividing them by the total number of cases. The difference between the two is that the
mean tells us what the average score is, whereas the standard deviation tells us the average distance from each score to the mean. Notice that in our equation we are adding up the distances
from each x to x-bar.
The confusing part is that we have to also square each one. This is because if we didn't, when we added up all the distances from the mean, we'd get zero, because the mean is like the fulcrum around which
all the scores balance out. For example, if one were to average out the average basketball scores from 2016 for the following eight teams, we'd get an average of 108.8 points per game, with five teams
clustering pretty close below the mean and only three teams above the mean because the highest-scoring team pulls the mean up.
Thus, because some are below and some are above, and the sum of the differences will always add up to zero (or nearly zero with rounding, unless you made a calculation error), we need to square these
differences to remove the negative signs to calculate the average distances. The way I suggest doing this is pretty much the same regardless of whether the data are in a frequency table or are raw
data, until we get near the end of the steps. So, just like with the mean, let’s use our frequency table to help us keep our calculations in a clear order.
For now, I’m going to shift the frequencies to the right, just to keep them out of the way while we’re calculating our standard deviation, and create columns to help us calculate our squared
deviations from the mean as such (also note that I put borders in the table, this is just to make it easier to see. It’s liking changing into old clothes when painting, we don’t present our frequency
tables like this publicly, but we might use them like this while working):
I put that up before adding in numbers to make it clear what data you'll have been given (the scores and the frequencies of the scores) and what you need to calculate (the deviations from the mean,
the squared deviations, and the squared deviations multiplied by the frequency). Remember that we already calculated the mean as 32.3. So the steps we'll take are:
Step 1. Take each possible score and subtract 32.3 from it (e.g. 28-32.3… 29-32.3… 30-32.3… and so on).
Step 2. Square each of these values
Step 3. Multiply them by the frequency (think of this again like with the mean: we need to add up all the squared deviations from the mean, so multiplying the squared deviation from
the mean for "28" by 2 is like saying: (28-32.3)^2 + (28-32.3)^2)
Step 4. We then need to sum up all the products of the squared deviations times the frequencies
Step 5. To calculate the variance, we then simply take this sum and divide it by the total n (61), to get:
Step 6. To calculate the standard deviation, we just take the square root of the variance:
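Steps 1 through 6 can likewise be collapsed into a few lines of code (a Python sketch using the population standard deviation, i.e. dividing by n as above):

```python
def freq_sd(freq):
    """Population standard deviation from a frequency table."""
    n = sum(freq.values())
    mean = sum(x * f for x, f in freq.items()) / n
    variance = sum(f * (x - mean) ** 2 for x, f in freq.items()) / n
    return variance ** 0.5
```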
To interpret the standard deviation, we typically discuss it in reference to the mean, so I’ll do so here:
In October 2019, the average Canadian poll indicated that the Liberals would garner 32% of the popular vote in the 2019 federal election, with a standard deviation of 2.13%. In other words, this
suggests that the bulk of the polls predicted that the Liberals would take in somewhere around 30% to 34% of the popular vote. As a side note, for funsies, I just calculated the means and the
standard deviations for all of the parties included in the polls and found the following results:
There are a few things I find interesting about these statistics. For one, just how accurate the means of the polls were to the actual popular vote (this is what the Central Limit Theorem teaches us
should happen, by the way.). But also, that the standard deviations for the Liberals and NDP (2.13 and 2.35, respectively) were quite a bit bigger than the standard deviations for the other parties
(e.g. 1.32 for the Conservatives, 1.53 for the Greens, etc.). What this suggests to me, based on my experience and on paying attention to the news this past month, is that there was likely a lot more bouncing around
between whether people wanted to vote Liberal or NDP. I know that I myself, and many people I know who lean towards one or the other party, might engage in strategic voting for the other party to ensure
that the Conservatives don't get into office. One would need to dig into the data a bit deeper to fully make this argument, but I would suggest that the data support the argument in the media that
Liberals and NDPers were having a harder time making up their minds about who to vote for than were those supporting other parties.
A final interesting analysis of these numbers: you might notice that, in reality, more voters picked the Conservatives (34% of the popular vote) than the Liberals (33%), who nonetheless won the
election. The explanation is likely that there was high voter turnout in areas that voted Conservative, but fewer seats went to Conservatives. This is how the Canadian system works (for those rusty
on their Canadian civics): we vote for a local Member of Parliament and the one in our riding who gets the most votes wins—then the “seats” filled by any particular party are added up and whoever has
the most seats gets to have their leader as Prime Minister. We only get to vote for the Prime Minister if that politician happens to represent our riding. I find this offers much food for thought for
those interested in electoral reform, particularly if they tend to lean left. This is not to suggest electoral reform is bad (nor good), just that such changes don’t ensure that any particular
ideology would end up with a lock on winning.
|
{"url":"https://doingsocialresearch.com/calculating-a-mean-or-standard-deviation-from-a-frequency-table/","timestamp":"2024-11-06T18:29:50Z","content_type":"text/html","content_length":"185530","record_id":"<urn:uuid:5844fe35-0a1a-4ce6-b288-b13f2ed147cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00598.warc.gz"}
|
Velocity and Time period of electron in Bohr orbit
Velocity of the electron in the orbit
We know that according to Bohr's atomic model, electrons revolve in specified orbits. These are called stationary orbits. While the electron is in such an orbit, it neither loses nor gains
energy. The electron in this orbit has a specific velocity, and we can derive an equation for it.
According to the second postulate, the angular momentum of the electron is an integral multiple of a constant. The integer in this multiple is called the principal quantum number. Taking this
concept into consideration, we can write the equation for the velocity in those terms. It can be shown mathematically that the velocity of the electron in any orbit is independent of the mass of
the electron: it is directly proportional to the atomic number and inversely proportional to the principal quantum number, v_n = 2πkZe²/(nh) ≈ 2.18 × 10⁶ × (Z/n) m/s. We can calculate the
velocity of the electron in the first orbit by setting both the atomic number and the principal quantum number equal to 1.
Time period of the electron in the orbit
As the electron revolves in a circular path, it takes a specific time to complete one rotation. This specific time is called the time period. We can derive the equation for the time period from
the relation between the linear velocity and the angular velocity of the electron: T = 2πr/v. Since the orbit radius is proportional to n²/Z and the velocity to Z/n, the time period of the
electron is directly proportional to the cube of the principal quantum number and inversely proportional to the square of the atomic number.
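These proportionalities can be checked numerically. The constants and formulas below are the standard Bohr-model expressions, supplied here because the original derivation figures are not reproduced:

```python
import math

k = 8.9875517923e9     # Coulomb constant, N*m^2/C^2
e = 1.602176634e-19    # elementary charge, C
h = 6.62607015e-34     # Planck constant, J*s
m = 9.1093837015e-31   # electron mass, kg

def velocity(n, Z=1):
    # v_n = 2*pi*k*Z*e^2 / (n*h): note no electron mass appears
    return 2 * math.pi * k * Z * e**2 / (n * h)

def period(n, Z=1):
    # T = 2*pi*r/v with r_n = n^2 h^2 / (4 pi^2 m k Z e^2),
    # hence T is proportional to n^3 / Z^2
    r = n**2 * h**2 / (4 * math.pi**2 * m * k * Z * e**2)
    return 2 * math.pi * r / velocity(n, Z)
```

velocity(1) comes out to about 2.19 × 10⁶ m/s for hydrogen's first orbit, and period(2)/period(1) equals 8, confirming the n³ dependence.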
|
{"url":"https://www.venkatsacademy.com/2015/02/velocity-and-time-period-of-electron-in-orbit.html","timestamp":"2024-11-14T21:37:16Z","content_type":"application/xhtml+xml","content_length":"78312","record_id":"<urn:uuid:7dccdb88-9637-4cc7-963d-7d482b32736e>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00481.warc.gz"}
|
Enumerative Geometry, Physics and Representation Theory
Organising Committee: Andrei Negut (Massachusetts Institute of Technology), Francesco Sala (Università di Pisa) and Olivier Schiffmann (CNRS and Université de Paris-Sud)
Scientific Committee: Mina Aganagic (University of California at Berkeley), Hiraku Nakajima (Kavli IPMU), Nikita Nekrasov (Simons Center for Geometry and Physics), and Andrei Okounkov (Columbia University)
The 2021 IHES Summer School on "Enumerative Geometry, Physics and Representation Theory" will be held in a blended format at the Institut des Hautes Etudes Scientifiques (IHES) from 5 to 16 July 2021,
with a reduced number of selected participants on site and over Zoom for all those who are interested in the subject (cf. the registration form link below).
This school is open to everybody but intended primarily for young participants, including Ph.D. students and postdoctoral fellows.
The School will be managed via a Slack workspace. If you have registered but have not yet received the registration link to the Slack workspace, please contact Francesco Sala.
The main theme of this Summer School is enumerative geometry, with particular emphasis on connections with mathematical physics and representation theory. As its core, enumerative geometry is about
counting geometric objects. The subject has a history of more than 2,000 years and has enjoyed many wonderful breakthroughs in the golden years of classical algebraic geometry, but we will be
interested in more recent developments.
This Summer School will focus on the following main subjects:
INVITED LECTURERS: Eugene Gorsky (University of California at Davis) Joel Kamnitzer (University of Toronto) Davesh Maulik (Massachusetts Institute of Technology) Rahul Pandharipande (ETH Zürich)
Markus Reineke (Ruhr-Universität Bochum) Richard Thomas (Imperial College London)
ADVANCED TALKS: Pierrick Bousseau (CNRS and Université Paris-Saclay) Alexander Braverman (University of Toronto and Perimeter Institute for Theoretical Physics) Tudor Dimofte (University of
California at Davis and University of Edinburgh) Lothar Gottsche (ICTP) Michael Groechenig (University of Toronto) Maxim Kontsevich (IHES) Georg Oberdieck (Mathematisches Institut der Universität
Bonn) Richard Rimanyi (University of North Carolina at Chapel Hill) Peng Shan (Tsinghua University) Dimitri Zvonkine (Laboratoire Mathématiques de Versailles)
EXERCISE SESSIONS / Q&A SESSIONS: For Gorsky’s course: Oscar Kivinen (University of Toronto), Jose Simental Rodriguez (Max-Planck Institute for Mathematics).
For Kamnitzer’s course: Yehao Zhou (Perimeter Institute for Theoretical Physics), Michael McBreen (Chinese University of Hong Kong).
For Thomas’ course: Woonam Lim (ETH Zürich), Michail Savvas (University of California, San Diego), Shubham Sinha (University of California, San Diego).
This is an IHES Summer School organized with the support of the Société Générale and in partnership with the Fondation Mathématique Jacques Hadamard, the National Science Foundation, the Clay
Mathematics Institute and the Foundation Compositio Mathematica.
User Guide
cladCount determines the maximum number of individuals represented in raw counts of cladoceran subfossils.
In paleolimnological cladoceran analyses, all remains (carapaces, headshields, ephippia, postabdominal claws, etc.) should be tabulated separately, with only the most frequently encountered remain
for each taxon used to estimate its abundance (Korhola and Rautio 2001). cladCount is a function to quickly calculate the maximum number of individuals represented in appropriately formatted raw
counts of subfossils.
Input data must contain the taxon name and subfossil name in the first two columns (respectively), with each subsequent column a sample/interval. If a taxon can be represented by more than one
subfossil type, the ‘Taxon’ cell should be left blank from the second row onwards (these blank cells are how the function identifies the number of subfossils present for each taxon).
The required format of cladCount input data is illustrated by cladCountInput:
Bosmina sp. Antennule 495 623 178 221
Carapace 531 511 143 188
D. longispina PA Claw 11 23 10 15
D. pulex PA Claw NA NA NA NA
Daphniid Tail Stem NA NA NA NA
Ephippia NA 1 NA NA
Acroperus harpae Headshield NA NA NA NA
Carapace NA NA NA NA
Postabdomen NA NA NA NA
Alona affinis Headshield NA 1 NA NA
The largest number of individuals possibly represented by all the subfossils for a particular taxon is identified by attributing the following numbers of each subfossil to an individual (rounding up
in the case of ‘half-individuals’; Korhola and Rautio, 2001):
• Headshield (1)
• Carapace (2)
• PA Claw (2)
• Postabdomen (1)
• Mandible (2)
• Caudal Furca (2)
• Exopodite Segment (2)
• Basal Exp Segment (2)
• Exp Segment 2 (2)
• Exp Segment 3 (2)
• Antennule (2)
• Tail Stem (1)
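The counting rule above can be sketched as follows (illustrative Python, since cladCount itself is an R function; only a few subfossil types are included, with divisors taken from the list above, and half-individuals rounded up):

```python
import math

# Per-individual subfossil numbers from the list above (subset for illustration)
PER_INDIVIDUAL = {"Headshield": 1, "Carapace": 2, "PA Claw": 2,
                  "Postabdomen": 1, "Antennule": 2, "Tail Stem": 1}

def max_individuals(subfossil_counts):
    """Largest number of individuals implied by any single subfossil type."""
    return max(math.ceil(count / PER_INDIVIDUAL[name])
               for name, count in subfossil_counts.items())

# Bosmina sp., first sample of the example table: 495 antennules, 531 carapaces
bosmina = {"Antennule": 495, "Carapace": 531}
```

Here 531 carapaces imply ceil(531/2) = 266 individuals, which exceeds the ceil(495/2) = 248 implied by the antennules, so 266 is used as the abundance estimate.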
The output of cladCount is controlled by three arguments (percCutoff, sampleCutoff, and outputType):
• percCutoff (defaults to 2): minimum relative abundance (i.e. %) required for a taxon to be included in the reduced subset (‘gt’ - abbreviation of ‘greater than’)
• sampleCutoff (defaults to 2): the minimum number of samples a taxon must be present in with at least a relative abundance of percCutoff for inclusion in the reduced subset (‘gt’)
• outputType (defaults to ‘indiv’): the format of the output, either individuals (‘indiv’), relative abundance (‘perc’), or the relative abundances of only those taxa that meet the sampleCutoff and
percCutoff criteria (‘gt’)
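The 'gt' subsetting rule can be sketched like this (illustrative Python with hypothetical counts; cladCount itself is R):

```python
def reduced_subset(counts, percCutoff=2, sampleCutoff=2):
    """Keep taxa reaching >= percCutoff % relative abundance
    in at least sampleCutoff samples."""
    n_samples = len(next(iter(counts.values())))
    # column totals, used to convert raw counts to relative abundance
    totals = [sum(v[i] for v in counts.values()) for i in range(n_samples)]
    kept = []
    for taxon, v in counts.items():
        hits = sum(1 for i in range(n_samples)
                   if totals[i] > 0 and 100 * v[i] / totals[i] >= percCutoff)
        if hits >= sampleCutoff:
            kept.append(taxon)
    return kept

# Hypothetical individuals per taxon across three samples
counts = {"A": [50, 5, 0], "B": [1, 1, 1], "C": [49, 94, 99]}
```

With percCutoff=4 and sampleCutoff=2, taxon B never reaches 4% and is dropped, while A and C are retained.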
For example, to return the number of individuals for all taxa present in the dataset:
cladCount(cladCountInput)
Similarly, to return the relative abundances of only those taxa with greater than 4% abundance in at least 2 samples:
cladCount(cladCountInput, percCutoff=4, sampleCutoff=2, outputType='gt')
Korhola A, Rautio M (2001) 2. Cladocera and other branchiopod crustaceans. In: Smol JP, Birks HJB, Last WM (eds.) Tracking Environmental Change Using Lake Sediments. Volume 4: Zoological Indicators.
Kluwer Academic Publishers, Dordrecht, The Netherlands, pp 4-41
dipteranVWHO harmonizes chironomid/chaoborid count data with the taxonomy used in the Quinlan and Smol (2001, 2010) calibration set, returns the chironomid-inferred volume-weighted hypolimnetic oxygen concentrations (VWHO) for the sample data using the inference model described in Quinlan and Smol (2001, 2010), and performs analog matching between the sample data and the calibration set.
The required format of the input data is provided in an example data set dipteranVWHOInput
The output of dipteranVWHO is modified by three arguments (evaluate, percentileCut, and lowCount):
• evaluate (defaults to FALSE): logical to indicate whether to display the model information
• percentileCut (defaults to 5): cutoff value for flagging samples in the plots as having poor modern analogs
• lowCount (defaults to 50): cutoff value to flag samples that have low counts (i.e. less than 50 individuals)
The function returns a list of four lists:
• formattedData contains the sample info, the raw chironomid counts, the raw chaoborid counts, the chironomid counts aggregated by taxon code to match the taxonomy used in vwhoQuinlan 2010 dataset,
and vector of the 44 taxon codes used in the model.
• vwhoModel the model object constructed by the ‘WA’ function provided by ‘rioja’
• vwhoResults contains the model output, the relative abundance of the dipterans used in the model, and the relative abundances of all dipterans in the dataset
• analogResults contains the output from the analog analysis, the close modern analogs, and the minimum dissimilarity values
In addition, three plots are produced:
• minimum dissimilarity between each sample interval and the calibration set data
• sediment depth vs dipteran-inferred VWHO
• a composite plot combining data from both graphs
For example, to perform the analyses on the example dataset using a modern analog cutoff of 10, and only flagging counts with less than 40 individuals as low:
dipteranVWHO(dipteranVWHOInput, evaluate=TRUE, percentileCut=10, lowCount=40)
Quinlan R, Smol JP (2001) Setting minimum head capsule abundance and taxa deletion criteria in chironomid-based inference models. Journal of Paleolimnology 26: 327-342
Quinlan R, Smol JP (2001) Chironomid-based inference models for estimating end-of-summer hypolimnetic oxygen from south-central Ontario shield lakes. Freshwater Biology 46: 1529-1551
Quinlan R, Smol JP (2010) Use of Chaoborus subfossil mandibles in models for inferring past hypolimnetic oxygen. Journal of Paleolimnology 44: 43-50
interpDates interpolates dates for any undated intervals in a sediment core from the midpoint and age of the dated intervals, as well as the sectioning resolution.
Input data must contain two columns, and the argument intervalWidth is used to specify the sectioning resolution:
• Column 1: midpoint depth of the dated interval
• Column 2: date of the interval
The required format of interpDates input data:
## Midpt Age
## [1,] 0.25 2017
## [2,] 4.25 2000
## [3,] 8.25 1950
## [4,] 18.25 1850
The output of interpDates is modified by a single argument (intervalWidth):
• intervalWidth (defaults to 0.5): a single numeric value is used to indicate a constant sectioning resolution (e.g. the default value assumes the entire core was sectioned at 0.5cm intervals). For a
variable sectioning resolution, change intervalWidth to a vector providing the width of each interval in the core
Three different interpolation methods are used to determine the dates:
• ‘connectTheDotsDates’ are calculated from straight lines between sequential date pairs
• ‘linearDates’ are calculated by fitting a straight line through all dated intervals
• ‘polynomialDates’ are calculated by fitting a 2nd order polynomial line through all dated intervals
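The connect-the-dots variant amounts to piecewise-linear interpolation between sequential date pairs; a minimal sketch in Python (interpDates itself is R), using the dated intervals from the table above:

```python
def connect_the_dots(dated_midpts, dated_ages, all_midpts):
    """Interpolate ages along straight lines between sequential dated intervals."""
    ages = []
    for x in all_midpts:
        # pick the bracketing date pair (extrapolating from the end segments)
        if x <= dated_midpts[0]:
            i = 0
        elif x >= dated_midpts[-1]:
            i = len(dated_midpts) - 2
        else:
            i = max(j for j in range(len(dated_midpts) - 1) if dated_midpts[j] <= x)
        x0, x1 = dated_midpts[i], dated_midpts[i + 1]
        y0, y1 = dated_ages[i], dated_ages[i + 1]
        ages.append(y0 + (y1 - y0) * (x - x0) / (x1 - x0))
    return ages

# Constant 0.5 cm sectioning: interval midpoints 0.25, 0.75, 1.25, ...
midpts = [0.25 + 0.5 * i for i in range(10)]
dates = connect_the_dots([0.25, 4.25, 8.25, 18.25], [2017, 2000, 1950, 1850], midpts)
```

For instance, the interval at 2.25 cm lies halfway between the 0.25 cm (2017) and 4.25 cm (2000) dates, so it is assigned 2008.5.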
For example, to return the interpolated dates from all 3 methods (as well as a plot comparing the two fitted lines) for a core with four dated intervals and a constant sectioning resolution of 0.5cm:
interpDates.input <- cbind(c(0.25, 4.25, 8.25, 18.25), c(2017, 2000, 1950, 1850))
interpDates(interpDates.input)
Similarly, to return interpolated dates for a core with four dated intervals sectioned at a 0.5cm resolution for the first 10 intervals, and then a 1.0cm resolution for the next 10 intervals:
interpDates.input <- cbind(c(0.25, 4.25, 8.5, 13.5), c(2017, 2000, 1950, 1850))
intervalWidth <- c(rep(0.5, 10), rep(1.0, 10))
interpDates(interpDates.input, intervalWidth)
For more involved approaches to the estimation of age-depth relationships consult Blaauw and Heegaard (2012).
Blaauw M, Heegaard E (2012) 12. Estimation of age-depth relationships. In: Birks HJB, Lotter AF, Juggins S, Smol JP (eds.) Tracking Environmental Change Using Lake Sediments. Volume 5: Data Handling
and Numerical Techniques. Springer, Netherlands, pp 379-413
quickGAM fits a GAM along with the related analyses described in Simpson 2018. This function applies the code provided in the Simpson 2018 Supplementary Information.
Input data must be composed of two vectors:
• x: the independent variable
• y: the dependent variable
The output of ‘quickGAM’ is modified by three optional arguments, that can be used to ensure the axis labels and y-axis range remain consistent across all the output plots.
• xLabel (defaults to NULL)
• yLabel (defaults to NULL)
• yRange (defaults to NULL)
quickGAM returns a series of six plots:
• a simple scatterplot of the input data
• estimated CAR1 process from the GAM fitted to the input data (i.e. an autocorrelation check)
• GAM-based trends fitted to the input data
• estimated trends (with 20 random draws of the GAM fit to the input data)
• 95% simultaneous confidence intervals (light grey) and across-the-function confidence intervals (dark grey) on the estimated trends
• estimated first derivatives (black lines) and 95% simultaneous confidence intervals of the GAM trends fitted to the data; where the simultaneous interval does not include 0, the model detects significant temporal change in the response
For example, to fit a GAM between sediment depth (x) and the inferred-chlorophyll a values (y) provided in the example data set vrsChlaInput:
Read in the example data set:
The optional arguments allow for consistent output plots.
indepVarLabel <- "Core Depth (cm)"
depenVarLabel <- expression("VRS-Inferred Chlorophyll "~italic("a")~" (mg"%.%"g"^"-1"*textstyle(")"))
depenVarRange <- c(0, 0.0825)
Note, that to use the default values for the plots, using ‘drop=FALSE’ when reading in data can preserve column names for use in the plots.
quickGAM(plotData[,1, drop=FALSE],plotData[,2, drop=FALSE], xLabel = indepVarLabel, yLabel = depenVarLabel, yRange = depenVarRange)
vrsChla infers chlorophyll a concentrations of sediments from spectral measurements of absorbance at wavelengths between 650-700 nm, following the approach described in Wolfe et al. (2006) and
Michelutti et al. (2010), and reviewed in Michelutti and Smol (2016).
The technique uses a simple linear predictive model to infer sedimentary chlorophyll a concentrations (along with its primary degradation products, pheophytin a and pheophorbide a) from the
absorbance peak centered on 675nm following the equation:
\[[\mbox{chlorophyll }\textit{a } + \mbox{ derivatives}] = 0.0919 · \mbox{peak area}_{\textit{ 650-700 nm}} + 0.0011\]
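The peak area is the area between the spectrum and a straight baseline drawn from the 650 nm to the 700 nm absorbance. A minimal sketch of that calculation on a synthetic spectrum (Python for illustration; vrsChla itself is an R function, and trapezoidal integration is assumed):

```python
def vrs_chla(wavelengths, absorbance):
    """Trapezoidal area above the linear 650-700 nm baseline,
    fed into the VRS inference equation."""
    w0, w1 = wavelengths[0], wavelengths[-1]
    a0, a1 = absorbance[0], absorbance[-1]
    baseline = [a0 + (a1 - a0) * (w - w0) / (w1 - w0) for w in wavelengths]
    height = [a - b for a, b in zip(absorbance, baseline)]
    area = sum((height[i] + height[i + 1]) / 2 * (wavelengths[i + 1] - wavelengths[i])
               for i in range(len(wavelengths) - 1))
    return 0.0919 * area + 0.0011   # chlorophyll a + derivatives (mg/g)

# Synthetic spectrum: flat 0.1 baseline plus a triangular peak of height 0.1 at 674 nm
wl = list(range(650, 702, 2))
ab = [0.1 + (0.1 * (w - 650) / 24 if w <= 674 else 0.1 * (700 - w) / 26) for w in wl]
```

For this synthetic triangle the peak area is 0.5 × 50 × 0.1 = 2.5, giving 0.0919 × 2.5 + 0.0011 as the inferred concentration.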
Input data must contain 27 columns:
• Column 1: midpoint depth of the sediment interval
• Columns 2-27: values from the spectrophotometer for wavelengths 650-700 nm, measured every 2 nm
The required format of vrsChla input data is illustrated by vrsChlaInput:
## V1 V2 V3 V4 V5 V6
## 1 0.125 -0.01127052 -0.011009693 -0.010835648 -0.010232449 -0.009695530
## 2 0.625 0.04971433 0.049901485 0.050267935 0.050736427 0.051381111
## 3 1.125 0.02501535 0.025347233 0.025770903 0.026331663 0.026941776
## 4 1.625 0.00387764 0.004512548 0.005248785 0.006270409 0.007355452
## 5 2.125 0.12109232 0.121597052 0.122285128 0.123096704 0.123881340
## 6 2.625 0.08708549 0.087878942 0.088815451 0.089810133 0.090915442
## 7 2.875 0.09193444 0.092756271 0.093790054 0.094962120 0.096058369
## 8 3.125 0.12888002 0.129751921 0.130694628 0.131823540 0.132953167
## 9 3.375 0.14781332 0.148777962 0.150153875 0.151404858 0.152652025
## 10 3.625 0.14603949 0.146726370 0.147392273 0.148148060 0.148859501
NOTE: When using the Model 6500 series Rapid Content Analyzer at PEARL, the necessary values are contained in the ‘spectra’ tab of the Excel file output (although they must be transposed). Ensure cells are formatted to 15 decimal places to avoid small rounding errors.
For example, to determine the chlorophyll a concentrations of vrsChlaInput and plot sediment depth vs chlorophyll a (red line denotes the estimated 0.01 mg·g^-1 lower detection limit of the method):
vrsChla(vrsChlaInput)
Capital Budgeting & Investment problems
Question 1
Apex Ltd. is an Australian operated company with mainly American resident shareholders. The company is currently in the process of comparing two mutually exclusive machines for use in a new project
with Machine A costing $30,000, having a useful life of five years and machine B costing $45,000 and having a useful life of 10 years.
Cash inflows from sales are expected to be $22,000 p.a. from each machine, while total cash-based operating costs are expected to be $10,000 and $8,000 respectively for each machine. All revenues and
costs are assumed to occur at the end of the respective years for simplicity of assessment. Machine A is expected to have a salvage value of $4,000 at the end of 5 years, while at the end of 10 years
machine B will be worthless. Depreciation for accounting and tax purposes is calculated on a straight-line basis on the original cost of each machine with no consideration in depreciation
calculations for any expected salvage value.
The company has an after-tax required rate of return of 14% and pays income tax at the rate of 40% in the year following the year of income (that is, taxes levied on year 1 income are paid at the end of year 2).
Required -
a) Provide some reasons as to why the alternative machines are said to be mutually exclusive for the company.
b) Record the relevant cash-flows for each machine on a time (cash-flow) diagram.
c) Advise the company which machine (if any), should be purchased and justify all the processes you have used in order to reach your decision.
d) i) If Apex Ltd., prior to finalising the decision as to which machine (if any), should be purchased, was taken over by a group of Australian resident shareholders, resulting in the company
becoming fully Australian owned, briefly discuss what difference this would make (if any) to your calculations in part c) of this question.
ii) Perform any relevant calculations relating to this part of the question and advise the company whether your decision from part c) of this question will now change.
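Since Question 1 compares machines with unequal lives, the usual tool is the equivalent annual annuity (EAA). The helper below is a generic sketch, not a worked answer to the question (it ignores the one-year tax lag and the salvage treatment specific to the question):

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def eaa(rate, npv_value, life):
    """Equivalent annual annuity: the level yearly payment with the
    same NPV over `life` years, enabling unequal-life comparisons."""
    annuity_factor = (1 - (1 + rate) ** -life) / rate
    return npv_value / annuity_factor

# Illustrative only: a project costing 1000 that returns 500 then 700
example_npv = npv(0.10, [-1000, 500, 700])
```

Each machine's after-tax NPV would be converted to an EAA over its own life, and the machine with the higher EAA preferred.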
Question 2
You are considering two mutually exclusive projects. The expected values for each project`s end-of-year cash flows are:
Year Project A Project B
0 -$1,000,000 -$1,000,000
1 500,000 500,000
2 700,000 600,000
3 600,000 700,000
4 500,000 800,000
You have decided to evaluate these projects using the certainty equivalent method. The certainty equivalent coefficients for each project`s cash flows are given below:
Year Project A Project B
0 1.00 1.00
1 0.95 0.90
2 0.90 0.70
3 0.80 0.60
4 0.70 0.50
Required :
a) Given that the risk-free rate of return is 5%p.a., what is the NPV of each project?
b) Briefly discuss and justify which project, if any, should be preferred.
c) In practice, what factors are likely to influence the selection of the certainty equivalent coefficients for each project`s expected cash flows?
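Under the certainty equivalent method of Question 2, each expected cash flow is scaled by its coefficient and discounted at the risk-free rate. A sketch using Project A's figures from the tables above:

```python
def ce_npv(rf, cash_flows, coefficients):
    """NPV of certainty-equivalent cash flows discounted at the risk-free rate."""
    return sum(a * cf / (1 + rf) ** t
               for t, (cf, a) in enumerate(zip(cash_flows, coefficients)))

# Project A: expected cash flows and certainty equivalent coefficients
flows_a = [-1_000_000, 500_000, 700_000, 600_000, 500_000]
coef_a = [1.00, 0.95, 0.90, 0.80, 0.70]
npv_a = ce_npv(0.05, flows_a, coef_a)
```

The same function applied to Project B's flows and coefficients gives the figure needed for the comparison in part b).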
Question 3
Mr. Huey Lewis met with his accountant recently and the accountant strongly advised Huey to invest in Australis Exploration Solutions, an Australian high technology listed public company specialising
in filtering out dust particles in mine shafts and thus reducing worker exposure to bacteria etc.
The advice was based on the accountant obtaining a preview of the company`s soon to be released financial statements which had reported a significant increase in its accounting revenues and profit
margins in the most recent reporting period. The accountant, who was a recent MBA graduate (from an online program offered by a recently established Banga University), also advised your neighbour
that, according to the company`s most recent balance sheet, it held assets valued 10 times greater than its liabilities.
Included in the company`s assets was the recently revalued amount (from $100,000 to $1 million) of some vacant land attached to the building that the company currently operates its business from. The
company directors had established the revalued amount based on average sales in the local area over the last 5 years.
The accountant fortuitously had a client who was seeking to sell their current holding in Australis Communications.com and would accept a price of only $3 per share from Mr. Lewis, given that the
client had an outstanding tax bill that needed to be paid quickly. Although Mr. Lewis has no reason to doubt the word of his accountant, he has sought your advice prior to undertaking the investment
as he is aware of your current study in a finance course at University.
Required :
Based on the information given, what would be your advice to your neighbour about undertaking the investment described above? You are welcome to make any assumptions reasonably necessary in order to
provide an informed response.
Note: Your discussion would be expected to be based on your critical interpretation and analysis of the specific information provided above together with any reasonable assumptions that allow for the
development of an informed response.
Question Number 4
The Dayana Company is planning on issuing a debenture that pays no interest but can be converted into a face value of $1,000 at maturity, 8 years from their issue date. To price these debentures
competitively with other bonds of equal risk, it is determined that they should yield 9%, compounded annually.
Required :
a) At what price should the Dayana Company currently sell these debentures?
b) If you purchased these debentures on the issue date from the Dayana Company in part a) of this question, what would be your sale price after holding the bonds for 3 years given the following
information available at the time of sale (3 years after the date of issue) :
- Government bonds having a par value of $1,000 with a 5 year term are able to be issued at a yield of 5% with annual coupon payments in arrears of $60.
- Dayana Company at this time was in the process of issuing a 5 year
debenture (using the same form of security as previously to debenture
holders), having a par value of $500, with quarterly coupon payments to
debenture holders in advance of $14 per payment. The issue price for
these debentures was to be $485.
Show all calculations (including a cash-flow diagram) and justify any assumptions included in the relevant calculations made.
c) Which, if any, of the information included in part b) of this question was not used in calculating the sale price required by part b) ? Justify your response.
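Part a) of Question 4 is a single present-value calculation for a zero-coupon instrument; a minimal sketch:

```python
def zero_coupon_price(face, annual_yield, years):
    """Present value of a single payment of `face` due in `years` years,
    discounted at `annual_yield` compounded annually."""
    return face / (1 + annual_yield) ** years

price = zero_coupon_price(1000, 0.09, 8)   # roughly $501.87
```

Part b) would reprice the remaining 5-year zero at whatever yield the market information at that date implies.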
Question Number 5
a) If the nominal rate of interest is 14.2% p.a. and the anticipated rate of inflation is 5.5% p.a., what is the real rate of interest to the nearest 0.1%?
b) Explain to an inexperienced investor, your interpretation of a real rate of interest.
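Part a) uses the Fisher relation; note that the exact form divides the growth factors rather than simply subtracting the rates:

```python
def real_rate(nominal, inflation):
    """Exact Fisher relation: (1 + nominal) / (1 + inflation) - 1.
    The naive nominal - inflation difference overstates the real rate."""
    return (1 + nominal) / (1 + inflation) - 1

r = real_rate(0.142, 0.055)   # about 0.0825, i.e. 8.2% to the nearest 0.1%
```

The naive subtraction would give 8.7%, so the compounding adjustment matters at these rates.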
Math Tutor Network's Fundamental Skills Development program is designed for students who would like to solidify their math foundations. Often, students struggle in Math not because they cannot understand the theory of a topic, but because of a gap in their math foundations. This barrier discourages students from trying further, as they lose confidence in Math.
Who Should Enroll?
Students should enroll in this program if they have a weak math foundation or are not getting enough practice in basic algebra. Relying too heavily on a calculator or making many careless mistakes are signs that a student's fundamental skills need strengthening.
This program is most suitable for students in Math 6 to Math 9.
Course Outline
Students will learn the steps to perform numerical operations without a calculator. Having a good understanding of algebraic steps will allow them to solve complex equations as they move into higher
level Math.
This program would focus on the following topics:
• Integers and Decimals
• Fractions
• Ratios, Rates and Proportions
• Squares and Square Roots
• Powers and Exponents
• Order of Operations
• Solving Equations
Lesson Format
Lessons are available for both individual tutoring and group tutoring. Lessons will be held at the student's home, at the public library or online via our virtual tutoring platform. The length and
number of lessons will be tailored to students.
For more information on the availability and rates of our lessons, please visit the Private Lessons and Group Lessons page.
Date and Time
Private lessons have flexible start dates and are self-paced. Students may request for as many lessons as necessary to understand each concept.
Group lessons follow a set schedule that is determined based on the students' schedules. Subject to availability. Please contact us for more detail.
Please fill out the lesson inquiry form. A staff member will contact you within 24 hours.
For other inquiry options, please visit our Inquiry page.
Sudo Null - Latest IT News
And again about topological sorting ...
Greetings to all readers of Habr! When I decided to write this article, I found a lot of material on Habr about graphs and, in particular, about topological sorting. The theoretical side, together with the basic algorithms, has already been described in detail elsewhere, so I will not repeat it. Instead I will talk about the practical applications of topological sorting; specifically, I want to share my personal experience with this method when developing products. The article explains the motives and reasons that prompted us to use this algorithm, and at the end I give our version of the algorithm for sorting dependent objects.
Scope of the sorting algorithm in DevExpress
In a previous article, we talked about the XtraScheduler scheduler and its printing extension, which includes a report designer that works with visual elements. When designing the appearance of a printed form in the designer, it is necessary to establish links between the visual elements of the report on the master-subordinate principle. These dependencies determine how data is transferred between the elements; internally, this means establishing the correct print priority for those objects.
The elements essentially form a directed graph of objects, because the options of these controls uniquely determine the direction of each dependency by storing a link to the master object.
The topological sorting algorithm was the best fit for arranging the dependent objects in the correct order before printing, by analyzing the relationships between them. In addition, the dependent controls used data from the main controls when printing, so the algorithm was also applied to organize internal data-caching objects, each containing the main data iterator and the subordinate list associated with it.
Where else have we applied the algorithm?
A little later, when developing the import and export of styles for documents in RTF and DOC formats, it again became necessary to obtain objects containing dependencies in the correct order. The described algorithm was generalized and successfully applied in this case as well.
Let's move on to our implementation of the sorting algorithm. It is based on Algorithm T from Donald Knuth's "The Art of Computer Programming" (Volume 1, Section 2.2.3). You can read about the details of the algorithm in the original; here I will only outline the idea for a general understanding.
Why did we choose this algorithm? Let me quote the author:
"An analysis of Algorithm T can be done quite simply using Kirchhoff's law. Using this law, the execution time can be estimated by the formula c1·m + c2·n, where m is the number of input relations, n is the number of objects, and c1 and c2 are constants. It is hard to imagine a faster algorithm for this problem!"
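For readers outside .NET, the same queue-driven idea (Knuth's Algorithm T, widely known as Kahn's algorithm) can be sketched in a few lines of Python:

```python
from collections import deque

def topological_sort(nodes, edges):
    """edges: list of (u, v) pairs meaning u must precede v."""
    successors = {n: [] for n in nodes}
    indegree = {n: 0 for n in nodes}
    for u, v in edges:
        successors[u].append(v)
        indegree[v] += 1
    # start with every node that has no unprocessed predecessors
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in successors[n]:
            indegree[m] -= 1          # "remove" the edge n -> m
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("graph contains a cycle")
    return order

# The example graph from the article
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("a", "e"),
         ("b", "d"), ("c", "d"), ("c", "e"), ("d", "e")]
order = topological_sort(["a", "b", "c", "d", "e"], edges)
```

Each edge is examined exactly once and each node enqueued exactly once, matching the c1·m + c2·n bound from the quote.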
The implemented algorithm is located in the DevExpress.Utils assembly, in the Algorithms class, which along with topological sorting contains a number of other useful algorithms.
namespace DevExpress.Utils {
    public static class Algorithms {
        public static IList<T> TopologicalSort<T>(IList<T> sourceObjects, IComparer<T> comparer) {
            TopologicalSorter<T> sorter = new TopologicalSorter<T>();
            return sorter.Sort(sourceObjects, comparer);
        }
    }
}
Using the algorithm is extremely simple. It is enough to call the static method of the class, passing it the necessary parameters. An example call is as follows:
protected virtual IList<ReportViewControlBase> SortControls(List<ReportViewControlBase> sourceControls) {
    return Algorithms.TopologicalSort(sourceControls, new ReportViewControlComparer());
}
You can do without calling the static method by explicitly creating an instance of the sorter object.
The source code of the sorter class that implements the algorithm will be given at the end of the article.
As can be seen from the parameters, in addition to the list of objects, a comparer object is passed to the method. If it is not specified, the default comparer is used. In practice a comparer is usually supplied, since it defines the comparison logic, which can be based on the properties of the compared objects. Such an object can also implement the IComparer interface for several related types simultaneously.
As an example of such a class, here is our ReportViewControlComparer:
public class ReportViewControlComparer : IComparer<XRControl>, IComparer<ReportViewControlBase> {
    #region IComparer<XRControl> Members
    public int Compare(XRControl x, XRControl y) {
        return CompareCore(x as ReportRelatedControlBase, y as ReportViewControlBase);
    }
    #endregion
    #region IComparer<ReportViewControlBase> Members
    public int Compare(ReportViewControlBase x, ReportViewControlBase y) {
        return CompareCore(x as ReportRelatedControlBase, y);
    }
    #endregion
    int CompareCore(ReportRelatedControlBase slave, ReportViewControlBase master) {
        if (slave != null && master != null) {
            if (slave.LayoutOptionsHorizontal.MasterControl == master ||
                slave.LayoutOptionsVertical.MasterControl == master)
                return 1;
        }
        return 0;
    }
}
Application example
To demonstrate the operation of the algorithm, we will create a console application. As an example of a graph, take a simple graph of 5 nodes (see the figure at the beginning of the article).
G = ({a, b, c, d, e}, {(a, b), (a, c), (a, d), (a, e), (b, d), (c, d), (c, e), (d, e)})
To represent the graph, we will use a simple class that defines a node with a list of other nodes associated with it.
public class GraphNode {
    List<GraphNode> linkedNodes = new List<GraphNode>();
    object id;
    public GraphNode(object id) {
        this.id = id;
    }
    public List<GraphNode> LinkedNodes { get { return linkedNodes; } }
    public object Id { get { return id; } }
}
The code for the application itself is shown below:
class Program {
    static void Main(string[] args) {
        DoDXTopologicalSort();
    }
    private static void DoDXTopologicalSort() {
        Console.WriteLine("DX Topological Sorter");
        Console.WriteLine(new string('-', 21));
        GraphNode[] list = PrepareNodes();
        IComparer<GraphNode> comparer = new GraphNodeComparer();
        IList<GraphNode> sortedNodes = DevExpress.Utils.Algorithms.TopologicalSort(list, comparer);
        Console.WriteLine("Sorted nodes:");
        PrintNodes(sortedNodes);
    }
    static GraphNode[] PrepareNodes() {
        GraphNode nodeA = new GraphNode("A");
        GraphNode nodeB = new GraphNode("B");
        GraphNode nodeC = new GraphNode("C");
        GraphNode nodeD = new GraphNode("D");
        GraphNode nodeE = new GraphNode("E");
        // Links follow the edges of the graph G defined above
        nodeA.LinkedNodes.AddRange(new GraphNode[] { nodeB, nodeC, nodeD, nodeE });
        nodeB.LinkedNodes.Add(nodeD);
        nodeC.LinkedNodes.AddRange(new GraphNode[] { nodeD, nodeE });
        nodeD.LinkedNodes.Add(nodeE);
        // Nodes are deliberately passed in scrambled order
        return new GraphNode[] { nodeD, nodeA, nodeC, nodeE, nodeB };
    }
    static void PrintNodes(IList<GraphNode> list) {
        for (int i = 0; i < list.Count; i++) {
            string s = string.Empty;
            if (i > 0)
                s = "->";
            s += list[i].Id.ToString();
            Console.Write(s);
        }
        Console.WriteLine();
    }
}
Direct graph relationships are set in the PrepareNodes method. To compare nodes, you will also need the GraphNodeComparer class:
public class GraphNodeComparer : IComparer<GraphNode> {
    public int Compare(GraphNode x, GraphNode y) {
        if (x.LinkedNodes.Contains(y))
            return -1;
        if (y.LinkedNodes.Contains(x))
            return 1;
        return 0;
    }
}
After starting the application, we get the sorted list of nodes, and A->B->C->D->E is displayed in the console.
The result of the program is shown in the figure below:
Sorter Source Code
As I promised above, I give the code for implementing the topological sorting algorithm.
namespace DevExpress.Utils.Implementation {
    #region TopologicalSorter
    public class TopologicalSorter<T> {
        #region Node
        public class Node {
            int refCount;
            Node next;
            public Node(int refCount, Node next) {
                this.refCount = refCount;
                this.next = next;
            }
            public int RefCount { get { return refCount; } }
            public Node Next { get { return next; } }
        }
        #endregion
        #region Fields
        int[] qLink;
        Node[] nodes;
        IList<T> sourceObjects;
        IComparer<T> comparer;
        #endregion
        #region Properties
        protected internal Node[] Nodes { get { return nodes; } }
        protected internal int[] QLink { get { return qLink; } }
        protected IComparer<T> Comparer { get { return comparer; } }
        protected internal IList<T> SourceObjects { get { return sourceObjects; } }
        #endregion
        protected IComparer<T> GetComparer() {
            return Comparer != null ? Comparer : System.Collections.Generic.Comparer<T>.Default;
        }
        protected bool IsDependOn(T x, T y) {
            return GetComparer().Compare(x, y) > 0;
        }
        public IList<T> Sort(IList<T> sourceObjects, IComparer<T> comparer) {
            this.comparer = comparer;
            return Sort(sourceObjects);
        }
        public IList<T> Sort(IList<T> sourceObjects) {
            this.sourceObjects = sourceObjects;
            int n = sourceObjects.Count;
            if (n < 2)
                return sourceObjects;
            Initialize(n);
            CalculateRelations(sourceObjects);
            int r = FindNonRelatedNodeIndex();
            IList<T> result = ProcessNodes(r);
            return result.Count > 0 ? result : sourceObjects;
        }
        protected internal void Initialize(int n) {
            int count = n + 1;
            this.qLink = new int[count];
            this.nodes = new Node[count];
        }
        protected internal void CalculateRelations(IList<T> sourceObjects) {
            int n = sourceObjects.Count;
            for (int y = 0; y < n; y++) {
                for (int x = 0; x < n; x++) {
                    if (!IsDependOn(sourceObjects[y], sourceObjects[x]))
                        continue;
                    int minIndex = x + 1;
                    int maxIndex = y + 1;
                    // successor list of x gains y; y gains one more predecessor
                    Nodes[minIndex] = new Node(maxIndex, Nodes[minIndex]);
                    QLink[maxIndex]++;
                }
            }
        }
        protected internal int FindNonRelatedNodeIndex() {
            // Build the initial queue of objects that have no predecessors
            int r = 0;
            int n = SourceObjects.Count;
            for (int i = 0; i <= n; i++) {
                if (QLink[i] == 0) {
                    QLink[r] = i;
                    r = i;
                }
            }
            return r;
        }
        protected virtual IList<T> ProcessNodes(int r) {
            int n = sourceObjects.Count;
            int k = n; // k > 0 at the end would indicate a cycle
            int f = QLink[0];
            List<T> result = new List<T>(n);
            while (f > 0) {
                result.Add(sourceObjects[f - 1]);
                k--;
                Node node = Nodes[f];
                while (node != null)
                    node = RemoveRelation(node, ref r);
                f = QLink[f];
            }
            return result;
        }
        Node RemoveRelation(Node node, ref int r) {
            int suc_p = node.RefCount;
            QLink[suc_p]--;
            if (QLink[suc_p] == 0) {
                QLink[r] = suc_p;
                r = suc_p;
            }
            return node.Next;
        }
    }
    #endregion
}
When objects that depend on each other must be processed in a particular order, they can first be pre-ordered with the topological sorting algorithm. The result is the correct sequence in which to take the objects and perform actions on them.

The algorithm proposed in the article provides the following advantages:
• it is generic, so it can be used to sort objects of various types
• a custom Comparer class can be supplied, which allows different logic for comparing objects
• the algorithm is linear and non-recursive
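For reference, the same queue-based scheme (essentially Knuth's Algorithm T) can be sketched in a few lines of Python. This is an illustrative stand-alone helper, not part of the DevExpress library; names and the dependency-map representation are my own:

```python
def topological_sort(items, deps):
    """Order items so that every entry in deps[x] comes before x.

    items: list of hashable objects; deps: dict mapping an item to the
    items it must come after. Same queue idea as the C# sorter above.
    """
    index = {item: i + 1 for i, item in enumerate(items)}  # 1-based, as in the C# code
    n = len(items)
    count = [0] * (n + 1)               # unprocessed-predecessor count per node
    succ = [[] for _ in range(n + 1)]   # successor lists
    for item in items:
        for pre in deps.get(item, ()):
            count[index[item]] += 1
            succ[index[pre]].append(index[item])
    queue = [i for i in range(1, n + 1) if count[i] == 0]  # nodes with no predecessors
    result = []
    while queue:
        f = queue.pop(0)
        result.append(items[f - 1])
        for s in succ[f]:               # "remove relations" of the dequeued node
            count[s] -= 1
            if count[s] == 0:
                queue.append(s)
    if len(result) != n:
        raise ValueError("cycle detected")
    return result

# Same five nodes and dependencies as in PrepareNodes above.
order = topological_sort(["D", "A", "C", "E", "B"],
                         {"B": ["A"], "C": ["A"], "E": ["A", "C"], "D": ["C"]})
```

Like the C# version, it is iterative rather than recursive; unlike it, a cycle raises an error instead of returning the input unchanged.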
An example with source code is available
I hope this material will be useful to you in future projects.
Magnification Factor Calculator - Calculator Wow
Magnification Factor Calculator
The magnification factor is a critical parameter in optics, essential for anyone using telescopes, microscopes, or other optical devices. It determines how much an object appears enlarged compared to
its actual size. A Magnification Factor Calculator simplifies this calculation, ensuring users can quickly and accurately determine the magnification of their optical equipment. Understanding how to
use this calculator can enhance your observational experiences and ensure optimal equipment performance.
The formula for calculating the magnification factor is straightforward:
M = F_obj / F_ep

• M is the magnification factor.
• F_obj is the focal length of the objective lens (in mm).
• F_ep is the focal length of the eyepiece (in mm).
This formula divides the focal length of the objective lens by the focal length of the eyepiece to determine how much the image is magnified.
How to Use
Using a Magnification Factor Calculator involves a few simple steps:
1. Input Focal Length of Objective: Enter the focal length of the objective lens in millimeters. This is usually provided by the manufacturer of your optical device.
2. Input Focal Length of Eyepiece: Enter the focal length of the eyepiece in millimeters. This value is also typically provided by the manufacturer.
3. Calculate: Click the “Calculate” button to determine the magnification factor. The calculator will use the formula to compute the result and display it immediately.
Let’s consider an example where you have a telescope with an objective lens focal length of 1000 mm and an eyepiece focal length of 25 mm.
Using the formula:
M = 1000 mm / 25 mm
M = 40
This means the telescope will magnify the image 40 times, making distant objects appear 40 times closer than they are to the naked eye.
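The core of such a calculator is a one-line computation. A minimal Python sketch (the function name and input validation are illustrative, not taken from any particular calculator):

```python
def magnification_factor(f_objective_mm, f_eyepiece_mm):
    """M = F_obj / F_ep, with both focal lengths in millimeters."""
    if f_objective_mm <= 0 or f_eyepiece_mm <= 0:
        raise ValueError("focal lengths must be positive")
    return f_objective_mm / f_eyepiece_mm

# The telescope example above: 1000 mm objective, 25 mm eyepiece.
m = magnification_factor(1000, 25)
```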
10 FAQs and Answers
1. What is the magnification factor?
□ The magnification factor indicates how much larger an optical device makes an object appear compared to its actual size.
2. Why is the focal length important in calculating magnification?
□ The focal length determines the distance over which light rays are focused, affecting the level of magnification.
3. Can I use any units for focal length?
□ For consistency and accuracy, it’s best to use millimeters (mm) when inputting focal lengths.
4. How do I find the focal length of my lenses?
□ The focal lengths are typically printed on the lenses or provided in the manufacturer’s specifications.
5. Does the magnification factor affect image quality?
□ Higher magnification can reduce image brightness and sharpness, depending on the quality of the optical device.
6. Is a higher magnification always better?
□ Not necessarily. Higher magnification can limit the field of view and reduce image clarity, especially in low-quality optics.
7. Can this calculator be used for both telescopes and microscopes?
□ Yes, the calculator works for any optical devices that use objective lenses and eyepieces.
8. What if I have multiple eyepieces?
□ Calculate the magnification for each eyepiece separately to understand the range of magnifications available.
9. Does the calculator account for any other factors?
□ The calculator only considers focal lengths; other factors like lens quality and environmental conditions can also affect magnification.
10. Where can I find a reliable Magnification Factor Calculator?
□ Many websites offer free online calculators, or you can create one using simple HTML and JavaScript.
The Magnification Factor Calculator is a valuable tool for anyone using optical devices like telescopes or microscopes. By understanding and using this calculator, you can ensure that you are getting
the most out of your equipment, whether for stargazing, scientific research, or other observational activities. Accurate calculations help in selecting the right lenses and achieving the best
possible viewing experience. Regular use of this tool can greatly enhance your knowledge and enjoyment of the optical world.
Hilbert's Hotel
Infinity is where things happen that don't
- Anonymous
Hilbert's Hotel, named after mathematician David Hilbert, is a hotel with an infinite number of rooms. Imagine that every single one of them is occupied. What does the manager do when someone else
shows up and wants a room? He doesn't need to turn that person away. Instead, he just moves the person in room 1 into room 2, the person in room 2 into room 3, the person in room 3 into room 4, and
so on. Room 1 is now vacant for the new guest. Now, a hundred new guests appear. The manager now moves the guest in room 1 into room 101, the guest in room 2 into room 102, etc. and thus creates room
for the 100 guests.
Now an infinite number of people show up, all wanting rooms. What does the manager do now? He simply moves the person in room 1 into room 2, the person in room 2 into room 4, the person in room 3
into room 6, and in general the person in room n into room 2n. Now there are an infinite number of rooms free (all of the odd-numbered rooms) for this infinite group of people.
Hilbert's Hotel is paradoxical, but it illustrates an interesting property of infinite sets: An infinite set can be put in one-to-one correspondence with an infinite subset of itself. It also
illustrates the seemingly impossible situations that become possible when dealing with infinity.
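The two reassignment tricks can be checked mechanically on any finite prefix of the rooms. A toy illustration with hypothetical helper names (a finite check only hints at, and cannot prove, the infinite case):

```python
def shift(occupied, k):
    """Move the guest in room n to room n + k, freeing rooms 1..k."""
    return {n: n + k for n in occupied}

def double(occupied):
    """Move the guest in room n to room 2n, freeing every odd-numbered room."""
    return {n: 2 * n for n in occupied}

rooms = range(1, 101)         # the first 100 rooms, all occupied
after_one = shift(rooms, 1)   # one new guest: room 1 becomes free
after_inf = double(rooms)     # infinitely many new guests: odd rooms become free
```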
Radius of Icosahedron Calculators | List of Radius of Icosahedron Calculators
List of Radius of Icosahedron Calculators
Radius of Icosahedron calculators: a list of online calculators for computing the radius of an icosahedron. These tools perform the calculations behind the concepts and applications of icosahedron radii, saving the time otherwise spent on the complex procedures needed to obtain the results by hand. You can also download, share and print the list of Radius of Icosahedron calculators together with all the formulas.
Wolfram Function Repository
Function Repository Resource:
Generate a graph of branch pair ancestry for a list substitution "glocal" (hybrid of global and local) multiway system
Contributed by: Jonathan Gorard
ResourceFunction["ListGlocalBranchialGraph"][rules, init, n] generates the graph of branch pair ancestry after n steps in the glocal multiway evolution of a list substitution system with the specified rules, starting from initial conditions init.
ResourceFunction["ListGlocalBranchialGraph"][rules→sel, init, n] uses the function sel to select which of the events obtained at each step to include in the evolution.
Details and Options
Glocal multiway systems combine the global events (and the corresponding causal structure) of ordinary multiway systems, with the individual token philosophy of local multiway systems. A single event
vertex in the evolution causal graph (or token event graph) for a glocal multiway system will, in general, have many incoming and outgoing evolution edges, corresponding to the fact that several
tokens must be assembled together in order to reconstruct a single global input or output state for the event.
The branchial graph for a glocal multiway system shows the branch pair (i.e. critical pair) ancestry for a glocal evolution causal graph/token event graph. Unlike the branchial graph for an ordinary
(global) multiway system, which only contains information regarding the evolution ancestry of states, glocal branchial graphs contain information about both the causal ancestry of events and the
evolution ancestry of states/tokens.
Argument and option patterns for ResourceFunction["ListGlocalBranchialGraph"] are similar to those of the resource function MultiwaySystem.
Replacement rules for a list substitution system are specified as {{l[11],l[12],…}→{r[11],r[12],…},{l[21],l[22],…}→{r[21],r[22],…},…}.
ResourceFunction["ListGlocalBranchialGraph"] accepts both individual rules and lists of rules, and likewise for initial conditions. ResourceFunction["ListGlocalBranchialGraph"][rules,init,n] is
interpreted as ResourceFunction["ListGlocalBranchialGraph"][rules,{init},n] etc.
The event selection function sel in ResourceFunction["ListGlocalBranchialGraph"][rules→sel,…] can have the following special forms:
"Sequential" applies the first possible replacement (sequential substitution system)
"Random" applies a random replacement
{"Random",n} applies n randomly chosen replacements
Duplicated tokens are displayed separately at each time step by default, although this can be overridden by setting the "DeduplicateTokens" option.
Events are represented in the form {rule,input,rest}, where rule is the rule used in the updating event, input is the part of the state to which the rule is applied and rest is the remainder of the
state. For substitution systems, rest is given in the form {prefix,suffix}.
Options for ResourceFunction["ListGlocalBranchialGraph"] include:
"DeduplicateTokens" False whether to merge all instances of equivalent tokens that appear at each time step
"VertexRendering" True whether to use special rendering for state/token and event vertices
"StateRenderingFunction" Automatic how to label states/tokens that appear in the evolution causal graph/token event graph
"EventRenderingFunction" Automatic how to label events that appear in the evolution causal graph/token event graph
All of the standard options for Graph can also be applied to ResourceFunction["ListGlocalBranchialGraph"].
Possible settings for "StateRenderingFunction" and "EventRenderingFunction" include:
Automatic make a label from the name of the vertex
Inherited use the explicit vertex name as the label
None use no label for the vertex
"string" use a shape from the VertexShapeFunction collection
func apply the function func to the name of the vertex
Basic Examples (4)
Generate glocal branchial graphs for two simple list substitution evolutions:
Show just the structure of the graphs, without labels:
Generate a glocal branchial graph for a more complicated list substitution evolution:
Show just the structure of the graph, without labels:
Merge all instances of equivalent tokens that appear at each time step:
Run the system for more steps:
Show just the structure of the graphs, without labels:
Lists can contain arbitrary symbolic elements:
Specify an event selection function that picks only up to two events at each step:
Scope (5)
Rules and initial conditions (2)
ListGlocalBranchialGraph accepts both individual rules and lists of rules:
Likewise for initial conditions:
Event selection functions (3)
Apply only the first possible event at each step:
Apply the first and last possible events at each step:
Compare this to the full branchial graph for the unrestricted glocal multiway evolution:
Options (15)
DeduplicateTokens (2)
By default, equivalent tokens remain unmerged at each time step:
Merging of equivalent tokens at each time step can be enforced using the option "DeduplicateTokens":
VertexRendering (2)
By default, state/token vertices and event vertices use special rendering (inherited from the MultiwaySystem resource function):
This rendering can be disabled using the option "VertexRendering":
StateRenderingFunction (4)
By default, states/tokens are labeled by their contents:
Use no labeling for states/tokens:
Use raw state/token names as vertex labels:
Use a named shape as each state/token label:
EventRenderingFunction (5)
By default, both states/tokens and events are labeled by their contents:
Use no labeling for states/tokens:
Also use no labeling for events:
Disabling vertex rendering yields an equivalent result:
Use raw event expressions as their labels:
GraphLayout (2)
Generate an example glocal branchial graph based on page 209 of A New Kind of Science:
Force a spring embedding (as opposed to the default spring electrical embedding):
Related Links
Version History
Source Metadata
Related Resources
Related Symbols
License Information
Planetary Ball Mill PM 100
The best results are obtained using ball mills which provide the necessary energy input. Besides the planetary ball mills, including PM 100, PM 200 and PM 400, the new high energy ball mill Emax is particularly suitable for colloidal grindings down to the nanometer range due to the high energy input and innovative water cooling system.
WhatsApp: +86 18838072829
The Planetary Ball Mill PM 300 is a powerful and ergonomic benchtop model with two grinding stations for grinding jar volumes up to 500 ml. This setup allows for processing up to 2 x 220 ml
sample material per batch. Thanks to the high maximum speed of 800 rpm, extremely high centrifugal forces result in very high pulverization energy and ...
Planetary Ball Mills. Sample volumes up to 4 x 220 ml. Final fineness*: µm. Extremely high centrifugal forces result in high energy input. Dry and wet grinding by impact and friction. To the
product range. Ultrafine grinding with up to 76 g.
Laboratory Planetary Ball Mill BM6pro. Introduction: Planetary Ball Mills BM6pro are used wherever the highest degree of fineness is required. They are ideally suited for wet and dry comminution of hard, medium-hard, brittle and fibrous materials, down to final fineness levels fine enough to meet the requirements of colloid grinding.
Planetary ball mill "PM100" process (high-energy). The planetary ball mill "PM100" (noted: PM) made by RETSCH (Fig. 2) is driven by 750 W of power, which provides much higher mechanical energy than the vibratory micromill. In this work, the process in the PM is labelled "high energy" to distinguish it from the relatively low-energy process in P0.
PM 100 Planetary Ball Mill. Multi-language GUI. The PM 100 planetary ball mill is a benchtop unit.
For use with Retsch PM 100 and PM 200 Planetary Ball Mills. Supplier: RETSCH. Pack of 1. Description: in assorted diameters and materials ...
The Planetary Ball Mill PM 100 is a powerful benchtop model with a single grinding station and an easy-to-use counterweight which compensates masses up to 8 kg. It allows for grinding up to 220 ml sample material per batch.
Planetary Ball Mill PM 400 accessories: zirconium oxide (for PM 100 and PM 400); counter wrench; IQ/OQ documentation for PM 400; grinding jars "comfort" for PM 100 / PM 200 / PM 400 in hardened steel (50 ml, 125 ml, 250 ml, 500 ml) and stainless steel (12 ml, 25 ml, 50 ml).
The present operating instructions for the ball mills of type PM100/200 provide all the necessary information on the headings contained in the table of contents. They act as a guide for the
target group(s) of readers defined for each topic for the safe use of the PM100/200 in accordance with its intended purpose. Familiarity with the relevant
Product Information: Planetary Ball Mill PM 100. Function principle: the grinding jar is arranged eccentrically on the sun wheel of the planetary ball mill. The direction of movement of the sun wheel is opposite to that of the grinding jars in the ratio 1:2.
The PM 300 is the first planetary ball mill in the RETSCH portfolio to be operated exclusively with the new EasyFit grinding jars (Fig. 7), which replace the "comfort" jar line.
The PM 100 planetary ball mill is a benchtop unit designed to pulverize soft, fibrous and brittle materials. The mill develops extremely high centrifugal forces, resulting in an energy input that is up to 50% higher than in other planetary ball mills. It has a single grinding station.
Planetary Ball Mills: benchtop models PM 100, PM 100 CM and PM 200. Type PM 100: the convenient benchtop model with 1 grinding station for grinding jars with a nominal volume of 12 to 500 ml. Both PM 100 models feature Free Force-Compensation-Sockets (FFCS), which ensure a safe, low-vibration run and minimal oscillation transfers to the ...
Among the various types of ball mills, planetary ball mills are widely used by the research community. A planetary-type ball mill is economical, simple to operate, and ideally suited for small-quantity batch-type synthesis of powders and alloys and for reactive processing of powders. ... Planetary Ball Mill PM 100 at 1 h (blue) and 3 h (green ...
During the last decade numerous protocols have been published using the method of ball milling for synthesis all over the field of organic chemistry. However, compared to other methods leaving their marks on the road to sustainable synthesis (microwave, ultrasound, ionic liquids), chemistry in ball mills is rather underrepresented in the knowledge of organic chemists.
Photos of Retsch planetary-type high-energy ball mills: (A) PM 100 and (B) PM 400. The equipment is housed in the Nanotechnology Laboratory, Energy and Building Research Center (EBRC), Kuwait Institute for Scientific Research (KISR).
Planetary mills with a single grinding station require a counterweight for balancing purposes. In the planetary ball mill PM 100, this counterweight can be adjusted along an inclined guide rail to compensate for the different centers of gravity of differently sized grinding jars and thus avoid undesired vibrations of the machine.
MSE PRO 10L (4 x ) Vertical High Energy Planetary Ball Mill. 8,995.00. MSE PRO 12L (4 x 3L) Vertical High Energy Planetary Ball Mill. 9,750.00. MSE PRO 16L (4 x 4L) Vertical High Energy Planetary Ball Mill. 10,895.00. MSE PRO 1L (4 x 250ml) Vertical High Energy Planetary Ball Mill. 4,495.00 Save 500.
Ball mills are among the most variable and effective tools when it comes to size reduction of hard, brittle or fibrous materials. More: https://
Ball charge for Planetary Ball Mills PM 100 / PM 100 CM / PM 200 / PM 400: recommended ball charge for dry grinding (pieces) and for wet grinding (mass, g), given the volume of the grinding jar, the sample amount and the max. feed particle size, for ball diameters of Ø 5 mm, Ø 10 mm and Ø 15 mm.
The Retsch high-speed Planetary Ball Mill PM 100 pulverizes and mixes soft, medium-hard and even extremely hard, brittle and fibrous materials. The PM 100 is a single-station unit suitable for jars from 12 ml to 500 ml. Both wet and dry grinding are possible. The Planetary Ball Mill can be used successfully in almost every field of industry and ...
PM 100 Planetary Ball Mill. Planetary Ball Mills are used wherever the highest degree of fineness is required. In addition to well-proven mixing and size reduction processes, these mills also meet all technical requirements for colloidal grinding and provide the energy input necessary for mechanical alloying. The extremely high ...
The roller milled wheat flours were modified using a planetary ball mill PM 100 (Retsch GmbH, Haan, Germany) in dry grinding mode using eight stainless steel balls with a diameter of 30 mm. 150 ±
1 g of flour was weighed into a 500 mL stainless steel grinding jar and ground for 5, 10, 15, and 20 min. Flour of wheat cv. Akteur was milled at ...
For use with Retsch PM 100 and PM 200 Planetary Ball Mills. Product type: grinding ball; for use with the PM 100 and PM 200 planetary ball mills.
The Planetary Ball Mill PM 200 is a powerful benchtop model with 2 grinding stations for grinding jars with a nominal volume of 12 ml to 125 ml. The extremely high centrifugal forces of Planetary
Ball Mills result in very high pulverization energy and therefore short grinding times. The PM 200 can be found in virtually all industries where the ...
The PM 100 CM has a speed ratio of 1:1, size reduction is effected by pressure and friction, rather than by impact, which means it is gentler on the material. The PM 400 is a robust compact floor
model on castors with 4 grinding stations for grinding jars with a nominal volume of 12 to 500 ml.
The PM 100 is a planetary ball mill with 1 grinding station. Planetary ball mills comminute by impact and friction and achieve grind sizes down to 1 micron, in colloidal grindings even µm. Thanks
to the high centrifugal forces, the mill achieves a very high pulverization energy which keeps the grinding times short.
PM 100 Planetary Ball Mill. The Planetary Ball Mill PM 100 is a powerful benchtop model with a single grinding station and an easy-to-use counterweight which compensates masses up to 8 kg. It
allows for grinding up to 220 ml sample material per batch. The extremely high centrifugal forces of Planetary Ball Mills result in very high ...
The planetary ball mill PM 100 offered by RETSCH is utilized for applications requiring a high level of fineness. In addition to the standard mixing and size reduction processes, the mill
complies with all the technical requirements for colloidal grinding and has the required energy input for mechanical alloying processes.
High-Energy Ball Milling [PM 100 / Emax, Retsch, Germany]. Located at: CRF Annexe G07. About: Ball mills are used for mixing and size reduction of powders, colloidal grinding and mechanical alloying. Two types of mills are installed: (A) high-energy ball mill and (B) planetary ball mill.
PPT - Design of Statistical Investigations PowerPoint Presentation, free download - ID:4527184
1. Design of Statistical Investigations 1 General Introduction Stephen Senn SJS SDI_1
2. Course Outline • General Introduction • Experiments • Observational studies • Sample surveys (and other sampling schemes). NB: Each of these fields is huge and all that is attempted is a brief introduction.
3. General Warning • Your lecturer is not equally experienced in these fields • I know more about experimental design than the other two • Examples from my personal experience tend to be drawn from pharmaceutical research and development or other medical applications
4. Example Exp_1: A Simple Experiment • Four experimental p38 kinase inhibitors • Vehicle and marketed product as controls • Thromboxane B2 (TXB2) is used as a marker of COX-1 activity (low values bad) • Six rats per group were treated for a total of 36 rats • At the end of the study rats are sacrificed and TXB2 is measured.
6. Specific Features of this Design • Several experimental treatments • Two controls (active, neutral) • Six replicates per treatment • Several test compounds, no ordering • No blocks: rats considered exchangeable. The meaning and relevance of these terms will be explained during the course.
7. Example Obs_1: An Observational Study • Case-control study (Fine et al, Lancet, 1986, quoted in Clayton and Hills) • Does BCG protect against leprosy? • BCG scar status data from a population survey were available • Data from 260 leprosy cases were obtained
8. Fine et al
9. Case-Control Study • Note that this is sampled by outcome • The number of these is fixed • Exposure is measured • In a clinical trial, patients are assigned the exposure (the treatment) • The outcome is measured • An experiment involves manipulation; case-control does not
10. Example Surv_1: A Sample Survey • Population of pharmaceutical record forms in Pembury Hospital, Tunbridge Wells • Thousands of such forms available • A sample of 108 forms was chosen from patients discharged between 1 July and 31 December 1976 • Records chosen at fixed intervals • Number of prescriptions recorded on each was noted
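The fixed-interval selection described for Surv_1 is a systematic sample. It can be sketched as follows; the frame size, seed and helper name are illustrative, not the hospital's actual sampling frame:

```python
import random

def systematic_sample(frame_size, sample_size, seed=1976):
    """Select every k-th record after a random start within the first interval."""
    k = frame_size // sample_size            # sampling interval
    start = random.Random(seed).randrange(k) # random start in [0, k)
    return list(range(start, frame_size, k))[:sample_size]

# 108 record forms from a hypothetical frame of 5,000.
chosen = systematic_sample(5000, 108)
```

Note that a systematic sample is not a simple random sample: not every subset of 108 forms is equally likely, which is exactly the distinction raised in the questions below.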
12. Purposes of Statistical Investigations
13. Problem to Bear in Mind • We can only study past/present • We can construct formal theories of inference only about the past/present • We often wish to make inference about the future • This requires an 'extra-statistical' element • Most naively, an assumption that the future is like the past
14. Example • The effect of streptomycin on TB • Trial carried out by Austin Bradford Hill and colleagues, 1947 • Treatment highly effective • Is it still as effective?
15. Experimentation v Sampling • Experiments: causal purpose; convenient material; allocation of treatments crucial; randomisation • Sampling: descriptive purpose; representative material; choice of sample members crucial; random sampling
16. Caution • These two are sometimes confused • The growth of modelling approaches tends to increase the confusion • Experiments rarely use representative material • Surveys (and other samples) usually do.
17. Basic Design Cycle: Objective, Relevant factors, Possible Conclusions, Tentative Design, Potential Data, Possible Analysis
18. Questions 1, Exp_1: Rat TXB2 • How do you decide which rat gets which treatment? • How would you analyse these data? • What use will be made of these data?
19. Questions 2, Obs_1: Fine et al • What difference would it make to the precision of the conclusions if the population survey had been smaller? • What difference would it make if there had been fewer leprosy cases? • How would you test for an association between BCG and leprosy? • What interpretations are there for an association?
20. Questions 3, Surv_1: Pharmaceutical Record Forms • What is a simple random sample? • In this specific case how would one choose such a sample?* • Suppose that the sample of 108 forms was chosen from 5,000. What should the size of the sample have to be if there were 10,000 to choose from? (* The sample chosen in this example was not a simple random sample.)
21. Suggested Reading • Experimental design: Mead, R., The Design of Experiments, Cambridge University Press, Cambridge, 1988; Clarke, G.M. and Kempson, R.E., Introduction to the Design and Analysis of Experiments, Arnold, London, 1997 • Case-control studies: Breslow and Day, Statistical Methods in Cancer Research, Vol. 1, 1980 • Sampling: Hague and Harris, Sampling and Statistics, Kogan Page (a very elementary book) • S-PLUS: Krause, A. and Olson, M., The Basics of S and S-PLUS (2nd edition), Springer, 2000
Transport Reach
Transport reaches simulate the translation and retention behavior of natural water courses or pipelines. There are different approaches for the calculation of pipes or natural channels.
The following options are implemented:
Translation
The inflow wave is output at the outlet with a time offset that corresponds to the flow time in the transport reach. If the flow time is smaller than the simulation time step, the translation behavior is not visible in the simulation results.
Non-Pressurized Pipeline
This option encompasses flow routing calculation for pipes according to Kalinin-Miljukov. The parameters required by the Kalinin-Miljukov method are estimated internally according to /Euler, 1983/
for circular pipes, and for non-circular profiles, are determined from the hydraulic diameter and the cross-sectional area when completely filled.
Characteristic length: [math]\displaystyle{ L=0.4 \cdot \frac{D}{I_S}~\mbox{[m]} }[/math]
Retention constant: [math]\displaystyle{ K=0.64 \cdot L \cdot \frac{D^2}{Q_v} ~\mbox{[s]} }[/math]
[math]\displaystyle{ D~\mbox{[m]} }[/math]: Circular pipe diameter or hydraulic diameter
[math]\displaystyle{ I_S~\mbox{[-]} }[/math]: Slope of the pipe
[math]\displaystyle{ Q_v ~\mbox{[m³/s]} }[/math]: Discharge capacity of the pipe when completely filled
The discharge capacity of the pipe when completely filled is calculated according to the flow law of Prandtl-Colebrook:
[math]\displaystyle{ Q_v=A_v \left [ -2 \cdot \lg \left [\frac{2.51 \cdot \nu}{D \sqrt{2 g D I_S}} + \frac{k_b}{3.71 \cdot D} \right ] \cdot \sqrt{2gDI_S} \right ] }[/math]
[math]\displaystyle{ A_v~\mbox{[m²]} }[/math]: Cross-sectional area of the profile
[math]\displaystyle{ \nu~\mbox{[m²/s]} }[/math]: Kinematic viscosity
[math]\displaystyle{ k_b ~\mbox{[m]} }[/math]: Operating roughness
[math]\displaystyle{ g ~\mbox{[m/s²]} }[/math]: Gravitational acceleration
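As an illustration, the Prandtl-Colebrook relation above can be evaluated numerically. The sketch below is not part of the Talsim code: the function name and the example pipe values are made up for illustration, and the kinematic viscosity defaults to roughly that of water at 10 °C.

```python
import math

def discharge_capacity(D, I_S, k_b, nu=1.31e-6, g=9.81):
    """Discharge capacity Q_v [m3/s] of a completely filled circular pipe
    according to Prandtl-Colebrook (D and k_b in m, I_S dimensionless)."""
    A_v = math.pi * D**2 / 4.0            # cross-sectional area of the profile
    root = math.sqrt(2.0 * g * D * I_S)
    # mean velocity from the Prandtl-Colebrook flow law
    v = -2.0 * math.log10(2.51 * nu / (D * root) + k_b / (3.71 * D)) * root
    return A_v * v

# e.g. a DN 500 pipe at a 2 per-mille slope with k_b = 1.5 mm
Q_v = discharge_capacity(D=0.5, I_S=0.002, k_b=0.0015)
```

For this example pipe, the discharge capacity comes out on the order of 0.17 m³/s, a plausible value for a half-metre pipe at that slope.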
Using the characteristic length [math]\displaystyle{ L }[/math], the length of the transport reach [math]\displaystyle{ L_g }[/math] is divided into [math]\displaystyle{ n }[/math] calculation
sections of equal length with
[math]\displaystyle{ n=L_g/L }[/math] (where [math]\displaystyle{ n }[/math] is an integer number)
Parameters are adjusted as follows for the individual calculation sections:
[math]\displaystyle{ L^*=L_g/n }[/math]
[math]\displaystyle{ K^*=K \cdot L^*/L }[/math]
Based on these parameters, after calculating the following recursion formula [math]\displaystyle{ n }[/math] times,
[math]\displaystyle{ Q_{a,i}=Q_{a,i-1}+C_1 \cdot \left(Q_{z,i-1} - Q_{a,i-1} \right ) + C_2 \cdot \left(Q_{z,i}-Q_{z,i-1} \right) }[/math]
[math]\displaystyle{ Q_z }[/math]: Inflow to calculation section
[math]\displaystyle{ Q_a }[/math]: Outflow from calculation section
[math]\displaystyle{ i }[/math]: Current calculation time step
[math]\displaystyle{ i-1 }[/math]: Previous calculation time step
[math]\displaystyle{ dt }[/math]: Calculation time interval
[math]\displaystyle{ C_1=1- e^{-dt/K^*} }[/math]
[math]\displaystyle{ C_2=1- \frac{K^*}{dt} \cdot C_1 }[/math]
produces the outflow at the end of the pipe. This approximation method derived from Kalinin-Miljukov is identical to the linear storage cascade used for calculating runoff concentration. This means
the flow in a transport reach can be simulated using a linear storage cascade consisting of [math]\displaystyle{ n }[/math] storages with the retention constant [math]\displaystyle{ K^* }[/math].
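The recursion can be sketched in a few lines of Python. This is an illustrative implementation, not Talsim code; it applies the standard linear-reservoir coefficients C1 = 1 - exp(-dt/K*) and C2 = 1 - (K*/dt) * C1 once per calculation section, and the example hydrograph is made up.

```python
import math

def route_reach(inflow, K_star, n, dt):
    """Route an inflow hydrograph through n identical linear storages
    with retention constant K_star [s] (Kalinin-Miljukov recursion)."""
    C1 = 1.0 - math.exp(-dt / K_star)
    C2 = 1.0 - (K_star / dt) * C1
    q = list(inflow)
    for _ in range(n):                  # one pass per calculation section
        out = [q[0]]                    # assume a steady initial state
        for i in range(1, len(q)):
            out.append(out[i - 1]
                       + C1 * (q[i - 1] - out[i - 1])
                       + C2 * (q[i] - q[i - 1]))
        q = out
    return q

# A rectangular inflow pulse, routed through 3 sections:
outflow = route_reach([0, 10, 10, 10, 0, 0, 0, 0, 0, 0], K_star=600, n=3, dt=300)
```

Running a rectangular inflow pulse through the cascade shows the expected behavior: the peak at the outlet is attenuated below the inflow peak and arrives with a delay.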
Cross-Section (Open Channel)
As with non-pressurized pipelines, the translation and retention behavior is simulated using flow routing according to Kalinin-Miljukov. The characteristic length required as a parameter for the
Kalinin-Miljukov method is derived from the steady uniform flow relationship according to Manning-Strickler /Rosemann, 1970/.
The channel is divided into individual segments with the characteristic length. For each segment, the calculation of flow routing is carried out using nonlinear storage calculation with the help of
the steady uniform flow relation.
Rating Curve (Open Channel)
If the flow behavior of the transport reach is known, e.g. from previous hydraulic calculations, a rating curve defining the relationship between water level, cross-sectional area, and discharge can be used.
|
{"url":"http://www.talsim.de/docs/index.php?title=Spezial:Meine_Sprache/Transportstrecke","timestamp":"2024-11-12T22:26:49Z","content_type":"text/html","content_length":"30614","record_id":"<urn:uuid:1ba8c3af-4fa2-493c-955d-4c5124fbfa16>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00494.warc.gz"}
|
Update: Sudoku
1. Update sudoku
Update: Sudoku
21 Aug 2006
I have added a method to solve puzzles using the same method I already wrote to generate Sudoku puzzles, so now we can solve them from given information. It is cool to watch how many combinations there are when only a little bit of information is given. This function basically generates the board and removes the row, column, and 3x3 options before calling the solve/generate method. Kind of nice to have the same algorithm solve and generate puzzles.
|
{"url":"https://crmacd.com/journal/2006/08/21/update-sudoku.html","timestamp":"2024-11-05T15:13:58Z","content_type":"text/html","content_length":"9314","record_id":"<urn:uuid:55e88b99-6d6a-48b8-b8f0-537a5f56328e>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00114.warc.gz"}
|
How to Analyze Beams Using Virtual Work? - Structural Engineering | WeTheStudy
Let's learn how to use the virtual work method in beam analysis.
The solution presented is in SI. The author will update the post soon to reflect English units.
Structural Model
Let's consider a simple beam with a single overhang. It is assumed to have a uniform section of the same material.
Structural Loads
The beam has the following static loads:
• A \(40kN\) concentrated load.
• An \(80kN\) concentrated load.
• A \(16kN\) concentrated load.
• A \(4kN/m\) uniform distributed load.
We can see the position and direction of these loads in the following figure. We will discuss them in more detail in the preparation section.
|
{"url":"https://www.wethestudy.com/tree-posts/how-to-analyze-beams-using-virtual-work","timestamp":"2024-11-04T02:46:52Z","content_type":"text/html","content_length":"86664","record_id":"<urn:uuid:b90ba85c-1cc6-45fc-ad97-e8c09808beb6>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00387.warc.gz"}
|
Advanced Sprockets and Chain | ION BUILD SYSTEM
Sprocket and Chain Physics
Sprockets are one common way to transmit power and change the output torque or speed of a mechanical system. Understanding these basic concepts is required to make optimized design decisions. This
section will briefly cover the definition of these concepts and then explain them in relationship to basic sprocket and chain designs.
Speed is the measure of how fast an object is moving. The speed of an object is how far it will travel in a given amount of time. The SI unit for speed is meters per second but speed is also commonly
expressed in feet per second.
Chain Drive
Selecting sprockets with different sizes relative to the input sprocket varies the output speed and the output torque. However, total power is not affected by these changes.
Sprocket and chain is a very efficient way to transmit torque over long distances. Modest reductions can be accomplished using sprockets and chain, but gears typically provide a more space-efficient
solution for higher ratio reductions.
Sprocket Ratio
When a larger sprocket drives a smaller one, for every rotation of the larger sprocket, the smaller sprocket must complete more revolutions, so the output will be faster than the input. If the situation is reversed, and a smaller sprocket drives a larger output sprocket, then for one rotation of the input, the output will complete less than one revolution, resulting in a speed decrease from the input. The ratio of the sizes of the two sprockets is proportional to the speed and torque changes between them.
The ratio in size from the input (driving) sprocket to the output (driven) sprocket determines if the output is faster (less torque) or has more torque (slower). To calculate exactly how the sprocket size ratio affects the relationship from input to output, use the ratio of the number of teeth between the two sprockets.
In the image below, the ratio of the number of teeth from the input sprocket to the output sprocket is 20T:15T, which means the input needs to turn about 1.33 rotations for the output to complete one rotation: $20T/15T \approx 1.33$
Compound Reduction
Some designs may require more reduction than is practical in a single stage. The ratio from the smallest sprocket available to the largest is 64:16, so if a greater reduction than 4x is required, multiple reduction stages can be used in the same mechanism, which is called a compound gear reduction. There are multiple gear or sprocket pairs in a compound reduction, with each pair linked by a shared axle. When using sprockets and chain in a multi-stage reduction, it's very common to use gears for the first stage and then use sprockets and chain for the last stage. The figure below is an example of a two-stage reduction using all gears, but one of the pairs could be replaced with sprockets and chain. The driving gear (input) of each pair is highlighted in orange.
Reduction is calculated the same for gears and sprockets based on the ratio of the number of teeth. To calculate the total reduction of a compound reduction, identify the reduction of each stage and
then multiply each reduction together.
$\text{CR}=\text{R}_1×\text{R}_2 ×\text{…} ×\text{R}_n$
CR is the total Compound Reduction
Rn is the total reduction of each stage
Using the image above as an example, the compound reduction is 12:1.
$\text{CR}=\text{R}_1× \text{R}_2 =\frac{60}{30}× \frac{90}{15}=2 ×6=12$
For any gear system, there are a limited number of gear and sprocket sizes available, so in addition to enabling greater reductions, compound reductions also make it possible to create a wider range of reduction values, or to achieve the same reduction as a single stage but with smaller-diameter motion components.
Each additional compound stage will result in a decrease in efficiency of the system.
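The compound reduction formula is simple to evaluate in code. A small illustrative sketch, with each stage given as (driven teeth, driving teeth):

```python
from functools import reduce

def compound_reduction(stages):
    """Total reduction CR = R1 * R2 * ... * Rn, where each stage is a
    (driven_teeth, driving_teeth) pair."""
    return reduce(lambda cr, s: cr * (s[0] / s[1]), stages, 1.0)

# The two-stage example above: 60:30 followed by 90:15
cr = compound_reduction([(60, 30), (90, 15)])   # 2 * 6 = 12
```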
Spacing and Center to Center Distances
Chain Loops can be used with ION Sprockets and structure featuring the MAX Pattern. Any 1:1 ratio will have the correct center-to-center distance for a properly tensioned chain, without the need for
tensioning bushings. To calculate how many links you will need, multiply the center-to-center distance by eight, and add the number of teeth on one sprocket.
Links of #25 chain = (Center-to-center Distance x 8) + Teeth in one sprocket
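That rule can be sketched as a small helper. Two assumptions are made here that the rule itself leaves implicit: the center-to-center distance is measured in inches (the factor of eight corresponds to the 0.25 in pitch of #25 chain), and the result is rounded to an even count, since a roller-chain loop normally needs an even number of links to close without a half link.

```python
def chain_links_25(center_to_center_in, sprocket_teeth):
    """Approximate #25 chain links for a 1:1 sprocket pair.

    center_to_center_in is in inches; #25 chain has a 0.25 in pitch, so the
    two straight runs account for 2 * C / 0.25 = 8 * C links."""
    links = center_to_center_in * 8 + sprocket_teeth
    return round(links / 2) * 2        # round to an even link count

# e.g. two 20T sprockets spaced 5 in apart:
links = chain_links_25(5, 20)          # (5 * 8) + 20 = 60 links
```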
If a ratio other than 1:1 is needed when using the REV ION Build System, use our Ratio Plates to accommodate for the change in center-to-center distance. An ION Ratio Plate provides an offset from
the standard MAX Pattern pitch that creates the center-to-center distance.
In order for sprockets to work effectively, it’s important that the center-to-center distance is correctly adjusted. The sprocket and chain example with the red "X" in the image below may work under
very light loads, but they will certainly not work and will skip under any significant loading. The sprockets in this example are too close together, so the chain is loose enough that it can skip on
the sprocket teeth. The sprockets with the green check mark are correctly spaced, which will provide smooth and reliable operation.
|
{"url":"https://docs.revrobotics.com/ion-build-system/motion/sprockets-and-chain/advanced-sprockets-and-chain","timestamp":"2024-11-12T06:56:49Z","content_type":"text/html","content_length":"475284","record_id":"<urn:uuid:7300479e-293b-4c95-9310-123762e45329>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00150.warc.gz"}
|
Non - Routine Algebra
In this thread, I'll be posting some non-routine algebra questions.
Non-Routine Maths (alias N-R Maths) is a type of math that has little to no practical significance, but helps immensely in increasing one's mathematical mental horizon.
I'll try to post at least one question a week at the very least but don't count on me.
If you find something worthwhile which fits in this category, you're most welcome to post, but just remember to number you're questions, so it's easier to navigate.
If possible also use the bold and italics tag:
[b][i] Bold, Italic Text [/i][/b]
Answers (with solutions) are most welcome!!
Question 1
Let the value of a certain number be equal to 0.1234567891011121314...997998999,
The digits are obtained via writing the integers from 1 to 999 in order.
The 2016th digit to the right of the decimal point is m.
What is the value of m?
Learning is fun - Exceptio probat regulam!
Re: Non - Routine Algebra
Hi CurlyBracket;
Nice one!
I hope I got it right!
Edit 5/5/2022: Improved my solution (see post #4)
Last edited by phrontister (2022-05-04 04:12:40)
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: Non - Routine Algebra
Three cheers for Phrontister! Hip Hip Hooray!
The answer is certainly perfect, but how did you get this expression?
Learning is fun - Exceptio probat regulam!
Re: Non - Routine Algebra
CurlyBracket wrote:
...how did you get this expression?
Sorry...was a bit hasty there.
This is better:
Is that what you got?
I've fixed my previous post.
Last edited by phrontister (2022-05-04 04:31:06)
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: Non - Routine Algebra
Well, here's how I went about it.
Learning is fun - Exceptio probat regulam!
Re: Non - Routine Algebra
Hi CurlyBracket;
Yes, that's my reasoning also (see post #2), and my m=RIGHT((2016-ax-by)/3+x+y) expression is an attempt to state it algebraically.
I probably cheated by employing spreadsheet's RIGHT function to extract the last digit of 708, but used that to keep the expression small (though non-standard).
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: Non - Routine Algebra
I'm sort of unfamiliar with most spreadsheet functions, so this was something new for me.
Yes, I can see the parallels between your solution and mine.
Here's the next question :
Question 2
A and B are two friends.
By subtracting B's age from the square of A, we get 158.
On the other hand, by subtracting A's age from the square of B gives us 108.
What is A's age?
Learning is fun - Exceptio probat regulam!
Re: Non - Routine Algebra
hi CurlyBracket
Children are not defined by school ...........The Fonz
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Sometimes I deliberately make mistakes, just to test you! …………….Bob
Re: Non - Routine Algebra
Hi Bob,
The answer is correct.
Could you give me a hint on how to solve it?
Learning is fun - Exceptio probat regulam!
Re: Non - Routine Algebra
I called the ages A and B and made two equations. You can make, say, B the subject of one and substitute it into the other and solve the quadratic. I didn't, so I don't know about a second answer. My
guess would be that the second solution is inadmissible in the context of the question (eg negative or out of range).
What I actually did, because it looked easier and was, was to factorise (A-B) out of the equation which left me with two factors = 50. It's reasonable to assume that there's an integer solution and
50 doesn't have that many factors, so I was able to spot one straight away that works.
Children are not defined by school ...........The Fonz
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Sometimes I deliberately make mistakes, just to test you! …………….Bob
Re: Non - Routine Algebra
Hi Bob & CurlyBracket;
I solved it this way:
A = B^2 - 108
B = A^2 - 158
Solution 1:
Substituting B into the first equation resulted in A^4 - 316A^2 - A + 24856 = 0 ... which I couldn't solve longhand, only online (eg, WolframAlpha) or with CAS (eg, Mathematica, Excel).
Answer: A = 13
Solution 2:
Trial & error: Evaluating the first 2 equations with A = 13 yields B = 11, evaluating with B = 11 yields A = 13, and figures in either direction from those yield increasingly large errors.
Answer: A = 13
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: Non - Routine Algebra
Hi Bob and phrontister,
Okay, so trial and error is the way to go. I was hoping there would be another method to do it, since this question was picked out of a strictly "no calculator allowed" exam.
But I suppose simultaneous equations would be very difficult in a case where the squares are involved.
Learning is fun - Exceptio probat regulam!
Re: Non - Routine Algebra
OK, I'll outline my complete method. I don't really consider it to be trial and error; rather it enables me to home in on possible solutions.
Let the numbers be A and B, with A>B WLOG
A^2 - B = 158
B^2 - A = 108
A^2 - B^2 - B + A = 50
(A-B)(A+B+1) = 50
Assuming we are seeking +ve integer solutions, with the factors of 50 being {50,25,10,5,2,1}
If A+B+1 = 50 then A = 25, B = 24. This is not a solution.
If A+B+1 = 25 then A = 13, B = 11. This is a solution.
If A+B+1 = 10 then A = 7, B = 2. This is not a solution,
It is unnecessary to check lower values of A+B+1, as A-B would come out bigger and lead to impossible solutions.
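A brute-force search confirms the factor argument, and that the solution is unique among positive integer ages:

```python
# Check all positive integer age pairs up to 99.
solutions = [(A, B) for A in range(1, 100) for B in range(1, 100)
             if A * A - B == 158 and B * B - A == 108]
```

The only pair found is A = 13, B = 11.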
Children are not defined by school ...........The Fonz
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Sometimes I deliberately make mistakes, just to test you! …………….Bob
Re: Non - Routine Algebra
Hi Bob,
Thank you, that makes sense.
Here's the next question:
Question 3
If n! has 4 zeroes at the end and (n+1)! has 6 zeroes at the end, find the value of n.
Learning is fun - Exceptio probat regulam!
Re: Non - Routine Algebra
Why would a factorial gain zeros at all?
Whenever there is a factor of 5 and 2 in the calculation another zero is added.
eg. It isn't until 5! that we get the first zero .... 120.
factors of 2 are plentiful so I looked at when another factor of 5 is acquired
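Counting the factors of 5 directly makes this easy to check by machine. A quick Python sketch of Legendre's formula:

```python
def trailing_zeros(n):
    """Trailing zeros of n! = number of factors of 5 in 1..n (Legendre)."""
    z, p = 0, 5
    while p <= n:
        z += n // p
        p *= 5
    return z

# Find the n whose factorial has 4 zeros while (n+1)! has 6:
n = next(k for k in range(1, 200)
         if trailing_zeros(k) == 4 and trailing_zeros(k + 1) == 6)
```

The search lands on the first n where the factorial gains two zeros at once, i.e. where n+1 is a multiple of 25.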
Children are not defined by school ...........The Fonz
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Sometimes I deliberately make mistakes, just to test you! …………….Bob
Re: Non - Routine Algebra
I was trying using this method:
I think it is almost the same method, just a little more direct.
Learning is fun - Exceptio probat regulam!
Re: Non - Routine Algebra
That is pretty much my way too.
Children are not defined by school ...........The Fonz
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Sometimes I deliberately make mistakes, just to test you! …………….Bob
Re: Non - Routine Algebra
Ahoy there! I'm back!
Question 4
If x² + 2x + 5 is a factor of x⁴ + px² + q, find p and q.
Learning is fun - Exceptio probat regulam!
Re: Non - Routine Algebra
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
Re: Non - Routine Algebra
hi CurlyBracket
Here's a way to do this.
The x^3 term is zero and will give you a, and then the x term will give you b.
From there you can work out p and q.
Children are not defined by school ...........The Fonz
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Sometimes I deliberately make mistakes, just to test you! …………….Bob
Re: Non - Routine Algebra
I've got it now! Thanks for the help, Bob and Ganesh!
Here's another question. It's a bit different from the others. Flavours are the essence of life!
Question 5
Three teams of woodcutters have decided to organize a competition. The winner is the team who fells the maximum number of trees in the given time.
The first and third team together felled twice the number of trees felled by the second team.
The second and third team together felled three times the number of trees felled by the first team.
Who won? Or in the event of there being joint winners, who would then be named as such?
Last edited by CurlyBracket (2022-08-29 21:12:16)
Learning is fun - Exceptio probat regulam!
Re: Non - Routine Algebra
Hi CurlyBracket;
Question 5: The Woodcutting Competition:
I think that there was no tie, and Third won, the felling ratio being Third:Second:First = 5:4:3.
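A quick way to check this ratio with exact arithmetic (illustrative sketch):

```python
from fractions import Fraction

# Let F, S, T be the counts: F + T = 2S and S + T = 3F.
# Eliminating T (T = 2S - F) gives 3S = 4F, and then T = 5F/3.
F = Fraction(1)
S = Fraction(4, 3) * F
T = Fraction(5, 3) * F
assert F + T == 2 * S and S + T == 3 * F   # both conditions hold
ranking = sorted([("First", F), ("Second", S), ("Third", T)],
                 key=lambda t: t[1])
```

Clearing denominators gives the 3:4:5 ratio, with the third team on top.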
EDIT: Included my solution method...
Last edited by phrontister (2022-08-07 21:35:53)
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: Non - Routine Algebra
Hi phrontister,
That's the correct answer!
I solved it like this. It's quite similar.
Learning is fun - Exceptio probat regulam!
Re: Non - Routine Algebra
Hi CurlyBracket;
Yes, I like that method.
Just a comment re the puzzle wording:
I know what you meant with it, but, in a strict interpretation of the wording, answering the second question could have been a bit awkward if there was a tie...and there are 3 scenarios with at least
one winner in which a tie occurs.
Last edited by phrontister (2022-08-28 22:52:02)
"The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do." - Ted Nelson
Re: Non - Routine Algebra
Then should I edit it to just say "Who won?"
Learning is fun - Exceptio probat regulam!
|
{"url":"https://mathisfunforum.com/viewtopic.php?pid=425058","timestamp":"2024-11-09T19:17:55Z","content_type":"application/xhtml+xml","content_length":"49426","record_id":"<urn:uuid:610d07e9-c29e-4bd8-8c6e-2b26efc722f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00581.warc.gz"}
|
Teaching Strategies On The Training Of Mathematic Thinking Of Middle School Student
Posted on:2009-08-15 Degree:Master Type:Thesis
Country:China Candidate:X S Qin Full Text:PDF
GTID:2167360245954400 Subject:Curriculum and pedagogy
In math teaching, the foremost goal and requirement is that students should learn through their own thinking. Math learning is not only about acquiring practical mathematical knowledge, skills, and abilities, but also about mastering mathematical thinking approaches and developing thinking capability. Therefore, in high school math teaching, teachers must pay attention to the process and cultivation of students' mathematical thinking. However, one of the defects in practical teaching happens to be ignoring or suppressing the process of mathematical thinking: some teachers adopt the method of spoon-feeding or large amounts of exercise-doing, treating math learning as mere perception and recognition, so that students' thinking is stiffened and they become reluctant to think; some teachers hold the view that mathematical thinking is merely logical thinking, ignoring that it is also holistic, dialectic, and dynamic. It is obvious that to change this situation, studies on the importance of mathematical thinking and on teachers' teaching approaches are highly necessary. Only by solving the problem fundamentally can students' mathematical thinking be developed, so that they not only master basic math knowledge but also apply that knowledge to everyday life. This thesis further discusses thinking and mathematical thinking. By recording classroom teaching and analyzing teaching approaches and teaching effects, this thesis is devoted to discussing the cultivation of students' mathematical thinking and the approaches for cultivating it.
Keywords/Search Tags: Teachers, Math teaching approaches, Math thinking, Training of math thinking
|
{"url":"https://www.globethesis.com/?t=2167360245954400","timestamp":"2024-11-11T22:37:18Z","content_type":"application/xhtml+xml","content_length":"7454","record_id":"<urn:uuid:20244106-b650-40fb-9783-fb2837897675>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00428.warc.gz"}
|
Module 6: Compare Numbers-Math 2A - Course Guide
Module Overview:
In this module, students will use different ways to make a number, make a number using base-ten blocks, use place-value concepts to represent amounts of tens and ones, and compare two-digit numbers.
They will use place-value concepts to represent amounts of hundreds, tens, and ones, compare three-digit numbers, and understand that the three digits of a three-digit number represent amounts of
hundreds, tens, and ones.
Module Materials:
Lesson 1. Number Structure (Tens and Ones): 9 rods, 9 unit cubes, base-ten blocks
Lesson 2. Number Structure (Hundreds): scissors
Lesson 3. Make Groups of 10s and 1s to Compare: 18 unit cubes and 18 rods
Lesson 4. Make Groups of 100s to Compare 3-Digit Numbers: base-ten blocks
Lesson 5. Problem-Solving Strategy: Reasoning: scissors
Module Objectives:
Lesson 1. Number Structure (Tens and Ones): use different ways to make a number
Lesson 2. Number Structure (Hundreds): make a number using base-ten blocks
Lesson 3. Make Groups of 10s and 1s to Compare: use place-value concepts to represent amounts of tens and ones; compare two-digit numbers
Lesson 4. Make Groups of 100s to Compare 3-Digit Numbers: use place-value concepts to represent amounts of hundreds, tens, and ones; compare three-digit numbers
Lesson 5. Problem-Solving Strategy: Reasoning: understand that the three digits of a three-digit number represent amounts of hundreds, tens, and ones
Module Key Words:
Key Words
Number structure
Compare numbers
Break apart
Greater than (>)
Less than (<)
Module Assignments:
Lesson # Lesson Title Page # Assignment Title
1 Number Structure (Tens and Ones) 4 Hands-On
2 Number Structure (Hundreds) 4 Hands-On
3 Make Groups of 10s and 1s to Compare 4 Hands-On
4 Make Groups of 100s to Compare 3-Digit Numbers 4 Hands-On
5 Problem-Solving Strategy: Reasoning 4 Hands-On
Learning Coach Notes:
Lesson 1. Number Structure (Tens and Ones): Ask your student to share with you what they learned about how to make numbers using base-ten blocks.
Lesson 2. Number Structure (Hundreds): The Number Structure Matching Game worksheet has cut-outs in it. If your student has a workbook, they can find this worksheet by looking at the cut-outs section in the Table of Contents of the workbook. Ask your student to explain what they learned in this lesson.
Lesson 3. Make Groups of 10s and 1s to Compare: Ask your student to describe what they learned about comparing numbers in this lesson.
Lesson 4. Make Groups of 100s to Compare 3-Digit Numbers: In their Math notebook, have your student write and label the sign used to show a number is greater than another number, and the sign used to show a number is less than another number.
Lesson 5. Problem-Solving Strategy: Reasoning: The Number Cards Worksheet will be found in the cut-out section of the workbook if your student has a workbook. The cut-out section can be found in the Table of Contents of the workbook.
Module Guiding Questions:
When a student starts a lesson ask them questions to check for prior knowledge and understanding and to review concepts being taught. At the end of the lesson ask the questions again to see if their
answer changes.
Lesson Title Question
Number Structure (Tens and Ones) 1. Can you make a number using base-ten blocks?
Number Structure (Hundreds) 1. Where are the hundreds, tens, and ones?
Make Groups of 10s and 1s to Compare 1. Why do we compare numbers?
Make Groups of 100s to Compare 3-Digit Numbers 1. How can we use place value to compare numbers?
Problem-Solving Strategy: Reasoning 1. Can I show place values for whole numbers up to 100?
Module Video Questions:
When a student watches a video take time to ask them questions about what they watched. Suggested questions for the videos in this module are listed here. Suggestion: Have the student watch the
entire video first all the way through. Then have them watch the video a second time, as they watch it pause the video and ask the questions.
Lesson Title Video Question
Module Suggested Read Aloud Books:
Take time to read to your student or have them read aloud to you. Read a different book each day. While reading the book point out concepts being taught. You may purchase these books or find them at
your local library. Suggested things to discuss while reading the book:
• What is the main idea?
• What are three things new you learned?
• How does this book relate to what you are learning about?
# Book Author Lexile Level
Module Outing:
Take some time to apply what your student is learning to the real world. Suggested outings are below.
|
{"url":"https://ideal.accelerate-ed.com/modulecourseguide/index/c8f71e90-906c-4e87-9ed9-74fb95e229dd","timestamp":"2024-11-09T10:06:23Z","content_type":"text/html","content_length":"33997","record_id":"<urn:uuid:991aac85-1c34-4158-a3a2-43640be5f23f>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00106.warc.gz"}
|
1.4 Creating Graphs from External Sources
1.4 Creating Graphs from External Sources¶
The options to construct a DGLGraph from external sources include:
• Conversion from external python libraries for graphs and sparse matrices (NetworkX and SciPy).
• Loading graphs from disk.
The section does not cover functions that generate graphs by transforming from other graphs. See the API reference manual for an overview of them.
Creating Graphs from External Libraries¶
The following code snippet is an example for creating a graph from a SciPy sparse matrix and a NetworkX graph.
>>> import dgl
>>> import torch as th
>>> import scipy.sparse as sp
>>> spmat = sp.rand(100, 100, density=0.05) # 5% nonzero entries
>>> dgl.from_scipy(spmat) # from SciPy
Graph(num_nodes=100, num_edges=500,
>>> import networkx as nx
>>> nx_g = nx.path_graph(5) # a chain 0-1-2-3-4
>>> dgl.from_networkx(nx_g) # from networkx
Graph(num_nodes=5, num_edges=8,
Note that when constructing from the nx.path_graph(5), the resulting DGLGraph has 8 edges instead of 4. This is because nx.path_graph(5) constructs an undirected NetworkX graph networkx.Graph while a
DGLGraph is always directed. In converting an undirected NetworkX graph into a DGLGraph, DGL internally converts undirected edges to two directed edges. Using directed NetworkX graphs
networkx.DiGraph can avoid such behavior.
>>> nxg = nx.DiGraph([(2, 1), (1, 2), (2, 3), (0, 0)])
>>> dgl.from_networkx(nxg)
Graph(num_nodes=4, num_edges=4,
DGL internally converts SciPy matrices and NetworkX graphs to tensors to construct graphs. Hence, these construction methods are not meant for performance critical parts.
See APIs: dgl.from_scipy(), dgl.from_networkx().
Loading Graphs from Disk¶
There are many data formats for storing graphs and it isn’t possible to enumerate every option. Thus, this section only gives some general pointers on certain common ones.
Comma Separated Values (CSV)¶
One very common format is CSV, which stores nodes, edges, and their features in a tabular format:
age, title
43, 1
23, 3
src, dst, weight
0, 1, 0.4
0, 3, 0.9
There are known Python libraries (e.g. pandas) for loading this type of data into python objects (e.g., numpy.ndarray), which can then be used to construct a DGLGraph. If the backend framework also
provides utilities to save/load tensors from disk (e.g., torch.save(), torch.load()), one can follow the same principle to build a graph.
See also: Tutorial for loading a Karate Club Network from edge pairs CSV.
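As a concrete illustration of the CSV route, the edge table above can be parsed with nothing but the Python standard library; the resulting index lists are what one would then convert to tensors and pass to dgl.graph(). This is a sketch only, using an inline copy of the CSV rather than a file on disk:

```python
import csv
import io

# Inline copy of the edges.csv example shown above.
edges_csv = "src,dst,weight\n0,1,0.4\n0,3,0.9\n"

src, dst, weight = [], [], []
for row in csv.DictReader(io.StringIO(edges_csv)):
    src.append(int(row["src"]))
    dst.append(int(row["dst"]))
    weight.append(float(row["weight"]))

# With the PyTorch backend, one would continue with e.g.:
#   g = dgl.graph((torch.tensor(src), torch.tensor(dst)))
#   g.edata["weight"] = torch.tensor(weight)
```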
JSON/GML Format¶
Though not particularly fast, NetworkX provides many utilities to parse a variety of data formats which indirectly allows DGL to create graphs from these sources.
DGL Binary Format¶
DGL provides APIs to save and load graphs from disk stored in binary format. Apart from the graph structure, the APIs also handle feature data and graph-level label data. DGL also supports
checkpointing graphs directly to S3 or HDFS. The reference manual provides more details about the usage.
See APIs: dgl.save_graphs(), dgl.load_graphs().
|
{"url":"https://docs.dgl.ai/en/0.7.x/guide/graph-external.html","timestamp":"2024-11-14T13:33:57Z","content_type":"text/html","content_length":"24315","record_id":"<urn:uuid:9940357e-6161-425c-94b1-e8ce091c98c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00185.warc.gz"}
|
Question 25 and 26, Exercise 4.3
Solutions of Question 25 and 26 of Exercise 4.3 of Unit 04: Sequence and Series. This is unit of Model Textbook of Mathematics for Class XI published by National Book Foundation (NBF) as Federal
Textbook Board, Islamabad, Pakistan.
Question 25
A family saves money in an arithmetic sequence: Rs. 6,000 in the first year, Rs. 70,000 in the second year and so on, for 20 years. How much do they save in all?
From the statement, we have the following series: $$ 6000+70,000+...+a_{20}.$$ This is arithmetic series with $a_1=6,000$, $d=70,000-6,000=64,000$, $n=20$. We have to find $S_n$.
As $$\begin{aligned} S_n&=\frac{n}{2}[2a_1+(n-1)d]\\ \implies S_{20}&=\frac{20}{2}[2(6,000)+(20-1)(64,000)]\\ &=10 \times [12,000+1,216,000]\\ &=12,280,000.\end{aligned}$$ Hence the family will save Rs. 12,280,000.
Question 26
Mr. Saleem saves Rs. 500 on October 1, Rs. 550 on October 2, and Rs. 600 on October 3 and so on. How much is saved during October? (October has 31 days)
From the statement, we have the following series: $$ 500+550+600+...+a_{31}.$$ This is arithmetic series with $a_1=500$, $d=550-500=50$, $n=31$. We have to find $S_n$.
As $$\begin{aligned} S_n&=\frac{n}{2}[2a_1+(n-1)d]\\ \implies S_{31}&=\frac{31}{2}[2(500)+(31-1)(50)]\\ &=\frac{31}{2} \times [1000+1500]\\ &=31 \times 1250\\ &=38750.\end{aligned}$$ Hence Mr. Saleem will save Rs. 38,750.
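As a quick numerical check of the two sums above (not part of the textbook solution), the arithmetic-series formula $S_n=\frac{n}{2}[2a_1+(n-1)d]$ can be evaluated directly:

```python
def arithmetic_sum(a1, d, n):
    """S_n = n/2 * [2*a1 + (n-1)*d] for an arithmetic series."""
    return n * (2 * a1 + (n - 1) * d) // 2

print(arithmetic_sum(6000, 64000, 20))  # Question 25 -> 12280000
print(arithmetic_sum(500, 50, 31))      # Question 26 -> 38750
```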
|
{"url":"https://www.mathcity.org/math-11-nbf/sol/unit04/ex4-3-p12","timestamp":"2024-11-06T08:53:08Z","content_type":"application/xhtml+xml","content_length":"24140","record_id":"<urn:uuid:8b7ac2eb-4580-470b-8a50-95d4fe4161c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00067.warc.gz"}
|
Corsi di studio e offerta formativa - Università degli Studi di Parma
Learning objectives
The course has been designed to provide the fundamental concepts indispensable for understanding the mechanisms and laws that govern nature and underlie the properties of matter, with special emphasis on aspects useful for understanding chemical and biological processes, as preparation for subsequent biology and chemistry courses. In addition, the course aims to develop the ability to formulate and solve problems.
|
{"url":"https://corsi.unipr.it/en/ugov/degreecourse/143214","timestamp":"2024-11-15T01:07:54Z","content_type":"text/html","content_length":"46789","record_id":"<urn:uuid:64c0dd3e-d4b0-4375-bf4f-6419a92375b6>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00127.warc.gz"}
|
Manual Hyperparameter Optimization
Machine learning models have hyperparameters that you must set in order to customize the model to your dataset.
Often, the general effects of hyperparameters on a model are known, but how to best set a hyperparameter and combinations of interacting hyperparameters for a given dataset is challenging.
A better approach than manual trial and error is to objectively search different values for model hyperparameters and choose a subset that results in a model that achieves the best performance on a given dataset. This is called hyperparameter optimization, or hyperparameter tuning.
A range of different optimization algorithms may be used, although two of the simplest and most common methods are random search and grid search.
• Random Search . Define a search space as a bounded domain of hyperparameter values and randomly sample points in that domain.
• Grid Search . Define a search space as a grid of hyperparameter values and evaluate every position in the grid.
Grid search is great for spot-checking combinations that are known to perform well generally. Random search is great for discovery and getting hyperparameter combinations that you would not have
guessed intuitively, although it often requires more time to execute.
Grid and random search are primitive optimization algorithms, and it is possible to use any optimization we like to tune the performance of a machine learning algorithm. For example, it is possible
to use stochastic optimization algorithms. This might be desirable when good or great performance is required and there are sufficient resources available to tune the model.
|
{"url":"https://discuss.boardinfinity.com/t/manual-hyperparameter-optimization/5865","timestamp":"2024-11-10T06:43:03Z","content_type":"text/html","content_length":"16256","record_id":"<urn:uuid:37b2019f-19b2-4baf-8452-94e586cd515a>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00384.warc.gz"}
|
Search Results
Perceptibility and Ambiguity of Camera Motion
The relationship between world coordinates, image coordinates and camera spatial velocity has some interesting ramifications. Some very different camera motions cause identical motion of points in the image, and some camera motions lead to no change at all in some parts of the image. Let's explore these phenomena and how we […]
|
{"url":"https://robotacademy.net.au/?s=image%20Jacobian","timestamp":"2024-11-15T04:44:35Z","content_type":"text/html","content_length":"38451","record_id":"<urn:uuid:5d8404cf-28f8-4271-ae7d-cf3a05aacb01>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00150.warc.gz"}
|