Mass Upper Limits
Spherically symmetric, self-gravitating, equilibrium configurations can be constructed from gases exhibiting a wide variety of degrees of compressibility. When examining how the internal structure of
such configurations varies with compressibility, or when examining the relative stability of such structures, it can be instructive to construct models using a polytropic equation of state because
the degree of compressibility can be adjusted by simply changing the value of the polytropic index, ${\displaystyle n}$, across the range, ${\displaystyle ~0\leq n\leq \infty }$. (Alternatively, one
can vary the effective adiabatic exponent of the gas, ${\displaystyle ~\gamma _{g}=1+1/n}$.) In particular, ${\displaystyle ~n=0~~(\gamma _{g}=\infty )}$ represents a hard equation of state and
describes an incompressible configuration, while ${\displaystyle ~n=\infty ~~(\gamma _{g}=1)}$ represents an isothermal and extremely soft equation of state.
Isolated Polytropes
Isolated polytropic spheres exhibit three attributes that are especially key in the context of our present discussion:
1. The equilibrium structure is dynamically stable if ${\displaystyle ~n<3}$.
2. The equilibrium structure has a finite radius if ${\displaystyle ~n<5}$.
3. The equilibrium structure can be described in terms of closed-form analytic expressions for ${\displaystyle ~n=0}$, ${\displaystyle ~n=1}$, and ${\displaystyle ~n=5}$.
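For reference, the standard closed-form Lane-Emden solutions for these three indices (quoted here from the general literature rather than derived in this discussion) are ${\displaystyle ~\theta _{n=0}=1-\xi ^{2}/6}$, ${\displaystyle ~\theta _{n=1}=\sin \xi /\xi }$, and ${\displaystyle ~\theta _{n=5}=[1+\xi ^{2}/3]^{-1/2}}$, where ${\displaystyle ~\theta }$ is the dimensionless Lane-Emden function and ${\displaystyle ~\xi }$ is the dimensionless radial coordinate.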
Isothermal Spheres
Isothermal spheres (polytropes with index ${\displaystyle ~n=\infty }$) are discussed in a wide variety of astrophysical contexts because it is not uncommon for physical conditions to conspire to
create an extended volume throughout which a configuration exhibits uniform temperature. But mathematical models of isothermal spheres are relatively cumbersome to analyze because they extend to
infinity, they are dynamically unstable, and they are not describable in terms of analytic functions. In such astrophysical contexts, we have sometimes found it advantageous to employ an ${\displaystyle ~n=5}$ polytrope instead of an isothermal sphere. An isolated ${\displaystyle ~n=5}$ polytrope can serve as an effective surrogate for an isothermal sphere because it is infinite in
extent and is dynamically unstable, but it is less cumbersome to analyze because its structure can be described by closed-form analytic expressions.
Bounded Isothermal Sphere & Bonnor-Ebert Mass
In the mid-1950s, Ebert (1955) and Bonnor (1956) independently realized that an isothermal gas cloud can be stabilized by embedding it in a hot, tenuous external medium. The relevant mathematical
model is constructed by chopping off the isothermal sphere at some finite radius — call it, ${\displaystyle \xi _{e}}$ — and imposing an externally applied pressure, ${\displaystyle ~P_{e}}$, that is
equal to the pressure of the isothermal gas at the specified edge of the truncated sphere. But for a given mass and temperature, there is a value of ${\displaystyle P_{e}}$ below which the truncated
isothermal sphere is dynamically unstable, like its isolated and untruncated counterpart. Viewed another way, given the value of ${\displaystyle P_{e}}$ and the isothermal sound speed, ${\displaystyle c_{s}}$, a bounded isothermal sphere will be dynamically stable only if its mass is below a critical value,
Bonnor-Ebert Mass

${\displaystyle M_{\mathrm {max} }=\alpha {\biggl (}{\frac {c_{s}^{8}}{G^{3}P_{e}}}{\biggr )}^{1/2}}$

Coefficient ${\displaystyle \alpha }$ for Pressure-Bounded Configurations

│ ${\displaystyle \alpha }$ │ Context │ Source (Discovery Paper) │
│ ${\displaystyle 1.18}$ │ Bounded Isothermal Sphere (numerically derived) │ Bonnor (1956) (see also here) │
│ ${\displaystyle {\biggl (}{\frac {3^{4}\cdot 5^{3}}{2^{10}\pi }}{\biggr )}^{1/2}}$ │ Isothermal Virial Analysis (exact) │ Here │
│ ${\displaystyle {\biggl (}{\frac {1}{2}}{\biggr )}^{3/10}{\biggl (}{\frac {3^{7}}{2^{8}\pi }}{\biggr )}^{1/2}}$ │ Bounded ${\displaystyle ~n=5}$ Polytrope (exact) │ Here │
│ ${\displaystyle {\biggl (}{\frac {3^{19}}{2^{12}\cdot 5^{7}\pi }}{\biggr )}^{1/2}}$ │ ${\displaystyle ~n=5}$ Virial Analysis (exact) │ Here │
where ${\displaystyle ~\alpha }$ is a dimensionless coefficient of order unity. This limiting mass is often referred to as the Bonnor-Ebert mass. It appears most frequently in the astrophysics
literature in discussions of star formation because that is the arena in which both Bonnor and Ebert were conducting research when they made their discoveries.
As is reviewed in a related discussion and as is documented in the table accompanying the expression for ${\displaystyle ~M_{\mathrm {max} }}$, above, Bonnor (1956) used Emden's (1907) tabulated
properties of an isothermal sphere to determine that the dimensionless radius of this limiting configuration is ${\displaystyle \xi _{e}\approx 6.5}$ and that the leading coefficient is ${\displaystyle \alpha \approx 1.18}$. It is worth noting that a global virial analysis of the stability of bounded isothermal spheres produces the same expression for ${\displaystyle M_{\mathrm {max} }}$ with a
leading coefficient that has an exact, analytic prescription, namely, ${\displaystyle \alpha =(3^{4}\cdot 5^{3}/2^{10}\pi )^{1/2}\approx 1.77408}$. While it can be advantageous to reference this
analytic prescription of ${\displaystyle \alpha }$, the virial analysis must be considered more approximate than Bonnor's analysis because it does not require the construction of models that are in
detailed force balance.
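To make these coefficients concrete, the short script below (an illustrative sketch that is not part of the original discussion; the gravitational constant is in SI units and the cloud parameters are arbitrary placeholder values, not numbers taken from the text) evaluates the exact expressions for ${\displaystyle \alpha }$ quoted in the table and the corresponding limiting masses.

```python
import math

G = 6.674e-11  # gravitational constant in SI units [m^3 kg^-1 s^-2]

# Leading coefficients quoted in the table above: Bonnor's numerical value
# plus the three exact, analytic prescriptions.
alphas = {
    "Bonnor (1956), numerical": 1.18,
    "isothermal virial analysis": math.sqrt(3**4 * 5**3 / (2**10 * math.pi)),
    "bounded n=5 polytrope": 0.5**0.3 * math.sqrt(3**7 / (2**8 * math.pi)),
    "n=5 virial analysis": math.sqrt(3**19 / (2**12 * 5**7 * math.pi)),
}

def m_max(alpha, c_s, P_e):
    """Limiting mass M_max = alpha * (c_s^8 / (G^3 P_e))^(1/2), in kg."""
    return alpha * math.sqrt(c_s**8 / (G**3 * P_e))

# Placeholder inputs loosely representative of a cold, pressure-confined cloud;
# these two numbers are illustrative assumptions, not values taken from the text.
c_s = 200.0    # isothermal sound speed [m/s]
P_e = 1.0e-12  # bounding external pressure [Pa]

for label, alpha in alphas.items():
    print(f"{label:28s} alpha = {alpha:.5f}   M_max = {m_max(alpha, c_s, P_e):.3e} kg")
```

Evaluated this way, the three exact expressions reproduce the values 1.77408, 1.33943, and 1.07523 cited in the surrounding text, so the four prescriptions differ only in the leading factor multiplying the same combination ${\displaystyle ~(c_{s}^{8}/G^{3}P_{e})^{1/2}}$.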
Our detailed force-balance analysis of truncated and pressure-bounded, ${\displaystyle n=5}$ polytropes identifies a physically analogous limiting mass. If the average isothermal sound speed, ${\displaystyle ~{\bar {c_{s}}}}$, as defined elsewhere, is used in place of ${\displaystyle c_{s}}$, the mathematical expression for ${\displaystyle ~M_{\mathrm {max} }}$ has exactly the same form as in the isothermal case. But for the ${\displaystyle ~n=5}$ polytrope we know that the limiting configuration has a dimensionless radius given precisely by ${\displaystyle \xi _{e}=3}$; and, as a result, the leading coefficient in the definition of ${\displaystyle M_{\mathrm {max} }}$ is prescribable analytically, namely, ${\displaystyle \alpha =2^{-3/10}\cdot (3^{7}/2^{8}\pi )^{1/2}\approx 1.33943}$. As is documented in the table accompanying the relation for ${\displaystyle ~M_{\mathrm {max} }}$, above, in the case of a truncated ${\displaystyle ~n=5}$ polytrope, the simpler and more approximate virial analysis gives ${\displaystyle \alpha =(3^{19}/2^{12}\cdot 5^{7}\pi )^{1/2}\approx 1.07523}$.
Schönberg-Chandrasekhar Mass
In the early 1940s, Chandrasekhar and his colleagues (see Henrich & Chandrasekhar (1941) and Schönberg & Chandrasekhar (1942)) discovered that a star with an isothermal core will become unstable if
the fractional mass of the core is above some limiting value. They discovered this by constructing models that are now commonly referred to as composite polytropes or bipolytropes, that is, models in
which the star's core is described by a polytropic equation of state having one index — say, ${\displaystyle ~n_{c}}$ — and the star's envelope is described by a polytropic equation of state of a
different index — say, ${\displaystyle ~n_{e}}$. In an accompanying discussion we explain in detail how the two structural components with different polytropic indexes are pieced together
mathematically to build equilibrium bipolytropes. For a given choice of the two indexes, ${\displaystyle ~n_{c}}$ and ${\displaystyle ~n_{e}}$, a sequence of models can be generated by varying the
radial location at which the interface between the core and envelope occurs. As the interface location is varied, the relative amount of mass enclosed inside the core, ${\displaystyle ~u \equiv M_{\mathrm {core} }/M_{\mathrm {tot} }}$, quite naturally varies as well.
Henrich & Chandrasekhar (1941) built structures of uniform composition having an isothermal core (${\displaystyle ~n_{c}=\infty }$) and an ${\displaystyle n_{e}=3/2}$ [should be ${\displaystyle ~n_{e}=3}$] polytropic envelope and found that equilibrium models exist only for values of ${\displaystyle ~u \leq u _{\mathrm {max} }\approx 0.35}$. Schönberg & Chandrasekhar (1942) extended this analysis to include
structures in which the mean molecular weight of the gas changes discontinuously across the interface. Specifically, they used the same values of ${\displaystyle ~n_{c}}$ and ${\displaystyle ~n_{e}}$
as Henrich & Chandrasekhar, but they constructed models in which the ratio of the molecular weight in the core to the molecular weight in the envelope is ${\displaystyle ~\mu _{c}/\mu _{e}=2}$. This
was done to more realistically represent stars as they evolve off the main sequence; they have inert, isothermal helium cores and envelopes that are rich in hydrogen. Note that introducing a
discontinuous drop in the mean molecular weight at the core-envelope interface also introduces a discontinuous drop in the gas density across the interface. As they summarize on p. 168 of their article, in these models Schönberg & Chandrasekhar (1942) found that ${\displaystyle ~u _{\mathrm {max} }\approx 0.101}$. This is commonly referred to as the Schönberg-Chandrasekhar
mass limit, although it was Henrich & Chandrasekhar who were the first to identify the instability.
In an effort to develop a more complete appreciation of the onset of the instability associated with the Schönberg-Chandrasekhar mass limit, Beech (1988) matched an analytically prescribable, ${\displaystyle ~n_{e}=1}$ polytropic envelope to an isothermal core and, like Schönberg & Chandrasekhar, allowed for a discontinuous change in the molecular weight at the interface. [For an even more
comprehensive generalization and discussion, see Ball, Tout, & Żytkow (2012, MNRAS, 421, 2713)]. Beech's results were not significantly different from those reported by Schönberg & Chandrasekhar
(1942); in particular, the value of ${\displaystyle ~u _{\mathrm {max} }}$ was still only definable numerically because an isothermal core cannot be described in terms of analytic functions.
In an accompanying derivation [see, also, Eggleton, Faulkner, and Cannon (1998, MNRAS, 298, 831)] we have gone one step farther, matching an analytically prescribable, ${\displaystyle ~n_{e}=1}$
polytropic envelope to an analytically prescribable, ${\displaystyle ~n_{c}=5}$ polytropic core. For this bipolytrope, we show that there is a limiting mass-fraction, ${\displaystyle ~u _{\mathrm {max} }}$, for any choice of the molecular weight ratio ${\displaystyle ~\mu _{c}/\mu _{e}>3}$ and that the interface location, ${\displaystyle ~\xi _{i}}$, associated with this critical
configuration is given by the positive, real root of the following relation:
${\displaystyle {\biggl (}{\frac {\pi }{2}}+\tan ^{-1}\Lambda _{i}{\biggr )}(1+\ell _{i}^{2})[3+(1-m_{3})^{2}(2-\ell _{i}^{2})\ell _{i}^{2}]-m_{3}\ell _{i}[(1-m_{3})\ell _{i}^{4}-(m_{3}^{2}-m_{3}+2)\ell _{i}^{2}-3]=0\,,}$
${\displaystyle \ell _{i}\equiv {\frac {\xi _{i}}{\sqrt {3}}}\,;}$ ${\displaystyle m_{3}\equiv 3{\biggl (}{\frac {\mu _{c}}{\mu _{e}}}{\biggr )}^{-1}\,;}$ and ${\displaystyle \Lambda _{i}\equiv {\frac {1}{m_{3}\ell _{i}}}[1+(1-m_{3})\ell _{i}^{2}]\,.}$
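As a numerical aside (not part of the accompanying derivation), the relation above is straightforward to solve with a general-purpose root finder; in the sketch below the scanning interval and the sample molecular-weight ratios are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import brentq

def f(ell, m3):
    """Left-hand side of the critical-interface relation quoted above."""
    Lam = (1.0 + (1.0 - m3) * ell**2) / (m3 * ell)
    term1 = (np.pi / 2 + np.arctan(Lam)) * (1 + ell**2) * (
        3 + (1 - m3)**2 * (2 - ell**2) * ell**2)
    term2 = m3 * ell * ((1 - m3) * ell**4 - (m3**2 - m3 + 2) * ell**2 - 3)
    return term1 - term2

def xi_interface(mu_ratio, ell_max=50.0, n_scan=20000):
    """Return xi_i = sqrt(3) * ell_i for a given mu_c/mu_e by locating a sign
    change of f on a grid and refining it with Brent's method."""
    m3 = 3.0 / mu_ratio
    grid = np.linspace(1e-4, ell_max, n_scan)
    vals = f(grid, m3)
    crossings = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if len(crossings) == 0:
        raise ValueError("no root located in the scanned interval")
    a, b = grid[crossings[0]], grid[crossings[0] + 1]
    return np.sqrt(3.0) * brentq(f, a, b, args=(m3,))

print(xi_interface(1.0e6))  # very large mu_c/mu_e, i.e. m3 -> 0
print(xi_interface(4.0))    # a sample ratio in the regime mu_c/mu_e > 3
```

Consistent with the limit discussed in the next subsection, driving ${\displaystyle ~m_{3}}$ toward zero pushes the recovered root toward ${\displaystyle ~\xi _{i}=3}$.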
Relationship Between the Bonnor-Ebert and Schönberg-Chandrasekhar Critical Masses
As we have shown elsewhere, in the limit ${\displaystyle ~m_{3}\rightarrow 0}$, the physically relevant root of the above analytic relation is ${\displaystyle ~\xi _{i}=3}$ and the mass contained in
the core is,
${\displaystyle M_{\mathrm {core} }={\biggl (}{\frac {1}{2}}{\biggr )}^{3/10}{\biggl (}{\frac {3^{7}}{2^{8}\pi }}{\biggr )}^{1/2}{\biggl (}{\frac {c_{s}^{8}}{G^{3}P_{e}}}{\biggr )}^{1/2}\,.}$
While the pressure at the base of the envelope in this model is satisfactorily confining the core to a finite radius, the condition ${\displaystyle ~m_{3}\rightarrow 0}$ implies that the mass-density
at the base of — indeed, throughout — the envelope is zero. As we have pointed out elsewhere, this can be achieved by introducing a temperature discontinuity rather than a molecular weight
discontinuity across the interface. Hence, there is no mass in the envelope and the resulting model is, effectively, a truncated ${\displaystyle ~n=5}$ polytrope embedded in a hot, tenuous external
medium. It therefore should not come as a surprise that this critical Schönberg-Chandrasekhar-type configuration has a mass that is precisely equal to the Bonnor-Ebert limiting mass, as defined by a
"bounded ${\displaystyle ~n=5}$ polytrope." (See the second row from the bottom in the above table.)
It should be clear from this analysis that the Bonnor-Ebert critical mass is not distinct from the Schönberg-Chandrasekhar critical mass. It can be derived from the Schönberg-Chandrasekhar mass in
the limit when ${\displaystyle \mu _{e}/\mu _{c}\rightarrow 0}$. Had Schönberg & Chandrasekhar (1942) examined their model in this limit, they would have "discovered" the Bonnor-Ebert mass a decade
prior to both Bonnor's and Ebert's published works.
Material that appears after this point in our presentation is under development and therefore
may contain incorrect mathematical equations and/or physical misinterpretations.
RE: st: ttest or xtmelogit?
From "Nick Cox" <[email protected]>
To <[email protected]>
Subject RE: st: ttest or xtmelogit?
Date Wed, 12 Mar 2008 18:14:12 -0000
Steven is correct. This isn't mentioned in -transint-.
-transint- (on SSC) is a slightly unusual package. It is just a help
file written because I wanted my (geography) students to have something
better than the rather poor coverage of transformations in the books
available to them. In fact, I haven't been able to find many accounts of
transformations that were very concise, covered the really important
ideas, but were also light on the mathematics, which is of course a
contradictory desire. Anyway, it then seemed that it might be useful a
little more widely.
Variance-stabilisation is, as Steven says, the motive for the angular: it is difficult to imagine it arising except out of an algebraic argument, which I think goes back to Fisher. So, next time around, that might merit an explanation.
[email protected]
Steven Samuels
Not mentioned in -transint- is the variance-stabilizing property of
the angular transformation: it has asymptotic variance 1/4n, which is
not a function of p (Anscombe, 1948). If the observed proportion is r/n, Anscombe showed that the arcsine of [(r + 3/8)/(n + 3/4)]^.5 is
even better at stabilizing the variance, for moderate sample size.
The second version has variance 1/(4n + 2).
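A quick way to see the stabilisation numerically (added here for illustration only, and written in Python rather than Stata) is to simulate binomial proportions and compare the empirical variances of the two transformations with Anscombe's asymptotic values; the sample size, replication count, and proportions below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 200_000              # arbitrary sample size and replication count

for p in (0.1, 0.3, 0.5):          # arbitrary true proportions
    r = rng.binomial(n, p, size=reps)
    basic = np.arcsin(np.sqrt(r / n))
    anscombe = np.arcsin(np.sqrt((r + 3/8) / (n + 3/4)))
    print(f"p={p:.1f}  var(basic)={basic.var():.5f}  1/(4n)={1/(4*n):.5f}  "
          f"var(Anscombe)={anscombe.var():.5f}  1/(4n+2)={1/(4*n + 2):.5f}")
```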
The arcsine-transformation used to be recommended because transformed
proportions could be analyzed via standard ANOVA programs. I once
found it useful in a variance components analysis. The 'error'
variance was a mixture of a between-sample and within sample
(binomial) variance. With the arcsine transformation, I could
subtract out the part attributable to binomial variation.
FJ Anscombe 1948. The transformation of Poisson, Binomial, and
negative-binomial data. Biometrika 35:246-254
On Mar 10, 2008, at 6:02 PM, Nick Cox wrote:
> By arcsin I guess you mean the angular transformation (arcsine of
> square
> root).
> Its use seems to have faded dramatically in recent years.
> Tukey showed that this is very close to p^0.41 - (1 - p)^0.41. That
> makes it weaker
> than the logit. My guess is that it would be an unusual dataset in
> which
> the angular
> was much better than leaving data as is and also much better than the
> logit. It could happen,
> but it seems to be rare.
> The Tukey reference is given in -transint- from SSC.
> Nick
> [email protected]
> David Airey
> Maybe I should not have said it was pilot data! I won't disagree, but
> when cluster number is too small (< 20) to invoke xtgee or xtmelogit
> on the observed yes/no data, or glm on the summary statistics with
> binomial family and logit link, what do you do? It seems to me there
> is a sample size between 10 and 30 clusters of yes/no data that may be
> better suited to some of the older approaches like arcsin transformed
> proportions and then ttest or ANOVA/regress. I guess that was my
> question.
4.9: Problems
1. The following masses were recorded for 12 different U.S. quarters (all given in grams):
5.683 5.549 5.548 5.552
5.620 5.536 5.539 5.684
5.551 5.552 5.554 5.632
Report the mean, median, range, standard deviation and variance for this data.
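These descriptive statistics can be obtained with any calculator or statistics package; purely as an illustration (not part of the original problem), a minimal Python version is:

```python
import statistics

masses = [5.683, 5.549, 5.548, 5.552,
          5.620, 5.536, 5.539, 5.684,
          5.551, 5.552, 5.554, 5.632]   # g, the data listed above

print("mean    :", statistics.mean(masses))
print("median  :", statistics.median(masses))
print("range   :", max(masses) - min(masses))
print("std dev :", statistics.stdev(masses))     # sample standard deviation
print("variance:", statistics.variance(masses))  # sample variance
```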
2. A determination of acetaminophen in 10 separate tablets of Excedrin Extra Strength Pain Reliever gives the following results (in mg)
224.3 240.4 246.3 239.4 253.1
261.7 229.4 255.5 235.5 249.7
(a) Report the mean, median, range, standard deviation and variance for this data.
(b) Assuming that \(\overline{X}\) and s^2 are good approximations for \(\mu\) and for \(\sigma^2\), and that the population is normally distributed, what percentage of tablets contain more than the
standard amount of 250 mg acetaminophen per tablet?
The data in this problem are from Simonian, M. H.; Dinh, S.; Fray, L. A. Spectroscopy 1993, 8(6), 37–47.
3. Salem and Galan developed a new method to determine the amount of morphine hydrochloride in tablets. An analysis of tablets with different nominal dosages gave the following results (in mg/tablet).
100-mg tablets 60-mg tablets 30-mg tablets 10-mg tablets
99.17 54.21 28.51 9.06
94.31 55.62 26.25 8.83
95.92 57.40 25.92 9.08
94.55 57.51 28.62
93.83 52.59 24.93
(a) For each dosage, calculate the mean and the standard deviation for the mg of morphine hydrochloride per tablet.
(b) For each dosage level, and assuming that \(\overline{X}\) and s^2 are good approximations for \(\mu\) and for \(\sigma^2\), and that the population is normally distributed, what percentage of
tablets contain more than the nominal amount of morphine hydrochloride per tablet?
The data in this problem are from Salem, I. I.; Galan, A. C. Anal. Chim. Acta 1993, 283, 334–337.
4. Daskalakis and co-workers evaluated several procedures for digesting oyster and mussel tissue prior to analyzing them for silver. To evaluate the procedures they spiked samples with known amounts
of silver and analyzed the samples to determine the amount of silver, reporting results as the percentage of added silver found in the analysis. A procedure was judged acceptable if its spike
recoveries fell within the range 100±15%. The spike recoveries for one method are shown here.
105% 108% 92% 99%
101% 93% 93% 104%
Assuming a normal distribution for the spike recoveries, what is the probability that any single spike recovery is within the accepted range?
The data in this problem are from Daskalakis, K. D.; O’Connor, T. P.; Crecelius, E. A. Environ. Sci. Technol. 1997, 31, 2303– 2306. See Chapter 15 to learn more about using a spike recovery to
evaluate an analytical method.
5. The formula weight (FW) of a gas can be determined using the following form of the ideal gas law
\[FW = \frac {g \text{R} T} {P V} \nonumber\]
where g is the mass in grams, R is the gas constant, T is the temperature in Kelvin, P is the pressure in atmospheres, and V is the volume in liters. In a typical analysis the following data are
obtained (with estimated uncertainties in parentheses)
g = 0.118 g (± 0.002 g)
R = 0.082056 L atm mol^–1 K^–1 (± 0.000001 L atm mol^–1 K^–1)
T = 298.2 K (± 0.1 K)
P = 0.724 atm (± 0.005 atm)
V = 0.250 L (± 0.005 L)
(a) What is the compound’s formula weight and its estimated uncertainty?
(b) To which variable(s) should you direct your attention if you wish to improve the uncertainty in the compound’s molecular weight?
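As a reminder of the mechanics involved, the sketch below applies the usual propagation-of-uncertainty rule for products and quotients (relative uncertainties added in quadrature) to the values given above; it is an illustrative aid, not part of the original problem.

```python
import math

# value and absolute uncertainty for each quantity, as given in the problem
g, u_g = 0.118, 0.002        # g
R, u_R = 0.082056, 0.000001  # L atm mol^-1 K^-1
T, u_T = 298.2, 0.1          # K
P, u_P = 0.724, 0.005        # atm
V, u_V = 0.250, 0.005        # L

FW = g * R * T / (P * V)     # g/mol

# For a pure product/quotient, relative uncertainties add in quadrature.
rel_terms = {"g": u_g / g, "R": u_R / R, "T": u_T / T, "P": u_P / P, "V": u_V / V}
u_FW = FW * math.sqrt(sum(r**2 for r in rel_terms.values()))

print(f"FW = {FW:.2f} +/- {u_FW:.2f} g/mol")
for name, r in rel_terms.items():   # the largest relative term points to part (b)
    print(name, f"{r:.5f}")
```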
6. To prepare a standard solution of Mn^2+, a 0.250 g sample of Mn is dissolved in 10 mL of concentrated HNO[3] (measured with a graduated cylinder). The resulting solution is quantitatively
transferred to a 100-mL volumetric flask and diluted to volume with distilled water. A 10-mL aliquot of the solution is pipeted into a 500-mL volumetric flask and diluted to volume.
(a) Express the concentration of Mn in mg/L, and estimate its uncertainty using a propagation of uncertainty.
(b) Can you improve the concentration’s uncertainty by using a pipet to measure the HNO[3], instead of a graduated cylinder?
7. The mass of a hygroscopic compound is measured using the technique of weighing by difference. In this technique the compound is placed in a sealed container and weighed. A portion of the compound
is removed and the container and the remaining material are reweighed. The difference between the two masses gives the sample’s mass. A solution of a hygroscopic compound with a gram formula weight
of 121.34 g/mol (±0.01 g/mol) is prepared in the following manner. A sample of the compound and its container has a mass of 23.5811 g. A portion of the compound is transferred to a 100-mL volumetric
flask and diluted to volume. The mass of the compound and container after the transfer is 22.1559 g. Calculate the compound’s molarity and estimate its uncertainty by a propagation of uncertainty.
8. Use a propagation of uncertainty to show that the standard error of the mean for n determinations is \(\sigma / \sqrt{n}\).
9. Beginning with Equation 4.6.4 and Equation 4.6.5, use a propagation of uncertainty to derive Equation 4.6.6.
10. What is the smallest mass you can measure on an analytical balance that has a tolerance of ±0.1 mg, if the relative error must be less than 0.1%?
11. Which of the following is the best way to dispense 100.0 mL if we wish to minimize the uncertainty: (a) use a 50-mL pipet twice; (b) use a 25-mL pipet four times; or (c) use a 10-mL pipet ten times?
12. You can dilute a solution by a factor of 200 using readily available pipets (1-mL to 100-mL) and volumetric flasks (10-mL to 1000-mL) in either one step, two steps, or three steps. Limiting
yourself to the glassware in Table 4.2.1, determine the proper combination of glassware to accomplish each dilution, and rank them in order of their most probable uncertainties.
13. Explain why changing all values in a data set by a constant amount will change \(\overline{X}\) but has no effect on the standard deviation, s.
14. Obtain a sample of a metal, or other material, from your instructor and determine its density by one or both of the following methods:
Method A: Determine the sample’s mass with a balance. Calculate the sample’s volume using appropriate linear dimensions.
Method B: Determine the sample’s mass with a balance. Calculate the sample’s volume by measuring the amount of water it displaces by adding water to a graduated cylinder, reading the volume, adding
the sample, and reading the new volume. The difference in volumes is equal to the sample’s volume.
Determine the density at least five times.
(a) Report the mean, the standard deviation, and the 95% confidence interval for your results.
(b) Find the accepted value for the metal’s density and determine the absolute and relative error for your determination of the metal’s density.
(c) Use a propagation of uncertainty to determine the uncertainty for your method of analysis. Is the result of this calculation consistent with your experimental results? If not, suggest some
possible reasons for this disagreement.
15. How many carbon atoms must a molecule have if the mean number of ^13C atoms per molecule is at least one? What percentage of such molecules will have no atoms of ^13C?
16. In Example 4.4.1 we determined the probability that a molecule of cholesterol, C[27]H[44]O, had no atoms of ^13C.
(a) Calculate the probability that a molecule of cholesterol, has 1 atom of ^13C.
(b) What is the probability that a molecule of cholesterol has two or more atoms of ^13C?
17. Berglund and Wichardt investigated the quantitative determination of Cr in high-alloy steels using a potentiometric titration of Cr(VI). Before the titration, samples of the steel were dissolved
in acid and the chromium oxidized to Cr(VI) using peroxydisulfate. Shown here are the results ( as %w/w Cr) for the analysis of a reference steel.
16.968 16.922 16.840 16.883
16.887 16.977 16.857 16.728
Calculate the mean, the standard deviation, and the 95% confidence interval about the mean. What does this confidence interval mean?
The data in this problem are from Berglund, B.; Wichardt, C. Anal. Chim. Acta 1990, 236, 399–410.
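Purely as an illustration of the mechanics (not part of the original problem), a confidence interval of this kind can be computed from the t-distribution as follows:

```python
import numpy as np
from scipy import stats

cr = np.array([16.968, 16.922, 16.840, 16.883,
               16.887, 16.977, 16.857, 16.728])   # %w/w Cr from the table above

n = cr.size
mean, s = cr.mean(), cr.std(ddof=1)
t_crit = stats.t.ppf(0.975, df=n - 1)             # two-sided 95% level
half_width = t_crit * s / np.sqrt(n)

print(f"mean = {mean:.3f}, s = {s:.3f}")
print(f"95% CI: {mean - half_width:.3f} to {mean + half_width:.3f}")
```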
18. Ketkar and co-workers developed an analytical method to determine trace levels of atmospheric gases. An analysis of a sample that is 40.0 parts per thousand (ppt) 2-chloroethylsulfide gave the
following results
43.3 34.8 31.9
37.8 34.4 31.9
42.1 33.6 35.3
(a) Determine whether there is a significant difference between the experimental mean and the expected value at \(\alpha = 0.05\).
(b) As part of this study, a reagent blank was analyzed 12 times giving a mean of 0.16 ppt and a standard deviation of 1.20 ppt. What are the IUPAC detection limit, the limit of identification, and
limit of quantitation for this method assuming \(\alpha = 0.05\)?
The data in this problem are from Ketkar, S. N.; Dulak, J. G.; Dheandhanou, S.; Fite, W. L. Anal. Chim. Acta 1991, 245, 267–270.
19. To test a spectrophotometer’s accuracy a solution of 60.06 ppm K[2]Cr[2]O[7] in 5.0 mM H[2]SO[4] is prepared and analyzed. This solution has an expected absorbance of 0.640 at 350.0 nm in a
1.0-cm cell when using 5.0 mM H[2]SO[4] as a reagent blank. Several aliquots of the solution produce the following absorbance values.
0.639 0.638 0.640 0.639 0.640 0.639 0.638
Determine whether there is a significant difference between the experimental mean and the expected value at \(\alpha = 0.01\).
20. Monna and co-workers used radioactive isotopes to date sediments from lakes and estuaries. To verify this method they analyzed a ^208Po standard known to have an activity of 77.5 decays/min,
obtaining the following results.
77.09 75.37 72.42 76.84 77.84 76.69
78.03 74.96 77.54 76.09 81.12 75.75
Determine whether there is a significant difference between the mean and the expected value at \(\alpha = 0.05\).
The data in this problem are from Monna, F.; Mathieu, D.; Marques, A. N.; Lancelot, J.; Bernat, M. Anal. Chim. Acta 1996, 330, 107–116.
21. A 2.6540-g sample of an iron ore, which is 53.51% w/w Fe, is dissolved in a small portion of concentrated HCl and diluted to volume in a 250-mL volumetric flask. A spectrophotometric
determination of the concentration of Fe in this solution yields results of 5840, 5770, 5650, and 5660 ppm. Determine whether there is a significant difference between the experimental mean and the
expected value at \(\alpha = 0.05\).
22. Horvat and co-workers used atomic absorption spectroscopy to determine the concentration of Hg in coal fly ash. Of particular interest to the authors was developing an appropriate procedure for
digesting samples and releasing the Hg for analysis. As part of their study they tested several reagents for digesting samples. Their results using HNO[3] and using a 1 + 3 mixture of HNO[3] and HCl
are shown here. All concentrations are given as ppb Hg sample.
HNO[3]: 161 165 160 167 166
1 + 3 HNO[3] – HCl: 159 145 1540 147 143 156
Determine whether there is a significant difference between these methods at \(\alpha = 0.05\).
The data in this problem are from Horvat, M.; Lupsina, V.; Pihlar, B. Anal. Chim. Acta 1991, 243, 71–79.
23. Lord Rayleigh, John William Strutt (1842-1919), was one of the most well known scientists of the late nineteenth and early twentieth centuries, publishing over 440 papers and receiving the Nobel
Prize in 1904 for the discovery of argon. An important turning point in Rayleigh’s discovery of Ar was his experimental measurements of the density of N[2]. Rayleigh approached this experiment in two
ways: first by taking atmospheric air and removing O[2] and H[2]; and second, by chemically producing N[2] by decomposing nitrogen containing compounds (NO, N[2]O, and NH[4]NO[3]) and again removing
O[2] and H[2]. The following table shows his results for the density of N[2], as published in Proc. Roy. Soc. 1894, LV, 340 (publication 210); all values are the grams of gas at an equivalent volume,
pressure, and temperature.
atmospheric origin chemical origin
2.31017 2.30143
2.30986 2.29890
2.31010 2.29816
2.31001 2.30182
2.31024 2.29869
2.31010 2.29940
2.31028 2.29849
Explain why this data led Rayleigh to look for and to discover Ar. You can read more about this discovery here: Larsen, R. D. J. Chem. Educ. 1990, 67, 925–928.
24. Gács and Ferraroli reported a method for monitoring the concentration of SO[2] in air. They compared their method to the standard method by analyzing urban air samples collected from a single
location. Samples were collected by drawing air through a collection solution for 6 min. Shown here is a summary of their results with SO[2] concentrations reported in \(\mu \text{L/m}^3\).
standard method new method
21.62 21.54
22.20 20.51
24.27 22.31
23.54 21.30
24.25 24.62
23.09 25.72
21.02 21.54
Using an appropriate statistical test, determine whether there is any significant difference between the standard method and the new method at \(\alpha = 0.05\).
The data in this problem are from Gács, I.; Ferraroli, R. Anal. Chim. Acta 1992, 269, 177–185.
25. One way to check the accuracy of a spectrophotometer is to measure absorbances for a series of standard dichromate solutions obtained from the National Institute of Standards and Technology.
Absorbances are measured at 257 nm and compared to the accepted values. The results obtained when testing a newly purchased spectrophotometer are shown here. Determine if the tested spectrophotometer
is accurate at \(\alpha = 0.05\).
standard measured absorbance expected absorbance
1 0.2872 0.2871
2 0.5773 0.5760
3 0.8674 0.8677
4 1.1623 1.1608
5 1.4559 1.4565
26. Maskarinec and co-workers investigated the stability of volatile organics in environmental water samples. Of particular interest was establishing the proper conditions to maintain the sample’s
integrity between its collection and its analysis. Two preservatives were investigated—ascorbic acid and sodium bisulfate—and maximum holding times were determined for a number of volatile organics
and water matrices. The following table shows results for the holding time (in days) of nine organic compounds in surface water.
compound Ascorbic Acid Sodium Bisulfate
methylene chloride 77 62
carbon disulfide 23 54
trichloroethane 52 51
benzene 62 42
1,1,2-trichloroethane 57 53
1,1,2,2-tetrachloroethane 33 85
tetrachloroethene 32 94
chlorobenzene 36 86
Determine whether there is a significant difference in the effectiveness of the two preservatives at \(\alpha = 0.10\).
The data in this problem are from Maskarinec, M. P.; Johnson, L. H.; Holladay, S. K.; Moody, R. L.; Bayne, C. K.; Jenkins, R. A. Environ. Sci. Technol. 1990, 24, 1665–1670.
27. Karstang and Kvalhein reported a new method to determine the weight percent of kaolinite in complex clay minerals using X-ray diffraction. To test the method, nine samples containing known amounts of kaolinite were prepared and analyzed. The results (as % w/w kaolinite) are shown here.
actual 5.0 10.0 20.0 40.0 50.0 60.0 80.0 90.0 95.0
found 6.8 11.7 19.8 40.5 53.6 61.7 78.9 91.7 94.7
Evaluate the accuracy of the method at \(\alpha = 0.05\).
The data in this problem are from Karstang, T. V.; Kvalhein, O. M. Anal. Chem. 1991, 63, 767–772.
28. Mizutani, Yabuki and Asai developed an electrochemical method for analyzing l-malate. As part of their study they analyzed a series of beverages using both their method and a standard
spectrophotometric procedure based on a clinical kit purchased from Boerhinger Scientific. The following table summarizes their results. All values are in ppm.
Sample Electrode Spectrophotometric
Apple Juice 1 34.0 33.4
Apple Juice 2 22.6 28.4
Apple Juice 3 29.7 29.5
Apple Juice 4 24.9 24.8
Grape Juice 1 17.8 18.3
Grape Juice 2 14.8 15.4
Mixed Fruit Juice 1 8.6 8.5
Mixed Fruit Juice 2 31.4 31.9
White Wine 1 10.8 11.5
White Wine 2 17.3 17.6
White Wine 3 15.7 15.4
White Wine 4 18.4 18.3
The data in this problem are from Mizutani, F.; Yabuki, S.; Asai, M. Anal. Chim. Acta 1991, 245,145–150.
29. Alexiev and colleagues describe an improved photometric method for determining Fe^3+ based on its ability to catalyze the oxidation of sulphanilic acid by KIO[4]. As part of their study, the concentration of Fe^3+ in human serum samples was determined by the improved method and the standard method. The results, with concentrations in \(\mu \text{mol/L}\), are shown in the following table.
Sample Improved Method Standard Method
1 8.25 8.06
2 9.75 8.84
3 9.75 8.36
4 9.75 8.73
5 10.75 13.13
6 11.25 13.65
7 13.88 13.85
8 14.25 13.43
Determine whether there is a significant difference between the two methods at \(\alpha = 0.05\).
The data in this problem are from Alexiev, A.; Rubino, S.; Deyanova, M.; Stoyanova, A.; Sicilia, D.; Perez Bendito, D. Anal. Chim. Acta, 1994, 295, 211–219.
30. Ten laboratories were asked to determine an analyte's concentration in three standard test samples. Following are the results, in \(\mu \text{g/ml}\).
Laboratory Sample 1 Sample 2 Sample 3
1 22.6 13.6 16.0
2 23.0 14.2 15.9
3 21.5 13.9 16.9
4 21.9 13.9 16.9
5 21.3 13.5 16.7
6 22.1 13.5 17.4
7 23.1 13.5 17.5
8 21.7 13.5 16.8
9 22.2 12.9 17.2
10 21.7 13.8 16.7
Determine if there are any potential outliers in Sample 1, Sample 2 or Sample 3. Use all three methods—Dixon’s Q-test, Grubb’s test, and Chauvenet’s criterion—and compare the results to each other.
For Dixon’s Q-test and for the Grubb’s test, use a significance level of \(\alpha = 0.05\).
The data in this problem are adapted from Steiner, E. H. “Planning and Analysis of Results of Collaborative Tests,” in Statistical Manual of the Association of Official Analytical Chemists,
Association of Official Analytical Chemists: Washington, D. C., 1975.
31. When copper metal and powdered sulfur are placed in a crucible and ignited, the product is a sulfide with an empirical formula of Cu[x]S. The value of x is determined by weighing the Cu and the S
before ignition and finding the mass of Cu[x]S when the reaction is complete (any excess sulfur leaves as SO[2]). The following table shows the Cu/S ratios from 62 such experiments (note that the
values are organized from smallest-to-largest by rows).
1.764 1.838 1.865 1.866 1.872 1.877
1.890 1.891 1.891 1.897 1.899 1.900
1.906 1.908 1.910 1.911 1.916 1.919
1.920 1.922 1.927 1.931 1.935 1.936
1.936 1.937 1.939 1.939 1.940 1.941
1.941 1.942 1.943 1.948 1.953 1.955
1.957 1.957 1.957 1.959 1.962 1.963
1.963 1.963 1.966 1.968 1.969 1.973
1.975 1.976 1.977 1.981 1.981 1.988
1.993 1.993 1.995 1.995 1.995 2.017
2.029 2.042
(a) Calculate the mean, the median, and the standard deviation for this data.
(b) Construct a histogram for this data. From a visual inspection of your histogram, do the data appear normally distributed?
(c) In a normally distributed population 68.26% of all members lie within the range \(\mu \pm 1 \sigma\). What percentage of the data lies within the range \(\overline{X} \pm 1 \sigma\)? Does this
support your answer to the previous question?
(d) Assuming that \(\overline{X}\) and \(s^2\) are good approximations for \(\mu\) and for \(\sigma^2\), what percentage of all experimentally determined Cu/S ratios should be greater than 2? How
does this compare with the experimental data? Does this support your conclusion about whether the data is normally distributed?
(e) It has been reported that this method of preparing copper sulfide results in a non-stoichiometric compound with a Cu/S ratio of less than 2. Determine if the mean value for this data is
significantly less than 2 at a significance level of \(\alpha = 0.01\).
See Blanchnik, R.; Müller, A. “The Formation of Cu[2]S From the Elements I. Copper Used in Form of Powders,” Thermochim. Acta, 2000, 361, 31-52 for a discussion of some of the factors affecting the
formation of non-stoichiometric copper sulfide. The data in this problem were collected by students at DePauw University.
32. Real-time quantitative PCR is an analytical method for determining trace amounts of DNA. During the analysis, each cycle doubles the amount of DNA. A probe species that fluoresces in the presence
of DNA is added to the reaction mixture and the increase in fluorescence is monitored during the cycling. The cycle threshold, \(C_t\), is the cycle when the fluorescence exceeds a threshold value.
The data in the following table shows \(C_t\) values for three samples using real-time quantitative PCR. Each sample was analyzed 18 times.
Sample X Sample Y Sample Z
24.24 25.14 24.41 28.06 22.97 23.43
23.97 24.57 27.21 27.77 22.93 23.66
24.44 24.49 27.02 28.74 22.95 28.79
24.79 24.68 26.81 28.35 23.12 23.77
23.92 24.45 26.64 28.80 23.59 23.98
24.53 24.48 27.63 27.99 23.37 23.56
24.95 24.30 28.42 28.21 24.17 22.80
24.76 24.60 25.16 28.00 23.48 23.29
25.18 24.57 28.53 28.21 23.80 23.86
Examine this data and write a brief report on your conclusions. Issues you may wish to address include the presence of outliers in the samples, a summary of the descriptive statistics for each
sample, and any evidence for a difference between the samples.
The data in this problem is from Burns, M. J.; Nixon, G. J.; Foy, C. A.; Harris, N. BMC Biotechnol. 2005, 5:31 (open access publication).
Definition of ubit (unpredictable bit)
We propose this definition as the quantum of information randomness.
One ubit (unpredictable bit) has the following properties:
• Its value may be either 0 or 1, both with equal probabilities.
• Its value is determined by at least one unpredictable event.
Theoretical background
One unpredictable event impedes all its predictors from knowing two of its magnitudes simultaneously: one previous state and all further transformations until its occurrence.
For example: the result of measuring the exact position of a given particle at a given time is unpredictable due to the uncertainty principle.
Estimating unpredictability of random number generators
It is possible to estimate the unpredictability of random number generators analyzing how many ubits determine the value of each output bit:
• A pseudo-random number generator, implementing constant transformations to all determinants for all output bits, has an unpredictability of: 0 ubits/1 bits.
• A pseudo-random number generator, seeded with 8 ubits before giving an output of 64 bits, has an unpredictability of: 8 ubits/64 bits.
• A random number generator, such as a qubit in a never measured superposition state, has an unpredictability of: 1 ubits/0 bits.
• A true random number generator has an unpredictability of at least: 1 ubits/1 bits.
The more ubits determine each output bit, the more unpredictable it is, being the highest unpredictability: ∞ ubits/1 bits, the lowest: 0 ubits/1 bits, and the optimum for a true random number
generator: 1 ubits/1 bits.
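As a small illustration (not part of the original text), the estimate described above is just a ratio of counted quantities:

```python
def unpredictability(ubits, output_bits):
    """ubits per output bit, following the estimation rule described above."""
    return ubits / output_bits

print(unpredictability(0, 1))    # pseudo-random generator, constant transformations
print(unpredictability(8, 64))   # generator seeded with 8 ubits per 64 output bits
print(unpredictability(1, 1))    # the stated optimum for a true random generator
```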
After years of research, we formulated the hypothesis that the main source of randomness in the entire universe is the zero-point energy. This would explain why only hardware is able to generate true and unpredictable random numbers: any software without a relationship with the zero-point energy lacks randomness.
Zero-point energy seems to be the responsible force for moving everything everywhere: it seems to be the creator of time.
Shannon entropy formula measures the equiprobability, while unpredictability is unmeasurable, because the result of unpredictable events becomes deterministic once measured.
big ball mill experiment pdf
WEBSep 7, 2020 · The influence of ballmill grinding process parameters and polymer properties on the mechanochemical degradation of amorphous polymers was explored. For process parameters, the
grinding frequency w...
WEBLAB REPORT FST559 EXPERIMENT 7 THE EFFECT OF BALL MILLING ON PARTICLE SIZE OF COARSE SUGAR fst559: unit operation laboratory report experiment the effect of. ... in the opening of the cylinder
(ball mill) is exposed. About 60 pieces of ceramic balls was placed into opening of ball mill, e. 30 big sized balls, 20 medium sized balls and .
WEBJul 15, 2013 · The basis for ball mill circuit sizing is still B ond's methodology (Bond, 1962). The Bond ball and rod. mill tests are used to determine specific energy c onsumption (kWh/t) to
grind from a ...
WEBJan 5, 2023 · The ball mill is the key equipment for grinding the minerals after the ore is crushed. With the continuous development of the industrial level, the development of ball mills is
also moving towards ...
WEBBall Mill Manual F Free download as Word Doc (.doc), PDF File (.pdf), Text File (.txt) or read online for free. The document provides instructions for conducting an experiment using a ball
mill, including diagrams of the ball mill, an explanation of how ball mills work to grind materials, factors that influence the size of the ground product, and procedures .
WEBMar 1, 2004 · Particle scale simulation of industrial particle flows using discrete element method (DEM) offers the opportunity for better understanding the flow dynamics leading to
improvements in equipment design and operation that can potentially lead to large increases in equipment and process efficiency, throughput and/or product quality. .
WEBMay 30, 2016 · The ultimate crystalline size of graphite, estimated by the Raman intensity ratio, of nm for the agate ballmill is smaller than that of nm for the stainless ballmill, while the
milling ...
WEBAug 1, 2010 · The experiments were performed on a laboratoryscale ball mill (XMQL420 × 450), which is a continuous grinding grid mill drum is 460 mm in diameter and 460 mm in length, as shown
in Fig. mill, driven by a threephase kW motor, has maximum ball load of 80 kg, a designed pulverizing capacity of 10 kg per hour, a rated .
WEBSep 18, 2023 · The results showed that using the nonstandard mills (between 20 and 35 cm in diameter), the Bond´s model constants (α=; β=, and γ = ), are unable to predict the Work Index ...
WEBTo study the operation of a Ball mill and to calculate the efficiency of a Ball mill for grinding a material of known work index (Wi). To study the effect of RPM on the power consumption of
Ball mill. To calculate the critical speed (ηc) of a Ball mill. Introduction: Generally the Ball mills are known as the secondary size reduction equipment.
WEBBall Mill Manual Free download as PDF File (.pdf), Text File (.txt) or read online for free. This document describes a laboratory experiment to study the operational performance of a ball
mill. The objectives are to observe size reduction of materials at different milling times and speeds, perform screen analysis to determine particle size distributions, and .
WEBJan 1, 2009 · Section snippets Grinding kinetics in the Bond ball mill. Tests of grinding kinetics in the Bond ball mill (Fig. 1, Fig. 2) has shown that over a shorter grinding period, the
process follows the law of first order kinetics R = R 0 ekt where R = testsieve oversize at the time (t); R 0 = test sieve at the beginning of grinding (t = 0); k: grinding rate .
WEB[Lab Report Operation Unit] Experiment 7 Free download as Word Doc (.doc / .docx), PDF File (.pdf), Text File (.txt) or read online for free. Hammer mills were used to crush ground nuts. Three
different sized metal screens small, medium, and large were tested to determine their effect on production amount and particle size. The small screen .
WEBNov 1, 2023 · The applicable grinding parameters were determined by experiment. ... Different from ordinary ball mills, the cylinder of the stirring mill is stationary, and the rotation of the
stirring device drives the interaction between the grinding medium and the minerals in the cylinder ... View PDF View article View in Scopus Google Scholar [4] .
WEBJan 1, 2009 · This paper describes the simulation of the grinding process in vertical roller mills. It is based on actual experimental data obtained on a production line at the plant and from
lab experiments. Sampling and experiments were also carried out in a power plant that has four ballmill circuits used for coal grinding so that different equipment ...
WEBAug 28, 2021 · Ball mills are reported for the micronization o f dried FV. Fine reduction of particle size increases the surface area and alters the structure and functional pro perties (
Table 2 ).
WEBJul 4, 2023 · Unlike the ball mill, there is no grinding medium is used in the fluid energy mill for the size reduction of the materials. Due to the expansion of the gas in the chamber,
temperature in the mill decreases which enables the use of the fluid energy mill for the milling of thermolabile substances (Lachman et al., 1987 ) (Fig. ).
WEBJun 1, 2018 · During the grinding process, coarser material (size class 1) break to finer material (size class 2) and (size class 3) which are termed the daughter products, as shown in Fig.
chemical reaction on the other hand can consist of say reactant 'A' forming intermediate product 'B' then proceeding to final product 'C' or a competing reactant 'A' .
WEBAlberto M. Puga N. Govender R. Rajamani. Engineering, Materials Science. 2022. The simulation of grinding mills with the discrete element method (DEM) has been advancing. First, it emerged as
a method for studying charge motion with spherical balls and predicting the power draw.. Expand.
WEBThis is about h per mill revolution. 5. Effect of rotation rate for the standard ball mill Charge behaÕiour Fig. 1 shows typical charge shapes predicted for our 'standard' 5 m ball mill and
charge Ždescribed above. filled to 40% Žby volume. for four rotation rates that span the typical range of operational speeds.
WEBExperiment Ball mill Free download as PDF File (.pdf), Text File (.txt) or read online for free. This document provides instructions on how to determine the critical speed, actual speed,
optimum speed, and reduction ratio of a ball mill through an experiment involving grinding of materials using different sieve sizes to analyze particle size .
WEBApr 25, 2016 · 26600 Pekan, Pahang, Malaysia, Phone: +; Fax: +. *. Email: muhamad ABSTRACT. This project is to design and fabrie the mini ball mill that can grind the solid ...
WEBJun 25, 2022 · Highenergy ball milling (HEBM) of powders is a complex process involving mixing, morphology changes, generation and evolution of defects of the crystalline lattice, and
formation of new phases. This review is dedied to the memory of our colleague, Prof. Michail A. Korchagin (1946–2021), and aims to highlight his works on the synthesis .
WEBMay 31, 2011 · The milling equipment (Wiley Mill, Thomas Scientific, Swedesboro, NJ) used in our study is a variable speed, digitally controlled, direct drive mill; that provides continuous
variation of cutting ...
WEBNov 26, 2019 · The biggest characteristic of the sag mill is that the crushing ratio is large. The particle size of the materials to be ground is 300 ~ 400mm, sometimes even larger, and the
minimum particle size of the materials to be discharged can reach mm. The calculation shows that the crushing ratio can reach 3000 ~ 4000, while the ball mill's ...
WEBSep 21, 2022 · ball mill PM100: 50 mL stainless steel: 3 stainless steel balls (d = 20 mm) 1 g: Hz : Room temperature: 3 h (stopping at 15 min, 30 min, 1 h and 2 h) 5P: oAminobenzoic acid
(mixture of FII and FIII forms) FIII form: Oscillatory ball mill (Mixer mill MM400, Retsch and Co., Germany) 25 mL stainless steel: One stainless steel .
WEBOct 1, 2023 · The Bond ball mill work index is an expression of the material's resistance to ground and a measure of the grinding efficiency. The test is a standardized methodology that ends
when a circulating load of 250% is obtained. In this paper, a new method based on the Population Balance Model (PBM) is proposed and validated to estimate the .
WEBJul 1, 2017 · Increasing the fraction of big balls from 0 to 70% led the flow of balls into the cascading regime and breakage mechanism to attrition. ... 150 small and 150 big balls (300 balls
in total) (% ...
WEBstirr ball Free download as PDF File (.pdf), Text File (.txt) or read online for free. Study on the effect of process parameters in stirred ball mill. Effects of various operating factors,
such as grinding time (min), stirrer speed (rpm), slurry density (wt.%) and ball filling ratio on fine grinding have been studied.
WEBAug 2, 2013 · Based on his work, this formula can be derived for ball diameter sizing and selection: Dm <= 6 (log dk) * d^ where D m = the diameter of the singlesized balls in = the diameter
of the largest chunks of ore in the mill feed in mm. dk = the P90 or fineness of the finished product in microns (um)with this the finished product is ...
WEB2. Experimental The standard rod mill grindability test (Bergstrom, 1985) was conducted with four ores, ranging from very soft to very hard. Briefly, the test consists of placing a bulk volume
of ore measuring 1250 ml, previously stagecrushed to mm in the standard mill (305 mm diameter × 610 mm length) fitted with wavy liners ...
A sudden change in the depth of delta function potential well
• Thread starter Mazhar12
• Start date
In summary, the conversation discusses whether the continuum states can be treated as free-particle states, with the probability determined by the wave functions Ψf and ΨB. It also notes that the probability of transfer to unbound states can be calculated from the bound states |BS_A> and |BS_B>, and that the result depends only on the relative magnitudes of A and B, not on whether the well is deepened or made shallower.
Homework Statement
If a particle is initially in the bound state of a delta function potential well of depth A, and at some point in time the depth suddenly changes to B, what is the probability that the particle will now be found in the states of the continuum spectrum? Both A and B are greater than zero.
Relevant Equations
equations are attached.
Is it correct that the continuum states will be free-particle states, and that the probability will be |<Ψf|ΨB>|^2, where Ψf is the wave function of a free particle and ΨB is the wave function of the bound state when the depth is B?
Because we know both the old and new bound states ##|BS_A>## and ##|BS_B>##, we can calculate ##1 - |<BS_B|BS_A>|^2##, and I take this to be the probability of transfer to unbound states.
I note that this depends only on the relative magnitude of A and B, so whether the well is deepened or made shallower does not matter. Maybe I am wrong.
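For reference (this is not part of the original thread, and uses units with ħ = m = 1, so κ is proportional to the well depth): the attractive 1D delta well of strength A has a single bound state ψ_A(x) = √κ_A e^(−κ_A|x|). Under the sudden approximation the survival probability is |⟨ψ_B|ψ_A⟩|² = 4κ_Aκ_B/(κ_A+κ_B)², so the transfer probability to the continuum is (κ_A−κ_B)²/(κ_A+κ_B)², which depends only on the ratio of the depths, consistent with the remark above that deepening or shallowing by the same factor gives the same answer. A small numerical cross-check:

```python
import numpy as np

def p_unbound(kA, kB):
    """Sudden-approximation probability of ending up in the continuum when
    the bound state of a 1D delta well (kappa proportional to the depth)
    jumps from kappa = kA to kappa = kB."""
    overlap = 2 * np.sqrt(kA * kB) / (kA + kB)   # analytic <psi_B|psi_A>
    return 1 - overlap**2                        # = (kA - kB)**2 / (kA + kB)**2

def p_unbound_numeric(kA, kB, L=50.0, n=200_001):
    """Same quantity, with the overlap integral done numerically."""
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    psiA = np.sqrt(kA) * np.exp(-kA * np.abs(x))
    psiB = np.sqrt(kB) * np.exp(-kB * np.abs(x))
    overlap = np.sum(psiA * psiB) * dx
    return 1 - overlap**2

print(p_unbound(1.0, 2.0), p_unbound_numeric(1.0, 2.0))  # well deepened: ~0.111
print(p_unbound(2.0, 1.0), p_unbound_numeric(2.0, 1.0))  # well made shallower: same value
```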
FAQ: A sudden change in the depth of delta function potential well
1. What is a delta function potential well?
A delta function potential well is a potential energy function used in quantum mechanics to model an attractive potential localized at a single point. It is represented by a Dirac delta function, a mathematical function that is zero everywhere except at a single point, where it is infinite.
2. What causes a sudden change in the depth of a delta function potential well?
A sudden change in the depth of a delta function potential well can be caused by a change in the parameters of the potential energy function, such as the strength of the delta function or the width
of the well. This can also be caused by the introduction of an external force or perturbation.
3. How does a sudden change in the depth of a delta function potential well affect the behavior of particles?
A sudden change in the depth of a delta function potential well can significantly alter the behavior of particles. It can cause particles to become trapped in the well, leading to changes in their
energy levels and wave functions. It can also affect the probability of particles tunneling through the potential barrier.
4. Can a sudden change in the depth of a delta function potential well be observed in real-world systems?
Yes, a sudden change in the depth of a delta function potential well can be observed in various physical systems, such as semiconductor devices, atomic and molecular systems, and quantum dots. These
systems can be manipulated to create sudden changes in potential well depths, allowing for the study of quantum phenomena.
5. What are the practical applications of studying sudden changes in the depth of delta function potential wells?
Studying sudden changes in the depth of delta function potential wells is crucial in understanding quantum mechanics and its applications. It can help in the development of new technologies, such as
quantum computing and quantum sensors, as well as in understanding the behavior of particles in various physical systems. | {"url":"https://www.physicsforums.com/threads/a-sudden-change-in-the-depth-of-delta-function-potential-well.997178/","timestamp":"2024-11-02T01:12:06Z","content_type":"text/html","content_length":"82999","record_id":"<urn:uuid:68a482de-36b9-4dda-bf05-61395164e528>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00364.warc.gz"} |
17bii+ solver workaround
The solver in the 17bii+ is different than the solver in the original 17bii. It always solves equations using the iterative solver, never the direct solver. As a result, every equation is evaluated
twice (at least). This causes problems if you change the value of an input variable using the L() function. For example, the equation A=L(B:B+1) returns A=1, as expected, but if you RCL B you see
that it is 2.
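A toy illustration of that pitfall, assuming B starts at 0 (plain Python standing in for the solver, not HP code): the displayed result comes from the first pass, but the L()-style side effect runs on every pass.

```python
B = 0                      # input variable, initialised to 0

def equation():
    """One evaluation of A = L(B:B+1): store B+1 into B and return it."""
    global B
    B = B + 1
    return B

A = equation()             # first evaluation: the value the solver displays
equation()                 # hidden second evaluation by the iterative solver
print(A, B)                # -> 1 2   (A looks right, but RCL B shows 2)
```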
For new equations, you can stay out of trouble if you initialize variables and never update an input variable with the L() function. Older equations, that worked on the original 17bii, may not work
on the + if they don't abide by these rules.
The Technical Applications Manual for the 27S and 19B contains a solver equation to determine the prime factors of a number. As written, the equation does not work on the 17bii+, because it updates
input variable N at the end of the equation. Experimentation with this equation revealed that it finds every other prime factor of a given number. For example, the prime factors of 510510 are 2, 3,
5, 7, 11, 13, 17. On the 17bii+, the equation finds every other factor: 2, 5, 11, and 17. So I needed to find a way to trick the solver into not evaluating the equation on even numbered iterations.
This was the solution I found that worked. At the beginning of the equation, add the following:
at the very end of the equation, add :0)
This essentially only executes the bulk of the equation code during odd iterations, and the correct factors are derived. I used B$$ since it is unlikely to be used in other equations, and it is
necessary for B$$ to be 0 or an even number for this workaround to work. | {"url":"https://archived.hpcalc.org/museumforum/thread-127151-post-127151.html#pid127151","timestamp":"2024-11-03T16:09:37Z","content_type":"application/xhtml+xml","content_length":"32484","record_id":"<urn:uuid:d5d1ce5f-cead-48af-91ce-2afd46da25c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00383.warc.gz"} |
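The behaviour described above can be mimicked with a small simulation (illustrative Python only; it is neither HP solver code nor a transcription of the manual's equation). Each press of SOLVE evaluates the "equation" twice, the displayed value comes from the first pass, and the side effect of dividing a factor out of N persists from both passes, so alternate factors vanish; guarding the body so it runs on only one of the two passes, as the B$$ parity trick does, restores the full list.

```python
def next_factor(state):
    """One equation evaluation: find the smallest prime factor of state['N'],
    divide it out (the L()-style side effect), and return it."""
    n = state["N"]
    f = 2
    while f * f <= n and n % f:
        f += 1
    if n % f:
        f = n                     # n itself is prime (or 1)
    state["N"] //= f
    return f

def solve_once(state, guard):
    """Mimic one press of SOLVE: two evaluations, display from the first;
    with guard=True the second pass is skipped (the parity trick)."""
    displayed = next_factor(state)
    if not guard:
        next_factor(state)        # hidden extra pass, side effect and all
    return displayed

def factor_with_solver(n, guard):
    state, factors = {"N": n}, []
    while state["N"] > 1:
        factors.append(solve_once(state, guard))
    return factors

print(factor_with_solver(510510, guard=False))  # [2, 5, 11, 17]  - alternate factors lost
print(factor_with_solver(510510, guard=True))   # [2, 3, 5, 7, 11, 13, 17]
```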
Download of elementary and intermediate algebra / mark dugopolski
Search Engine visitors found us today by entering these algebra terms:
│simplifying radical expressions solvers │solving equation of a line calculator │adding and subtracting negative and positives │
│ │ │fractions │
│algebra practice papers for class VII │i need help with algebra problems │simplifying algebraic expressions+pre algebra │
│images about calculaters │math questions for year 7 secondary schools - cubed, │glencoe online mathmatics answers │
│ │squared, roots and powers │ │
│problems linear systems worksheet free │decimal conversion to fraction chart ppt │how do you know when a linear equation is │
│ │ │decreasing │
│check my paper subtracting mix numbers │how to change a mixed number to a decimal calculator │finding roots of polynomials on a graphing │
│ │ │calculator online │
│algebra power calculations │math double and half life worksheets │glencoe algebra1 answer book │
│Ti-84 stats formulas │"division in binary system" │calculator de calculat cu radical │
│2nd order+differential equation+matlab │fourth square root │algebra 1 answers │
│+Solve My Algebra Problem │factoring binomial calculator │sample algebra word problems │
│statistics yr 8 │cube root activities │algebra problem solver │
│"simultaneous equation " 3 unknowns │solving multiple equations calculator │multiplying and dividing integers worksheets │
│Algebra For Beginners │free trigonometry tables book │third power EQUATION SOLVER │
│rational expressions solving word problems │"how to solve equations" flowchart │maths for y8 print out worksheet │
│Usable Online Graphing Calculator │least common multiple 6th grade word problems │intermediate algebra charles p.: transition │
│ │ │practice test │
│radicals calculator │subtracting exponents using subtraction │Simplifying Algebraic expressions using algebra │
│ │ │tiles │
│texas instrument convert decimal to fraction │mathematical slopes │solving homogeneous second order differential │
│ │ │equations │
│worksheets for multiplying and dividing positive and negative numbers │common denominator on a calculator │simplifying rational expressions worksheet │
│algebrator mac │partial+sums+elementary+worksheets │+"b=" +intercept +formula │
│7th grade adding and subtracting integers math book │permutation and combination tutorial │what is The symbolic method │
│expanding brackets boolean │interactive math CLEP Study guide │percentage formulas │
│algebra homework.com │free practice papars for aptitude test │Help with Mathmatical Steps │
│If you know the greatest common factor of two numbers is 1, can you predict what the │lcm monomials video │simplifying quadratic formula │
│least common multiple will be? │ │ │
│college level math help software │algebra 1 sheets of graph │equations with rational expressions calculator │
│grade six algebra practice │download kumon │trigonometry word problems worksheet │
│cheats for first in math │3rd order equation solution │algebra lesson for 5th grade │
│worksheets with 3 digit adding and subtracting │Equations with decimals │7th grade patterning problem │
│free printable 5x practice sheet │non-homogeneous second order differential equation example │equation factoring calculator │
│free printable math, zeros in the quotient │solve system of equations TI-83 quadratic │aptitude test question free download │
│how to solve math operations with fractions │how to solve Quadratic equations graphically │math trivia questions for elementary │
│exercises in TI-83 programming │www.algebra1.cpm/answers │poems with math words │
│what's special about square number factors │multiplying decimals form │partial-difference subtraction │
│solving quadratic problems involving square roots │free adding money worksheets │mcdougall littell worksheet answers │
│free downloadable fact triangle worksheets for 2nd grade │9th grade math topics │adding & subtracting negative numbers worksheets │
│downloadable polynomial division calculator │online t-83 │basic algebra liner │
│10th grade math skill sheets │abstract algebra homework solutions │Advanced math problem solver │
│subtracting integers │subtraction equations │algebra worksheets │
│Examples of Solving Equations by Adding or Subtracting Decimals │how do u find greatest common denominators │expression calculator for distributive property │
│ │ │with variables │
│best algebra textbooks │answer my algebraic expression │fractional coefficients │
│equation function table │maple explanation of simplify │solve system simultaneous equations calculator │
│ │ │quadratic │
│factor pairs worksheets │easiest way to find LCM │solve for x with greater or lesser sign calculator│
│nth degree "math explanation" │matlab solve │pre algebra worksheets │
│math - 3-variable graph worksheet │american history mcdougal littell textbook notes │"calculator" "equilibrium concentration" │
│free online grade six achievement past papers │derivative calculator online │math trivia fifth grade │
│holt algebra 1 worksheets lesson 5-7 practice a answer │mcdougal littell books online │answer key to glencoe 13-2 algebra study guide │
│converting quadratic equation to word problems │permutation combination tutorial for GRE │Algebra with Pizzazz!™ teacher copies │
│free comparing and ordering integers worksheets │standardized test practice workbook algebra 2 mcdougal │mathtype combination permutation │
│ │littell answers │ │
│mymathlab statistics online "test answers" │9th grade math workbooks │mathematics invistigatory project │
│lineal regression gnuplot │FREE ONLINE ACCOUNTING EXERCISES │general aptitude questions │
│ │dividing decimals for 5th grade │multiplying and dividing integers worksheet │
│year 10 trigonometry cheat sheet │excel formula for calculating quadratic equation from three│f 1 maths exercise algebra │
│ │data points │ │
│gnuplot linear regression │download ti-84 emulator │how do you simplify variables with exponents │
│factorising quadratics calculator │holt worksheet answers │mathmatics algebra │
│worded problem;exponential and logarithm │simplify problems with X's printable worksheet │Partial sum addition │
│converting square root function to slope intercept │algebra 1 texas textbook holt │3rd root on graphing calc │
│5th grade per algerbra │long division polynomials solver │free pre algebra worksheets │
│sample math investigatory projects title │different types balancing chemical equations │radical absolute value │
│free math worksheets adding and subtracting integers │algebra calculator download │spanish coordinate plane worksheet │
│adding and subtracting negative decimals │solving a second order differential equation │exponent worksheet │
│foiling calculator │mcdougal littell biology study guide answers │how to solve matricies in your caculator │
│first grade lesson plans │math worksheets add subtract decimals │simultaneous equation with 3 unknown │
│algebra proofs worksheet │Artin Algebra Solutions Manual │4th grade basic algebra worksheets │
│synthetic division using complex numbers │the problem for 9-1 in pre-algebra book glencoe fl │TI-89 programs, fluid mechanics │
│square root to the nearest tenth calculator │High School Algebra Worksheets Free │free compound inequalities worksheet │
│answers for chapter 5 of the year 7 science focus homework book │texas instruments t1 83 tricks │print out lattice sheet 3x3 │
│free apptitude booksdownload │how to solve college algrebra │free algebra solving │
│factoring worksheets │adding and subtraction 2 digit numbers worksheets │highst common factor of 47 │
│lesson plans 8th grade square root │"algebra software reviews" │where is the answer key on the T1-83 PLUS │
│ │ │CALCULATOR │
│chemical equations- simplified │"4th grade algebra worksheets" │how do i type cubed roots on the TI-83 │
│algebra 2 worksheets prentice hall │Laplace transform for dummies │expanding cubed function │
│use derive to solve non homogeneous second order differential equations │solve radicals in decimal form │multiply exponents worksheet │
│finding the common denominator worksheets │download ti-86 rom image │["stepbystep" "teacher book" ] │
│3rd order polynomial equation solver │pre algrabra │radical expressions with fractions │
│online ks3 work │Extracting only two decimal points from a BigDecimal in │how to calculate gcd │
│ │Java │ │
│gcf and lcm euclid's worksheets │online sientific caculator for 6 grades │adding and subtracting integers problems' │
│find a vertex by graphing calculator │permutation and combination lesson for middle school │calculate linear difference equation │
│ninth grade english worksheet │balancing equations solvers │Worksheets on Polynomiansl │
│college algebra simplification review │dividing exponents with fractions │simplifying a fraction with the same variable │
│ │ │added and multiplied │
│first grade math printables softmath │how to solve GRE percentage problems │free review work book for cost accounting │
│free question bank + eight class mathematics │changing the subject of the formula worksheet │simple two step equations free printable worksheet│
│mcdougal littell online math test generator │Least Common Multiple Greatest Common Factor 6th grade math│HOW TO ARRANGE THE NUMBERS WHEN MULTIPLYING │
│ │worksheets │DECIMALS │
│adding signed fractions │formula for ratio │adding and subtracting rational expressions │
│ │ │worksheets │
│addition and subtraction of real number worksheets │how to use my casio calculator to take the root of │factoring third order equations │
│ │something │ │
│the books of the high school identities and factorization │what are the variables in vertex form of an equation │calculator ratios in simplest form │
│ged printable work practice sheets │verbal ability test papers with answers of campus │great common factor calculator │
│rearranging-maths │free easy algebra problems printouts │how to work out common denominator │
Yahoo visitors found us today by using these keyword phrases :
• math problem expanding brackets
• differential equations mixture problems
• subtraction of 4 digit numbers
• combining like terms worksheets
• BASIC ALGEGRA PROBLEMS
• how to find least common denominator with c++
• factoring with TI-83 Plus
• divide cubed functions
• 5th grade equations
• Learning Basic Algebra
• complex rational solver
• free english worksheets for malaysian year 4
• tutorials in factoring
• texas algebra 2 prentice hall free answers
• Free Printable Math Worksheets FOR DISTRIBUTIVE PROPERTY
• +has, have, had usage- intermediate worksheets
• converting fractions to decimals worksheet
• applied trig worksheets
• Algebraic function free worksheets
• math function, worksheet, printable
• simplifying square roots in fractions
• slope in a polynomial equation
• distributive property prentice hall mathematics
• 8th grade algebra practice problems
• TI 84 online applet
• "Samples of Accounting problems and answers"
• practise exponents questions online (grade 9)
• workbook answers for algerbra 1
• inequality solver
• "Word problems" Algebra 8th grade
• simplify and evaluate equations with exponents
• expression worksheet
• "aptitude question"
• online textbook for 9th grade world history in Va
• long division in algebra expression
• practice workbook mcdougal littell algebra 1 free answers
• multiple kids math
• ti 83 linear interpolation program
• prentice hall textbooks 6th grade math
• tables and graphs worksheet free elementary
• homework worksheets 8th and 9th grade free printable
• quadratic formula on ti 84
• sample applet code to draw a line graph for y=2x+5 in java
• www.pre-alegebra .com/self check quiz
• kostenlos ti 84 plus game
• square and cube root chart
• download free college algebra calculator
• factor using casio
• ebooks: discrete mathmatics
• free KS3 algebra powerpoints
• multiplying mixed number worksheet
• Solving inequalities with integers worksheet
• solving second order non homogenous differential equations examples
• worksheets on permutation +free
• quiz on multiplication of algebra
• 6th grade free homework help
• eight grade pre-algebra free math worksheets
• maths transition symmetry worksheet
• t1 83 calculator emulator
• +Accounting Class Homework Answers
• practice worksheets for adding and subtracting with scientific notation
• permutation activities high school
• divide polynomials with calculator
• how to make a mixed number into a decimal
• maths test online free for class 7
• latest math trivia
• ti-84 calculator download
• adding and subtracting Integer
• logarithms ks3
• summation symbol worksheets
• online printout adding calculator
• solving equations with square roots practice
• 6th grade math holt
• find the discriminant and vertex of the equation
• graphing calculator ellipse
• 4th order runge-kutta 2nd order ode matlab
• Begenning and intermediate algebra free tutorials
• how to factor a cube root function
• fourth grade order of operations worksheets
• math textbook solutions
• practice simplifying exponent expressions
• cube interactive lessons
• online lcm finder
• factoring quadratic equations calculator
• change bas in ti-89
• law of exponents + free worksheets
• simplifying radicals with variables
• java convert decimal to fraction
• sample algebra 2 problems using everyday life
• t1-83 games
• ti 83plus cubed roots
• two step equations with decimals games
• Powell Hybrid Solver FORTRAN
• math algebra practise
• vector algebra tutorial
• standard to vertex form calculator
• manual TI-83 plus linear equations system
• dividing fractions algebra practice problems
• chapter 15 lecture notes on contemorary abstract algebra
• help solving operations with algebraic expressions
• combining like terms, printable worksheets
• High School algebra tutorials
• Simplifying Square Root Expressions calculator
• Glencoe Algebra 1 Answer Key
• "two variable" differential equation matlab solve
• Algebra 1 California Edtion Glencoe
• slope formula statistics
• algebra slopes made easy
• partial-differential-equation linear homogenous
• Pre Algebra Distributive Property
• factoring cubed
• how expense booked in accounts
• algebra formula sheet
• second order non-homogeneous differential equation pdf
• advance algebra problem solver
• quadratic equation factorer
• graph vertex and quadratic equations
• www.free way to slove Algebra2 determining the equation of a linear.com
• help with solving algebra math problems
• algebraic equations worksheets beginner
• ucsmp algebra book answers
• find domain with TI-83 plus
• Download A Ti-84
• algebraic expression elementary
• how to calculate real roots using a calculator?
• CLEP cheat
• multiplying dividing positive negative worksheet
• free compare and order integers worksheets
• free test answers for McDougal Littell pre-algebra online
• +programming formulas TI-89
• balance equations math + 3rd grade worksheets
• Precalculus Prentice Hall third edition
• mathematics algebra(first year highschool)
• graph for square root of two variables
• replacing variables on order of operation worksheets
• second order differential equation+particular solution pdf
• maths- linear relations cheat sheet
• "visual basic free " probability -C++ -springerlink
• code of program solve linear equation
• is there a negative and positive chart for adding subtracting multiplication division
• solve by factoring worksheet
• convert hex to binary using ti-84
• evaluating trig functions with calculator worksheet
• algebra 2 answers mathematics 3 2nd edition
• free worksheet adding and subtracting integers
• McDougal Littell algebra 2
• cliffs notes on quadratic equations word problems
• second grade forward number sequence practice printables
• COST ACCOUNTING BOOKS
• Metre to Lineal Metre
• 89 solve multiple equations
• third root of -125
• solving quadratic equations using radicals
• 3rd grade algebra
• least common denominator worksheets
• multiplying decimals by whole numbers and worksheet
• holt algebra 1 textbook answers
• multiplying and dividing roots
• simplify radical expressions calculator
• free downloads entrance exams for 11 olds
• quadratic simultaneous equation calculator
• discrete mathematics and its applications sixth edition manual solution by graw hill in pdf file
• graphing linear equations - real life applications
• simplifying algebraic integers in distributive property
• factorization calculator for 4 numbers
• root mean square on TI-89
• college algebra clep online pretest
• adding, subtracting and multiplying odd and even numbers
• teaching adding integers with algebra tiles
• solution manual for linear algebra done right
• calculating interest college algebra
• coordinate pair worksheets
• basic operations on rational expressions examples simplification
• program factor equation
• perfect squares/square root worksheets
• radical equations
• alegebra calculator
• defining radical expressions solver
• rudin answers
• activities for solving equations by multiplying
• completing the square + common word problems
• glencoe algebra 1 high school level textbook
• grade 11 mathematics exampler paper
• answers to algebra with pizzazz
• least common denominator practice work sheets
• fraction common denominator calculator
• Simplifying Complex Rational Expressions
• algebra distributive property with cubic exponent
• ti-84 plus summations
• ppt edhelper graphs and charts free worksheets
• equation work sheets
• maths work sheets for year 3
• combination math work
• Precalculus-answers for chapter 4
• adding, subtracting, multiplying, dividing radicals review
• free o level maths questions
• printable worksheet on associative and commutative property
• how to find the slope on a graphing calculator?
• divisores en javascript
• intermediate algebra answers
• study guide radical notation
• drawing conclusions worksheets 6th grade
• ratio word problems worksheets
• printable test paper for english beginner
• free cubed root worksheets
• mathematics year of 10th quadratic equation basics
• radical expressions, multiply then simplify
• greatest common factor cheats
• Vertex algebra
• simultaneous quadratic equation solver
• evaluating expressions worksheet practice
• divisores con java script
• solve 3rd order
• math with pizzazz worksheet
• free worksheets grade exams
• Grade 10 Algerbra
• algebra solver free trail
• find the median, algebra 1, free help
• rules for dividing, adding, multiplying, and subtracting zero
• triangulo tartaglia en java
• algebra in cubes
• "4th grade algebraic expressions"
• downloadable calculators with fraction symbol
• free maths tests online KS3
• Solve each system by graphing calculator
• Solving Quadratic Equation by Extracting Square Roots
• using models to solve absolute value equations
• 6th grade math/2 step equations
• free downloads for primary one exam papers singapore
• practice worksheet for adding and subtracting negative numbers
• free ninth grade algebra final
• cubed square root on a scientific calculator
• ti-89 calculator polar
• algebra problems
• least common multiples of 729 and 18
• finding the square root worksheet
• example of simplifying an answer
• radical = radical how to solve
• printable worksheets on commutative property
• free +begining elementary algebra tutoring online examples
• free math worksheets on dividing integers
• verbal expression calculator
• combine like terms pre algebra
• Calculate Least Common Denominator
• glencoe algebra 1online book
• adding/ subtracting integers worksheet
• TI 84 rom code download
• solve polynomial equation, vba excel
• simplify boolean expressions (ti program)
• C programme + second order polynomial
• solution equation nonlinear by matlab
• adding like integers worksheets
• second order Non-homogeneous PDEs
• Add two integers w/o using '+'
• algebraic expression calculator with division
• work sheets for 9th graders in algebra
• decimal to fraction or mixed number converter
• how texas instruments Ti-83 "modular"
• investigatory in math
• help using division in factoring
• worksheets for Algebra II
• adding and subtracting radical expressions calculator
• math help online/rational expressions
• graphing online print equation
• 8th grade physics worksheet
• Mathmatical problems.com
• square roots+practice+worksheets
• algebra factorization worksheets
• fraction math test samples
• percent work sheet
• subtracting decimals least to greatest
• square roots fractions
• walter rudin, principles of mathematical analysis, exercises
• combination and permutation powerpoint
• Solving By extracting roots
• math problems: factors
• simplify by taking roots of the numerator and denominator
• algebra 1a for dummies
• permutation and combination sums
• investigatory project in math
• special values charts
• how to solve multipulation of faction
• multiplying powers
• Free Science Graphs for 6th gr.
• how to sovle hard algebra equations
• prentice hall course 1 math quiz
• simplified radical form by rationalizing the denominator.
• free adding subtracting multiplying dividing fraction problems
• multiplying non whole numbers
• worksheet on multiply and divide integers
• free software for TI-84 Plus
• pizzazz worksheets
• finding roots on TI-83
• solving radical expressions calculator
• compare each set of fractions by using common denominators
• MATLAB 2nd order nonlinear ODE
• free math investigatory project
• algebra equation worksheets distributive property
• dividing fractions problem solving
• fractions least to greatest calculator
• Ti 89 program lu decomposition
• worksheets graphing equations in slope intercept form
• Algebra and Perimeter ppt
• convert metercubes to foot cubes
• reflection & translation worksheets ks3
• answers to prentice hall mathematics algebra 2 workbook
• convert decimals to fractions
• square roots exponents and equations
• Solving first order nonlinear differential equations
• logical reasoning worksheet
• data analysis worksheets for 5th grade
• "first differences" + worksheet + pdf
• algebra tutoring
• basic explanation maths square route and cubes
• lesson plan to teach adding and subtract negative numbers for form two
• algebra matrix program
• using charts to solve alegebra 1 homework
• difference quotient on ti 89
• pretest on quadratic polynomials
• 1st grade california's printouts
• write fraction or mix numberas a decimal
• algebra intermedia 1
• how do you simplify exponents that are fractions
• holt algebra1
• how to change base number settings on calculators
• worksheets factors, multiples for 6th grade
• english work sheet for 5-7 years
• free multiplying rational expressions calculator
• POWER POINT WITH ONE STEP EQUATIONS
• solving imperfect radicals
• ti89 equation solver
• prentice hall + Conceptual Physics + chapter 3
• russian algebra
• multiplying and dividing intergers worksheets
• algebra made easy free online
• how to solve fractions?
• FIND WHAT IS THE LEAST COMMON DENOMINATOR FOR 6/7 PLUS 1/2
• Simplify an expression involving positive and negative integers
• mcqs+logic+mathematics
• solving two variable equations with ti-89
• cost accounting practice exams
• free ti-84 emulator
• how to enter logarithmic formula into TI 83 calculator
• how to solve 3rd order polynomials
• using the quadratic formula in a fraction
• how to square root a decimal
• polynomials for idiots
• adding square root calculator
• second order ODE homogeneous
• Convert Square Meters to Lineal Meters
• system linear equation ti-86
• Math for kids 6th grade free information
• online algebra calculators absolute value
• decimal worksheets
• 7th grade worksheets on order of operation
• free help on properties of graphs in college algebra
• beginner algebraic word problems worksheets
• complete the square calculator
• why do we solve quadratics
• solution solve variable kids teacher algebra
• simplify an algebra equation
• contemporary abstract algebra chapter 4 50 solution
• solve limits online
• subtraction worksheet
• teaching fractions least to greatest
• solving functions fourth power
• answer book elements of modern algebra
• adding and subtracting negative and positive integers+worksheet
• MIT couse using matlab to solve system ODE
• mathmatical symbol for feet
• gr 7 algebra
• middle school math with pizzazz book b
• worded problems of logarithmic function
• math-steps on adding unlike denominators
• interest rate seventh grade worksheets
• solving simultaneous equtions using matrix method
• algebra 2 fractions powers
• equations and inequalities for word sentences
• answer key for McDougal Littell inc. What is a clause?
• 9th grade algebra 1 books 2*83
• solving second order Homogeneous ode
• changing logs on ti-83
• how to solve compound inequalities
• help with radical expressions and equations
• Subtracting Whole Numbers Worksheets
• free algebra word problem solving
• prentice hall mathematics florida
• The Greatest common factor of two numbers is 850. What are the two numbers?
• basic permutation and combinations
• solve second order differential
• poems about math
• quadratic simultaneous equations solver
• matlab exponents
• pre-algerbra 8th grade
• how to do algebra problems on ti-30x IIS
• grade 9 algebra formulas
• equation of quadratic function into vertex form
• examples of english trivia
• yr 8 math test
• turning decimals into fractions on calculator
• mulitplying powers
• factoring numbers calculator
• ti emulator programs downloads
• solving simultaneous equations in matlab
• simplifing algebriac expressions TI 84
• an example of permutation and combination problems in stat
• least common denomonator math problems and answers
• learn year 9 math and test yr self
• solve numerical equation matlab
• combining like terms
• quadratic calculator with a greater exponent than 2
• perfect square roots worksheets
• how to cube root numbers on a ti-84
• logarithmic models with equation
• algebra 2 help on translations
• example scientific notation table
• find inverse ti 89
• glencoe/mcgraw-hill math workbook answers
• free download new syllabus math paper
• math printable exercise college level
• "linear programing" "high school" project
• radical form common fractions
• free worksheet math positives negatives
• turn a decimal into a fraction calculator
• simplify expression worksheet
• how to add subtract multiply divide integers
• proportion word problem worksheets
• algebra poems
• helper for math calulator
• solving equations by substitution calculator
• arithmetic sequence calculater
• discrete mathematics and its applications sixth edition solution manual + free
• maths worksheet on factors
• McDougal, Littell answer key for vocabulary
• easy ways to simplify algebraic expressions
• simplifying radicals calculator
• pre algebra distributive property
• example of linear algebra problems in the workplace
• beginner algebraic word problems
• combining like terms worksheet
• mcdougal littell algebra 1 free answers
• multiplying binomial practice with manipulatives
• download TI 84 puzzle pack
• grade 11 university physics worksheets
• grade 7 +algerbra help
• 7th Edition answers
• solve imperfect radical expressions
• download free fonts algebra
• algebra for dummies online
• converting mixed fractions to decimals
• Add the radical expressions and simplify (completely) if possible
• what is the greatest common factor and lowest common multiple 56,and 84. using prime numbers
• trigonometry answers and solutions
• multi step algebra problems worksheets
• Free Algebra Problem Solver
• how do you work out the area scale factor
• How to put excel sheets in Ti-89
• lesson plans on rational expressions
• answers for mcdougal littell biology
• square root of 84" put it into radical form.
• Free Online Pre-Algebra courses
• Free algrebra mahts lessons
• simplify algebraic equation worksheet
• algebra 1 activities slope
• find the slope worksheet
• graphic calculator online statistics
• Transforming Formulas in Algebra
• TI 83 plus emulator online
• aptitude test papers with answers for 7th graders
• graphing calculator x and y intercept
• mixed number percents
• Fre Pre-GED study sheets
• Answers to World History chapter 5 worksheet
• download apptitude Question
• ti 84 puzz pack cheats
• gauss-jordan elimination visual basic
• Glencoe Algebra 2 worksheet answers
• quadratic formula calculator+x^4
• Square Root Calculator
• MATLAB to solve polynomial equations
• Rational Expressions Online Calculator
• slope of a quadratic equation
• matlab simultaneous
• velocity/timegraphs explained
• holt math textbook selected answers
• how do i convert a mixed fraction to a decimal
• solve the equation fraction
• adding n subtracting fractions free worksheets
• converting equations in to fractions
• color by subtraction worksheet
• degree and radiums free worksheets
• Adding and Subtracting More than two fractions
• prentice hall pre algebra california edition answers
• solving second order differential equations with non constant coefficients
• evaluate expressions math worksheet
• 2 step problem solving grade 7
• limit calculator infinity
• Old Mcdougal Littell Biology chp 7
• hard math equations
• factoring online calculator quadratic equation
• how to convert mixed fractions into decimals
• 5th grade factor tree worksheet
• how to use log on the ti 89
• subtracting integers
• free ratio printables
• mcdougal littell algebra 1 practice workbook free answers
• is it difficult to take an itro algebra class online
• free worksheets for multiplying by 11
• summation java
• linear notation for algebraic expression examples in database
• Algebra I prentice hall online worksheets
• simplifying radical expressions free solvers
• how to solve equations with two variables
• mix numbers
• What is the least common multiple of 30 and 75?
• free download mathematica 4 2007 indian
• communative property worksheet 2nd grade
• inequalitiy word problems worksheet
• worksheet adding subtracting integers
• subtraction color coded worksheets
• scientific calculator free cubic root calculator
• multiplying equations
• 5th grade algebra
• dividing polynomials with multiple variables
• factorial casio9850
• addition and subtract integers
• interactive pracites maths gcse exam
• free worksheet on adding and subtracting integers
• "find the range of an equation"
• +work +sheets for 9th graders in algebra
• answer math homework
• diophantus chart
• vertex form of a quadratic equation definition
• multiplying square roots with variables
• three dimensional objects practice for 6th grade worksheets
• free printable e-z grader
• solve second order ODE
• free algebra tests
• saxon advanced math homeworkhelp
• algebra online problem solver
• sample question papers for class viii
• add a fraction to a integer
• 9th grade math worksheets
• adding & subtracting fraction integers worksheets
• Solve simultaneous linear equations using excel solver
• Cheat Algebra Homework
• how to solve algebraic problems
• free free free free free priceless priceless pre-algebra packet for beginners
• math text book answers
• 5th grade math-function tables worksheet
• how TI 84 calculator are helping us in everyday life
• ebooks(mathematics problem & puzzles and it solving methods)
• math solving software
• physics word problem using the equation of work
• mcdougal littell algebra 2 workbook answers
• finding slope step by step
• simplifying expressions calculator
• free advanced algebra help
• AMaTYC tests solutions
• Where do you use exponents in everyday life
• second order nonlinear matlab simulation
• dividing decimals worksheet
• calculate GCD
• simplifying complex rational algebraic expressions
• sixth grade math worksheets about variables
• college ALGEBRA CLEP/ PRACTICE TEST
• decimal fraction using bar
• calculate formulas ppt
• free algebraic word problem solver
• lesson plans to simplify fractions with polynomials in the fractions
• order fractions, how to
• differentiate permutation from combination
• test papers for grade 7 math ratio and rates
• how to sove simultaneous equation using mathcad
• KS2 Anders Celsuis
• programe in c++ that makes great common divider
• prentice hall pre algebra math workbook
• substitution calculator
• how to solve a system of differential equations on matlab
• antiderivative calculator online
• solving algebra fraction divisions
• put decimals in order from least to greatest
• free printable integers worksheets
• adding and subtracting equations activities
• ti-84 plus rom image download
• exponents as variables addition
• ladder method of factorization
• adding dividing decimals practice
• North carolina edition McDougal Littell Science 7th grade
• hcf of 32 and 48
• ratio formula
• 7th grade math online worksheets
• non linear equation+matlab
• quadratic equation trivia
• calculator poems
• color by number variables/algebra
• fifth grade statistics pretest
• cubed factorization
• learn basic algebra
• softmath.com
• Free 8th Grade Math Worksheets
• prentice hall conceptual physics workbook answers free
• factors whole numbers worksheet
• gateway algebra 1 worksheets
• subtracting and adding integers problems
• mcdougal world history chapter 5 worksheets
• substitution method graphs
• Finding Common Denominator On Calculator
• Unit Circle problem solver
• "Scientific Notation" and "Hands-on activity"
• 2nd order non homogenous differential equations
• What's the name for multiplying, dividing, subtraction, and addition?
• mcdougal algebra 2 teachers edition download
• interactive combining like terms
• how to graph parabolas on ti-84 plus
• calculating gcd
• converting decimals to fractions worksheets
• General aptitude questions
• college algebra problems online
• ti 89 summation
• free standard 4 pass year maths paper
• free simplifying radicals worksheet
• Solving One Step Equation Worksheets
• find a real-life application of a quadratic function. State the application, give the equation of the quadratic function,
• 9th grade biology chapter 5 definitions
• blitzer +precalculus +3rd +ppt
• convert mixed fraction to decimal
• download aptitude question & answer
• reversing rules algebra powerpoint
• 3x3 simultaneous nonlinear equations
• algebra II answers
• learning algebraic formulas
• examples of using combining like terms
• quadratic equation simplifier
• subtraction for mental math in grade six
• algebra with pizzazz answers worksheets
• lowest common denominator worksheet
• log 10 ti-89
• free statistics worksheets
• college algebra and trigonometry fourth edition answer key
• nonhomogeneous differential equation
• how i can solve laplace equation
• advance algebra problems
• Simplify radicals
• how to convert quadratic function in standard form
• inequality worksheet grade 7
• second order differential equations phase portrait
• algabra
• solving simultaneous equations matlab
• cheat by calculating Writing fractions as decimals
• solving one step equations worksheet
• linear systems combination elimination worksheets sample problems
• absolute value graph
• step by step ways to do compositions for logarithms
• online pre-algebra Prentice Hall workbook
• online finding the value of the variable calculator
• algebra 2 chapter 2 Resource Book cumulative review answers
• reading pictographs worksheets
• glencoe algebra answers
• inverse LINEAR functions ppt
• sample word problems+algebra+geometry
• formula find greatest common factor
• mathematica nonlinear system equations solve
• Multiplying Worksheets
• solving functions to the 3rd degree
• scientific notation add subtract multiply divide
• Free Permutations and Combinations for High School
• one step equation worksheets
• Algebra Problem Checker
• Sixth grade probability questions with the solutions
• like terms in algebra
• how to calculate combination in matlab
• practice college algebra problems
• radical function solver
• calculator for adding square roots
• solving one step equations
• algebra fraction calculator solve for variable
• adding and subtracting negative integers for sixth graders
• 9th grade math online gane
• how to solve a difference quotient
• finding mode median minimum maximum for data jelly beans
• Simple Algebra Worksheets
• how to save .pdf to ti-89
• math-area
• Holt Mathmatics worksheet for Pre Algebra
• "high school" "advanced algebra" online projects
• math trivia with answers geometry
• Least common factor of variables
• worksheet equation properties
• pre algebra lesson 3-1 skills practice the distribution property
• elementary school inequality worksheet
• free subtracting real numbers worksheet
• nonlinear differential equation, matlab
• Rational equations solver
• factoring cubed equations
• simplest form of fraction of algebric equation in mathlab
• algebra II course outline prentice hall algebra 2 with trig
• integers : adding a negative from a positive worksheet
• square root practice problems
• free online math square roots quiz
• subtracting square roots and solving for x
• algerbra for dummies
• simplifying cubed roots
• entering quad root ti-84
• free associative property of addition worksheets
• help with dividing polynomials
• free worksheet using math expression with variable
• online answers for holt middle school math course 2 answers for assignment 2-5
• Laplace transform calculator
• trigonometry question and answer
• polynomials equation problems using java
• answrs to algebra textbooks 8th grade
• algebra matrices power point presentations
• quadratic formula solver cubed
• Write A Equations for the product of negative twenty and nine is negative eleven?
• divide and simplify calculator
• developing skills in algebra book B
• linear equations in 9th grade
• "direct proportion" KS3 SAT question
• hyperbola with oblique asymptote graph
• For a given sample of , the enthalpy change of reaction is 16.5 . How many grams of hydrogen gas are produced?
• solve functions calculator
• DOWNLOAD FRE BOOKS ON ACCOUNTING
• solving algebra
• lesson plan how do we graph linear equation
• matlab symbolic solve linear equations
• free exponent worksheets
• math trivia meaning
• convert string 2 digits decimal in java
• adding and subtracting mixed number worksheet
• answering algebra questions
• free online calculator for converting percentage to decimal
• algebra tips for the 7th grade
• algebra substitution test questions KS3
• solving second order differential equations
• 5th grade mass worksheets
• Calculator you can do fractons on online
• eight grade printable worksheets in algebra
• using the calculator to solve quadratic equations by extracting square roots
• mathematica free edition
• quotient solving
• multiplying and dividing decimals practice
• passport to mathematics book 2 online
• quadratic ti-89
• mult and divide integers
• free slope worksheets
• solve graphing problems
• polynomial factor machine
• add sub factor fractions algebra calculator
• online calculator that does variable problem s
• online ks3 maths games
• 3rd grade algebra free worksheet
• radicals solver
• subtraction of integers lesson plans
• Integer Practice Worksheets
• a picture of theorem which gives a formula for multiplication of exponentials
• online limit calculator
• composite function solver
• Answers to Glencoe McGraw-Hill Algebra 1 workbook
• my algebr
• FLorida Buisness Math Standards
• help on algebra homework from glencoe online free
• Saxon books - Ohio Graduation Test
• if you are solving a problem with addition and subtraction
• quadratic inequalities word problems
• prentice hall mathematics course 3 online assessment practice tests
• equations calculating the nth term worksheets
• california algebra 1 mcdougal littell answer sheet
• mcdougal algebra 2 powerpoint
• algebra free worksheets simplifying expressions
• free online exam papers
• solving equations using addition and subtraction + worksheets
• Graph y= to x = on a TI-83 calculator
• Free Pre Algebra Classes
• HOW TO ADD, SUBTRACT, MULTIPLY AND DIVIDE FRACTIONS
• algebra equations
• 9th grade algebra answers
• radical expressions and equations worksheets
• free GED study guide tutorials
• using the quadratic equation
• trigonometry poems
• basic division work sheet
• free high school algebra tutorials
• log base 2 ti
• "alternate angles worksheet"
• expressions calculator
• answers of the book algebra 1
• tutoring for 9th grade san antonio texas
• algebra 2 help
• simultaneous eqation solver, containing quadratic
• Why Is Factoring Important
• algebra varables free worksheets 4th grade
• cost accounting books
• fundamental accounting principles 12th answers to the workbook
• foote algebra solved
• printable math sheets 3rd grade
• equations with big fractions worksheet
• holt keycode
• what the name for operations include addings, subtracting, multiplying, and dividings numbers.
• printable practice com on entrance mathematics
• everything about adding and subtracting and postive and negative numbers
• aptitude question on science
• free online calculator for grouping like terms in algebra.
• linear combination method answers
• Integer Addition and Subtraction Equations
• Multiplying and Dividing positive and negative numbers worksheet
• how to integrate with TI-86 calculator
• casio fx115ms solve polynomial
• math test online free grade 7
• word problemsof rational expressions with complete solution
• adding and subtracting negative and positive numbers worksheet
• pre-algebra equations
• mcdougal littell algebra 1 answers on work sheets
• Solve for variable with multiple denominators
• change function from standard form to vertex form
• Polynominal
• balancing equations algebra
• best methods of factoring and simplifying
• storing formulas in ti-84 calculator
• algebra+grade 5 kids lessons
• practice on exponents for beginners
• prentice hall mathmatics algebra 1 workbook
• calculate combination java
• algebraic methods of finding roots of a quadratic equation solver
• math poems fingers
• dummit foote exercises
• factoring equations calculator
• factor trees with +negitive numbers
• ti-83 plus factor program
• free probability worksheets
• nonlinear equation
• Math Factor Tables
• Rudin Chapter 4 -- solutions
• answers to Algebra with Pizazz
• Proofs in algebra worksheets
• division of polynomials by polynomials converter
• online free solve pre algebra problems
• rings(maths)
• free division of decimals 5th grade
• answers to algebra equations
• mathematics translations GCSE questions
• multiplying fractions and dividing worksheets
• ti 83 plus emulator
• matlab code newton rhapson nonlinear
• example poems in trigonometry
• Algebra percent of change worksheet
• symbolic method solving equation
• PARTIAL SUMS ADDING
• quadratic equation calculator third order free
• algebra 1 glencoe
• decimal money worksheet add and subtract
• pre algebra BB-8 worksheet
• evaluating expressions worksheets
• polynomials with fraction exponents
• least common denominator polynomial worksheet
• algebra 2 quadratic equations standard form find a vertex
• free math printouts for 3rd grade
• simplifying multiplication expressions
• mathcad tutor
• college algerbra
• the power of 2/3 in radical form
• Probability Calculator Algebra
• problem
• software
• algebra 2/ trig homework help
• Solving two equation systems on excel
• ti-83 plus + finding cube root
• free online synthetic division calculator
• linear extrapolation calculator
• trig textbook answers
• glencoe pre-algebra homework answers 7th grade
• factoring polynomials with variable exponent
• ti-84 quadratic formula
• convert lineal metre
• FREE BEGINNING ALEGBRA TESTS
• find equation from set of data
• step-by-step integration calculator trigonometric substitution
• find the mean worksheet
• the greatest common factor is 871
• greatest common denominator formula
• math poems about polynomials
• trial and error ti-30x iis
• free Aptitude question & answer downlode
• ks3 maths worksheets to print on perimeter area and volume
• ratio method of factoring quadratics
• key to questions in "principles of mathematical analysis by walter rudin"
• simplify each expression calculator
• worksheets on pre algebra third grade
• permutation combination test
• java BigDecimal very long time
• program to help you with algebra 2
• general solution for nonhomogeneous second order ode
• How to multiply monomials on the TI-83 Calculator
• multiplying on a cd for learning adults
• decimals to mixed numbers
• adding and subtracting equations worksheet
• glencoe science worksheet answers for chapter 2 on measurement
• polynomials factoring calculator
• free apti questions
Google users found our website yesterday by typing in these keywords :
• math +trivias
• free mean median mode worksheets
• java convert numbers to hours
• free sample algebra problems
• Tips Do Algebra Factoring
• how to square root method quadratic equations
• ks3 real life algebra maths lesson
• sum of on on calculator
• how did the egyptians used equations
• example of converting a decimal to a mixed number
• ti 83 graphing calculator point of intersection
• exams in india for 6th grader
• how do you figure out combining like terms equations
• how long does it take to learn college level algebra
• who to solve an algebra problem
• free online explanation of l.c.m with examples in algebraic expressions
• math puzzle probability worksheet
• using like denominators caculator
• online solve simultaneous equations
• how to solve sin problems using a graphing calculator
• free GCSE math test past papers printable
• integers worksheet add subtract multiply divide
• algebra solution for square root prime factorization
• free integer worksheets
• digital calculator for dividing fractions
• show the problems on integral calculas
• how do i divide variables
• stanard form calculater (a+bi)
• algebra find unknown values online calculator
• decimal to mixed fraction
• solve second order polynomial
• free GCE chemistry book download pdf
• parabola standard form b variable
• how to solve a complex equation in matlab
• powerpoints on converting scientic notation
• larson precalculus online seventh edition answers free
• numerical methods for solving systems of nonlinear difference equations matlab
• free online math tutors - algebra year 10
• grade 6 linear equation worksheet
• descartes coordinate plane
• quadratic simultaneous equation solver
• Free Answer to a Math Problem
• mixed numbers to percents
• algebra 101 free course
• adding negative positive worksheet
• integer work sheets
• example of an 5th grade equation
• Adding 2 digit numbers practice worksheets
• common denominator worksheet
• first order differential equation solver
• solving polynomial equations with irrational roots
• adding and subtracting positive and negative integers interactive game
• how to do arc sin on ti 83
• Powerpoints for dividing fractions and mixed numbers
• solving systems of equations with fractions
• radical matlab
• write each fractions as a percent
• solve a spring damper system by eulers method
• how do you divide?
• algebra 1 holt worksheets
• Number Sequence Solver
• fifth grade math worksheets
• how to figure out this graphing problem
• polynomial factoring scavenger hunt
• how to solve third equation
• Math exercises year 8
• pre algebra creative publications answers
• www.sample elementary math trivia questions and answers.com
• prentice hall mathematics algebra 2 answer keys
• graphing third order equations
• least common denominator calculator
• convert vertex to standard form
• square root of variables
• free maths tests for year 8 students
• systems of equations 2 variables worksheet
• evaluating compound sentences algebra exercises
• poems on multiples
• fraction to decimal mixed
• adding/subtracting large numbers worksheet
• exercise on prime factors for elementary grade
• solving for the slope
• how do you calculate the square root of fractions
• how to tell if a function or equation is linear
• printable tricky math questions for 9th graders
• elementary definitio for partial sums method
• online usable ti 83 graphing calculator
• a way to check physics alevel answers
• roots of an equation solver
• mcdougal littell answer software
• algebra teaching programs
• powerpoint coordinate picture
• how do you do distributive property to evaluate expressions
• radical din n+1 - radical din n-1 -2 radical din n
• partial sums method worksheets
• how to convert a fraction to decimal without a calculator
• puzzpack ti-84 plus password
• sq root algebra solver
• completing the square calculator
• simlifying square roots
• the linear combination method
• linear programing pretice hall examples
• simplify (square root) of 4+10
• integers, how to write the expanded form
• algebra online problems
• base 2 calculator
• solving SAT simultaneous algebraic equations
• free online math tests for year 7
• 6th grade solving equations with decimals
• learn LCM in Basic Algebra
• solving equations with cubed numbers
• java trig solver
• answers for all of the pages of the mcdougal littell pre algebra practice workbook for grade 7
• BIOLOGY NINTH GRADE CHEAT SHEET
• finding lowest common denominator calculator
• elementary algebra practice test word problems mixture
• sixth grade math problem of tables of contents
• partial fraction program ti 86
• view pdf on TI-89
• Linear system ti-89
• glencoe mathematics course 3 teachers edition dictionary online
• graphing calculator online STAT
• free mathematics worksheet parallel lines
• nonhomogeneous pde
• binomial equations
• help with least to greatest fractions
• accounting book
• printable tricky brain teasers for 9th graders
• college algebra and trigonometry fourth edition solutions
• adding radical fractions to numbers
• grade 5 multiplying decimals worksheet
• step by step guide to algebra
• order fractions from least to greatest worksheets
• how to do LU decomposition on TI-89
• solve a complex equation in matlab
• free algebra1 word problem solvers
• math equivalent linear relations graphs
• formula factor third order quadratic equation
• algerbra solving
• year7 division worksheets
• how do you get rid of fractional square roots
• 5th grade algebra homework sheet
• give me examples of rational exponent equation
• herstein topics in algebra solutions
• free online algebra calculator using substitution
• game mathematica 1 grade online
• download free advance calculator
• percentage equation
• common factors and multiples worksheets
• how do you convert mixed numbers to a decimal
• solve nonlinear inequality equations
• TI 84 plus factoring binomials
• free algebra solver
• how do you factor rational problems
• holt introductory algebra 2 teacher's resource bank
• how to simplify using exponents
• finding roots of polynomials on a graphing calculator
• intersection of two lines on graphing calculator
• the greatest common factor of two numbers is 871
• mixed number to decimal
• least common denominator and greatest common factor tricks
• Synthetic Division Problem Solver
• free practice papers for aptitude test with answer
• Solve My Math Problem
• multiply and divide rational expressions calculator
• transformations ALGEBRA TAKS problems
• solving for x using addition and subtraction free worksheet
• how to get rid of a radical
• adding negative and positive fractions
• multiply and divide rational expressions
• college algebra help
• advanced division printable worksheets
• solver 2nd order differential complex equation non-linear
• MATLAB 2nd order nonlinear ODE siny
• polynomial factorization tricks
• math + 3rd root symbol
• 6th grade math trivia
• multiplying rational expressions involving polynomials
• distributive property fraction
• solve third order polynomials
• simultaneous solutions of quadratic and linear equations solver
• equation of an elipse
• free online game for solving quadratic equations
• "compute pi" quadratic
• 7th Grade Pre- Algebra Multi Step Equations
• Translation of parabola worksheets
• tensor algebra ppt
• Maximum, Minimum, Median worksheets, 3rd grade
• trivia about math algebra
• slope free online calculator
• challenge subtraction worksheet
• glencoe algebra concepts and applications practice workbook answer sheets
• algebra graphing worksheets
• T1 83 Online Graphing Calculator
• square root calculator expression
• free practise sheets 11 plus
• factoring quadratic variable equations without numbers
• free prealgebra worksheets for solving application problems
• prealgebra exercices
• example questions and answers primary school aptitude exam in malaysia
• how to change a mixed number into a decimal
• "free tutorials Begenning and intermediate algebra"
• How to find square root of an equation
• evaluating equations with fractions game
• worksheet answers for McDougal Littell pre-algebra online
• add and sub rational expressions
• mixed fraction, java
• mcq on algebraic factorization
• abstract gallian chapter 5 no 55 solutions
• simplifying radical expressions
• ratio math worksheets free
• free typing base math 1-2 missing number
• gcse algebra practice tests
• real numbers: worksheet
• mc questions papers of 9th physics
• least common denominator calculator online
• answers to texas mathematics course 1 glencoe
• show me how to factor in algebra
• solving adding equations activity
• Easy Balancing Chemical Equations Worksheets
• McDougal Littell Science Book Answers
• math project
• multiplying and dividing powers
• Cheating Balancing Equations
• online algebra notation software
• calculate a year sales from the year previous equation
• what are some of the differences between graphing using a number line and graphing in the coordinate plane?
• properties of math worksheet
• 9th math exercise
• how to calculate linear feet
• TI-84 factoring
• convert mix fraction to decimal
• aptitude question
• math formulas percentages
• solved exercises algebraic topology
• solving binomial equations
• mathmatics - tricky division
• examples of rationalizing decimals
• fl pre alg/the answers
• simplyfing polynomials worksheet pdf
• nonhomogeneous second order linear differential equations
• factorize fraction
• java program to convert 5 digit numeric to words
• is there any software that can i download that can solve various mathematics problems ?
• solving triangles with 2 variables
• algebra-finding the domain
• tutorial on factorise
• positive and negative fractions in ascending order
• dividing 2 equations of 3 variables of third degree
• english worksheets 8th std printable free
• +decimals "leading digit"
• ti-89 quadratic polynomial
• java check divisibility
• problems for 7th class on the topic algebra?
• Sequence and number patterns worksheets
• only corresponding angles printable worksheet
• chapters covered in Harcourt Brace third grade math
• root formula
• sixth grade practice tests math
• rules for adding subtracting multiplying and dividing fractions
• how to transform equations adding and subtracting
• graphing linear function (PPT)
• excel math 23 grade 6th
• 8th grade pre algebra free worksheets
• algebraic expressions/ worksheets 5th grade
• 10-3 skills practice properties of logarithms
• Free Online Equation Calculator with elimination
• quadratic factors for you
• how to find the angles of a pentagon ratio 3:4:5:7:8
• Rules of Algebra for Free
• nth term solver
• decimal to radical form
• free online maths work sheet for 4 grade
• quadratic equations from simultaneous equation examples
• cubic root with graphing calculator
• Beginning and Intermediate Algebra Automated Answer
• grade 11 canadian math vertex form perfect square
• help solving college algebra
• solving eqations with three variables
• sum of two cubes calculator
• doing algebra problems online for free
• operations with whole numbers decimals worksheet
• how to base 2 log on TI-85
• extracting roots of quadratics on ti 83 calculator
• chemistry worksheets on smart materials GCSE
• laws of exponent in multiplication
• how to write a c program using the pie formula
• meaning of multiplying integers
• polynomial solving in c code
• quadratic equation calculator third order
• fraction trig calculator
• struggling with accounting, and need books that I can download
• trigonomic help
• writing standard form equations solcer
• examples of investigatory project in algebra
• online graphing calculator circles
• i need help with learning intermediate algebra formulas
• maths algebra gr 12 exams
• online linear factorization
• MATHEMATICAL POEM
• factoring 6th roots
• least to greatest game
• discriminant word problem
• selected answers in holt mathematics course 3 georgia
• solving quadratic equations with square
• ti-83 plus GRAPHING LINEAR EQUATIONS
• 9th grade math review, algebra
• Dividing show your work cheater
• paperback solution manual intermediate algebra 6th edition by Mark Dugopolski solution manuel
• help solve algebra problem free
• math trivia with answers mathematics
• elementary math trivias
• convert from decimal to fraction on ti 89
• 10th class trignometry formula
• year 10 mathematics examination papers
• C# code for solving quadratic equation
• ti 83 polynomial equations
• algebra homework
• algebra help slope intercept form
• combining fractional exponents
• mcdougal littell pre-algebra answers
• Algebra Structures and Methods 2 Teachers Edition
• differential equation,square root of X
• formula in finding the Greatest Common factor
• free online graphing calculator like texas
• 6th grade spelling work book ( unit 2 lesson 3)
• show "math formula" probability
• calculating linear feet
• example of math trivia
• free printable math / algebra 2
• free printable worksheet on integers
• how to multiply divide addition subtraction of fractional number
• Answer sheet Saxon math section lesson 10
• taking a cube root on a calculator
• 5th grade lcd math practice
• standard deviation on TI-83 plus
• divide a 7 digit number by a 3 digit number
• sample investigatory project in geometry
• multiply and dividing integers formula
• cheat sheet example for probability
• check my algebra homework with step by step answers
• pizzazz math workbook
• Polynomial Factoring calculator free
• interactive pre-algebra books online
• algebra poem
• math homework cheating machine
• solving 3rd grade number matrices
• 9th grade math help free online
• polar algebra subtraction
• abstract algebra solutions
• jupiter/math/answer trivia
• scale math
• free worksheets on communicative property of multiplication
• multiplying scientific notation power point
• algebra vertex calculator
• add decimals to tenths worksheet
• math formulas 10 percent progression
• program to solve 3rd order equation
• exponential expressions negative subtraction
• Radical Equation Solver
• Hard Problem word with combination and permutations
• free polynomial subtraction
• algebra with pizzazz answers objective 6-f
• dividing radical expressions calculator
• solve an equation by elimination method with a horizontal line
• math trivia + examples
• Solve linear equations fun worksheet
• equations with distributive properties
• factor a third order equation
• Algebra Word Problems Worksheets
• compare & order fractions & decimals worksheet
• Coordinate Plane Worksheets
• simplifying, factoring, polynomial expressions
• download ti 84 games
• free 9th Grade Algebra worksheets
• ti 84 plus emulator
• how to calculate cube root ti-83
• Lowest Common Denominator Calculator
• "free algebra solver"
• worksheets for solving one step equations
• fifth grade algebra worksheet
• lesson plans on simplifying rational expressions
• 9th standard polynomials
• mixed numbers to decimals worksheets
• how to evaluate quadratic expression in java
• usable online calculator for algebra
• what is the difference between evaluation and simplification
• college math problems
• TI-84 plus boolean variables
• mcdougal littell algebra 2 2004
• free precalculus software help
• How to solve a third order polynomial
• algebra games online
• glencoe chemistry "answers"
• slope intercept using mathematica
• quadratic model chart
• slope formula
• free online square roots games
• Java divisible
• free 9th grade worksheets
• square root of exponential function
• flowchart for operations with fractions
• what is the partial sums addition method
• free worksheets on the commutative, associative, identity, and inverse number properties
• Algebra with Pizzazz! answer key
• simplifying expressions solver
• solve algebra problems
• free online factoring
• solve precalculus problems free
• decimals to mixed fraction
• algebraic help sheet for GED
• solve cubic equation root exponential
• Decimal Expressions and equations [ALGEBRA]
• GED CALIFORNIA TUTORIAL
• how to do rational expressions on calculator
• combinations and permutations worksheet
• Multiplying Math Loop Game
• first order linear partial differential equation method of characteristics
• simplifying quotients
• how to convert binary to decimal on ti-84 plus
• rational equation calculator
• fact triangles worksheet
• fifth grade algebra worksheets
• fifth root simplify
• programming your ti-83 plus quad
• how to solve linear programming containing summation in matlab
• least common denominator in algebra
• free factoring worksheets
• Partial-Sums Addition
• elementary algebra formula sheet
• need help with showing how to do fractions
• mirageos tetris ti-84 plus free download
• fraction to decimal worksheet
• convert squared roots to percentages
• solving multistep equations printable worksheets
• prentice hall algebra 2 answer key
• 7th grade percent error worksheet
• Prentice Hall Mathematics algebra 2 solution key
• finding slope and y-intercept worksheets
• third root
• powerpoint on the distributive property and combining like terms
• multiplying square root expressions
• Using Scale factors, 8th grade math
• radical solver ti
• 7th grade math practice worksheet
• turning english into math worksheets
• integers-adding subtracting multiplying and dividing
• properties of algebraic expressions
• two step equations worksheets
• homogeneous first order differential
• free download for answers to college allgebra homework
• clep algebra
• bakuba clothes
• softmath
• College Fractions The basics with Examples
• graphing calc with log online
• learning exponents for dummies
• scientific notation worksheet
• free 5th grade printable math papers
• IT math help with mod
• caclulater for solving equations with variables on both sides
• algebra solver dividing polynomial by a binomial problem solver
• LCD problems.ppt
• real-world applications for algebra
• sample math test factions
• square root radical solver
• Sample Paper for Aptitude Test of MAERSK
• free algebra help
• algebraic expression percentage
• statistical trivias
• learning algebra online
• calculate rational expressions
• shriek mark calculator function
• finding domain and range from equations
• substitution method
• free prime factorization worksheets
• free 9th Grade Algebra Problems
• books on permutations and combinations
• Prentice Algebra 1 book pages
• special values chart
• graphing simple equations and inequalities
• free mathematics exercise for a 9 year old
• college algebra clep sample questions
• worksheets on rate and ratio for middle school
• javascript divisores
• math/factors and products chart up to 200
• simplifying algebraic expressions worksheet
• equations with homogeneous coefficients
• looking for pythagora math book answers
• adding and subtracting positive and negative numbers
• finding least common denominator worksheets
• equations with variables in c++
• coordinates - free resources KS3
• algebra worksheets and notes
• variables & patterns introducing algebra book 2
• answers for mcdougal littell biology study guide
• sample poems about math
• year 8 mathematic tests
• how to find cube roots on the ti-83 plus
• long polynomial division solver
• online T-83 calculator
• graphing pictures on a coordinate plane
• algebra with pizzazz
• Algebra 2 solution
• find free math worksheets for 5-6 grade expanded form
• square root solver
• online free 11+ exams
• newton's method for polynomial c++
• real situation algebra
• fraction formula
• solving radical expressions equations
• adding, subtracting, multiplying and dividing positive and negative numbers
• how to ratio formulas
• math trivia
• how to solve math problems that have least to greatest'
• answers to slope homework worksheet
• simple algebra equations
• boolean logic exercises dummies
• simultaneous equations elimination calculator
• calculating ratio in java script
• factoring calculator standard form
• dimensions high school algebra
• "least common denominator" rational
• adding and subtracting positive and negative numbers worksheet
• software to solve math problems
• mcdougal littell algebra1 online answers
• english aptitude test download
• +equation +solver +non-linear +C#
• multiplication lesson plans with calculator
• answer sheet harcourt math georgia adition 5th grade
• ti-83 plus how to find intersection of two graphs
• online graphing calculators with combination and permutation
• matlab simultaneous equations
• inventor of monomials
• how did the egyptians find square roots
• divide polynomials/ calculator
• free math worksheets, domain and range
• teach yourself algebra 1
• how to solve equations with powers
• adding and subtracting decimals worksheet
• solve polynomial online
• factoring polynomials with four terms using calculator
• variables and expressionsworksheets
• answers to algebra 2 McDougal littell
• prentice hall algebra 1 chapter 2 worksheet
• subtracting adding dividing multiplying fractions
• log based 10 in ti89
• mathmatic formulas
• how to find slope on a ti 84 plus edition calculator
• free printables stem and leaf plots 6th
• Help whats the difference between an equation and an expression?
• express number as difference of perfect squares
• free maths exercises year 9
• decimal to mixed number
• simplify roots calculator
• multi variable square roots
• +equation +solver +non-linear
• chapter 2 of 6 grade california math textbook
• Amatyc sample tests
• math answer course3 florida edition 4-5
• maths & trigs / combination
• adding positivee andj negative numbers worksheet
• high school math problem doc
• mixed number change to decimal
• area formula sheet
• square root to the third
• factor a third order polynomial equation
• integer review worksheet
• polar on ti 89
• cpm algebra 2 answer key
• practice tests for Trigonometry 7th edition Lial
• MULTIPLYING TO THE 5TH POWER
• math grade 3 work sheets
• algebra ax+by=c
• complex equasion example
• how to solve algebra problems
• Differential Equations calculator
• determining solutions for equations worksheets
• Multiply polynomials and simplify solver
• cpm algebra connections volume one
• common monomials-factoring games
• prentice hall algebra 1 answers
• solving simultaneous algebraic equations
• how to add fraction equations
• dividing decimals worksheets
• using ti-89 to store text
• changing a mixed number to a decimal
• solutions for physics workbook problems
• free inequalities worksheets 6th grade
• free estimation worksheets ks2
• algebra sequence and series questions grade 10
• convolution in ti 89
• graph equation x squared +1
• fun with ordering integers
• basic math percentage formulas
• ho w to find a slope of graph with ti-83
• fall = study common factor
• simultaneous equations solver
• casio calculator programs
• math 7th formula chart
• simultaneous equation solver
• algebra solver how to convert letters to math
• homework help algebra story problems combining mixed items
• Holt ALGEBRA 1 CALIFORNIA TEACHER'S EDITION
• multiplication solver
• passport t to mathematics chapter 2 practice test
• ti-89 solve quadratic equations
• Subtracting equations with negative exponents
• square roots with index calculator
• differential equation calculator
• easy to write matlab games
• long beach math books for algebra
• square root formulas
• free coin mixture algebra worksheets
• "multipling binomials worksheets"
• pictures coordinate graphing 6th grade
• liner algebra + reflection
• grade 6: multiplying and dividing whole numbers and decimals
• multiply and simplify by factoring square roots
• convert a percentage to a fraction
• polynom divider
• statistics for the utterly confused powerpoints
• factoring algebraic expressions containing fractional and negative exponents
• easy prealgebra work books with answers
• relating exponents, square roots and cube roots and logarithm
• Math Problem Solver
• worksheet on sum, product of cubic equations
• solving equations test
• glencoe algebra 1 practice workbook page 19
• *algebra tutor*
• powerpoint presentation in Graphing Linear Equations
• algebra tiles combine like terms worksheet
• evaluating equations worksheets
• applications of algebra
• combinations rate and ratio math questions
• worksheets on pre algebra third grade to print out
• download ti89 image
• radical squaring calculator
• Sample problem in trigonometry with solutions
• math trivia on exponents
• TI 84+ games step by step
• simplifying calculator
• solving simultaneous equations with powers
• english math problems for children
• algebra 2 worksheet printable
• mcdougal littell online algebra 2 test generator
• difference in 2 squares
• number patternworksheets
• calculate gcd
• division square root radicals fractions calculator
• factor quadratic expression
• graphing and writing inequalities algebra 3-1 practice worksheets
• complex trinomial factoring
• process controls sq root calculation
• aleks cheats
• probability worksheets
• Fraction Worksheets
• linear algebra forth edition homework solutions
• solving functions worksheets
• trigonometry formulas on GRE
• prentice hall mathematics algebra 1 answers
• free modular book maths
• McDougal Littell Algebra 1 Texas Edition
• prentice hall software error pre algebra
• help with solving equations using substitution calculator
• factoring cubed polynomials
• tutoring for ninth grade algevra
• fun math lesson for distance equals rate times time
• Accounting Class Homework Answers
• simple algebra online
• algerbra two step equations
• 4th grade algebra
• pre algebra worksheet
• algebra solve radical
• type in and solve math problems matrices
• Scale Problems Algebra
• Geometry Mcdougal Littell answer sheet
• balancing chemical equations
• algebra expanding a square sum
• Partial Sums Method
• rules in subtracting positive integers
• glencoe algebra 1 answers
• worded problem;exponential and logarithmic
• 7th grade interpreting graphs worksheet
• free college book printouts
• dividing games
• solving on-step linear equations worksheet
• Glenco Algebra 1 Worksheet answers
• addition of decimals fun worksheet free
• free online books on cost accounting
• algebra software
• Common Denominator Calculator
• converting percents, decimals and fractions worksheet
• youtube algebra 1 holt answers to 5.1
• calculating square foot lesson plan and worksheet
• free algebra problem solver
• Arithmetic Sequences NTH Term
• add integers game
• math trivia with answers
• square of the difference
• Integer test questions
• solve by completing the perfect square
• calculate slope on a graphing calculator
• integration second order differential equations
• Tutorial For Graphing Linear Equations
• scale factor for middle school
• dividing rational number worksheets
• some important questions of maths of class ix
• factoring quadratic calculator calculator
• Scale Factors middle school
• subtracting fractions in algebra equations
• install ti-84+ basic games
• finding root of an equation using matlab
• Matrix 1 workbook answers
• algebra worksheets.com
• GCD (x,y) What is it ?? How we can calculate it ??
• using a system of two equations homework help online
• get answers to 5th grade math on chapter test 8
• California fourth grade mathematics algebra find a rule
• foil cubed equations
• math homework solver
• kumon answer keys for level g
• mcdougall littell 9th grade english
• free year 2 histroy work sheets
• solving multivariable equation systems
• how to solve squared and cubic roots
• square root of a fraction
• sum while loop numbers
• "least common denominator"
• What is the formula adding and subtracting mixed fractions
• factors and multiples displays
• quotient rule calculator
• worksheet simplify fractions with variables
• distributive property beginners worksheets
• completing the square 4th degree calculator
• fractions and decimals worksheet+least to greatest
• worksheet distirbutive property equations
• circumferance equations
• download TI graphing calculator emulator
• problem solving math grade 5 decision making
• log base function on ti calculator
• simplifying variable rational expressions
• suppose that air resistance is proportional to velocity. derive the differential equation for a falling object for the rate of chance of the velocity
• convert base 10 to base 8
• TI-83 plus cube root
• holt algebra 1 teacher book answers
• algebra vertex
• lewis 7 loftus 3.3
• free print out worksheet for angle of fifth grade
• decimal front end estimation worksheet
• evaluate expressions worksheets
• algerbra calculator
• how to declare bigdecimal in java
• Algebra I challenging software
• sum of two numbers in java
• ti-89 quadratic equation
• Help with 6th grade algebra
• steps to solve an algebraic expression
• use of differential equation in heat equation
• algebra 1 concept and skills chapter 2.7 problem 48
• solve simultaneous linear equations with excel
• logarithm history worksheets
• step by step answers to college Algebra problems
• free online graphing calculator trigonometry
• dividing integers worksheet
• matlab code to find LCM of 2 numbers
• usable online scientific calculator
• download maths teaching methods and material for 10 standard in tamil nadu
• solving equations using elimination calculator
• FREE FORMULA WORKSHEETS
• add and subtract rational expressions free online tests
• two ways to calculate LCM
• cube root on scientific calculator
• writing in equations in standard form
• symbolic method
• glencoe/mc graw hill prep algebra answers
• graphing slope calculator
• yr 8 integer classroom games
• algebra 2 books ancers
• chapter 5 algebra 2 glencoe definitions
• linear equations worksheet grade 7 and 8
• combinations permutations finder
• maths basic year six free online
• saxon algebra 2 book answers
• percentages to decimals powerpoints
• worksheets on combining like terms
• printable worksheets adding and subtracting integers free
• Graphing Linear Equations in three Variables
• free dividing decimal sheets
• investigatory project about problem solving
• Free math worksheets and order of operations
• free algebraic calculator
• make a perfect square equation
• multiplying equations by large exponents
• adding, subtracting, multiplication and division of fractions
• fun ways to remember subtracting integer rules
• Holt Physics Problem Workbook answers
• ti-84 plus downloadablegames
• Combining Like Terms
• simplify calculator
• difficult math trivia
• free algebra graph solvers
• graph cubic functions worksheet | {"url":"https://www.softmath.com/math-com-calculator/reducing-fractions/download-of-elementary-and.html","timestamp":"2024-11-07T18:32:37Z","content_type":"text/html","content_length":"148601","record_id":"<urn:uuid:b391ffb4-38a8-4211-9490-35771469148c>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00365.warc.gz"} |
Pseudo vs Quasi Random Numbers
In this tutorial we discuss Monte Carlo convergence and the difference between pseudo-random numbers and quasi-random numbers. In previous tutorials we discussed the benefits of combining Monte Carlo variance reduction techniques, such as antithetic and control variate methods, to reduce the standard error of our simulation.
We demonstrate the effectiveness of quasi-random numbers by comparing the convergence of a European call option priced by Monte Carlo simulation, using different methods for creating pseudo- and quasi-random variables.
Pseudo-random number generation:
• add 12 uniform variables
• Box-Muller
• Polar rejection
• Inverse transform sampling (like Numpy)
Quasi-random number generation:
• Halton
• Sobol
It turns out that pseudo-random numbers are a bad choice for Monte Carlo simulation. Let's consider pairs of independent, uniformly distributed random numbers. Since the numbers are independent and uniformly distributed, every point on the graph is equally likely. However, we observe clumps and empty spaces. Eventually, if we sampled enough points, the initial clumps and empty spaces would be swamped by the large number of points spread evenly. Unfortunately, with Monte Carlo simulation the aim is often to reduce the number of samples in order to decrease computation time (as has been the aim of the variance reduction techniques). Pseudo-random numbers introduce bias through this clumpiness!
In contrast, quasi-random numbers, or low-discrepancy sequences, are designed to appear random but not clumpy. A quasi-random sample is not independent of the previous ones; it 'remembers' the previous samples and attempts to position itself away from them. This behaviour is ideal for obtaining fast convergence in a Monte Carlo simulation. We show Halton and Sobol, because these are implemented in Scipy!
Generating Random Numbers for Monte Carlo
To simulate our risk-neutral price paths in derivative valuation through Monte Carlo simulation, we lean heavily on simulating Brownian motions through the generation of standard normal random variables.
Most programming languages and spreadsheets include a uniform pseudo-random number generator. This will generate a random integer between zero and a specified upper value, where each integer occurs with equal probability. A standard uniform random generator follows the mathematical definition of real values in the range \(\in (0,1)\), with all real values in that range equally likely.
A pseudo-random number generator is an algorithm for generating a sequence of numbers whose properties approximate the properties of sequences of random numbers. The generated sequence is not truly
random, because it is completely determined by an initial value called the seed (which may itself be chosen randomly). Seeded pseudo-random generators are important for reproducibility and speed of generation.
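As a quick illustration of how the seed makes a pseudo-random sequence reproducible (a minimal sketch using NumPy's Generator interface; the seed value 42 is arbitrary):

import numpy as np

# two generators built from the same seed produce identical "random" streams
rng_a = np.random.default_rng(42)
rng_b = np.random.default_rng(42)
print(rng_a.random(3))
print(rng_b.random(3))  # prints the same three numbers as above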
With the standard uniform random number generator we can convert these to standard normal random numbers.
import time
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
Method 1: Approximation with 12 uniform samples
A common but approximate way to do this is to generate 12 standard uniform random numbers, add them together, and subtract 6 from the total. The distribution of this sum has zero mean and a variance of one.
np.random.seed(1)
N = 1e6

def add_12_uni(n):
    # the sum of 12 uniforms, minus 6, has mean 0 and variance 1
    return np.sum([np.random.uniform(0,1,12) for i in range(int(n))], axis=1) - 6
start_time = time.time()
x_uni = add_12_uni(N)
time_uni = round(time.time() - start_time,3)
print("Computation Time: ", time_uni)
mean, std, skew, kurt = np.mean(x_uni), np.std(x_uni), stats.skew(x_uni), stats.kurtosis(x_uni)
print("Mean :", round(mean,3))
print("Std :", round(std,3))
print("Skew :", round(skew,3))
print("Kurtosis :", round(kurt,3))
print("Min/Max : {0}/{1}".format(round(min(x_uni),3), round(max(x_uni),3)))
plt.hist(x_uni, bins=50)
This is a good approximation. However:
• The generated values can never fall outside the range -6 to 6.
• Also notice that the kurtosis is slightly below that of a true normal distribution (scipy reports excess kurtosis, so the printed value sits a little below 0), meaning the tails are slightly too light.
Method 2: Box-Muller transformation
A simple alternative is the Box-Muller transformation, an exact transformation of pairs of standard uniform random numbers into pairs of standard normal random variables. Let \(x_1\) and \(x_2\) be independent standard uniformly distributed random numbers. Then standard normally distributed pairs \(z_1\) and \(z_2\) can be obtained:
\(z_1 = \sqrt{-2ln(x_1)}cos(2\pi x_2)\)
\(z_2 = \sqrt{-2ln(x_1)}sin(2\pi x_2)\)
N = 1e6
def box_muller(n):
n = int(int(n)/2)
x_uni = np.random.uniform(0,1,(2,n))
z1 = np.sqrt(-2*np.log(x_uni[0]))*np.cos(2*np.pi*x_uni[1])
z2 = np.sqrt(-2*np.log(x_uni[0]))*np.sin(2*np.pi*x_uni[1])
return np.concatenate((z1,z2))
start_time = time.time()
x_box = box_muller(N)
time_box = round(time.time() - start_time,3)
print("Computation Time: ", time_box)
mean, std, skew, kurt = np.mean(x_box), np.std(x_box), stats.skew(x_box), stats.kurtosis(x_box)
print("Mean :", round(mean,3))
print("Std :", round(std,3))
print("Skew :", round(skew,3))
print("Kurtosis :", round(kurt,3))
print("Min/Max : {0}/{1}".format(round(min(x_box),3), round(max(x_box),3)))
plt.hist(x_box, bins=50)
Method 3: Marsaglia Polar Rejection
This method also requires pairs of uniformly distributed random numbers. To avoid the trigonometric functions of the Box-Muller transform, we work in polar coordinates. For this, we consider random variables \(x_1, x_2\) which are uniformly distributed on \([-1, 1]\) such that \(x_1^2 + x_2^2 < 1\). Such a pair can be generated as follows (pseudocode below):
while \(w >= 1\) {
\(x_1 =\) standard uniform random number
\(x_2 =\) standard uniform random number
\(w = x_1^2 + x_2^2\)
then {
\( c = \sqrt{-2 \frac{ln(w)}{w}} \)
\( z_1 = c*x_1\)
\( z_2 = c*x_2\)
N = 1e6
def polar_rejection(n):
n = int(int(n)/2)
x_uni = np.random.uniform(-1,1,(2,n))
sum_squared = lambda x: x[0]**2+x[1]**2
w = sum_squared(x_uni)
cond = (sum_squared(x_uni)>=1)
while np.any(cond):
x_uni[:,cond] = np.random.uniform(-1,1,np.shape(x_uni[:,cond]))
cond = (sum_squared(x_uni)>=1)
w = sum_squared(x_uni)
c = np.sqrt(-2*np.log(w)/w)
return np.concatenate((c*x_uni[0],c*x_uni[1]))
start_time = time.time()
x_pol = polar_rejection(N)
time_pol = round(time.time() - start_time,3)
print("Computation Time: ", time_pol)
mean, std, skew, kurt = np.mean(x_pol), np.std(x_pol), stats.skew(x_pol), stats.kurtosis(x_pol)
print("Mean :", round(mean,3))
print("Std :", round(std,3))
print("Skew :", round(skew,3))
print("Kurtosis :", round(kurt,3))
print("Min/Max : {0}/{1}".format(round(min(x_pol),3), round(max(x_pol),3)))
plt.hist(x_pol, bins=50)
Method 4: Inverse transform sampling
Let \(U\) be a random variable which is uniformly distributed on the interval \([0, 1]\), and let \(F\) be the continuous CDF (cumulative distribution function) of the random variable \(X\) which we want to generate. The inverse CDF (quantile function) is then defined by:
\(F^{-1}(u) = \inf\{x \mid F(x) \geq u\}, \quad u \in [0,1]\)
Using this inverse CDF, we can generate the random variable \(X\) as \(X = F^{-1}(U)\).
The probability density function of the standard normal distribution is given by:
\(\large f_X(x)=\frac{1}{\sqrt{2\pi}} e^{\frac{-x^2}{2}}\)
This leads us into a problem. It can be shown that the integral \(\int e^{-x^2} dx\) has no closed form using the standard elementary functions. That means that we have no way of finding a closed
form of the normal CDF. Note there are closed form inverse CDF’s for the Exponential, Pareto, Cauchy, Logistic, Rayleigh distributions.
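For instance, the exponential distribution has the closed-form inverse CDF \(F^{-1}(u) = -\ln(1-u)/\lambda\), so inverse transform sampling needs no numerical approximation there. A minimal sketch (the rate \(\lambda = 1.5\) is an arbitrary choice for illustration):

import numpy as np

lam = 1.5                                # arbitrary rate parameter
u = np.random.uniform(0, 1, 1_000_000)   # standard uniform samples
x_exp = -np.log(1 - u) / lam             # closed-form inverse CDF of Exponential(lam)
print(round(np.mean(x_exp), 3))          # should be close to 1/lam, about 0.667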
Thankfully though, we have a workaround. Although we are unable to find a closed form for the inverse CDF, it is not too hard to approximate it using numerical analysis.
N = 1e6
def inverse_norm(n):
x_uni = np.random.uniform(0,1,int(n))
return stats.norm.ppf(x_uni)
start_time = time.time()
x_inv = inverse_norm(N)
time_inv = round(time.time() - start_time,3)
print("Computation Time: ", time_inv)
mean, std, skew, kurt = np.mean(x_inv), np.std(x_inv), stats.skew(x_inv), stats.kurtosis(x_inv)
print("Mean :", round(mean,3))
print("Std :", round(std,3))
print("Skew :", round(skew,3))
print("Kurtosis :", round(kurt,3))
print("Min/Max : {0}/{1}".format(round(min(x_inv),3), round(max(x_inv),3)))
plt.hist(x_inv, bins=50)
ONLY Method in Python: Use Numpy!!!
NumPy's normal sampling is implemented in C (its newer Generator interface uses the Ziggurat algorithm), so use it directly: it executes far quicker than anything we could implement ourselves.
start_time = time.time()
x_norm = np.random.normal(0,1,int(N))
time_norm = round(time.time() - start_time,3)
print("Computation Time: ", time_norm)
Quasi-random numbers
It turns out that pseudo-random numbers are a bad choice for Monte Carlo simulation. Let's consider pairs of independent, uniformly distributed random numbers.
n = 500
x1 = np.random.uniform(0,1,int(n))
x2 = np.random.uniform(0,1,int(n))
plt.scatter(x1, x2, marker='d')
Since numbers are independent and uniformly distributed, every point on the graph is equally likely. However we observe clumps and empty spaces.
Eventually if we sampled enough points, the initial clumps and empty spaces would be swamped by the large number of points spread evenly.
Unfortunately, with Monte Carlo simulation, the aim is to often reduce the number of samples to decrease computation time (as has been the aim of Variance Reduction Techniques).
Pseudo-random numbers introduce bias through the clumpiness!
In contrast, quasi-random numbers, or low-discrepancy sequences, are designed to appear random but not clumpy. A quasi-random sample is not independent of the previous ones; it 'remembers' the previous samples and attempts to position itself away from other samples.
There are many quasi-random number generators that produce low-discrepancy sequences, including:
• Faure sequence,
• Halton sequence,
• Reverse Halton sequence,
• Haselgrove sequence,
• Sobol sequence.
The behaviour is ideal for obtaining fast convergence in a Monte Carlo simulation. We show Halton and Sobol, because these are implemented in Scipy!
from scipy.stats import qmc
def halton(n, d=1):
sampler = qmc.Halton(d, scramble=True)
return sampler.random(n)
def halton_norm(n, d=1):
sampler = qmc.Halton(d, scramble=True)
x_halton = sampler.random(n)
return stats.norm.ppf(x_halton)
x = halton(n=200, d=2).T
plt.scatter(x[0],x[1], marker='d')
def sobol(m, d=1):
sampler = qmc.Sobol(d, scramble=True)
return sampler.random_base2(m)
def sobol_norm(m, d=1):
sampler = qmc.Sobol(d, scramble=True)
x_sobol = sampler.random_base2(m)
return stats.norm.ppf(x_sobol)
x = sobol(m=9, d=2).T
plt.scatter(x[0],x[1], marker='d')
Monte Carlo Convergence: Pseudo vs Quasi Random Numbers
Let's value a European option for which we have an exact solution, and use different methods of sampling to see the rate of convergence.
Pseudo-sampling methods
• add 12 uniform variables
• Box-Muller
• Polar Rejection
• Inverse transform sampling (like Numpy)
Quasi-sampling methods
• Halton
• Sobol
# Define variables
r = 0.01
S0 = 30
K = 32
T = 240/365
vol = 0.30
def blackScholes(r, S, K, T, sigma, type="c"):
    "Calculate BS price of call/put"
    d1 = (np.log(S/K) + (r + sigma**2/2)*T)/(sigma*np.sqrt(T))
    d2 = d1 - sigma*np.sqrt(T)
    if type == "c":
        price = S*stats.norm.cdf(d1, 0, 1) - K*np.exp(-r*T)*stats.norm.cdf(d2, 0, 1)
    elif type == "p":
        price = K*np.exp(-r*T)*stats.norm.cdf(-d2, 0, 1) - S*stats.norm.cdf(-d1, 0, 1)
    return price
print("Please confirm option type, either 'c' for Call or 'p' for Put!")
bs = blackScholes(r, S0, K, T, vol, type="c")
print('Black Scholes Price', round(bs,3))
results = {'Pseudo: add_12_uni': [],
'Pseudo: box_muller': [],
'Pseudo: polar_rejection:': [],
'Pseudo: inv_transform': [],
'Quasi : Halton': [],
'Quasi : Sobol': [],}
funcs = {'Pseudo: add_12_uni': add_12_uni,
'Pseudo: box_muller': box_muller,
'Pseudo: polar_rejection:': polar_rejection,
'Pseudo: inv_transform': inverse_norm,
'Quasi : Halton': halton_norm,
'Quasi : Sobol': sobol_norm}
numbers = np.linspace(0,4000,21)[1:]
# N = 10000
#precompute constants
dt = T
nudt = (r - 0.5*vol**2)*dt
volsdt = vol*np.sqrt(dt)
# Monte Carlo Method
for M in numbers:
    M = int(M)
    for method in results:
        if method == 'Quasi : Sobol':
            continue  # Sobol sample sizes must be powers of two; handled separately below
        Z = funcs[method](M)
        delta_St = nudt + volsdt*Z
        ST = S0*np.exp(delta_St)
        CT = np.maximum(0, ST - K)
        C0 = np.exp(-r*T)*np.sum(CT)/M
        results[method].append(C0 - bs)
sobol_rng = np.arange(7,13)
for M in sobol_rng:
    M = int(M)
    Z = funcs['Quasi : Sobol'](M)   # random_base2(M) draws 2**M Sobol points
    delta_St = nudt + volsdt*Z
    ST = S0*np.exp(delta_St)
    CT = np.maximum(0, ST - K)
    C0 = np.exp(-r*T)*np.sum(CT)/(2**M)
    results['Quasi : Sobol'].append(C0 - bs)
    sigma = np.sqrt( np.sum( (np.exp(-r*T)*CT - C0)**2) / (2**M - 1) )
    SE = sigma/np.sqrt(2**M)
# plot the pricing error for each sampling method
# (the Sobol estimates use sample counts of 2**sobol_rng)
for method in results:
    x_vals = 2**sobol_rng if method == 'Quasi : Sobol' else numbers
    plt.plot(x_vals, results[method], label=method)
plt.legend()
plt.title('Monte Carlo Convergence! \n Pseudo vs Quasi Random Numbers')
plt.ylabel('Relative Pricing Error')
plt.xlabel('Number of Simulations (M)') | {"url":"https://quantpy.com.au/monte-carlo/pseudo-vs-quasi-random-numbers/","timestamp":"2024-11-06T10:19:39Z","content_type":"text/html","content_length":"133219","record_id":"<urn:uuid:f1e6ea29-f94a-461d-992e-5e4fcf520386>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00564.warc.gz"} |
Sin 2x vs. 2 Sin x: What's the Difference?
sin 2x is a trigonometric function representing the sine of double the angle, whereas 2 sin x doubles the sine of the angle.
Key Differences
sin 2x and 2 sin x are distinct expressions in trigonometry, each representing different mathematical concepts. sin 2x denotes the sine of twice the angle x, a specific trigonometric function that
can be expanded into 2 sin x cos x using the double-angle formula. This function is useful for solving trigonometric equations where angles are doubled. 2 sin x, on the other hand, simply multiplies
the sine of angle x by 2. This does not involve any trigonometric identity but rather scales the result of sin x. It is straightforward in its application, directly affecting the amplitude of the
sine function, making it twice as large.
The key difference lies in their application and outcome. sin 2x is involved in trigonometric identities and equations that require the manipulation of angle sizes, effectively changing the
function's period and pattern. It is fundamental in understanding the relationships between different trigonometric functions.
2 sin x modifies the amplitude of the sine wave without altering the function’s frequency or period. It's a linear transformation of the sine function, used in various physics and engineering
contexts where wave amplitude changes are necessary.
Comparing sin 2x and 2 sin x illustrates how trigonometric functions can be manipulated to represent different phenomena. While sin 2x explores the properties of trigonometric identities and their
implications on angles, 2 sin x emphasizes the impact of scaling on the sine function's amplitude.
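A quick numerical check of the distinction (a short Python sketch, consistent with the double-angle formula above; the angle π/6 is an arbitrary choice):

import numpy as np

x = np.pi / 6                      # 30 degrees
print(np.sin(2 * x))               # sin 2x        -> 0.8660... (= sqrt(3)/2)
print(2 * np.sin(x) * np.cos(x))   # 2 sin x cos x -> 0.8660... (same value)
print(2 * np.sin(x))               # 2 sin x       -> 1.0       (a different quantity)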
Comparison Chart
| Aspect | sin 2x | 2 sin x |
| Definition | Sine of double the angle | Double the sine of the angle |
| Trigonometric Identity | Uses the double-angle formula: sin 2x = 2 sin x cos x | Not an identity, just a scalar multiplication |
| Impact on Function | Halves the period of the sine wave; the amplitude is unchanged | Only alters the amplitude, not the period |
| Application | In trigonometric identities and solving equations | In scaling the amplitude of sine functions |
| Mathematical Expression | More complex due to the involvement of cosine | Simpler, as it directly scales the sine value |
Sin 2x and 2 Sin x Definitions
Sin 2x
Sin 2x is the sine of twice the angle.
For x = π/4, sin 2x equals 1.
2 Sin x
2 sin x scales the amplitude of the sine function.
The amplitude of a wave described by 2 sin x is twice that of sin x.
Sin 2x
Sin 2x transforms the sine function to account for angle doubling.
The graph of sin 2x shows a wave with half the period of sin x.
2 Sin x
2 sin x maintains the period while altering the function's magnitude.
A pendulum's maximum angle might be modeled by 2 sin x to show increased swing.
Sin 2x
Sin 2x applies in scenarios where angle manipulation is key.
In physics, sin 2x helps model oscillations with doubled frequencies.
2 Sin x
2 sin x affects the sine wave without changing its frequency.
Light intensity variations might be modeled as 2 sin x to indicate brighter light.
Sin 2x
Sin 2x represents a double-angle trigonometric function.
Sin 2x can simplify equations involving trigonometric identities.
2 Sin x
2 sin x doubles the value of the sine of angle x.
For x = π/6, 2 sin x equals 1.
Sin 2x
Sin 2x is used in trigonometry to explore relationships between angles.
Sin 2x is integral in proving trigonometric identities.
2 Sin x
2 sin x is a linear modification of the sine function.
In sound engineering, 2 sin x could represent a louder sound wave.
What does sin 2x represent?
Sin 2x represents the sine of double the angle, used in trigonometric identities.
Can sin 2x be simplified?
Yes, sin 2x can be simplified to 2 sin x cos x using the double-angle formula.
How does sin 2x affect the sine wave's period?
Sin 2x halves the period of the sine wave, leading to more frequent oscillations.
Is 2 sin x a trigonometric identity?
No, 2 sin x is not an identity but a scaled version of the sine function.
Does 2 sin x change the wave's frequency?
No, 2 sin x does not change the frequency, only the amplitude.
What distinguishes 2 sin x in terms of transformation?
It linearly scales the sine function's output, enhancing its magnitude.
What is the practical use of 2 sin x?
2 sin x is used to model phenomena where the amplitude of a wave needs to be doubled.
How is sin 2x derived?
Sin 2x is derived from the double-angle formula in trigonometry.
How does altering x in 2 sin x affect the wave?
Altering x in 2 sin x affects the phase of the wave, not its amplitude or period.
Why use sin 2x in equations?
Sin 2x is useful for solving trigonometric equations involving angle relationships.
In what scenarios is sin 2x applied?
Sin 2x is applied in mathematical and physical scenarios requiring angle doubling.
Can sin 2x be used to model physical phenomena?
Yes, sin 2x is used in physics to model phenomena with doubled angles or frequencies.
How does 2 sin x differ from sin x?
2 sin x doubles the amplitude of the sine of angle x, without altering the period.
Is sin 2x equivalent to 2 sin x?
Not directly; sin 2x involves a trigonometric identity, while 2 sin x is a scaling factor.
Why is 2 sin x significant in wave theory?
It's significant for adjusting the amplitude, relevant in acoustics and optics.
How does sin 2x influence trigonometric proofs?
It's crucial for demonstrating relationships and identities within trigonometry.
What happens to the sine wave's amplitude with 2 sin x?
The amplitude is doubled, making the wave's peaks and troughs more pronounced.
Can sin 2x be applied in engineering?
Yes, it's applied in signal processing and mechanical oscillations for frequency analysis.
What role does 2 sin x play in mathematical modeling?
It models situations where the intensity or magnitude of a sine-based phenomenon is increased.
What is the impact of 2 sin x on wave properties?
It increases the wave's amplitude without affecting its frequency or period.
About Author
Written by
Janet White
Janet White has been an esteemed writer and blogger for Difference Wiki. Holding a Master's degree in Science and Medical Journalism from the prestigious Boston University, she has consistently
demonstrated her expertise and passion for her field. When she's not immersed in her work, Janet relishes her time exercising, delving into a good book, and cherishing moments with friends and
Edited by
Aimie Carlson
Aimie Carlson, holding a master's degree in English literature, is a fervent English language enthusiast. She lends her writing talents to Difference Wiki, a prominent website that specializes in
comparisons, offering readers insightful analyses that both captivate and inform. | {"url":"https://www.difference.wiki/sin-2x-vs-2-sin-x/","timestamp":"2024-11-06T08:22:46Z","content_type":"text/html","content_length":"127622","record_id":"<urn:uuid:9754c565-9461-4512-a3ff-8fce2162a4ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00274.warc.gz"} |
Calculate addition of fraction numbers - cryptocrape.com
Binary Arithmetic Calculator
How to Use the Multiple Number Adder Calculator
Step-by-Step Guide:
1. Enter Your Numbers:
□ In the text area labeled “Enter numbers here,” input the numbers you wish to add.
□ You can enter multiple numbers in separate lines or in the same line.
2. Choose Your Delimiter:
□ From the “Select Delimiter” dropdown, choose how your numbers are separated:
☆ Comma (,): Select this option if your numbers are separated by commas.
☆ Space ( ): Select this option if your numbers are separated by spaces.
3. Calculate the Sum:
□ After entering your numbers and selecting the delimiter, click the “Calculate Sum” button to compute the total.
□ The result will be displayed in the Result section below.
4. Clear the Input/Output:
□ If you wish to reset everything, click the “Clear” button. This will clear both the input box and the result box.
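Behind these steps, the calculator simply splits the text you entered on the chosen delimiter and adds the pieces together. A minimal Python sketch of that logic (the function name and details are illustrative, not the site's actual code):

def sum_numbers(text, delimiter=","):
    # treat newlines as separators too, then split, strip and sum
    parts = text.replace("\n", delimiter).split(delimiter)
    return sum(float(p) for p in parts if p.strip())

print(sum_numbers("10,20,30"))       # 60.0
print(sum_numbers("10 20 30", " "))  # 60.0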
• To add the numbers 10, 20, 30, you would enter them as:
10,20,30
Then, select “Comma” as the delimiter and click Calculate Sum.
• If you prefer using spaces, you could enter: 10 20 30
Select “Space” as the delimiter, and click Calculate Sum. | {"url":"https://cryptocrape.com/calculate-addition-of-fraction-numbers/","timestamp":"2024-11-14T01:57:00Z","content_type":"text/html","content_length":"93153","record_id":"<urn:uuid:51767e0d-3b9d-4ca8-83d9-840536c91d7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00234.warc.gz"} |
Heatmap for skew-symmetric data
hmap {asymmetry} R Documentation
Heatmap for skew-symmetric data
This heatmap displays the values of a skew-symmetric matrix by colors. The option dominance orders the rows and columns of the matrix in such a way that the values in the upper triangle are positive
and the values in the lower triangle are negative. The order is calculated from the row-sums of the signs obtained from the skew-symmetric matrix.
hmap(x, dominance = FALSE, ...)
x A square matrix, either skew-symmetric or asymmetric, or an object of class decomposition. If an asymmetric matrix is given, the skew-symmetric part is computed.
dominance If true the signs of the skew-symmetric matrix are shown in the heatmap, if set to false the values in this matrix are shown.
... Further plot arguments: see heatmap.2 for detailed information.
hmap(studentmigration, dominance = TRUE, col = c("red", "white", "blue"))
version 2.0.4 | {"url":"https://search.r-project.org/CRAN/refmans/asymmetry/html/hmap.html","timestamp":"2024-11-09T06:27:34Z","content_type":"text/html","content_length":"2834","record_id":"<urn:uuid:88ba1ce0-1d41-4c9f-af7a-43a85adb7fce>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00000.warc.gz"} |
Longitudinal Waves
In this section, elementary scattering relations will be derived for the case of longitudinal force and velocity waves in an ideal string or rod. In solids, force-density waves are referred to as
stress waves [169,261]. Longitudinal stress waves in strings and rods have units of (compressive) force per unit area and are analogous to longitudinal pressure waves in acoustic tubes.
Figure C.19: A waveguide section between two partial sections. a) Physical picture indicating traveling waves in a continuous medium whose wave impedance changes from one section to the next. b) Digital simulation diagram for the same situation. The section propagation delay is denoted by a delay element, and the impedance discontinuity is characterized by a lossless splitting of an incoming wave into transmitted and reflected components.
A single waveguide section between two partial sections is shown in Fig. C.19. The sections are numbered from 0 upward, from left to right, and each section has its own wave impedance. Within each section the physical variables are decomposed into traveling waves; force waves are used here, although velocity waves may be chosen instead where that gives better dynamic range.
As in the case of transverse waves (see the derivation of (C.46)), the traveling longitudinal plane waves in each section satisfy [169,261]
\( f(t,x) = f^{+}(t - x/c) + f^{-}(t + x/c) \)

where the wave speed \(c\) is now

\( c = \sqrt{E/\rho} \)

in which \(\rho\) is the mass density and \(E\) is Young's modulus of the medium (defined as the stress over the strain, where strain means relative displacement) [169,261]. As before, velocity waves are related to the force waves through the wave impedance.
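As a rough numerical illustration of this wave speed (the material constants below are typical textbook values for steel, not taken from this text):

import numpy as np

E = 200e9     # Young's modulus of steel in Pa (typical textbook value)
rho = 7850.0  # mass density of steel in kg/m^3
c = np.sqrt(E / rho)
print(round(c))   # roughly 5000 m/s for longitudinal waves in a steel rod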
If the wave impedance is constant along a section, a traveling wave propagates unchanged from one end of the rod-section to the other. In this case we need only consider the traveling-wave components at the section boundaries; referring to Fig. C.19, we define the force-wave components at the extreme left and the extreme right of each section. For generality, we may allow the wave impedances to vary with time (cf. (C.57) in the time-varying case). When a wave impedance changes, signal energy, being the product of force times velocity, is ``pumped'' into or out of the waveguide; use of normalized waves, discussed in §C.8.6 below, addresses this. For the moment, we will assume the wave impedances are constant.
As before, the physical force density (stress) and velocity at the left end of each section are given by the sum of the corresponding traveling-wave components.
Next Section: Kelly-Lochbaum Scattering JunctionsPrevious Section: Plane-Wave Scattering at an Angle | {"url":"https://www.dsprelated.com/freebooks/pasp/Longitudinal_Waves_Rods.html","timestamp":"2024-11-13T14:50:54Z","content_type":"text/html","content_length":"42334","record_id":"<urn:uuid:9c94f1f7-5884-441d-8d92-47d6b171dc28>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00824.warc.gz"} |
linear regression
Consider the following data from the text Design and Analysis of Experiments, 7th ed. (Montgomery, 2009, Table 3.1). It has two variables: power and rate. power is a discrete setting on a tool used
to etch circuits into a silicon wafer. There are four levels to choose from. rate is the distance etched measured in Angstroms per minute. (An Angstrom is one ten-billionth of a meter.) Of interest
is how (or if) the power setting affects the etch rate.
What are robust standard errors? How do we calculate them? Why use them? Why not use them all the time if they’re so robust? Those are the kinds of questions this post intends to address.
One of the basic assumptions of linear modeling is constant, or homogeneous, variance. What does that mean exactly? Let’s simulate some data that satisfies this condition to illustrate the concept.
Below we create a sorted vector of numbers ranging from 1 to 10 called x, and then create a vector of numbers called y that is a function of x. When we plot x vs y, we get a straight line with an
intercept of 1.2 and a slope of 2.1.
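The original post builds this example in R; a rough Python equivalent of the same setup (an exact line with intercept 1.2 and slope 2.1, to which constant-variance errors can then be added) looks like this:

import numpy as np

x = np.sort(np.random.uniform(1, 10, 100))  # sorted values ranging from 1 to 10
y = 1.2 + 2.1 * x                           # plotting x vs y gives a straight line
e = np.random.normal(0, 2, 100)             # errors with the SAME spread everywhere
y_obs = y + e                               # data satisfying the constant-variance assumption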
Whenever we are dealing with a dataset, we almost always run into a problem that may decrease our confidence in the results that we are getting - missing data! Examples of missing data can be found
in surveys - where respondents intentionally refrained from answering a question, didn’t answer a question because it is not applicable to them, or simply forgot to give an answer. Or our dataset on
trade in agricultural products for country-pairs over years could suffer from missing data as some countries fail to report their accounts for certain years.
Log transformations are often recommended for skewed data, such as monetary measures or certain biological and demographic measures. Log transforming data usually has the effect of spreading out
clumps of data and bringing together spread-out data. For example, below is a histogram of the areas of all 50 US states. It is skewed to the right due to Alaska, California, Texas and a few others.
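The original post uses the real state-area data; the sketch below uses simulated right-skewed values as a stand-in to show the same effect of a log transformation:

import numpy as np
import matplotlib.pyplot as plt

areas = np.random.lognormal(mean=10, sigma=1.5, size=50)  # right-skewed stand-in data
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(areas, bins=15)           # strongly skewed to the right
ax2.hist(np.log(areas), bins=15)   # roughly symmetric after the log transform
plt.show()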
Note: This post is not about hierarchical linear modeling (HLM; multilevel modeling). Hierarchical regression is model comparison of nested regression models.
You ran a linear regression analysis and the stats software spit out a bunch of numbers. The results were significant (or not). You might think that you’re done with analysis. No, not yet. After
running a regression analysis, you should check if the model works well for the data.
When I first learned data analysis, I always checked normality for each variable and made sure they were normally distributed before running any analyses, such as t-test, ANOVA, or linear regression.
I thought normal distribution of variables was the important assumption to proceed to analyses. That’s why stats textbooks show you how to draw histograms and QQ-plots in the beginning of data
analysis in the early chapters and see if variables are normally distributed, isn’t it? | {"url":"https://library.virginia.edu/data/tag/linear-regression","timestamp":"2024-11-13T04:20:11Z","content_type":"text/html","content_length":"57303","record_id":"<urn:uuid:851fecc5-39d6-4d8d-a4ad-3721711290f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00662.warc.gz"} |
x[1]=v[1]=z[1] is an irregular/free-form content word meaning x[2]=v[2]=z[2] in language x[3]=v[3] (usually Lojban); x[1] is a stage-4 or stage-3 fu'ivla.
A zi'evla is any non-gismu, non-lujvo, non-cmevla (see brivla–cmevla merger) valid brivla. This generalizes the concept of fu'ivla to reflect the fact that many free-form brivla in Lojban nowadays
are completely novel words not borrowed from any other languages; instead, many are created by splicing together other Lojban words, similar to lujvo but without adhering to any regular morphological
patterns. Synonymous with zevla; mostly synonymous with u'ivla and vonfu'ivla; refers also to gimyzevla; contrast with zevlyjvo/brapagjvo and valtcizbaga (which are lujvo, not zi'evla). | {"url":"https://vlasisku.lojban.org/zi'evla","timestamp":"2024-11-13T16:16:01Z","content_type":"text/html","content_length":"18528","record_id":"<urn:uuid:5b27ad87-0fc8-4bdf-b075-4775c0463af6>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00507.warc.gz"} |
How to Convert Duration to Number in Google Sheets
Google Sheets: How to Convert Duration to Number
Do you have a duration in Google Sheets that you need to convert to a number? Maybe you have a timesheet that you need to calculate the total hours for, or you need to find the average duration of a
certain task. Whatever the reason, converting a duration to a number in Google Sheets is easy to do.
In this article, I’ll show you how to convert a duration to a number using three different methods:
• Using the DAYS() function
• Using the TIME() function
• Using the DATEDIF() function
I’ll also provide some examples so you can see how to use these methods in practice. So if you’re ready to learn how to convert a duration to a number in Google Sheets, keep reading!
HTML Table for Google Sheets Convert Duration to Number
| Column 1 | Column 2 | Column 3 |
| Duration | Number | Description |
| 1 day | 86400 | 24 hours |
| 1 hour | 3600 | 60 minutes |
| 1 minute | 60 | 60 seconds |
| 1 second | 1 | 1 second |
What is a duration in Google Sheets?
A duration in Google Sheets is a value that represents the amount of time that has passed. It can be expressed in a variety of ways, including seconds, minutes, hours, days, weeks, months, and years.
Durations are used in a variety of ways in Google Sheets, such as:
• Calculating the time between two dates or times
• Formatting cells to display time values
• Creating charts that show time trends
How to convert a duration to a number in Google Sheets?
There are a few different ways to convert a duration to a number in Google Sheets.
Method 1: Using the `DAYS()` function
The `DAYS()` function can be used to convert a duration in days to a number. The syntax of the `DAYS()` function is as follows:
=DAYS(start_date, end_date)
• `start_date` is the start date of the duration.
• `end_date` is the end date of the duration.
For example, the following formula would convert the duration from 1/1/2023 to 1/1/2024 to a number of days:
=DAYS(DATE(2023, 1, 1), DATE(2024, 1, 1))
Method 2: Using the `TIME()` function
The `TIME()` function can be used to convert a duration in hours, minutes, and seconds to a number. The syntax of the `TIME()` function is as follows:
=TIME(hours, minutes, seconds)
• `hours` is the number of hours in the duration.
• `minutes` is the number of minutes in the duration.
• `seconds` is the number of seconds in the duration.
For example, TIME() returns a time value stored as a fraction of a day, so to convert a duration of 1 hour to a number of seconds you would multiply by 86400:
=TIME(1, 0, 0)*86400
Method 3: Using the `DATEDIF()` function
The `DATEDIF()` function can be used to calculate the difference between two dates or times. The syntax of the `DATEDIF()` function is as follows:
=DATEDIF(start_date, end_date, unit)
• `start_date` is the start date of the duration.
• `end_date` is the end date of the duration.
• `unit` is the unit of time that you want to use to calculate the difference.
The possible values for `unit` are:
• `d` for days
• `m` for months
• `y` for years
For example, the following formula would calculate the number of days between 1/1/2023 and 1/1/2024:
=DATEDIF(DATE(2023, 1, 1), DATE(2024, 1, 1), “d”)
Which method should I use?
The best method to use to convert a duration to a number depends on the specific format of the duration. If the duration is expressed in days, then you can use the `DAYS()` function. If the duration
is expressed in hours, minutes, and seconds, then you can use the `TIME()` function. And if the duration is expressed as the difference between two dates or times, then you can use the `DATEDIF()` function.
Converting a duration to a number in Google Sheets is a simple task that can be accomplished using a variety of methods. The best method to use depends on the specific format of the duration.
3. Formatting and displaying duration values in Google Sheets
Once you have converted a duration value to a number, you may want to format it so that it is displayed in a more readable way. For example, you might want to display the duration in hours, minutes,
and seconds, or you might want to display it as a date and time.
To format a duration value, you can use the following steps:
1. Select the cell or cells that contain the duration value.
2. Open the Format menu.
3. Point to Number.
4. Select the format that you want to use, such as Duration or Custom date and time.
The following table shows the different formats that you can use to display duration values in Google Sheets:
| Format | Description |
| General | The default format. Displays the duration as a number of seconds. |
| Hours:Minutes:Seconds | Displays the duration in hours, minutes, and seconds. |
| Days:Hours:Minutes:Seconds | Displays the duration in days, hours, minutes, and seconds. |
| Date | Displays the duration as a date. |
| Time | Displays the duration as a time. |
You can also use the following formulas to format duration values:
• =TEXT(duration_value, “h:mm:ss”)
• =TEXT(duration_value, “d:h:mm:ss”)
• =DATE(year, month, day) + TIME(hours, minutes, seconds)
For example, the following formula will convert a duration of 1000 seconds to the text string "0:16:40" (dividing by 86400 first, because Sheets stores durations as fractions of a day):
=TEXT(1000/86400, "h:mm:ss")
The following formula combines a date with a time component to produce the date and time "2023-03-08 00:01:40":
=DATE(2023, 3, 8) + TIME(0, 1, 40)
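To see the arithmetic behind these formulas outside of Sheets, here is a small Python sketch (illustrative only — Google Sheets itself uses formulas, not Python). It mirrors the fact that Sheets stores a duration as a fraction of a day, using the same 86,400 seconds-per-day constant as the conversion table above.

SECONDS_PER_DAY = 86_400

def seconds_to_serial(seconds):
    # The Sheets-style serial value: a duration expressed as a fraction of a day.
    return seconds / SECONDS_PER_DAY

def seconds_to_hms(seconds):
    # Format a whole number of seconds as h:mm:ss text.
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02d}:{s:02d}"

print(seconds_to_serial(1000))   # 0.01157..., the value TEXT(1000/86400, ...) receives
print(seconds_to_hms(1000))      # 0:16:40, matching the example above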
4. Troubleshooting common problems with converting durations to numbers in Google Sheets
There are a few common problems that you might encounter when converting durations to numbers in Google Sheets. Here are some tips for troubleshooting these problems:
• Make sure that the data is in the correct format. Duration values must be in the format “hh:mm:ss” or “d:hh:mm:ss”. If the data is not in the correct format, you can use the Text to Columns tool
to convert it.
• Make sure that the data is not empty. Empty cells will not be converted to numbers. If there are any empty cells in your data, you can use the Find and Replace tool to replace them with zeros.
• Make sure that the data is not formatted as text. Text values will not be converted to numbers. If the data is formatted as text, you can use the Format tool to change the formatting.
If you are still having trouble converting durations to numbers in Google Sheets, you can contact Google Support for help.
Converting durations to numbers in Google Sheets is a relatively simple process. However, there are a few common problems that you might encounter. By following the tips in this article, you can
troubleshoot these problems and successfully convert your durations to numbers.
Q: How do I convert a duration to a number in Google Sheets?
A: To convert a duration to a number in Google Sheets, you can use the `DURATION()` function. The `DURATION()` function takes two arguments: the start date and the end date. The start date and end
date can be either dates or text strings. The function will return the number of days, hours, minutes, and seconds between the two dates.
For example, the following formula will convert the duration from 1 January 2023 to 31 December 2023 to a number of days:
This formula will return the value 365.
Q: What is the difference between the `DURATION()` function and the `DATEDIF()` function?
A: The `DURATION()` function and the `DATEDIF()` function are both used to calculate the difference between two dates. However, the `DURATION()` function returns a number of days, hours, minutes, and
seconds, while the `DATEDIF()` function returns a number of years, months, and days.
For example, the following formula will use the `DATEDIF()` function to calculate the number of years between 1 January 2023 and 31 December 2023:
This formula will return the value 1.
Q: How can I convert a duration to a human-readable format in Google Sheets?
A: To convert a duration to a human-readable format in Google Sheets, you can use the `TEXT()` function. The `TEXT()` function takes two arguments: the value to be converted and the format string.
The format string specifies how the value should be displayed.
For example, the following formula will convert the duration from 1 January 2023 to 31 December 2023 to a human-readable format:
=TEXT(DURATION(DATE(2023,1,1),DATE(2023,12,31)),”d days, h hours, m minutes, s seconds”)
This formula will return the value “365 days, 0 hours, 0 minutes, 0 seconds”.
Q: What are some other ways to convert a duration to a number in Google Sheets?
A: There are a few other ways to convert a duration to a number in Google Sheets. You can use the following methods:
• Use the `TO_NUMBER()` function. The `TO_NUMBER()` function takes a text string as an argument and converts it to a number.
• Use the `VALUE()` function. The `VALUE()` function takes a text string as an argument and returns the value of the string as a number.
• Use the `NUMBERVALUE()` function. The `NUMBERVALUE()` function takes a text string as an argument and returns the number represented by the string.
For example, the following formulas will all convert the duration from 1 January 2023 to 31 December 2023 to a number:
These formulas will all return the value 365.
In this blog post, we have discussed how to convert duration to number in Google Sheets. We have covered three methods: using the DAYS() function, the TIME() function, and the DATEDIF() function. We
have also provided some examples to help you understand how to use these functions.
We hope that this blog post has been helpful. If you have any questions, please feel free to leave a comment below.
Author Profile
Hatch, established in 2011 by Marcus Greenwood, has evolved significantly over the years. Marcus, a seasoned developer, brought a rich background in developing both B2B and consumer software for
a diverse range of organizations, including hedge funds and web agencies.
Originally, Hatch was designed to seamlessly merge content management with social networking. We observed that social functionalities were often an afterthought in CMS-driven websites and set out
to change that. Hatch was built to be inherently social, ensuring a fully integrated experience for users.
Now, Hatch embarks on a new chapter. While our past was rooted in bridging technical gaps and fostering open-source collaboration, our present and future are focused on unraveling mysteries and
answering a myriad of questions. We have expanded our horizons to cover an extensive array of topics and inquiries, delving into the unknown and the unexplored. | {"url":"https://hatchjs.com/google-sheets-convert-duration-to-number/","timestamp":"2024-11-08T11:38:44Z","content_type":"text/html","content_length":"91733","record_id":"<urn:uuid:8480556f-aa9f-4e6a-ad7e-adf6c95625e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00431.warc.gz"} |
Spectral Convergence of Random Regular Graphs to the Kesten-McKay and Semicircle Distributions
Spectral Convergence of Random Regular Graphs to the Kesten-McKay and Semicircle Distributions
Core Concepts
This research paper investigates the convergence of spectral measures of random regular graphs with fixed or growing vertex degrees to the Kesten-McKay and semicircle distributions, utilizing
Chebyshev polynomials and the analysis of non-backtracking walks.
• Bibliographic Information: Gong, Y., Li, W., & Liu, S. (2024). Spectral convergence of random regular graphs: Chebyshev polynomials, non-backtracking walks, and unitary-color extensions. arXiv
preprint arXiv:2406.05759v2.
• Research Objective: This paper aims to provide a simplified proof for the convergence of normalized spectral measures of random N-lifts to the Kesten-McKay distribution and extend the convergence
criteria to regular graphs with growing vertex degrees, specifically focusing on their convergence to the semicircle distribution.
• Methodology: The authors utilize Chebyshev polynomials and their relationship with non-backtracking walks on graphs to analyze the spectral measures. They generalize a formula by Friedman
involving Chebyshev polynomials and non-backtracking walks to the unitary-colored case. Additionally, they extend a criterion by Sodin on the convergence of graph spectral measures to encompass
regular graphs with increasing degrees.
• Key Findings: The paper presents a concise proof for the weak convergence of normalized spectral measures of random N-lifts to the Kesten-McKay distribution. Furthermore, it demonstrates that for
a sequence of random (q_n + 1)-regular graphs G_n with n vertices, where q_n = n^{o(1)} and q_n approaches infinity, the normalized spectral measure almost surely converges to the semicircle
distribution in p-Wasserstein distance for any p ∈ [1, ∞).
• Main Conclusions: The research provides a deeper understanding of the spectral convergence behavior of random regular graphs, particularly in scenarios with growing vertex degrees. The use of
Chebyshev polynomials and non-backtracking walks offers a powerful toolset for analyzing spectral properties.
• Significance: This work contributes significantly to spectral graph theory and random graph theory. The findings have implications for the study of complex networks, random matrix theory, and
related fields.
• Limitations and Future Research: The paper primarily focuses on regular graphs. Exploring similar convergence properties for other graph families, such as Erdős–Rényi graphs or preferential
attachment models, could be a potential avenue for future research. Additionally, investigating the rate of convergence to the limiting distributions would be of interest.
Spectral convergence of random regular graphs: Chebyshev polynomials, non-backtracking walks, and unitary-color extensions
Deeper Inquiries
How can the findings of this research be applied to the analysis of real-world networks with irregular structures and varying degrees?
While the paper focuses on regular graphs, which have uniform degree, many real-world networks exhibit irregular structures and varying degrees. Directly applying these findings can be challenging. However, the research offers valuable insights and potential adaptations for analyzing such networks:
• Local Approximations: Real-world networks often exhibit locally tree-like structures, especially in the absence of dense community structures. The paper's emphasis on non-backtracking walks (NBW) and their connection to spectral measures can be leveraged to analyze local neighborhoods within irregular networks. By approximating these neighborhoods as regular trees with varying degrees, one could gain insights into local spectral properties.
• Degree Corrections: Techniques like degree correction or configuration models can be employed to relate irregular networks to their regular counterparts. These methods involve creating ensembles of random graphs with similar degree distributions as the real-world network. By studying the spectral convergence of these ensembles, one could infer properties of the original network.
• Generalizations of NBW: The concept of NBW can be extended to irregular graphs. For instance, one could consider weighted NBW, where weights are assigned to edges based on their importance or the degrees of their incident vertices. Analyzing the behavior of these weighted NBW could provide insights into the spectral properties of irregular networks.
• Empirical Spectral Analysis: The paper's findings on the convergence of spectral measures to known distributions like the Kesten-McKay law and the semicircle distribution can guide empirical spectral analysis of real-world networks. By comparing the observed spectral distributions of real networks to these theoretical limits, one could identify deviations suggesting the presence of specific structural features or community structures, as in the sketch below.
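As a rough illustration of that last point, the short Python sketch below (not code from the paper) samples a random d-regular graph with NetworkX, computes its adjacency eigenvalues, and compares their histogram with the Kesten-McKay density for that degree; the degree and graph size are arbitrary choices, and no normalization of the spectrum is applied.

import numpy as np
import networkx as nx

d, n = 4, 2000                                   # arbitrary illustrative parameters
G = nx.random_regular_graph(d, n, seed=0)
eigs = np.linalg.eigvalsh(nx.to_numpy_array(G))  # empirical spectrum of the adjacency matrix

def kesten_mckay(x, d):
    # Kesten-McKay density, supported on [-2*sqrt(d-1), 2*sqrt(d-1)].
    x = np.asarray(x, dtype=float)
    inside = x**2 < 4 * (d - 1)
    dens = np.zeros_like(x)
    dens[inside] = (d * np.sqrt(4 * (d - 1) - x[inside]**2)
                    / (2 * np.pi * (d**2 - x[inside]**2)))
    return dens

hist, edges = np.histogram(eigs, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - kesten_mckay(centers, d))))   # discrepancy shrinks as n grows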
Could there be alternative approaches, beyond the use of Chebyshev polynomials and non-backtracking walks, to prove the spectral convergence results presented in the paper?
Yes, alternative approaches exist to explore spectral convergence in random graphs, each with its strengths and limitations:
• Method of Moments: This classical approach involves showing that the moments of the empirical spectral distribution (ESD) converge to the moments of the limiting distribution. While conceptually straightforward, calculating higher-order moments can become computationally intensive for complex graph ensembles.
• Stieltjes Transform Methods: The Stieltjes transform provides an alternative representation of probability measures. Analyzing the convergence of Stieltjes transforms can be advantageous for proving weak convergence of ESDs, especially when dealing with unbounded supports.
• Graph Limits and Graphons: For dense graphs, the theory of graph limits and graphons offers a powerful framework. By representing graphs as functions on a continuous domain, one can study their convergence in a functional analytic setting, leading to spectral convergence results.
• Combinatorial Methods: Direct combinatorial arguments, often tailored to specific graph ensembles, can provide elegant proofs of spectral convergence. These methods typically involve carefully counting specific substructures within the graphs and relating them to the eigenvalues of the adjacency matrix.
The choice of approach depends on the specific graph ensemble, the desired type of convergence (weak, Wasserstein, etc.), and the analytical tools available.
What are the implications of the convergence of spectral measures of large random graphs to specific distributions for understanding the behavior of dynamical systems on these graphs?
The convergence of spectral measures in large random graphs to specific distributions has profound implications for understanding dynamical systems defined on these graphs:
• Universality: Convergence to universal distributions like the semicircle law suggests that the macroscopic behavior of many dynamical systems becomes independent of the specific details of the underlying graph structure as the graph size grows. This universality simplifies analysis and allows for general predictions.
• Spectral Localization and Dynamics: The eigenvalues and eigenvectors of the graph Laplacian or adjacency matrix govern the behavior of diffusion processes, random walks, and wave propagation on the graph. Convergence of the spectral measure provides insights into the localization properties of these eigenvectors and the corresponding dynamical modes.
• Phase Transitions: Changes in the limiting spectral distribution as graph parameters vary can signal phase transitions in the behavior of dynamical systems. For instance, the emergence of a new eigenvalue outside the support of the limiting distribution might indicate the onset of synchronization in coupled oscillators on the graph.
• Stability and Robustness: Convergence of spectral measures often implies stability and robustness of dynamical systems to small perturbations in the graph structure. This stability is crucial for applications in distributed algorithms, network control, and other areas where the network topology might be subject to noise or uncertainties.
• Spectral Algorithms and Design: Understanding the limiting spectral properties of random graph ensembles guides the design and analysis of spectral algorithms for tasks like clustering, dimensionality reduction, and community detection. These
algorithms rely heavily on the spectral properties of the underlying graph. | {"url":"https://linnk.ai/insight/scientific-computing/spectral-convergence-of-random-regular-graphs-to-the-kesten-mckay-and-semicircle-distributions-UTYthDfQ/","timestamp":"2024-11-08T05:46:26Z","content_type":"text/html","content_length":"291635","record_id":"<urn:uuid:0b22e0ec-61a2-45c9-9745-b05322285b13>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00125.warc.gz"} |
How to calculate Power Query Percent of total or category
In this post, we look at how to calculate the percentage of a total in Power Query. But we will also take this one step further, to consider how to calculate the percent of a category.
In standard Excel, these calculations are simple because we are so used to the formulas to achieve this. For many, Power Query is still a newer tool, and it doesn’t operate in quite the same way. But
once you’ve calculated a Power Query percentage of total, I think you will find is straightforward.
Table of Contents
Download the example file: Join the free Insiders Program and gain access to the example file used for this post.
File name: 0010 Percent of total in Power Query.xlsx
Power Query percent of total – quick answer
The data in our example looks like this; it is a list of cities where a company has offices. Each office has a region and a headcount value.
Our goal is to:
• Calculate the % of total headcount at each site
• Calculate the % of Region head count at each site
Percent of total in Power Query
To calculate the % of the total is reasonably straightforward. While the SUM function does not exist in Power Query, the List.Sum function does.
1. Within Power Query click Add Column > Custom Column
2. In the Custom Column dialog box enter the following formula: =[Headcount] / List.Sum(#"Changed Type"[Headcount])
3. Change the formula to fit your scenario:
□ [Headcount] is the name of the column for which you want to calculate the %
□ #”Changed Type” is the name of the step to be used as the source for the formula. Typically, this is the name of the previous step in the Advanced Editor window.
4. Give the custom column a useful name, such as % of total, then click OK.
5. The % of total column is now included in the preview window.
6. It doesn’t look like a percentage yet, so change the data type by clicking on the ABC123 button and selecting Percentage from the menu.
That’s it, here is the final table:
That wasn’t too bad, was it, eh? The solution was very similar to how we would do it in Excel. Keep reading to understand more about how this calculation works.
Percent of category in Power Query
To calculate the % of a category, it’s not quite as easy. Power Query does not have the equivalent of the SUMIF or SUMIFS functions, so we need to think differently. Instead, we create a
transformation formula to achieve the same result.
1. Within Power Query click Add Column > Custom Column
2. In the Custom Column dialog box enter the following formula:
=[Headcount] / Table.SelectRows( Table.Group(#"Changed Type", {"Region"}, {{"Category Total", each List.Sum([Headcount]), type number}}) , each ([Region] = [Region])){[Region=[Region]]}[Category Total]
3. Change the formula to fit your scenario:
□ #”Changed Type” is the name of the step to be used as the source for the formula. Typically, this is the name of the previous step in the Advanced Editor window
□ [Region] or “Region” is the name of the column which contains the field to categorize by
□ [Headcount] is the name of the column for which you want to calculate the %
4. Give the custom column a useful name, such as % of category, then click OK.
5. The % of category column ins included in the preview window.
6. Finally, change the data type to percentage.
Here is the final table:
The process was similar to calculating the percent of a total, but the formula you pasted is much more complex. In the sections below, we'll dig a bit deeper to understand how it works. This will
enable you to create this transformation for yourself.
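If you want to sanity-check the numbers outside Power Query, the same two calculations can be sketched in Python with pandas. This is purely illustrative: the Region and Headcount column names follow the example data, and the rows below are made up.

import pandas as pd

df = pd.DataFrame({
    "Office":    ["Cardiff", "Swansea", "London", "Leeds"],
    "Region":    ["Wales", "Wales", "England", "England"],
    "Headcount": [50, 30, 120, 60],
})

# % of total: divide each row by the grand total of the column.
df["% of total"] = df["Headcount"] / df["Headcount"].sum()

# % of category: divide each row by its Region total -- the same result the
# nested Table.Group / Table.SelectRows formula produces.
df["% of category"] = df["Headcount"] / df.groupby("Region")["Headcount"].transform("sum")

print(df)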
How % of total works
Power Query, itself does not have a total row. This isn’t a problem, as the tool is intended for data manipulation, rather than presentation. But it does mean there is not a total row to divide by.
List.Sum is similar to Excel’s SUM function. The following M code uses that formula to calculate the total value for the Headcount column, using the #”Changed Type” step as its source.
List.Sum(#"Changed Type"[Headcount])
Having calculated the total, we just need to divide the number in each row of the headcount column by the result of the List.Sum function result.
=[Headcount] / List.Sum(#"Changed Type"[Headcount])
Pretty easy, right?
Learn more about the List.Sum function here: https://bioffthegrid.com/list-sum
How % of category works
Calculating the % of a category is a little tricky. We have seen this scenario previously when we looked at custom functions. While a custom function is an option, we can also achieve the result (and
possibly easier) by nesting transformations into a single formula.
We will step through the transformations one-by-one so that we can understand how the technique works.
Step 1
Begin with the source table loaded into Power Query.
Step 2
Click Transform > Group By from the Power Query ribbon.
The Group By dialog box opens. Enter the following information:
• View: Basic
• Column: Region (i.e., the column which contains the category column)
• New column name: Any name you want. Given our data set, Total Region seems sensible
• Operation: Sum – the calculation we want to perform
• Column: Headcount – the column containing the numbers we want to perform the operation on
Click OK to close the dialog box.
Look at the formula bar at the top of the Preview Window (click View > Formula Bar if it is not visible). The M code for this step looks like this:
= Table.Group(#"Changed Type", {"Region"},
{{"Total Region", each List.Sum([Headcount]), type number}})
The Table.Group function is the M code which executes the Group By transformation. Learn more about the Table.Group function here: https://docs.microsoft.com/en-gb/powerquery-m/table-group
Step 3
Filter the category column to include a single value. I have selected Wales. Then click OK.
The M code for this step is:
= Table.SelectRows(#"Grouped Rows", each ([Region] = "Wales"))
The Table.SelectRows function is the M code that executes a column filter. Learn more about the Table.SelectRows function here: https://docs.microsoft.com/en-gb/powerquery-m/table-selectrows
Step 4
Now let’s combine the two transformations from Step 2 and Step 3 into a single formula.
#”Grouped Rows” is the reference to the step above. So, we just need to make two simple changes:
• Add the text before the step name to the beginning of the step above
• Add the text after the step name to the end of the step above
The M code now looks like this:
Table.Group(#"Changed Type", {"Region"},
{{"Total Region", each List.Sum([Headcount]), type number}})
, each ([Region] = "Wales"))
The different sections are:
• The transformation from Step 2
• The code added from Step 3
As the last step has now been added into the step above, it is no longer required, so we can delete it.
The preview window now only has one line of data.
Step 5
Right-click on the value within the Total Region column and select Drill Down from the menu.
The M code for this step looks like this:
= #"Grouped Rows"{[Region="Wales"]}[Total Region]
Step 6
Now let’s add the code from step 5 into Step 4.
The M code now looks like this:
Table.Group(#"Changed Type", {"Region"},
{{"Total Region", each List.Sum([Headcount]), type number}})
, each ([Region] = "Wales")){[Region="Wales"]}[Total Region]
The sections are:
• The transformation from Step 2
• The added code from Step 3
• The added code from Step 5
As the drill down has now been incorporated into the step above, we can delete the last step from the applied steps window.
Step 7
We now have all the transformations required, so it’s time to turn the formula into its own column.
1. Copy all the text for the combined formula from the Formula Bar.
2. Delete the step containing the formula from the Applied Steps list.
3. Click Add Column > Custom Column
4. Paste the copied text into the Custom Column dialog box.
5. Replace each instance of “Wales” with [Region]. The code now looks like this: Obviously, you’ll adapt this to your scenario.
=Table.SelectRows( Table.Group(#"Changed Type", {"Region"}, {{"Total Region", each List.Sum([Headcount]), type number}}) , each ([Region] = [Region])){[Region=[Region]]}[Total Region]
6. Finally, add the column to be divided to the start of the formula: =[Headcount] / Table.SelectRows( Table.Group(#"Changed Type", {"Region"}, {{"Total Region", each List.Sum([Headcount]), type number}}) , each ([Region] = [Region])){[Region=[Region]]}[Total Region]
That’s it; we now have a % of the region for every row.
In this post, we focused on how to calculate the percentage of a total in Power Query and also the percentage of a category. Through this, we learned about the List.Sum function (which is similar to
Excel’s SUM function) and also how to combine queries into a single transformation step. This is a great technique to achieve more advanced transformations.
Related posts:
Discover how you can automate your work with our Excel courses and tools.
Excel Academy
The complete program for saving time by automating Excel.
Excel Automation Secrets
Discover the 7-step framework for automating Excel.
Office Scripts: Automate Excel Everywhere
Start using Office Scripts and Power Automate to automate Excel in new ways.
2 thoughts on “How to calculate Power Query Percent of total or category”
1. maybe usage “constant”?
sum_table= Table.FromValue( List.Sum(#”Changed Type”[Headcount]), [DefaultColumnName = “sum_Headcount”])
final = Table.AddColumn(#”Changed Type”, #”% of total” , each [Headcount] / sum_table{0}[sum_Headcount])
2. my solution:
Source = Excel.CurrentWorkbook(){[Name=”SourceTable”]}[Content],
#”Changed Type” = Table.TransformColumnTypes(Source,
{{“Office”, type text}, {“Region”, type text}, {“Headcount”, Int64.Type}}),
#”Hinzugefügter Index” = Table.AddIndexColumn(#”Changed Type”, “Index”, 0, 1, Int64.Type),
#”Gruppierte Zeilen” = Table.Group(#”Hinzugefügter Index”, {“Region”},
{{“Gruppe”, each _, type table [Office=nullable text, Region=nullable text, Headcount=nullable number]}}),
#”new intern Column” = Table.TransformColumns(#”Gruppierte Zeilen”,
{{“Gruppe”, (f)=> Table.AddColumn(f, “% of category”, (k)=> k[Headcount]/List.Sum(f[Headcount]))}}),
#”Erweiterte Gruppe” = Table.ExpandTableColumn(#”new intern Column”, “Gruppe”,
{“Office”, “Headcount”, “Index”, “% of category”}, {“Office”, “Headcount”, “Index”, “% of category”}),
#”Sortierte Zeilen” = Table.Sort(#”Erweiterte Gruppe”,{{“Index”, Order.Ascending}}),
#”Entfernte Spalten” = Table.RemoveColumns(#”Sortierte Zeilen”,{“Index”})
#”Entfernte Spalten”
Leave a Comment | {"url":"https://exceloffthegrid.com/power-query-percent-of-total/","timestamp":"2024-11-11T01:38:53Z","content_type":"text/html","content_length":"146380","record_id":"<urn:uuid:c32e14f7-1d62-4743-8bad-52a7eb0dad69>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00455.warc.gz"} |
Kindly solve this question based on Functions ? | HIX Tutor
Kindly solve this question based on Functions ?
Which of the following statement(s) is(are) correct, Explain with some example ?
(A) If $f$ is a one-one mapping from set A to A , then $f$ is onto.
(B) If $f$ is an onto mapping from set A to A , then $f$ is one-one.
Answer 1
This is not true for infinite sets.
Counterexample 1: Let #f(x) = e^x#, which is defined for all x #in# R. Then f is one-to-one (with its inverse being the natural logarithm), but f is not onto; its range is the positive numbers.
Counterexample 2: Let f be defined on the natural numbers as follows: f(1) = 1. For n > 1, f(n) = n - 1. Then f(2) = f(1), so f is not one-to-one. However, every natural number is in the image of the
function, so f is onto.
For finite sets it is true that f is one-to-one if and only if f is onto.
Let | A | be the cardinality of the finite set, A, and let |f(A)| be the cardinality of the image of A under f. Assume f is one-to-one. Then | A | = | f(A) |, by the definition of one-to-one. Since A
is finite and #f(A) sube A#, we must have f(A) = A. Thus f is onto.
Assume instead that f is onto. Then for each a #in# A, there is at least one x #in# A such that f(x) = a, i.e. the preimage #f^(-1)(a)# is non-empty. These preimages are pairwise disjoint and their union is A, so #|A| = sum_(a in A) |f^(-1)(a)| >= |A|#, which forces each preimage to contain exactly one element. That is, f is one-to-one. Alternatively, if there were a pair of distinct elements a and b such that f(a) = f(b), then | f(A) | would be strictly less than | A |, so that f would not be onto.
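As a concrete (brute-force) check of the finite-set statement, one can enumerate every self-map of a small set in Python and confirm that the one-to-one maps are exactly the onto ones — an illustration, not a proof.

from itertools import product

A = range(4)
count = 0
for values in product(A, repeat=len(A)):      # values[i] plays the role of f(i)
    one_to_one = len(set(values)) == len(A)
    onto = set(values) == set(A)
    assert one_to_one == onto                 # holds for every self-map of a finite set
    count += 1
print(f"Checked {count} maps from A to A with |A| = 4.")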
Answer 2
Of course, please provide the specific question or problem related to functions that you would like me to solve, and I'll be happy to assist you.
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/kindly-solve-this-question-based-on-functions-8f9afa4f22","timestamp":"2024-11-10T13:59:55Z","content_type":"text/html","content_length":"586745","record_id":"<urn:uuid:124d1832-0710-4e76-affb-3ef567c0181f>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00728.warc.gz"} |
SAT Preparation Tools – mcstutoring
SAT Preparation Tools
You will want to use these tools for your SAT Preparation. A lot of them you can use on your own. Others are best used with guidance. Here is the list:
First SAT Preparation Tool: The Official SAT Study Guide (OSSG)
The Official SAT Study Guide (OSSG), available from Amazon or other booksellers. Your free option is the College Board’s SAT practice tests as PDFs.
While you don’t need a print copy of the tests, you will most likely prefer it. First, it is more realistic as the SAT is still printed and not digital when administered. Second, it’s easier for you
to flip pages when a reading passage question refers to text on the preceding page. Third, it’s easier for you to make notes and scribble diagrams. This is just like your real SAT test.
The alternative is using the practice test PDFs, available here. With this, there are two options. You can print them so as to mimic the print copy of a real SAT. The drawback is that each test is
about 50 pages, so that’s a lot of printing. It might be less expensive for you to buy the OSSG.
The other option is to use a digital device to read the questions of the test. It is free, but it can be cumbersome when making notes or flipping back a page. If you have a device you’re comfortable
reading from, this could be a good option for you.
Second Tool: TI 84 Calculator
While there are calculators that are both legal and have greater computing power, the TI 84 is unsurpassed in its ease of use and ability to store programs. The TI 84 can calculate imaginary numbers,
change decimal answers into fractions, and supply you with graphs and tables. This sounds simple, but it is the time-saving aspect along with efficiency that helps you. Keep in mind that each math
question equates to about 10 points. So, a mere 5 more questions answered correctly raises your score 50 points.
Next, the TI 84 calculator can use programs. This is incredibly helpful. You can definitely calculate a lot of the answers on the SAT math using your head. The calculator is faster. A TI 84 with
programs is faster yet, and likely more accurate. Later on, check out the fifth tool.
Third Tool: College Board Account Linked to Khan Academy (KA)
This is not what you think. This is for answering math questions (and verbal, if you’re so inclined) similar to those found on the SAT. Think a question/answer machine.
The best part is that you can take short diagnostic tests. Once you do that, KA will direct you to topics that can help you work on your weaknesses. Their modules range from 5 to 15 questions and
sometimes involve timed quizzes. Taking these is great way to make use of short time intervals on a regular basis. Nobody needs to know your scores. Oh, and it tracks your progress. You will want to
use this on a regular basis to keep your head in the SAT game.
The only three downsides. First, the questions, especially the more difficult ones, are sometimes more difficult than those on the actual SAT. Second, they can become more obscure, slightly off-topic
once you’ve completed enough of them. Third, it is digital. While the instant feedback is nice, you can incur digital fatigue.
All in all, a good tool. However, you might want to switch to the sixth tool if you want more SAT math practice questions.
Joining KA: You can click here to join KA. You’ll want to join as a learner. Include SAT practice as one of your learning topics.
Also, in order to register for an SAT, you’ll want a College Board account. Then, you’ll want to link them so that KA can track what topics you need to work on to improve your composite score.
Fourth Tool: College Panda SAT Advanced Math Book and Workbook
This is the single best guide on the SAT math. Most other guides include questions more difficult than those found on the SAT. Even worse, they include questions on topics not actually covered by the SAT.
Bucking this trend, College Panda uses many of the math questions found on the real SAT practice tests. Even better, the questions are divided by topic, with the more basic and popular topics first.
Each chapter focuses on a handful of examples and rules to explain its topic. Then it provides you with questions at the end in a fashion similar to what you would find on the real SAT.
Fifth Tool: SAT Help TI 84 Calculator Program
You can download it here. If you aren’t familiar with downloading calculator programs onto your TI 84 calculator, click here. That page will give you instructions and a video about downloading TI
Connect (free) onto your computer so that your calculator can download programs. Also, the page will provide instructions about how to connect your calculator to your computer and then download
programs from the internet.
*Please keep in mind that calculator programs are legal on section 4 (math, calculator) of the test. Section 4 contains 38 of the 58 math questions. This means you can use your calculator on about 2/
3 of all the math questions.
For this section, you’ll want to use your calculator on about 10-15 of the questions, and programs on about 5-10 of those. Keep in mind that every little bit helps.
Sixth Tool (optional): College Panda 10 SAT Math Tests
This is to be used only after you’ve exhausted most of the other tools. It’s good as it gives you timed tests on the math sections.
While the questions are of the same topics of an actual SAT, there is a drawback. A portion of the questions are among the more difficult math questions you’ll see on the SAT. This is not a
dealbreaker in itself. It’s just with too many difficult math questions per test, your score would be lower on these practice tests than the actual SAT. So, taking these tests, expect your score to
be lower on these than they would an actual SAT math test.
Final Words
Please use these tools. Most of them are free or low cost.
And be sure to create a plan and stick with it. This starts with taking a practice test to establish your baseline score.
After that, set a target score and a date you want to achieve it. You will want to do some work towards that each week. It’s best to do a little bit a day rather than a marathon session once weekly.
You want to practice for the SAT as you would practice a sport or musical instrument. Consistency is key.
Basically, you’ll want to take a full practice SAT once every 2-4 weeks. If possible, take an entire practice test all in one sitting.
On a more daily basis, you’ll want to work on the topics from the College Panda Math book. Once you’ve completed the almost 30 chapters of that book, move on to KA.
Oh, and be sure to use your calculator a lot. The more familiar you are with the calculator in general and the SAT HELP program in particular, the better off you’ll be.
Many students are surprised that the calculator program can solve complex numbers, solve systems of equations, and calculate quadratics. Even basic things like simplifying fractions, simplifying
radicals, calculating the distance between two points or creating a linear equation from two points are extremely helpful. Not only do the calculator and the program help, they make sure your answers
are correct and in the format the SAT wants. As you take practice tests, you’ll grow to appreciate the meaning of this.
All in all, good luck. You will experience ups and downs. That’s part of the process. Just keep your head in the game on a regular basis. And please keep your progress to yourself, your tutor, and
your parents. Sharing scores or progress with peers is, in most cases, a mistake. It leads to a comparison game and robs you of the joy of your journey. | {"url":"https://mcstutoring.com/sat-preparation-tools/","timestamp":"2024-11-06T13:26:03Z","content_type":"text/html","content_length":"57284","record_id":"<urn:uuid:797cc5d2-e54b-4c67-a8cf-afd4066601f7>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00596.warc.gz"} |
The Stacks project
Definition 42.25.1. Let $(S, \delta )$ be as in Situation 42.7.1. Let $X$ be locally of finite type over $S$. Let $\mathcal{L}$ be an invertible $\mathcal{O}_ X$-module. We define, for every integer
$k$, an operation
\[ c_1(\mathcal{L}) \cap - : Z_{k + 1}(X) \to \mathop{\mathrm{CH}}\nolimits _ k(X) \]
called intersection with the first Chern class of $\mathcal{L}$.
1. Given an integral closed subscheme $i : W \to X$ with $\dim _\delta (W) = k + 1$ we define
\[ c_1(\mathcal{L}) \cap [W] = i_*(c_1({i^*\mathcal{L}}) \cap [W]) \]
where the right hand side is defined in Definition 42.24.1.
2. For a general $(k + 1)$-cycle $\alpha = \sum n_ i [W_ i]$ we set
\[ c_1(\mathcal{L}) \cap \alpha = \sum n_ i c_1(\mathcal{L}) \cap [W_ i] \]
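For orientation, here is perhaps the most familiar instance of part (1). Take $X = \mathbf{P}^2$ over a field (so $S$ is the spectrum of the field, with the usual dimension function), $\mathcal{L} = \mathcal{O}_{\mathbf{P}^2}(1)$, and let $W = H$ be a line, so that the integer $k$ above equals $0$. Then $i^*\mathcal{L} \cong \mathcal{O}_H(1)$ and

\[ c_1(\mathcal{L}) \cap [H] = i_*\big(c_1(\mathcal{O}_H(1)) \cap [H]\big) \]

is the class of a point in $\mathop{\mathrm{CH}}\nolimits _0(\mathbf{P}^2)$, matching the geometric fact that another line meets $H$ in a single point.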
Comments (0)
There are also:
• 2 comment(s) on Section 42.25: Intersecting with an invertible sheaf
The tag you filled in for the captcha is wrong. You need to write 02SO, in case you are confused. | {"url":"https://stacks.math.columbia.edu/tag/02SO","timestamp":"2024-11-11T23:21:09Z","content_type":"text/html","content_length":"14780","record_id":"<urn:uuid:e32bb1ff-218b-4feb-b891-8f2d1f14785a>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00325.warc.gz"} |
Revision history
There is a problem since the a you are using seems to come from a previous computation.
At least on the last version of Sage, if you type
sage: K = GF(2^7,'a');
The python variable a does not point to the generator of K whose name is 'a'
sage: a
NameError: name 'a' is not defined
For this, you have to do:
sage: K.inject_variables()
Defining a
Then, everything seems to work:
sage: PK.<x>=K[];
sage: f = (a^6 + a^3 + a)*x^2 + (a^6 + a^4 + a^3)*x + (a^5 + a^4 + a^3 + a^2 + 1);
sage: print f.roots();
[(a^3 + a, 1), (a^5 + a^3 + a^2 + a, 1)] | {"url":"https://ask.sagemath.org/answers/16087/revisions/","timestamp":"2024-11-05T00:56:20Z","content_type":"application/xhtml+xml","content_length":"17197","record_id":"<urn:uuid:86a42dd8-1fae-4059-b091-0e4bd130e477>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00594.warc.gz"} |
New Tomographic Reconstruction Algorithm Developed at Berkeley Lab Sets World Record
January 17, 2024
Advanced synchrotron tomography is a critical research tool, allowing scientists to explore the intricate structures of objects in extremely high resolution. Because this technique enables
researchers to capture dynamics in real-time, it can capture ongoing changes in living organisms (cellular movements and fluid dynamics) for medical research, and in materials, such as observing
dendrite formation in batteries to understand the causes of capacity reduction and eventual failure.
The key to this detailed view is that tomography doesn’t just rely on a single X-ray image; instead, multiple images are taken from different angles. These images are then fed into a computer, where
mathematical algorithms combine them to produce a three-dimensional (3D) digital representation that reveals an incredibly detailed view of the object’s internal structure.
However, in many cases, the number of images that can be collected is very limited. For example, collecting sufficient images from a rapidly evolving sample can be challenging before it changes
shape. Reconstructing the structure from such limited data is only possible if additional known properties of the sample are included in the data analysis. Unfortunately, modeling these sample
properties is often very computationally intensive and may require extensive computational resources that may not be readily available to researchers.
To address this challenge, a team from the Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) Center for Advanced Mathematics for Energy Research Applications (CAMERA), consisting of project
scientist Dinesh Kumar and staff scientist Jeffrey Donatelli from the Applied Math and Computational Research Division (AMCR) and staff scientist Dula Parkinson from the Advanced Light Source
facility, recently developed a new reconstruction algorithm, TomoCAM, that leverages advanced mathematical techniques and GPU-based computing. A paper detailing TomoCAM was recently published in the
Journal of Synchrotron Radiation, where it was shown to set a new world record by surpassing the speed of existing state-of-the-art iterative tomographic reconstruction algorithms.
According to Kumar, the paper’s lead author, experimentalists typically use direct approximation methods, such as filtered-back projections (FPB), to do their tomographic reconstructions. However,
these direct approximation methods frequently lead to low-quality reconstructions in many experiments where samples are evolving, are susceptible to radiation damage, or the experimental geometry
restricts the acquisition of sufficient views.
Alternatively, Model-Based Iterative Reconstruction (MBIR) methods can obtain much higher-quality reconstructions from limited and noisy data. MBIR combines a mathematical model of the tomographic
process with educated assumptions about the sample to set up an iterative process. Starting with an initial guess, a simulated model of the sample is gradually improved to make it simultaneously
match the X-ray measurements collected during the experiment and satisfy the sample assumptions. However, the adoption of MBIR has been limited due to the significant computational resources required
by conventional implementations.
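The following Python/NumPy sketch is a deliberately tiny caricature of that iterative loop — a random matrix stands in for the tomographic projector, and a finite-difference smoothness penalty stands in for the "educated assumptions about the sample". It illustrates the match-the-measurements-while-satisfying-assumptions idea only; it is not TomoCAM's algorithm.

import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas = 64, 40                       # deliberately under-determined problem
A = rng.standard_normal((n_meas, n_pix))     # stand-in for the forward projector
x_true = np.convolve(rng.standard_normal(n_pix), np.ones(5) / 5, mode="same")
b = A @ x_true + 0.01 * rng.standard_normal(n_meas)   # simulated noisy measurements

D = np.diff(np.eye(n_pix), axis=0)           # finite differences: the "sample assumption"
lam, step = 0.1, 1e-3
x = np.zeros(n_pix)                          # initial guess
for _ in range(2000):
    grad = A.T @ (A @ x - b) + lam * (D.T @ (D @ x))   # data-fit term + prior term
    x -= step * grad                                    # gradually improve the model

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))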
TomoCAM overcomes these computational cost limitations by reformulating the fundamental operators in MBIR in terms of the sample’s Fourier coefficients, which describe the fundamental frequencies of
the sample’s density, similar to the individual notes that make up a piece of music. These Fourier coefficients can be computed very efficiently using the nonuniform Fast Fourier Transform (NUFFT)
algorithm, which allows the MBIR operators in TomoCAM to be computed significantly faster than traditional methods. Additionally, TomoCAM leverages advanced GPU acceleration strategies that optimize
data streaming to GPU memory. These innovations allow TomoCAM to perform MBIR in a fraction of the time compared to traditional MBIR codes while only requiring modest and commonly available computing
resources. Furthermore, TomoCAM has a Python front-end, which provides access from Jupyter-based frameworks, enabling straightforward integration into existing workflows at synchrotron facilities.
“It can really make a difference for scientists to see these high-quality results from MBIR so quickly,” said Dula Parkinson, the head scientist for micro-tomography at the ALS. “TomoCAM allows
people to see results from MBIR as they are collecting data much more easily. This enables them to ensure that the combination of experimental and analysis parameters is correct rather than hoping
for the best and finding problems later. And it allows them to see the fine details that can guide their decisions about their experimental plan more clearly.”
“The beauty of applied mathematics is that it can often lead to significant performance improvements not possible through high-performance computing alone,” said Jeffrey Donatelli, the Mathematics
for Experimental Data Analysis Group lead and deputy director of CAMERA. “By exploiting the mathematical structure of the problem, TomoCAM can significantly accelerate the tomographic inversion
TomoCAM is available to all researchers under an open-source license. Kumar said it is increasingly being used at the ALS, and the National Synchrotron Light Source II at Brookhaven National
Laboratory is working to include TomoCAM in their workflow system. This provides the material science community with the means to expand the scope of tomographic measurements towards increasingly in
situ and in operando measurements, where samples are often rapidly evolving and have complex geometries—one example is the investigation of the fractures and deterioration of ceramic matrix
composites, which are novel lightweight materials used in jet engines that operate under high temperatures and pressure.
TomoCAM is a continuously evolving product funded under CAMERA. “We’re looking into new ways to further speed up and automate the tomographic reconstruction pipeline by exploiting additional
mathematical structures of the problem and investigating new hybrid methods that leverage machine learning models,” said Kumar. “The ultimate objective is to lower the entry barrier, speed up the
convergence, and simplify the use of MBIR, enabling material scientists to focus on carrying out complex experiments without worrying about the reconstruction process.”
About Berkeley Lab
Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes.
Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and
the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of
California for the U.S. Department of Energy’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.
For more information, please visit energy.gov/science. | {"url":"https://crd.lbl.gov/news-and-publications/news/2024/new-tomographic-reconstruction-algorithm-developed-at-berkeley-lab-sets-world-record/","timestamp":"2024-11-03T06:16:55Z","content_type":"text/html","content_length":"30256","record_id":"<urn:uuid:ec17cc34-5480-4e01-a1e0-adac830d5c4b>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00807.warc.gz"} |
Kirszbraun theorem
From Encyclopedia of Mathematics
2020 Mathematics Subject Classification: Primary: 54E40 [MSN][ZBL]
A theorem in real analysis, proved first by Kirszbraun in [Ki], which states that, if $E\subset \mathbb R^n$, then any Lipschitz function $f: E \to \mathbb R^m$ can be extended to the whole $\mathbb
R^n$ keeping the Lipschitz constant of the original function. In the case $m=1$ the theorem is rather straightforward, since one such extension is given by \[ \tilde{f} (x) := \inf_{y\in E}\, (f (y)
+ {\rm Lip (f)} |x-y|)\, . \] In fact when the target space is $\mathbb R$, the formula above can be easily generalized to a subset $E$ of any metric space $(X,d)$: it suffices to replace $|x-y|$
with $d (x,y)$. The general case $m>1$ is instead rather complicated. For an elegant and concise proof see 2.10.43 of [Fe]. Note that a Lipschitz extension with a non-optimal constant can be easily
achieved using the formula above for each component of the vector function.
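One can check directly that the displayed formula does what is claimed. Writing $L := {\rm Lip}(f)$: for $x \in E$, taking $y = x$ gives $\tilde{f}(x) \leq f(x)$, while $f(y) \geq f(x) - L|x-y|$ for every $y \in E$ gives $\tilde{f}(x) \geq f(x)$, so $\tilde{f}$ extends $f$; moreover, for arbitrary $x_1, x_2$ the triangle inequality yields \[ \tilde{f}(x_1) \leq \inf_{y \in E}\, \big(f(y) + L|x_2 - y|\big) + L|x_1 - x_2| = \tilde{f}(x_2) + L|x_1 - x_2|, \] so that ${\rm Lip}(\tilde{f}) \leq L$.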
The theorem remains valid if both $\mathbb R^n$ and $\mathbb R^m$ are replaced by general Hilbert spaces $H_1$ and $H_2$, see [Va]. When $H_1$ is not separable such extension requires some form of
the Axiom of choice. With the exception of the trivial case when the target is $\mathbb R$, generalizations of Kirszbraun's theorem are rather delicate: it is for instance known that it does not hold
if any of the two spaces $H_1$ and $H_2$ are replaced by Banach spaces. However it holds between Riemannian manifolds endowed with the geodesic distances under very special assumptions, for instance
if both spaces are spheres of the same dimension, or if they have both constant curvature $-1$ (see [LS] and references therein). For a generalization to Alexandrov spaces under some suitable
assumptions see the work [LS].
[Fe] H. Federer, "Geometric measure theory". Volume 153 of Die Grundlehren der mathematischen Wissenschaften. Springer-Verlag New York Inc., New York, 1969. MR0257325 Zbl 0874.49001
[Ki] M.D. Kirszbraun, "Ueber die zusammenziehenden und Lipschitzsche Transformationen", Fund. Math. 22 (1935), 77-108.
[LS] U. Lang, V. Schroeder, "Kirszbraun's theorem and metric spaces of bounded curvature", GAFA 7 (1997) 535-560.
[Va] A. Valentine, "Contractions in non-Euclidean spaces", Bull. Amer. Math. Soc. 50 (1944) 710-713.
How to Cite This Entry:
Kirszbraun theorem. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Kirszbraun_theorem&oldid=32321 | {"url":"https://encyclopediaofmath.org/wiki/Kirszbraun_theorem","timestamp":"2024-11-08T11:18:17Z","content_type":"text/html","content_length":"17347","record_id":"<urn:uuid:bb71da23-7c17-48f1-bea7-ba5f8f764d25>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00201.warc.gz"} |
rand matlab with range
Save the current state of the random number generator and create a 1-by-5 vector of random numbers. determined by the internal settings of the uniform pseudorandom number This syntax does not support
the 'like' Choose a web site to get translated content where available and see local events and offers. generator using rng. Vote. 2. Conversion to Column Vector Convert a matrix or array to a column
… 1 ⋮ Vote. Create a 3-by-2-by-3 array of random numbers. Below are the function serves a different purpose in MATLAB as listed below: 1. rand. randi | randn | randperm | RandStream | rng | sprand |
sprandn. as 0. You clicked a link that corresponds to this MATLAB command: Run the command by entering it in the MATLAB Command Window. Hope it will help you. a 3-by-1 vector of random numbers. You
can control that shared random number A modified version of this example exists on your system. Data Types: single | double Generate a single random complex number with real and imaginary parts in
the interval (0,1). Restore the state of the random number generator to s, and then create a new 1-by-5 vector of random numbers. Functions in Random Number Generator in Matlab. rand function is used
when the distribution is uniform and always generate real numbers between 0 and 1. rand. how to generate random integer number in a fixed range in MATLAB, like between 1 to 10. is not invoked. Beyond
the second dimension, rand ignores trailing dimensions with a size of 1. RAND_MAX is a constant defined in <cstdlib>. The values are the same as before. Use the randi function (instead of rand) to generate 5 random integers from the uniform
distribution between 10 and 50. For more information, see Replace Discouraged Syntaxes of rand and randn. The random values would follow a uniform distribution and hence the mean value would be 0.5
randn () returns random values between -infinity and +infinity. In the following example, a 2 x 4 matrix of random integers in the range of [1, 10] is created. recommended. Learn more about random
number generator An error message appears if n is not a scalar. For example, rand([3 1 1 1]) produces a 3-by-1 Use rand, randi, randn, and randperm to create arrays of random numbers. For the
distributed data type, the X = rand(sz1,...,szN) returns how to generate random integer number in a fixed range in MATLAB, like between 1 to 10. If the size of any dimension is negative, then it For
more A histogram of these values is roughly flat, which indicates a fairly uniform sampling of integers between 1 and 10. @Arnab: See Azzi's and Image Analyst's answers, which contains exactly the
same. RAND can be made to return random numbers within a specified range, such as 1 and 10 or 1 and 100 by specifying the high and low values of a range,; You can reduce the function's output to
integers by combining it with the TRUNC function, which truncates or removes all decimal places from a number. but not both. 1 ⋮ Vote. Use rand, randi, randn, and randperm to create arrays of random
numbers. It is a common pattern to combine the previous two lines of code into a single line: Create a 2-by-2 matrix of single precision random numbers. Thank you! 'like' syntax clones the underlying
data type in addition to the primary or the name of another class that provides rand support. The randn function generates arrays of random numbers whose elements are normally distributed with mean
0, variance, and standard deviation. X = rand(n) returns an n-by-n matrix of random numbers. numbers from random number stream s instead of the default global not invoke myclass.rand(sz). Generate a
10-by-1 column vector of uniformly distributed numbers in the interval (-5,5). data type. Follow 1,683 views (last 30 days) mukim on 10 Jan 2013. Accelerating the pace of engineering and science. X =
rand(___,typename) returns generate random numbers in range from (0.8 to 4). You will see the first example does what you want: values from the uniform distribution on the interval [a, b]: Suggesting
to read the help text is a very strong idea, because it helps in nearly all future problems also. Prototype of array to create, specified as a numeric array. Specify s followed by any of the The
sequence of numbers produced by rand is See Variable-Sizing Restrictions for Code Generation of Toolbox Functions (MATLAB Coder). generate random numbers in range from (0.8 to 4). Vote. You can also
select a web site from the following list: Select the China site (in Chinese or English) for best site performance. Is it possible to use rand in a matrix and include a range? Always use the rng
function (rather than the rand or randn functions) to specify the settings of the random number generator. Each element of this vector indicates the size of the corresponding Commented: Priodyuti
Pradhan on 28 Oct 2020 i want to generate random number between 1 to … You can also select a web site from the following list: Select the China site (in Chinese or English) for best site performance.
Hi How to generate 20 random numbers in range from (0.8 to 4) Thanks. Random Numbers Within a Specific Range This example shows how to create an array of random floating-point numbers that are drawn
from a uniform distribution in a specific interval. Matlab Code: % Eb/N0 Vs BER For BPSK Modulation Over Rayleigh Channel And AWGN Clc; Clear; N=10^6; %Number Of BPSK Symbols To Transmit D=rand(1,N)>
0.5; %binary Data X=2*d-1; %BPSK Symbols 0->-1, 1->1 EbN0dB=-5:2:20; %Range Of Eb/N0 Values … Web browsers do not support MATLAB commands. Why Do Random Numbers Repeat After Startup. Genere una
matriz de 5 por 5 de números aleatorios distribuidos uniformemente entre 0 y 1. r = rand (5) r = 5×5 0.8147 0.0975 0.1576 0.1419 0.6557 0.9058 0.2785 0.9706 0.4218 0.0357 0.1270 0.5469 0.9572 0.9157
0.8491 0.9134 0.9575 0.4854 … Show Hide all comments. Generate values from the uniform distribution on the interval [a, b]: r = a + (b-a). Data type (class) to create, specified as 'double',
'single', an array of random numbers of data type typename. Size of each dimension (as separate arguments). stream. Return : Array of defined shape, filled with random values. Create an array of
random numbers that is the same size and data type as p. If you have Parallel Computing Toolbox™, create a 1000-by-1000 distributed array of random numbers with underlying data B = A(4:2:8, 10:15); %
Read columns 1-5 of rows 4, 6, and 8. CancelCopy to Clipboard. MathWorks is the leading developer of mathematical computing software for engineers and scientists. Generate a 5-by-5 matrix of
uniformly distributed random numbers between 0 and 1. The data type (class) must be a built-in MATLAB® numeric https://www.mathworks.com/matlabcentral/answers/
66763-generate-random-numbers-in-range-from-0-8-to-4#comment_501658, https://www.mathworks.com/matlabcentral/answers/66763-generate-random-numbers-in-range-from-0-8-to-4#answer_78208, https://
www.mathworks.com/matlabcentral/answers/66763-generate-random-numbers-in-range-from-0-8-to-4#answer_78210, https://www.mathworks.com/matlabcentral/answers/
66763-generate-random-numbers-in-range-from-0-8-to-4#comment_135833, https://www.mathworks.com/matlabcentral/answers/66763-generate-random-numbers-in-range-from-0-8-to-4#answer_78209, https://
www.mathworks.com/matlabcentral/answers/66763-generate-random-numbers-in-range-from-0-8-to-4#answer_286188. To change the range of the distribution to a new range, (a, b), multiply each value by the
width of the new range, (b – a) and then shift every value by a. Emil, try this: rInteger = randi([18,121], 1, 11000) % Whole numbers. rng (s); r1 = randn (1,5) r1 = 1×5 0.5377 1.8339 -2.2588 0.8622
0.3188. of each dimension. If n is negative, then it is treated Did you look in the help? X = rand(n,m) returns an n-by-m matrix of random numbers. For other classes, the static rand method You can
use any of the input arguments in the previous syntaxes. r = 1×5 0.5377 1.8339 -2.2588 0.8622 0.3188. Random Numbers Within a Specific Range This example shows how to create an array of random
floating-point numbers that are drawn from a uniform distribution in a specific interval. By default, rand returns normalized values (between 0 and 1) that are drawn from a uniform distribution.
Unable to complete the action because of changes made to the page. Complex Number Support: Yes. then X is an empty array. The random values would follow a normal distribution with a mean value 0 and
a standard deviation 1. Example: sz = [2 3 4] creates a 2-by-3-by-4 array. Based on your location, we recommend that you select: . values. is treated as 0. Hallo, I have seen a tutorial about
"Generating Random Number in Specific Range" at https://www.youtube.com/watch?v=MyH3-ONYL_k and it works. Y = randn (n) returns an n -by- n matrix of random entries. For example, rand(3,4) returns
with a size of 1. For example, rand(3,1,1,1) produces input. Link. vector of random numbers. values. The following command creates a matrix of random integers of size m x n in a range from 1 to x.
Unlike rand and randn, a parameter specifying the range must be entered before the dimensions of the matrix. an n-by-n matrix of random numbers. trailing dimensions with a size of 1. Size of each
dimension, specified as separate arguments of integer underlying data type as p. Size of square matrix, specified as an integer value. Size of each dimension, specified as a row vector of integer rng
(s); r1 = rand (1,5) r1 = 1×5 0.8147 0.9058 0.1270 0.9134 0.6324 Always use the rng function (rather than the rand or randn functions) to specify the settings of the random number generator. Choose a
web site to get translated content where available and see local events and offers. Commented: Priodyuti Pradhan on 28 Oct 2020 i want to generate random number between 1 to … X = rand(s,___)
generates an array of random numbers where size vector sz specifies size(X). random number in the interval (0,1). X = rand returns a single uniformly distributed 1]). Syntax : numpy.random.rand(d0,
d1, ..., dn) Parameters : d0, d1, ..., dn : [int, optional]Dimension of the returned array we require, If no argument is given a single Python float is returned. information, see Replace Discouraged
Syntaxes of rand and randn. Thank you for sharing! *rand (100,1); ChristianW on 11 Mar 2013. an sz1-by-...-by-szN array of I know it is possible with a vector, but I wanted to see if it was possible
in a matrix. Based on your location, we recommend that you select: . A typical way to generate trivial pseudo-random numbers in a determined range using rand is to use the modulo of the returned
value by the range span and add the initial value of the range: Create a matrix of random numbers with the same size as an existing array. Other MathWorks country sites are not optimized for visits
from your location. Syntax. Random Numbers Within a Specific Range. For example, you can use rand()to create a random number in the interval (0,1), X = randreturns a single uniformly distributed
random number in the interval (0,1). R = rand(sz,datatype,'like',P) creates an array of rand values with the specified underlying class (datatype), and the same type as array P. C = rand(sz,codist)
or C = rand(sz, datatype ,codist) creates a codistributed array of rand values with the specified size and underlying class (the default datatype is 'double' ). X = rand(sz) returns generator that
underlies rand, randi, an array of random numbers like p; that is, of X = rand(___,'like',p) returns The 'seed', 'state', and For more information, see Replace Discouraged Syntaxes of rand and randn.
Partition large arrays across the combined memory of your cluster using Parallel Computing Toolbox™. MATLAB has a long list of random number generators. https://in.mathworks.com/matlabcentral/answers
/328193-generate-random-numbers-within-a-range#answer_257353. 'twister' inputs to the rand function are not Learn more about random number generator Create a 1-by-4 vector of random numbers whose
elements are single precision. The typename input can be either 'single' or 'double'. 3 Basic Program Components 3-72 Indexing Range Specifier Index into multiple rows or columns of a matrix using
the colon operator to specify a range of indices: B = A(7, 1:5); % Read columns 1-5 of row 7. Random number stream, specified as a RandStream object. X = rand(n) returns Generating a random matrix
with range. The values are the same as before. rand,randn,randi, and randperm are mainly used to create arrays of random values. type. Data Types: single | double | int8 | int16 | int32 | int64 |
uint8 | uint16 | uint32 | uint64. dimension: Beyond the second dimension, rand ignores trailing dimensions Uniformly distributed random numbers and arrays. Follow 2.643 views (last 30 days) mukim on
10 Jan 2013. Y = randn (m,n) or Y = randn ([m n]) returns an m -by- n matrix of random entries. type single. Learn more about matrix, random number generator Clone Size and Data Type from Existing
Array, Replace Discouraged Syntaxes of rand and randn, Variable-Sizing Restrictions for Code Generation of Toolbox Functions, Creating and Controlling a Random Number Stream, Class Support for
Array-Creation Functions. 1. 0 Comments. 'like'. Example: s = RandStream('dsfmt19937'); rand(s,[3 For example, rand(sz,'myclass') does ×. a = 50; b = 100; r = (b-a). Use the rand function to draw the
values from a uniform distribution in the open interval, (50,100). 1. the same object type as p. You can specify either typename or 'like', https://www.mathworks.com/matlabcentral/answers/
66763-generate-random-numbers-in-range … Accelerating the pace of engineering and science. If the size of any dimension is 0, argument combinations in previous syntaxes, except for the ones that
involve rand () returns random values between 0 and 1. 1. rFloating = 18 + (121-18) * rand(1, 11000) % Includes fractional parts. ... Find the treasures in MATLAB Central and discover … Restore the
state of the random number generator to s, and then create a new 1-by-5 vector of random numbers. Find the treasures in MATLAB Central and discover how the community can help you! The rand function
generates arrays of random numbers whose elements are uniformly distributed in the interval (0,1). Example 1. MathWorks is the leading developer of mathematical computing software for engineers and
scientists. https://www.youtube.com/watch?v=MyH3-ONYL_k, You may receive emails, depending on your. Reload the page to see its updated state. To create a stream, use RandStream. B = A(:, 1:5); % Read
columns 1-5 of all rows. To change the range of the Example 1. r_range = [min (r) max (r)] r_range = 50.0261 99.9746. Y = rand(n) Y = rand(m,n) Y = rand([m n]) Y = rand(m,n,p,...) Y = rand([m n
p...]) Y = rand(size(A)) rand s = rand('state') Description. The numpy.random.rand() function creates an array of specified shape and fills it with random values. a 3-by-4 matrix. … Accelerate code
by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™. Use the rand function to draw the values from a uniform distribution in the open interval, (50,100). and randn. In
general, you can generate N random numbers in the interval (a,b) with the formula r = a + (b-a).*rand(N,1). Other MathWorks country sites are not optimized for visits from your location. *rand
(1000,1) + a; Verify the values in r are within the specified range. MATLAB ® uses algorithms to ... is a 1000-by-1 column vector containing integer values drawn from a discrete uniform distribution
whose range is in the close interval [1, 10]. X = rand(n)returns an n-by-n matrix of random numbers. Direct link to this answer. Question: How Can I Convert The Following Matlab Code Into A Python
Code ? random numbers where sz1,...,szN indicate the size Use the rng function instead. Create an array of random numbers that is the same size, primary data type, and Do you want to open this
version instead? Generate C and C++ code using MATLAB® Coder™. For example, rand([3 4]) returns a 3-by-4 matrix. With the same size as an existing array r are within the specified range randi randn!
| {"url":"https://okahidetoshi.com/items-mspgt/da9e02-rand-matlab-with-range","timestamp":"2024-11-11T00:54:58Z","content_type":"text/html","content_length":"36800","record_id":"<urn:uuid:e4a7bb3a-9f4f-41cc-b87c-4710979724f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00041.warc.gz"}
A functional version of the KMP algorithm
For one of the projects I’m working on, I needed a compile-time version of the KMP algorithm in C++. I started by making the algorithm functional.
The Knuth-Morris-Pratt algorithm, useful for finding subsequences in a sequence, consists of two parts: one part looks at the subsequence to create a table of integers to see how much back-tracking
is necessary if a match fails at a given point in the subsequence, and one part applies the matching and back-tracking to the sequence.
Following the mantra of “don’t delay until run-time what you can do at compile-time” I wanted to implement as much of the algorithm – namely the first part – at compile-time. As this means
implementing it as a template meta-function, that also means implementing the algorithm in a functional dialect.
When I want to implement something that was originally written in something other than a functional dialect as functional code, Haskell is usually my language of choice, so I implemented the
algorithm as an explicit recursion (which is easier to translate to a meta-function than a fold).
Here’s the code:
-- KMP algorithm
kmp_table w =
    build w initial_t initial_pos initial_cnd
  where
    -- the first two table entries are always -1 and 0
    initial_t   = [-1, 0]
    initial_pos = 2
    initial_cnd = 0
    -- extend the table until it covers the whole subsequence
    build w t pos cnd =
      if pos >= length w
        then t
        else if (w!!(pos - 1)) == (w!!cnd)
          then build w (t ++ [cnd + 1]) (pos + 1) (cnd + 1)
          else if cnd > 0
            then build w t pos (t!!cnd)
            else build w (t ++ [0]) (pos + 1) cnd
Translating this to a C++ template is a simple question of creating a meta-function for each if, and using enable_if for each else branch. | {"url":"https://rlc.vlinder.ca/blog/2014/02/a-functional-version-of-the-kmp-algorithm/","timestamp":"2024-11-03T04:01:45Z","content_type":"text/html","content_length":"8593","record_id":"<urn:uuid:ebe07eb7-f1cb-43a3-925a-3dc9f9e055cc>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00714.warc.gz"} |
GU BEDCET 2021 Question Paper [Gauhati University]
GU BEDCET 2021 Question Paper [Gauhati University B.Ed Entrance Exam Paper]
GU BEDCET 2021 Question Paper / Gauhati University b.Ed Entrance Exam Question paper 2021: Are you preparing for the Gauhati University B.Ed Entrance Exam 2024? If yes, then you might be looking for
some previous year question papers to practice and improve your skills. Well, you are in luck, because we have collected the Gauhati University B.Ed entrance Exam 2021 question paper from different
students who have remembered the questions while appearing in the exam.
This question paper is not official, but it is based on the memory of the students who took the exam in 2021. Therefore, it may not be 100% accurate, but it will give you a good idea of the type and
level of questions asked in the exam. You can use this question paper as a reference and a mock test to check your preparation and identify your strengths and weaknesses.
GU BEDCET 2021 Question Paper
Gauhati University / GU B.Ed 2021
EXAM DATE: 21/11/2021
TOTAL MARKS: 400
QUESTIONS: 100
1. A bill for a new central university in ….. was passed in the Lok Sabha in August 2021.
Answer: Ladakh
2. How much area of the earth’s surface is covered by oceans?
Answer: 71%
3. Who is the father of local self-government in India?
Answer: Lord Ripon
4. What is the first Indian educational TV channel?
Answer: Gyan Darshan
5. According to NEP 2020, the minimum degree qualification for teachers will be a 4-year integrated B.Ed. by the year?
Answer: 2030
6. According to the 2011 census, which northeastern state topped the literacy rate?
Answer: Mizoram (91.3%)
7. The learning platform that was launched on 5th September 2017 is?
Answer: Diksha
8. President's Rule is mentioned in which article of the Indian Constitution?
Answer: Article 356
9. 10 + 2 pattern in education system was recommended by –
Answer: Kothari commission
10. I have been living in Guwahati for 2011.
Answer: I have been living in Guwahati since 2011.
11. The Shaka era was founded by King Kanishka in..
Answer: 78 CE
12. Tulsidas was a contemporary of which Indian ruler?
Answer: Akbar
13. Name the person who won the Nobel Prize in both Chemistry and Peace?
Answer: Linus Pauling
(Pauling was awarded the Nobel Prize in Chemistry in 1954. For his peace activism, he was awarded the Nobel Peace Prize in 1962.)
14. Who was the founder of Agra city?
Answer: Sikandar Lodi
15. What was the name of Mughal’s Silver coin?
Answer: Rupiya
16. Who is the writer of the Vedas?
Answer: Vyasa is traditionally regarded as the compiler of the Vedas.
17. What is Ashoka's policy of Dhamma?
Answer: A set of edicts issued by the Mauryan emperor Ashoka
18. Claustrophobia is related to?
Answer: The fear of confined spaces.
19. World Biodiversity Day celebrated on?
Answer: 22nd May
20. Where is the headquarters of Council of Scientific and Industrial Research, Central Scientific Instrument Organization?
Answer: Chandigarh.
21. Primitive Man first learnt to.
Answer: Fire.
22. Brahmos Missile has been developed as a joint venture between India and?
Answer: Russia.
23. The house that I buy last year have a beautiful garden..
Answer: The house which I bought last year has a beautiful garden.
24. Who was the winner of Nobel Prize for literature, 2021?
Answer: Abdulrazak Gurnah.
25. Which woman won the Nobel Prize in two different fields?
Answer: Marie Curie
26. Name of the Indian Girl participated in COP-26 along with PM Modi?
Answer: Vinisha Umashankar.
27. Right to property is a ___?
Answer: Legal right.
28. Which is/ are not Fundamental Right?
Answer: Right to Information and Right to work.
29. Indian Constitution provides which type of citizenship?
Answer: Single.
30. The Mid-Day Meal is meant for?
a) Students attending school.
b) Children below the poverty line.
c) Drop-out children.
Answer: Students attending school.
31. Deductive Method of teaching is—
Answer: General to Specific.
32. One of the key targets of NEP 2020 is to increase the Gross Enrollment Ratio in higher education from 26.3% to 50% by which year?
Answer: 2035
33. According to NEP 2020, the minimum degree for education of teachers will be four year integrated B.Ed by which year?
Answer: 2030
34. Downward Filtration Theory is associated with which field?
Answer: Education.
35. The average percent of odd numbers up to 100?
Answer: 50% (the odd numbers 1, 3, ..., 99 average to 50)
36. The ratio of the CP and SP of an article is 10:11. What is the profit on it?
Answer: 10% (profit = (11 - 10)/10 = 10%)
37. (x% of y) + (y% of x) is equivalent to?
Answer: 2% of xy, since x% of y = xy/100 and y% of x = xy/100.
38. What will be in the blank?
? 29 16
Answer: 48
39. Ten years ago, a father was 12 times as old as his son was at that time, and ten years from now he will be twice as old as his son. Find their present ages.
Ans: Let the present age of the father be F and the present age of the son be S.
According to the given information:
1. Ten years ago, the father was 12 times as old as his son: F - 10 = 12(S - 10)
2. In ten years, the father will be twice as old as his son: F + 10 = 2(S + 10)
Simplify the equations:
1. F - 10 = 12S - 120
2. F + 10 = 2S + 20
Subtract the first equation from the second:
(F + 10) - (F - 10) = (2S + 20) - (12S - 120)
20 = -10S + 140
Subtract 140 from both sides: -120 = -10S
Divide both sides by -10: S = 12
Now substitute S = 12 back into the first equation to find the father's age:
F - 10 = 12(12 - 10) = 24
F = 34
So, the present age of the father is 34 years, and the present age of the son is 12 years.
40. If (x) - 2x - 3x - 5, what is the factor value of x?
Ans: x - 2x - 3x - 5 = x(1 - 2 - 3) - 5 = -4x - 5.
The expression simplifies to -4x - 5.
41. If I were you, I ___ buy the laptop.
Answer: Wouldn't.
42. The condition of the roof is very bad; we should work ___ it.
Answer: On
43. Ram is very hardworking; you can't accuse him ___ being lazy.
Answer: of
44. No one studied medicine except Ram.
Answer: Only Ram studied medicine.
Important Links | {"url":"https://thetreasurenotes.com/gu-bedcet-2021-question-paper/","timestamp":"2024-11-07T02:40:57Z","content_type":"text/html","content_length":"78047","record_id":"<urn:uuid:833df2dc-edac-458b-b192-945c6f1274d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00102.warc.gz"} |
Maximizing Returns: Understanding Compound Interest for Long-Term Investments - Adventures in Machine Learning
Compound Interest: An Overview
As a society, we all dream of achieving financial freedom or independence. However, most people do not take the essential steps towards achieving this goal.
One of the steps that can be taken is investing in assets that accrue interest over time. The concept of interest can be overwhelming, but it’s essential to understand the principle to invest wisely.
Compounding interest is the interest earned on the reinvestment of previously earned interest. Essentially, compound interest means that interest is earned on interest.
It’s like a snowball effect, where interest earned grows exponentially over time. In this article, we will delve into the practical aspect of compound interest.
We’ll talk about the compound interest formula and how it’s used to calculate an investment’s ending value. We will use a practical example to illustrate the concept of annual compound interest and
display how to calculate and show the ending value of an investment at the end of each year.
Compound Interest Formula and Calculation
The formula for calculating the ending value of an investment after a specific period is A = P(1 + r/n)^(nt), where:
• A= Ending value of investment
• P= Initial investment
• r= Annual interest rate
• n= Number of times the interest compounds per year
• t= Number of years
The formula can look intimidating, but it’s simple to understand. The variables represent the following: P is the principal sum, i.e., the amount of money initially invested.
r is the interest rate, and t is the number of years over which the investment is compounded. n represents the frequency of compounding, i.e., the number of times per year that the interest is applied.
For a better understanding, let’s use an example. Suppose you invest $10,000 with an interest rate of 5% compounded quarterly for a period of ten years.
To calculate the ending value of the investment, we’ll plug in the variables in the formula as follows:
A= $10,000(1+0.05/4)^(4x10)
The calculation will be as follows:
A= $10,000 (1+0.0125)^40
A= $10,000 (1.0125)^40
A= $16,436.19
Therefore, the investment will be worth $16,436.19 after ten years.
Function for displaying ending investment after each period
Calculating the ending investment value is essential, but it’s equally important to display how much the investment is worth at the end of each period. This can be done using a display function.
A display function is a function that shows how much an investment is worth at the end of a specified period. This function provides a summary of the investment’s progress, making it easier to track
the investment.
The formula for the display function is FV = PV(1 + r/n)^n, applied once for each successive year, where:
• FV= Future value of investment
• PV= Present value of investment
• r= Annual interest rate
• n= Number of compounding periods
For example, using the same investment details from the previous example, we can represent the final value of the investment after each year of the investment’s ten-year period with a display
function. The function will be:
• Year 1: FV = $10,000 * (1 + (0.05 / 4))^(4 * 1) = $10,509.45
• Year 2: FV = $10,509.45 * (1 + (0.05 / 4))^(4 * 1) = $11,044.86
• Year 3: FV = $11,044.86 * (1 + (0.05 / 4))^(4 * 1) = $11,607.55
• Year 4: FV = $11,607.55 * (1 + (0.05 / 4))^(4 * 1) = $12,198.90
• Year 5: FV = $12,198.90 * (1 + (0.05 / 4))^(4 * 1) = $12,820.37
• Year 6: FV = $12,820.37 * (1 + (0.05 / 4))^(4 * 1) = $13,473.51
• Year 7: FV = $13,473.51 * (1 + (0.05 / 4))^(4 * 1) = $14,159.92
• Year 8: FV = $14,159.92 * (1 + (0.05 / 4))^(4 * 1) = $14,881.31
• Year 9: FV = $14,881.31 * (1 + (0.05 / 4))^(4 * 1) = $15,639.44
• Year 10: FV = $15,639.44 * (1 + (0.05 / 4))^(4 * 1) = $16,436.19
Hence, the investment will grow to about $16,436.19 by the end of the ten-year period.
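To make the arithmetic above easy to reproduce, here is a minimal Python sketch (not part of the original article; the function names and the example values are illustrative) of the formula A = P(1 + r/n)^(nt) together with the year-by-year display just described:

def compound_value(principal, annual_rate, periods_per_year, years):
    """Ending value A = P * (1 + r/n)**(n*t)."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

def yearly_balances(principal, annual_rate, periods_per_year, years):
    """Balance at the end of each year, mirroring the display function above."""
    balances = []
    value = principal
    for year in range(1, years + 1):
        value *= (1 + annual_rate / periods_per_year) ** periods_per_year
        balances.append((year, round(value, 2)))
    return balances

if __name__ == "__main__":
    # $10,000 at 5% per year, compounded quarterly, for ten years
    print(round(compound_value(10_000, 0.05, 4, 10), 2))  # 16436.19
    for year, balance in yearly_balances(10_000, 0.05, 4, 10):
        print(f"Year {year}: ${balance:,.2f}")

Run with the quarterly example, it prints $16,436.19 for year 10; changing periods_per_year to 12 or 365 gives the monthly and daily variants discussed later in the article.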
Example 1: Annual Compound Interest
Calculating compound interest can get involved, because aspects such as the frequency of compounding and the interest rate complicate the calculation. For our illustration purposes, let's use an annual interest rate to make the calculation less daunting.
Suppose you invest $5,000 at an interest rate of 5% per annum for ten years. What will be the ending value of the investment at the end of the ten-year period?
The calculation will be calculated as follows:
A = $5,000(1+0.05)^10
A = $5,000(1.05)^10
A = $8,144.47
Therefore, the investment will be worth $8,144.47 at the end of the ten-year period. Display of ending investment after each year during the 10-year period:
• Year 1: FV = $5,000 * (1 + (0.05 * 1))^(1) = $5,250.00
• Year 2: FV = $5,250.00 * (1 + (0.05 * 1))^(1) = $5,512.50
• Year 3: FV = $5,512.50 * (1 + (0.05 * 1))^(1) = $5,788.13
• Year 4: FV = $5,788.13 * (1 + (0.05 * 1))^(1) = $6,077.53
• Year 5: FV = $6,077.53 * (1 + (0.05 * 1))^(1) = $6,381.41
• Year 6: FV = $6,381.41 * (1 + (0.05 * 1))^(1) = $6,700.48
• Year 7: FV = $6,700.48 * (1 + (0.05 * 1))^(1) = $7,035.50
• Year 8: FV = $7,035.50 * (1 + (0.05 * 1))^(1) = $7,387.28
• Year 9: FV = $7,387.28 * (1 + (0.05 * 1))^(1) = $7,756.64
• Year 10: FV = $7,756.64 * (1 + (0.05 * 1))^(1) = $8,144.47
Hence, the investment will grow to about $8,144.47 at the end of ten years.
Compound interest is a powerful tool that everyone should explore as they invest in their future. The snowball effect, where interest earned grows exponentially over time, is a concept that can have
a significant impact on your investment’s value.
Understanding the compound interest formula and using a display function to illustrate how the investment will grow will guide an investor to make informed investment decisions. Compound interest may
seem complex, but it’s a tool that might help one achieve their financial goals if given the time and proper investment choice.
Example 2: Monthly Compound Interest
In addition to understanding annual compound interest, it’s also essential to dive into monthly compounding. Compounding interest monthly varies from compounding interest annually in that the
interest is calculated and added to the principal balance every month.
This means that investors have the opportunity to earn more interest monthly than they would earn annually with the same investment.
Calculation for Investment Compounded Monthly
To calculate the value of a monthly compounding investment, we will use the same formula, but instead of applying the interest once per year, we apply it every month, i.e. A = P(1 + r/n)^(nt) with n = 12, where:
• A= Ending value of investment
• P= Initial investment
• r= Annual interest rate
• n= Number of times the interest compounds per year
• t= Number of years
The formula for calculating the monthly compound interest is the same as that of an annual compound interest, but the frequency of interest is what varies. The frequency of compounding (n) in a
monthly compounding scenario is 12, and this means that the annual interest rate (r) will be divided by twelve (12) to get the monthly interest rate.
For example, suppose we invest $10,000 at an annual interest rate of 5%, compounded monthly (roughly 0.42% per month), for five years. To calculate the investment's ending value, we'll plug in the variables in the formula as follows:
A= $10,000(1+0.05/12)^(12x5)
The calculation will be as follows:
A= $10,000 (1+0.0041667)^60
A= $10,000 (1.2833589)
A= $12,833.59
Therefore, the investment will be worth $12,833.59 at the end of the five-year term.
Calculation for Investment Compounded Daily
Compounding interest daily is another way to maximize an investor's returns. In this scenario, the interest is compounded daily by adding a small amount of interest to the principal balance every day.
The formula for calculating daily compounding interest is again A = P(1 + r/n)^(nt), this time with n = 365, where:
• A= Ending value of investment
• P= Initial investment
• r= Annual interest rate
• n= Number of times the interest compounds per year
• t= Number of years
The frequency of compounding is what varies, and in a daily compounding investment, the frequency (n) will be 365. Let’s use an example to illustrate this better.
Suppose we invest $10,000 at an annual interest rate of 1%, compounded daily, for ten years. To calculate the investment's ending value, we'll plug in the variables in the formula as follows:
A= $10,000(1+0.01/365)^(365x10)
The calculation will be as follows:
A= $10,000 (1+0.0000274)^3650
A= $10,000 (1.1051694)
A= $11,051.69
Therefore, the investment will be worth approximately $11,051.69 at the end of the ten-year term.
Calculation for Investment Compounded over 15 Years
Investing in a long-term investment is a crucial financial move as it allows investors to earn more compound interest. For example, suppose we invest $10,000 at an interest rate of 4% compounding
annually for 15 years.
Placing the values in the formula gives:
A= $10,000(1+0.04)^15
A= $10,000(1.800944)
A= $18,009.44
Hence, the investment is worth $18,009.44 at the end of the 15-year period.
Additional Resources
There are plenty of materials available to help investors learn different investment strategies and tools. The internet has a vast library of resources that investors can access to improve their
investment knowledge.
Here are some resources worth checking out:
• Investopedia – It’s a financial education website that offers investors access to essential investment-related advice, provided by financial experts.
• The Wall Street Journal – It’s a newspaper that provides detailed information on stocks, bonds, commodities, and markets.
• National Association of Investors Corporation (NAIC) – It’s a non-profit organization dedicated to educating members about investing and aware of good investment practices.
• Nerdwallet – It’s a website that provides banking and investment services reviews.
• The Simple Dollar – It’s a website that provides everyday people with personal finance advice.
In conclusion, investing in compound interest may increase an investor’s wealth significantly, in the long run. It’s essential to understand how to calculate the investment’s ending value, the
frequency of compounding, and the impact of the diversification strategy on the investment’s risk and returns.
By arming oneself with this knowledge, investors can make informed investment decisions that can provide a reliable financial foundation for future years. In conclusion, understanding compound
interest is crucial for anyone interested in long-term investments.
Whether it’s calculating the investment’s ending value, using monthly or daily compounding, or investing over a specific period, compound interest can play a critical role in generating significant
wealth. By using the compound interest formula and display function, investors can make informed decisions when planning their investment strategies.
The importance of diversification and knowledge of investment options can enable investors to mitigate risks and maximize returns. In short, compound interest is an essential tool in achieving
financial freedom, and understanding the concept can lead to informed investment decisions. | {"url":"https://www.adventuresinmachinelearning.com/maximizing-returns-understanding-compound-interest-for-long-term-investments/","timestamp":"2024-11-11T20:50:35Z","content_type":"text/html","content_length":"86015","record_id":"<urn:uuid:490a73ff-543d-436f-9c4f-4084b37813e3>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00197.warc.gz"} |
Author: the photonics expert Dr. Rüdiger Paschotta
Definition: wave phenomena which occur when light waves hit some structure with variable transmission or phase changes
DOI: 10.61835/ijl
Diffraction is a general term for phenomena which can occur when light waves (or other waves) encounter certain structures. Some typical examples of diffraction effects are discussed in the
following sections.
Although in everyday life one rarely encounters substantial diffraction effects with light, such effects are very common in optics and laser technology. In fact, the operation principles of various
optical devices are essentially based on diffraction (→ diffractive optics). Diffraction also plays a crucial role in many other devices, such as optical resonators and fibers.
Diffraction at a Single Slit
A common situation is that a narrow optical slit is uniformly illuminated with spatially coherent radiation from a monochromatic laser. Behind the slit, one can observe a diffraction pattern (see
Figure 1) with the following features:
• For each wavelength, there is a main maximum in the middle, and there are much weaker side maxima at larger angles.
• For longer wavelengths, the central peak is broader, and the side peaks appear at larger angles.
Figure 1: Far-field intensity profiles for diffraction at a single 1 μm wide optical slit, with wavelengths from 400 nm to 750 nm in steps of 50 nm.
For a given wavelength, the first minimum of the intensity occurs where the phase difference of contributions from the two edges of the slit reaches <$2\pi$>. The intensity profiles can be described
with sinc^2 functions.
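To illustrate that sinc^2 dependence numerically, here is a small Python/NumPy sketch (my own illustration, not code from the article; the parameter values are chosen to match the 1 μm slit of Figure 1):

import numpy as np

# Far-field (Fraunhofer) intensity of a single slit, I(theta) ~ sinc^2(a*sin(theta)/lambda)
wavelength = 500e-9          # 500 nm, one of the wavelengths in Figure 1
slit_width = 1e-6            # 1 um slit
theta = np.linspace(-np.pi / 2, np.pi / 2, 1001)

# np.sinc(x) = sin(pi*x)/(pi*x), so the first zero occurs where a*sin(theta) = lambda
intensity = np.sinc(slit_width * np.sin(theta) / wavelength) ** 2

print(np.degrees(np.arcsin(wavelength / slit_width)))  # ~30 degrees for the 500 nm curve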
Diffraction at a Double Slit
In his famous double-slit experiment of 1803, Thomas Young used two closely spaced narrow optical slits. As he had no laser, he had to achieve spatially coherent illumination of the two slits by
using a third narrow slit before them.
Figure 2 shows a calculated intensity profile for one particular wavelength. The fast oscillation arises from the interference of field contributions from the two different slits. The intensity
profile is further slowly modulated with a function determined by the finite width of each slit.
Figure 2: Intensity profile for diffraction of light at 450 nm at a double slit with 5 μm slit spacing and 1 μm slit width.
Figure 3 shows with a color scale the interference patterns for different wavelengths. The patterns for longer wavelengths involve correspondingly larger diffraction angles.
Figure 3: Diffraction patterns at the same slit for all colors.
Diffraction at Circular Apertures
If a light beam (for example a laser beam) encounters some aperture which transmits the light in some regions and blocks it otherwise, the immediate effect on the transmitted light is only the
corresponding truncation of the intensity profile. Only after some distance behind the aperture, characteristic diffraction effects can be observed.
Figure 4 shows a simulated example, where an originally Gaussian beam has been truncated at a centered circular hard aperture. During the further propagation in air, the intensity profile develops a
complicated structure due to diffraction. For a soft aperture (Figure 5), causing a smooth intensity drop at the edge, the diffraction pattern is smoother.
Figure 4: Intensity profiles of a light beam directly behind a hard circular aperture (blue curve) and at some distances behind the aperture in steps of 25 mm. The simulation has been done with the
software RP Fiber Power.
Figure 5: Same as Figure 4, except that a soft aperture is used.
Such diffraction effects can be well understood and calculated based on Fourier optics. The hard aperture introduces high optical frequencies, corresponding to rapid spatial changes of intensity.
Such effects can also occur, for example, when trying to force a laser into single transverse mode operation (for optimum beam quality) by inserting a hard aperture into the laser resonator. Although
such an aperture can provide substantially higher round-trip losses for higher-order resonator modes, compared with those for the fundamental mode, it also introduces diffraction effects. Therefore,
the method often does not work that well.
The angular resolution of many optical instruments such as telescopes is also limited due to diffraction e.g. at the input aperture. That resolution limit can be estimated to be roughly the
wavelength divided by the aperture diameter.
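As a rough, hedged numerical illustration of that estimate (the numbers below are assumptions, not values from the article; the commonly quoted Rayleigh criterion adds a factor of 1.22):

import math

# Diffraction-limited angular resolution ~ wavelength / aperture diameter
wavelength = 550e-9      # visible light, 550 nm
aperture = 0.1           # 10 cm telescope aperture, in metres

theta = wavelength / aperture      # ~5.5e-6 rad
print(theta * 206265)              # ~1.1 arcseconds (1 rad = 206265 arcseconds)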
Apertures are not always circular. Figures 6 and 7 show an example case, where a laser beam is truncated with a blade.
Figure 6: Intensity profile of a laser beam, which has been truncated with a blade, shown in a distance of 10 mm after the blade.
Figure 7: Same as Figure 6, but after a distance of 100 mm.
Most lasers and laser optics are designed such that there are only negligibly weak diffraction effects due to hard apertures. This implies that all laser mirrors, for example, must be so large that
essentially the whole beam profile can be reflected.
Note that the diffraction effects are intrinsically dependent on the optical wavelength. For polychromatic beams, the resulting spatial patterns can substantially differ between different wavelength
components. Therefore, it is possible that one observes colors for a white input beam, for example. The classical case is that of a diffraction grating, which is discussed further below.
Divergence of Laser Beams
Even without any aperture, a laser beam always exhibits some amount of diffraction according to its transverse spatial limitation. For Gaussian beams, the shape of the intensity profile is preserved,
i.e., it stays Gaussian; only the beam radius gradually increases. This property of preserved intensity profile shapes also applies for other kinds of free-space modes, e.g. to Hermite–Gaussian modes
. In general, however, diffraction leads to changes of the shape of the intensity profile, as can be seen e.g. in Figure 1.
Laser beams are often diffraction-limited, i.e., their expansion during propagation is not stronger than caused by diffraction alone.
Strong diffraction effects occur for light with long wavelengths. For example, difference frequency generation of long-wavelength beams can be severely limited in performance by diffraction of the
generated beam, which limits the interaction length or enforces weaker beam focusing.
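For the special case of a Gaussian beam, the far-field divergence half-angle is λ/(π w0), where w0 is the beam waist radius; this relation is not stated explicitly above, so the following Python sketch is only an illustration with assumed example values:

import math

# Divergence half-angle of a diffraction-limited Gaussian beam: theta = lambda / (pi * w0)
wavelength = 1.064e-6     # 1064 nm (e.g. Nd:YAG)
waist_radius = 1e-3       # 1 mm beam waist

theta = wavelength / (math.pi * waist_radius)
print(theta)              # ~3.4e-4 rad; larger waists and shorter wavelengths diverge less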
Diffraction and Resonator or Waveguide Modes
Diffraction effects also play a crucial role for the formation of certain kinds of modes. For example, there are modes of optical fibers, for which (by definition) the intensity profile remains
constant during propagation. Such modes are formed by two counteracting effects:
• Diffraction alone would tend to widen a beam, as discussed above.
• Waveguide effects from a refractive index profile of the fiber provide a kind of focusing.
For the fiber modes, these two effects exactly balance each other. Similarly, resonator modes exhibit a balance of diffraction and focusing effects, only that the latter are usually lumped rather
than distributed in the resonator.
Good stability of such modes is achieved when the two counteracting effects are relatively strong, so that any additional effects (e.g. imperfections of a fiber structure, bending of a fiber or
misalignment of a resonator element) have comparatively weak effects. Poor stability arises in situations where both effects are weak – for example, in a laser resonator where the Rayleigh length of
the beam is much larger than the resonator length. Such situations can arise e.g. when developing Q-switched lasers with large mode radii and short laser resonators.
Diffraction at Periodic and Non-periodic Structures
Figure 8: Output beams of all possible diffraction orders at a diffraction grating.
Diffraction effects can also occur when a light beam encounters a structure which causes spatially periodic changes of the optical intensity (via a variable absorbance) or of the optical phase (e.g.
via a variable refractive index or a height profile). Such structures are called diffraction gratings, and the phenomenon is called Bragg diffraction. If a grating exhibits a large number of
oscillations within the beam profile, there can be multiple diffracted output beams (see Figure 8), each of which has a similar spatial shape as the input beam. The beam direction of the output beams
(except that of the zero-order beam) are dependent on the optical wavelength. That effect is exploited e.g. in grating spectrometers.
Diffraction can also be caused by refractive index modulations in some volume of a medium. For example, there are volume Bragg gratings which can be used as wavelength-dependent reflectors. Also,
Bragg diffraction is possible based on sound waves in a medium; this is exploited in acousto-optic modulators.
Diffraction effects can also occur in reflection. In fact, most diffraction gratings are reflective elements.
Of course, diffraction effects also occur at non-periodic structures. For example, the phenomenon of laser speckle occurs when a laser beam is scattered on a rough surface, which in effect causes a
complicated phase modulation pattern on the beam. Very noticeable speckle effects can be observed with quasi-monochromatic light as obtained from lasers. This is not the case for broadband (
temporally incoherent) light because the obtained patterns have a strong wavelength dependence, such that the averaging of intensities over some wavelength range effectively washes out such patterns.
Diffractive Optics
There are various other kinds of optical elements which exploit diffraction effects. For example, there are diffractive beam splitters with multiple outputs, and similar devices are used for coherent
beam combining. For more details, see the article on diffractive optics.
Diffraction and Interference
Diffraction effects can be explained based on the interference of different contributions of a field profile to the resulting fields at distant locations (Huygens–Fresnel principle). There is
actually no clear boundary between diffraction and interference. For example, the transmission of light through a narrow slit (aperture) is usually described in terms of diffraction, while phenomena
behind a double slit are called interference phenomena. However, the basic principle of interference can be applied to both cases.
Different Regimes of Diffraction
Different regimes of diffraction are distinguished, which can be treated with different mathematical methods. Fraunhofer diffraction is relevant when considering the far field, i.e., diffraction
patterns far away from the refracting structure; this regime is characterized by values of the Fresnel number well below 1. On the other hand, the concept of Fresnel diffraction with large Fresnel
numbers can be applied to cases where the near field is relevant.
Diffraction-limited Performance of Optical Instruments
The performance of various kinds of optical instruments such as microscopes is essentially limited by diffraction effects. Essentially, the limited transverse size of the entrance aperture or of
internal elements cause diffraction effects which set a minimum spot size of the so-called point spread function. Therefore, optical microscopes (including laser microscopes) are usually limited in
resolution to the order of half the optical wavelength. There are few exceptions to that limitation, for example near field microscopes (using an optical tip of sub-wavelength size for scanning
objects) or certain kinds of fluorescence microscopy (STED).
Similar performance limitations apply to optical telescopes. Limiting diffraction effects (for optimum angular resolution) requires the use of large optical apertures.
More to Learn
Encyclopedia articles:
Questions and Comments from Users
Here you can submit questions and comments. As far as they get accepted by the author, they will appear above this paragraph together with the author’s answer. The author will decide on acceptance
based on certain criteria. Essentially, the issue must be of sufficiently broad interest.
Please do not enter personal data here. (See also our privacy declaration.) If you wish to receive personal feedback or consultancy from the author, please contact him, e.g. via e-mail.
By submitting the information, you give your consent to the potential publication of your inputs on our website according to our rules. (If you later retract your consent, we will delete those
inputs.) As your inputs are first reviewed by the author, they may be published with some delay. | {"url":"https://www.rp-photonics.com/diffraction.html","timestamp":"2024-11-14T21:57:36Z","content_type":"text/html","content_length":"34579","record_id":"<urn:uuid:5028e46f-5cf0-49a8-8e38-8488d4b5f9bb>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00841.warc.gz"} |
The 'P versus NP' problem
The 'P versus NP' problem is a major unsolved enigma in mathematical computer science.
It was first described in 1971 by mathematician Stephen Cook in his paper entitled 'The complexity of theorem-proving procedures', Proceedings of the Third Annual ACM Symposium on Theory of Computing,
pp. 151-158.
In simple terms :
There are some mathematical problems which can be 'solved' by a computer algorithm efficiently, i.e. in a running time that grows only modestly (polynomially) with the size of the problem (e.g. sorting a list, or multiplying two very large numbers). [ called P problems ]
Some problems can be 'verified' efficiently by a computer if it's provided with a candidate answer in advance to check (e.g. confirming that two given factors multiply to produce a very large number), even though no efficient way of finding that answer is known. [ called NP problems ]
Are the two scenarios equivalent? In other words, if one is true for a certain problem, is the other automatically true? Or, are there some problems where a solution can be quickly 'verified', but not
quickly 'solved' with any computing algorithm, however cleverly it is designed?
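A small illustration of the 'verify versus solve' distinction (a sketch added for illustration, not taken from the source), using the subset-sum problem: checking a proposed answer is quick, while the obvious way of finding one may have to try exponentially many subsets:

from itertools import combinations

# Subset-sum: given some numbers and a target, is there a subset summing to the target?
numbers = [3, 34, 4, 12, 5, 2]
target = 9

def verify(candidate):
    # Verification is fast: just add up the proposed subset and compare.
    return sum(candidate) == target and all(x in numbers for x in candidate)

def solve():
    # Naive solving tries every subset: about 2^n candidates in the worst case.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

print(verify((4, 5)))   # True  -- easy to check a given answer
print(solve())          # (4, 5) -- found only after trying many candidate subsets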
To date, the P versus NP problem itself remains unsolved.
Editor's note : It's not easy to find an explanation of the problem in plain language. Here is a good description from Daniel Miessler
| {"url":"https://wikenigma.org/content/computer_science/p_vs_np","timestamp":"2024-11-14T10:35:37Z","content_type":"text/html","content_length":"29683","record_id":"<urn:uuid:a768171f-d055-4d8d-9500-346c9230a28e>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00377.warc.gz"}
Unit: Straight Line Graphs (y=mx+C) | KS4 Maths | Oak National Academy
Straight Line Graphs (y=mx+C)
Lessons (4)
In this lesson, we will revise the term 'gradient' and learn how to identify and calculate the gradient of a plotted line using two pairs of coordinates. We will compare lines with different gradients.
In this lesson, we will find the equation of a straight line using y=mx+c. We will use coordinates taken from a plotted straight line to help us calculate the gradient, then use a method of
substitution to find the equation of the line.
In this lesson, we will investigate different strategies to find the intercept and gradient for a linear graph. Each method will utilise the equation of the line.
In this lesson, we will use the gradient of a line to solve problems with parallel lines. We will investigate the relationship between different linear graphs with the same gradient. | {"url":"https://www.thenational.academy/teachers/programmes/maths-secondary-ks4-higher-l/units/straight-line-graphs-y-mx-c-2e9f/lessons","timestamp":"2024-11-09T13:18:03Z","content_type":"text/html","content_length":"179215","record_id":"<urn:uuid:49b449dd-ba04-4106-9644-b2bf0b9f3de4>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00564.warc.gz"} |
Python NOT Operator | Different Examples of Python NOT Operator
Updated March 28, 2023
Introduction to Python NOT Operator
The NOT operator in Python falls under the category of Logical Operators.
Logical Operators in Python are used in conditional statements which return a Boolean value that can be either True or False. Depending upon the truth value of these conditions, the program decides
its flow of execution.
NOT Operator returns the Boolean value true when the operand is false, and it returns the Boolean value false when the operand is true.
In layman's terms, a NOT operator can be compared to a pessimistic individual who nonetheless has success, or an optimistic individual who nonetheless has a failure.
The pessimistic individual's statement can be thought of as the operand, with the negative expectation as its value. In spite of the negative expectation, the individual has success, which
can be thought of as the Boolean value True that gets returned.
Examples of Python NOT Operator
Let us cite some examples of the NOT operator in Python: –
1. Directly using Boolean values
a = True
b = False
print('Result of not a is : ', not a)
Variable a is initialized with the Boolean value True
Variable b is initialized with the Boolean value False
The NOT operator on "a" reverses the Boolean value of a; hence the result comes out to be False
2. Using comparison operators on integers to get Boolean values
a = 2>3
b = 3==3
print('result of not a is : ', not a)
Variable a is initialized with the condition 2 > 3, which evaluates to False
Variable b is initialized with the condition 3 == 3, which evaluates to True
The NOT operator on "a" reverses the Boolean value of a; hence the result comes out to be True
3. Using membership operators on lists to get Boolean values
a = 2 in [3,6,8,9,10]
b = "p" in "programming"
print('result of not a is : ', not a)
Variable a is initialized with the condition that uses the membership operator to determine a Boolean result. 2 is not present in the list; hence the result becomes False
NOT conditional operator on “a” reverses the Boolean value of a; hence the result comes out to be True
4. Using identity operators on strings to get Boolean values
a = "python" is "python"
print('result of not a is : ', not a)
Variable a is initialized with the condition that uses the identity operator to determine a Boolean result. Because CPython reuses the identical string literal, "python" is "python" evaluates to True here (for comparing values, == is the preferred operator).
NOT conditional operator on “a” reverses the Boolean value of a; hence the result comes out to be False
5. Using Logical Operators (AND, OR) on NOT Operators
a = 2 in [3,6,8,9,10]
b = 3 == 3
print('result of not a and b is : ', not a and b)
Variable a takes the Boolean value False, as discussed above
Variable b takes the Boolean value True, as discussed above
The order of precedence is not, then and. So first not a becomes True, and then True and b (True) gives the result True
a = 2 in [3,6,8,9,10]
b = 3==3
print('result of not b or a is : ', not b or a)
Variable a takes the Boolean value False, as discussed above
Variable b takes the Boolean value True, as discussed above
The order of precedence is not, then or. So first not b becomes False, and then False or a (False) gives the result False
6. Using multiple NOT Operators
a = 2 in [3,6,8,9,10]
print('result of not a is : ', not not a)
Variable a takes the Boolean value False as discussed
First, not a(False) becomes True, and then applying not on that gives the result as False
Note: Applying the NOT operation an odd number of times gives us the opposite of the initial Boolean value, and applying it an even number of times gives us the initial value itself
With this, we draw this topic to a close. In this article, we looked at many different examples and understood the working of the NOT operator in Python. It is time to get our hands dirty and start using
NOT operators in our own programs.
Recommended Articles
This is a guide to Python NOT Operator. Here we discuss the different examples and the working of NOT Operator in Python. You may also have a look at the following articles to learn more – | {"url":"https://www.educba.com/python-not-operator/","timestamp":"2024-11-11T15:12:47Z","content_type":"text/html","content_length":"312940","record_id":"<urn:uuid:b32cc896-873c-4f2a-bcf5-fb19ec3bef7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00169.warc.gz"} |
Mace20041 Hydraulics 2 Problem-Based Coursework – UK.
Unit Code :- Mace20041
Unit Title :- Hydraulics 2 Problem-Based Coursework
Weighting of the assignment: 15% of the unit assessment
This coursework includes:
Questions on Topic 1 (conservation laws) and Topic 2 (single pipeline systems only).
Question 1:-
Water flows through a vertical sluice gate in a rectangular channel as shown in Figure Q 1. The channel bed is horizontal upstream of the gate and has a 20° slope down stream of it. The channel width
is 4 m. The elevation of a point C at the intersection between the upstream water surface and the sluice gate is 1.10 m from the bottom of the channel (see Figure Q1).
The velocity profiles at sections A and B are described by the following equations,respectively:
where y is the distance from the channel bottom, u A,S and u B,S are respectively the surface velocities at sections A and B, and h A = 1 m and hB = 0.35 m are the respective water depths.
(a) Assuming that Bernoulli’s equation is applicable along a surface streamline between section A and the sluice gate, estimate the surface velocity uA,S at section A.
(b) Find the surface velocity uB,S at section B.
(c) Neglecting friction forces between sections A and B, and assuming that the pressure distribution is hydrostatic at both sections, determine the drag force on the sluice gate.
Question 2 :-
Water flows through a converging pipe as shown in Figure Q2. A U-tube manometer containing a fluid with specific gravity 1.87 is installed to measure the difference between the Pitot pressures at
points 1 and 2. The difference between the static pressures at the same points is p1 – p2 = 1500 Pa. Assuming incompressible flow and uniform cross-sectional velocity profiles, determine:
(a) the pressure loss due to viscous effects between points 1 and 2;
(b) the flow rate in L s–1
(c) If the Pitot tube at point 1 is replaced with a piezo meter, what would be the difference in column heights in the U-tube manometer? (i.e. what would be the new value of h in Figure Q2?)
Question 3 :
Water is pumped from a reservoir through a pipeline discharging to atmosphere as illustrated in Figure Q3. The water level in the reservoir is constant and equal to 10 m (Above Ordnance Datum)
whereas the elevation of the pipe centreline at the outlet B is 12 m. The pipeline is made of two segments, AJ and JB. Both segments have length 50 m and diameter 150 mm. The pipeline terminates at B
with a nozzle, with outlet diameter 75 mm.
A pump (P) installed along pipe AJ supplies a continuous discharge of 40 L s–1, while a constant discharge of 20 L s–1 is extracted at the junction J. The friction factor of pipe AJ for the prescribed discharge is 0.03.
Calculate:
(a) the velocities in pipes AJ and JB;
(b) the head loss due to friction along pipe AJ.
Assuming that pipe JB has roughness ks = 0.3 mm, and neglecting minor losses,
(c) calculate the head loss along pipe JB;
(d) determine the head developed by the pump P;
(e) sketch the qualitative behaviour of the energy and hydraulic grade lines along the pipeline AB (including the terminal nozzle). For convenience, Figure Q3 is reproduced on a square grid in
Appendix A (on page 5), on which the grade lines can be sketched.
(f) For an alternative scenario where the discharge extracted at J is zero and the pump head is 6.5 m, determine the discharge in the pipe AB assuming a uniform roughness ks = 0.3 mm.
You may use this sheet to sketch the qualitative behaviour of the energy and hydraulic grade lines in your answer to Question 3(e).
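As a quick numerical sketch of the relations behind parts (a)–(b) of Question 3 — assuming the Darcy–Weisbach form h_f = f·(L/D)·V²/(2g) with f the stated friction factor of 0.03; this is only an illustrative aid, not a model solution:
import math

g = 9.81  # m/s^2

def mean_velocity(Q, D):
    # mean velocity (m/s) for discharge Q (m^3/s) through a circular pipe of diameter D (m)
    return Q / (math.pi * D ** 2 / 4)

def friction_head_loss(f, L, D, V):
    # Darcy-Weisbach head loss (m): h_f = f * (L/D) * V^2 / (2g)
    return f * (L / D) * V ** 2 / (2 * g)

V_AJ = mean_velocity(0.040, 0.150)                     # 40 L/s in the 150 mm pipe AJ
hf_AJ = friction_head_loss(0.03, 50.0, 0.150, V_AJ)
print(round(V_AJ, 2), "m/s", round(hf_AJ, 2), "m")     # roughly 2.26 m/s and 2.6 m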
| {"url":"https://assignmenthelps.co.uk/mace20041-hydraulics-2-problem-based-coursework.php","timestamp":"2024-11-11T21:13:26Z","content_type":"text/html","content_length":"63867","record_id":"<urn:uuid:b917f515-cf8a-4813-8ecf-d832cb2f8a97>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00794.warc.gz"} |
Video library: A. S. Holevo, Gaussian optimizers and the additivity problem in quantum information theory
Abstract: We survey two remarkable analytical problems of quantum information theory. We report on the recent solution of the quantum Gaussian optimizers problem which establishes an optimal
property of Glauber's coherent states as a particular instance of pure quantum Gaussian states. Namely, it is shown that the coherent states, and under certain conditions only they, minimize a broad
class of the concave functionals of the output of a gauge-covariant or contravariant Gaussian channel. A remarkable corollary of this solution in the multimode case is the additivity of the minimal
output entropy and the classical capacity of Gaussian channels (which is not valid for general quantum channels). This in particular allows for explicit computation of the classical capacity for the
mathematical models of phase-insensitive channels in quantum optics, such as attenuators, amplifiers and additive classical noise channels.
Language of the talk: English | {"url":"https://m.mathnet.ru/php/presentation.phtml?option_lang=rus&presentid=17898","timestamp":"2024-11-13T21:56:20Z","content_type":"text/html","content_length":"8734","record_id":"<urn:uuid:d50918dd-18e3-4a28-8061-df50a7149007>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00456.warc.gz"} |
Cyclicity of the 2-class group of the first Hilbert 2-class field of some number fields
Article
Let $\mathds{k}$ be a real quadratic number field. Denote by $\mathrm{Cl}_2(\mathds{k})$ its $2$-class group and by $\mathds{k}_2^{(1)}$ (resp. $\mathds{k}_2^{(2)}$) its first (resp. second) Hilbert $2$-class field. The aim of this paper is to study, for a real quadratic number field whose discriminant is divisible by one prime number congruent to $3$ modulo $4$, the metacyclicity of $G=\mathrm{Gal}(\mathds{k}_2^{(2)}/\mathds{k})$ and the cyclicity of $\mathrm{Gal}(\mathds{k}_2^{(2)}/\mathds{k}_2^{(1)})$ whenever the rank of $\mathrm{Cl}_2(\mathds{k})$ is $2$, and the $4$-rank of $\mathrm{Cl}_2(\mathds{k})$ is $1$.
Volume: Volume 32 (2024), Issue 1
Published on: March 1, 2023
Accepted on: February 21, 2023
Submitted on: February 21, 2023
Keywords: Mathematics - Number Theory | {"url":"https://cm.episciences.org/10983","timestamp":"2024-11-09T06:34:49Z","content_type":"application/xhtml+xml","content_length":"37211","record_id":"<urn:uuid:d30fb9f4-b8b0-4e12-b086-dea91ede0d27>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00257.warc.gz"} |
Root Sum Squares Explained Graphically (Part 8 / 13)
Sensitivities are simply the slope values (1^st derivatives) of the transfer functions. If I take "slices" of a two-inputs-to-one-output of the response surface, I can view the sensitivities (slopes)
along those slices. But why are steeper slopes considered more sensitive?
Examine Figure 8-1. It shows three mathematical representations of a one-input-to-one-output transfer function. These curves (all straight lines) represent three different design solutions under
consideration. When the designer selects the mid-point of the horizontal axis as his design input value (x[0]), all three transfer functions provide the same output (Y[0]) response value along the
vertical axis. What makes one better than the others?
Now examine Figure 8-2. We have added input variation that is transferred to the output through the transfer function curve. For the first line, this results in a normal curve on the response with a known
standard deviation (as shown on the vertical axis). For the second line, this also results in a normal curve but one that has a wider range of variation (a larger standard deviation). For the third
and steepest line, we get another normal curve with an even greater range of variation. The first line (or design solution) is therefore the more robust design - it is less sensitive to noise in the input.
What if there are multiple inputs to our one output? Does one input's variation wreak more havoc than another input's variation? This is the question that Sensitivity Contribution attempts to answer.
Consider the response surface in Figure 8-3, a completely flat (planar) surface. Take slices of it with x[1] held constant at different values, and do the same for x[2]. Note that the slopes (which are constant and do not change
over the input range of interest) are of a lesser magnitude when x[1] is held constant versus when x[2] is held constant. That means the response is less sensitive to variation in the second input (x
[2]) than the first input (x[1]). (As a side note: The steeper slices, when x[2] is held constant, have a negative sign indicating a downward slope.)
If we apply the same variation to both x[1] and x[2] (see Figure 8-5), it is obvious that the first input causes greater output variation. Therefore, it has a greater sensitivity contribution than
that of the second input.
We can flip the tables, however. What if the variation of x[2] was so much greater than that of x[1]? (See Figure 8-6.) It is possible that, after the variation of x[1] and x[2] have been
"transferred" through the slopes, that the corresponding output variation due to x[2] is greater than that from x[1]. Now the second input (x[2]) has a greater sensitivity contribution component than
the first input (x[1]), even though the first input has a greater sensitivity than the second input. By examining the RSS equation for output variance (see below), it can be seen why this is the case.
If either the slopes (1^st derivatives) of a design solution of interest or the variation applied to those slopes (the input standard deviations) is increased, that input's sensitivity contribution
goes up while all the others go down. The pie must equal 100%.
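For reference, the RSS output-variance relation the article alludes to can be written σ_Y² ≈ Σ (∂Y/∂x_i)²·σ_i², and each input's sensitivity contribution is its term divided by the total. A minimal Python sketch with made-up slopes and input standard deviations (values chosen, as in the x[2] discussion above, so the less sensitive input ends up contributing more):
import math

slopes = {"x1": -2.0, "x2": -0.5}   # hypothetical 1st derivatives (sensitivities)
sigmas = {"x1": 0.1, "x2": 0.6}     # hypothetical input standard deviations

terms = {name: (slopes[name] * sigmas[name]) ** 2 for name in slopes}
var_Y = sum(terms.values())                       # RSS output variance
print("sigma_Y =", round(math.sqrt(var_Y), 3))
for name, t in terms.items():
    # sensitivity contributions are fractions of the output variance; they sum to 100%
    print(name, round(100 * t / var_Y, 1), "%")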
So we now know how much pie there is to allocate to the individual input tolerances, being based on the sensitivities and input variations. Let us stop eating pie for the moment and look at the other
RSS equation, that of predicted output mean. (RSS is really easy to understand if you look at it graphically.)
| {"url":"https://www.crystalballservices.com/Research/Articles-on-Analytics-Risk/root-sum-squares-explained-graphically-part-8-13","timestamp":"2024-11-04T11:06:23Z","content_type":"application/xhtml+xml","content_length":"66222","record_id":"<urn:uuid:18814710-7997-4255-ad27-fa02babf543c>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00128.warc.gz"} |
2nd PUC Basic Maths Question Bank Chapter 2 Permutations and Combinations Ex 2.2
Students can Download Basic Maths Exercise 2.2 Questions and Answers, Notes Pdf, 2nd PUC Basic Maths Question Bank with Answers helps you to revise the complete Karnataka State Board Syllabus and
score more marks in your examinations.
Karnataka 2nd PUC Basic Maths Question Bank Chapter 2 Permutations and Combinations Ex 2.2
Part – A
2nd PUC Basic Maths Permutations and Combinations Ex 2.2 One Mark Questions and Answers
Question 1.
Find the total number of ways in which 8 different coloured beads can be strung together to form a necklace.
(n − 1)! = (8 − 1)! = 7! circular arrangements; if, as with the garland in Question 2, the necklace can be turned over, reflections coincide and the count is 7!/2.
Question 2.
In how many ways can 9 flowers of different colours be strung together to form a garland.
\(\frac{(n-1) !}{2}=\frac{8 !}{2}\)
Question 3.
In how many ways can 10 people be seated around a table.
(10 -1)! = 9!
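For small n, the circular-arrangement counts used above can be checked by brute force in Python (treating rotations of the same ordering as identical, and additionally reflections when the necklace or garland can be turned over):
from itertools import permutations
from math import factorial

def circular_count(n, allow_flip=False):
    # count distinct arrangements of n labelled items in a circle
    seen = set()
    for p in permutations(range(n)):
        variants = {p[i:] + p[:i] for i in range(n)}          # all rotations of p
        if allow_flip:
            r = tuple(reversed(p))
            variants |= {r[i:] + r[:i] for i in range(n)}     # rotations of the reflection
        seen.add(min(variants))                               # one representative per class
    return len(seen)

for n in range(3, 7):
    print(n, circular_count(n), factorial(n - 1),             # matches (n - 1)!
          circular_count(n, allow_flip=True), factorial(n - 1) // 2)   # matches (n - 1)!/2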
Part B
2nd PUC Basic Maths Permutations and Combinations Ex 2.2 Two Marks Questions and Answers
Question 1.
Find the number of ways in which 8 men be arranged round a table so that 2 particular men may not be next to each other.
Total number of ways when there is no restriction is 7!.
When the 2 particular men are together, treat them as 1 unit; with the remaining 6 men this gives 7 units, which can sit round a table in 6! ways, and the 2 men can be arranged among themselves in 2! ways, giving 6!·2! such arrangements.
∴ The number of ways in which 2 particular men are not together = 7! – 6! . 2!
Question 2.
In how many ways can 6 gentlemen and 4 ladies be seated round a table so that no two ladies are together.
There are 6 vacant places in between the 6 gentlemen and 4 ladies can occupy these gaps in ^6P[4] ways. For each of these the six gentlemen can be permuted in (6 – 1)! = 5!
∴ The number of ways is 5!^6P[4]
Question 3.
In how many ways can 10 beads of different colours be strung into a necklace if the red, green and yellow beads are always together.
The red, green and yellow beads kept together can be treated as one unit and arranged among themselves in 3! ways; with the remaining 7 beads this gives 8 units, which can be arranged in a circle in (8 − 1)! = 7! ways.
∴ The number of ways is 7! · 3!.
Question 4.
In how many ways can 6 boys and 6 girls be arranged in a circle so that no two boys are together.
Six boys can be arranged in a circle in (6 – 1)! = 5! ways, and the 6 girls can then occupy the 6 gaps between them in 6! ways.
∴ The number of ways is 6! × 5!.
Question 5.
In how many ways can 7 gentlemen and 5 ladies be arranged in a circle if no two ladies are together.
Ladies have to be arranged between gentlemen. The 7 gentlemen can be seated in 6! ways. There are 7 gaps between the men, which the 5 ladies can occupy in ^7P[5] ways.
∴ The number of ways is ^7P[5] · 6!.
Question 6.
Find the number of ways in which 10 flowers can be strung into a garland if 3 particular flowers are always together.
The three particular flowers kept together can be arranged among themselves in 3! ways; the remaining 7 flowers plus this 1 unit give 8 units, which can be arranged in a circle in 7! ways.
∴ The total number of ways = 7!. 3!.
Question 7.
Find the number of ways in which 15 staff members can be seated around a circular table for a meeting if the vice-principal and dean have to be an either side of the principal.
The vice-principal and dean can be seated on either side of the principal in 2 ways. This block together with the remaining 12 staff members gives 13 units, which can be arranged round the circular table in 12! ways.
∴ The number of ways is 12! · 2!. | {"url":"https://kseebsolutions.guru/2nd-puc-basic-maths-question-bank-chapter-2-ex-2-2/","timestamp":"2024-11-06T17:31:17Z","content_type":"text/html","content_length":"67293","record_id":"<urn:uuid:ec9a6f97-6e0b-4abf-82ca-a964f3693d97>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00144.warc.gz"} |
Tangible Equity Two-Step for RC-O Line 5
Mar 25, 2023
For those who don’t already know, the Texas Two-Step is a simple dance that is perennially popular at country dances. It has the virtue of being learnable even by the clumsiest cowpoke with two left
feet. Figuring and reporting Average Tangible Equity is like that. Once you get it down it’s easy.
The reason that the FDIC wants to know your bank’s Average Tangible Equity is to help them figure out how much to charge for deposit insurance. Premiums are no longer based on total deposits, but on
total liabilities less tangible assets. They need the number to calculate the tab.
The mistakes we see sometimes are that banks either don’t follow the definition in the instructions on what constitutes tangible equity, or they use and averaging method that is incorrect. Now let’s
get to the step-by-step dance instructions.
Step One
Calculate the tangible equity as of quarter end. The instructions say that it is the same definition as for Schedule RC-R Part I, line 26. Fortunately, Schedule RC-R Part I walks you through the
calculation. Take a look at the adjustments made to capital on RC-R Part I and do the same calculation for Schedule RC-O line 5. It should be the same. You can also refer to the instructions for
There is an “except as follows” section in the instructions that rarely applies. Unless your bank has another FDIC insured bank as a subsidiary, survived a merger or was acquired during the quarter
you can ignore this.
Step Two
If your institution has not reported $1 billion or more in average assets for two consecutive quarters,
and your institution has reported quarter-end tangible equity previously, then you are all done. You can continue to report the quarter-end-tangible equity on RC-O line 5, get off the dance floor,
and sit down by the punch bowl.
But if you have reported the Quarterly Average Tangible Equity previously, you must keep dancing. You must calculate the average by taking the average of Tangible Equity for each month-end during
the quarter. If you do it any other way you will be stepping on your partner’s (FDIC’s) feet and suffer the embarrassment. Daily or weekly averaging methods will not do. There is no improvising with
the Two Step! | {"url":"https://www.callreportresources.com/blog/tangible-equity-two-step-for-rc-o-line-5","timestamp":"2024-11-04T23:07:41Z","content_type":"text/html","content_length":"22231","record_id":"<urn:uuid:8676752f-7926-40fb-a387-cc273c57f7bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00706.warc.gz"} |
Convert annual interest rate to daily compounding
To calculate the annual percentage yield from the annual percentage rate on an account that compounds interest daily, first divide the annual percentage rate by 365 to calculate the daily interest rate. Second, divide the daily interest rate by 100 to convert it to a decimal. Third, add 1. Fourth, raise the result to the power of 365 and subtract 1 to obtain the annual percentage yield as a decimal (multiply by 100 to express it as a percentage). The interest rate can be for any period, not just a year, as long as compounding is per this same time unit.
For example, if your stated rate is 9% per quarter compounded monthly, enter 9% and 3 (for 3 months per quarter) to get P = 3%, the rate per month. To calculate daily compounding interest, divide the annual interest rate by 365 to calculate the daily rate. Add 1 and raise the result to the number of days interest accrues. Subtract 1 from the result and multiply by the initial balance to calculate the interest earned.
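Putting those steps together in Python (a minimal sketch; the 5% rate and the compounding frequencies are illustrative), the effective annual yield from a nominal annual rate compounded m times per year is (1 + r/m)^m − 1:
def apy_from_apr(apr_percent, periods_per_year=365):
    # effective annual yield implied by a nominal annual rate (APR), as a percentage
    r = apr_percent / 100.0                       # convert the percentage to a decimal
    return ((1 + r / periods_per_year) ** periods_per_year - 1) * 100.0

for m in (365, 12, 1):                            # daily, monthly, annual compounding
    print(m, round(apy_from_apr(5.0, m), 4))
# 5% APR gives roughly 5.1267% (daily), 5.1162% (monthly), 5.0% (annual)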
Annual compound interest - Formula 1: here A2 is your initial deposit and B2 is the annual interest rate, and the same approach shows what you would earn with yearly, quarterly, monthly, weekly or daily compounding. The more often compounding occurs, the higher the effective interest rate. The relationship between the nominal annual rate r (the annual percentage rate, APR) compounded m times per year and the effective annual rate (APY) is i_a = (1 + r/m)^m − 1; the annual percentage rate (APR) is also called the nominal interest rate.
But interest isn't always charged annually. It is often calculated over a shorter period of time (daily, monthly, or quarterly), and to compare loans or deposits over several years you have to convert the nominal rate (for example, 12% compounded monthly) to an effective annual rate. (Worked examples of this kind often start from a principal such as Rs 50,000 at an annual rate of 10% and compare annual with daily compounding.) Interest may be compounded semi-annually, quarterly, monthly, or daily; with monthly compounding, for example, the stated annual interest rate is divided by 12 to give the rate per month.
Daily Compound Interest = $1,610.51 – $1,000 = $610.51; so you can see that with daily compounding, the interest earned is more than with annual compounding.
Daily Compound Interest Formula – Example #2: say you have received a sum of $10,000 from a lottery and you want to invest it to earn more income.
This compound interest calculator has more features than most: you can vary both the deposit intervals and the compounding intervals from daily to annually (and everything in between). This flexibility allows you to calculate and compare expected interest earnings. Multiply the principal amount by one plus the periodic interest rate raised to the power of the number of compounding periods to get a combined figure for principal and compound interest; subtract the principal if you want just the compound interest. The formula used in the compound interest calculator is A = P(1 + r/n)^(nt), where P is the principal, r the annual rate, n the number of compounding periods per year and t the time in years.
Daily compounding interest refers to an account adding the interest accrued at the end of each day to the account balance, so that it can earn additional interest the next day, and even more the day after, and so on. To calculate daily compounding interest, divide the annual interest rate by 365 to calculate the daily rate.
Interest Rate Converter: an interest rate converter enables you to convert an interest rate payable at one frequency into an equivalent rate at another frequency. For instance, you can convert an interest rate from annual to semi-annual, or from monthly to annual, quarterly, etc.
This Daily Interest Loan Calculator will help you to quickly calculate either simple or compound interest for a specified period of time. You can either calculate daily interest for a single loan period, or create a loan schedule made up of multiple periods, each with their own time-frames, principal adjustments, and interest rates.
I have to undertake a number of financial projections based on an annual interest rate where interest is added either daily or weekly. If I have a nominal annual interest rate of 5% and divide it by 12 and then compound that figure, I get an effective annual interest figure of 5.1162%.
It helps to do a daily interest compounding example by hand to truly understand the concept - say you receive an annual interest rate of 4 percent on a savings account. For instance, if the annual interest rate is 0.5%, which as a decimal is 0.005, the daily interest rate is 0.005/365.
Common compounding frequencies are: daily = 365, weekly = 52, biweekly = 26, semimonthly = 24, monthly = 12, bimonthly = 6, quarterly = 4. The compounding terms have to be very clearly spelled out in the contract - or, better yet, converted to the equivalent effective annual form. The effective annual rate (AER for savings, or the comparable figure for a loan) is used to compare the annual interest between loans or deposits with different compounding terms (daily, monthly, quarterly, semi-annually, annually, or other).
| {"url":"https://tradingkzolv.netlify.app/szczepanek41842xe/convert-annual-interest-rate-to-daily-compounding-giqa","timestamp":"2024-11-10T16:11:00Z","content_type":"text/html","content_length":"31832","record_id":"<urn:uuid:18e3ebe3-3a3e-47f6-af6f-13df751c6aed>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00857.warc.gz"} |
Why the tails come apart
post by Thrasymachus · 2014-08-01T22:41:00.044Z · 102 comments
Too much of a good thing?
The simple graphical explanation
An intuitive explanation of the graphical explanation
A parallel geometric explanation
Endnote: EA relevance
102 comments
[I'm unsure how much this rehashes things 'everyone knows already' - if old hat, feel free to downvote into oblivion. My other motivation for the cross-post is the hope it might catch the interest of
someone with a stronger mathematical background who could make this line of argument more robust]
[Edit 2014/11/14: mainly adjustments and rewording in light of the many helpful comments below (thanks!). I've also added a geometric explanation.]
Many outcomes of interest have pretty good predictors. It seems that height correlates to performance in basketball (the average height in the NBA is around 6'7"). Faster serves in tennis improve
one's likelihood of winning. IQ scores are known to predict a slew of factors, from income, to chance of being imprisoned, to lifespan.
What's interesting is what happens to these relationships 'out on the tail': extreme outliers of a given predictor are seldom similarly extreme outliers on the outcome it predicts, and vice versa.
Although 6'7" is very tall, it lies within a couple of standard deviations of the median US adult male height - there are many thousands of US men taller than the average NBA player, yet are not in
the NBA. Although elite tennis players have very fast serves, if you look at the players serving the fastest serves ever recorded, they aren't the very best players of their time. It is harder to
look at the IQ case due to test ceilings, but again there seems to be some divergence near the top: the very highest earners tend to be very smart, but their intelligence is not in step with their
income (their cognitive ability is around +3 to +4 SD above the mean, yet their wealth is much higher than this) (1).
The trend seems to be that even when two factors are correlated, their tails diverge: the fastest servers are good tennis players, but not the very best (and the very best players serve fast, but not
the very fastest); the very richest tend to be smart, but not the very smartest (and vice versa). Why?
Too much of a good thing?
One candidate explanation would be that more isn't always better, and the correlations one gets looking at the whole population doesn't capture a reversal at the right tail. Maybe being taller at
basketball is good up to a point, but being really tall leads to greater costs in terms of things like agility. Maybe although having a faster serve is better all things being equal, but focusing too
heavily on one's serve counterproductively neglects other areas of one's game. Maybe a high IQ is good for earning money, but a stratospherically high IQ has an increased risk of
productivity-reducing mental illness. Or something along those lines.
I would guess that these sorts of 'hidden trade-offs' are common. But, the 'divergence of tails' seems pretty ubiquitous (the tallest aren't the heaviest, the smartest parents don't have the smartest
children, the fastest runners aren't the best footballers, etc. etc.), and it would be weird if there was always a 'too much of a good thing' story to be told for all of these associations. I think
there is a more general explanation.
The simple graphical explanation
[Inspired by this essay from Grady Towers]
Suppose you make a scatter plot of two correlated variables. Here's one I grabbed off google, comparing the speed of a ball out of a baseball pitcher's hand with its speed crossing the plate:
It is unsurprising to see these are correlated (I'd guess the R-square is > 0.8). But if one looks at the extreme end of the graph, the very fastest balls out of the hand aren't the very fastest
balls crossing the plate, and vice versa. This feature is general. Look at this data (again convenience sampled from googling 'scatter plot'):
Or this:
Or this:
Given a correlation, the envelope of the distribution should form some sort of ellipse, narrower as the correlation gets stronger, and more circular as it gets weaker: (2)
The thing is, as one approaches the far corners of this ellipse, we see 'divergence of the tails': as the ellipse doesn't sharpen to a point, there are bulges where the maximum x and y values lie
with sub-maximal y and x values respectively:
So this offers an explanation why divergence at the tails is ubiquitous. Providing the sample size is largeish, and the correlation not too tight (the tighter the correlation, the larger the sample
size required), one will observe the ellipses with the bulging sides of the distribution. (3)
Hence the very best basketball players aren't the very tallest (and vice versa), the very wealthiest not the very smartest, and so on and so forth for any correlated X and Y. If X and Y are
"Estimated effect size" and "Actual effect size", or "Performance at T", and "Performance at T+n", then you have a graphical display of winner's curse and regression to the mean.
An intuitive explanation of the graphical explanation
It would be nice to have an intuitive handle on why this happens, even if we can be convinced that it happens. Here's my offer towards an explanation:
The fact that a correlation is less than 1 implies that other things matter to an outcome of interest. Although being tall matters for being good at basketball, strength, agility,
hand-eye-coordination matter as well (to name but a few). The same applies to other outcomes where multiple factors play a role: being smart helps in getting rich, but so does being hard working,
being lucky, and so on.
For a toy model, pretend that wealth is wholly explained by two factors: intelligence and conscientiousness. Let's also say these are equally important to the outcome, independent of one another and
are normally distributed. (4) So, ceteris paribus, being more intelligent will make one richer, and the toy model stipulates there aren't 'hidden trade-offs': there's no negative correlation between
intelligence and conscientiousness, even at the extremes. Yet the graphical explanation suggests we should still see divergence of the tails: the very smartest shouldn't be the very richest.
The intuitive explanation would go like this: start at the extreme tail - +4SD above the mean for intelligence, say. Although this gives them a massive boost to their wealth, we'd expect them to be
average with respect to conscientiousness (we've stipulated they're independent). Further, as this ultra-smart population is small, we'd expect them to fall close to the average in this other
independent factor: with 10 people at +4SD, you wouldn't expect any of them to be +2SD in conscientiousness.
Move down the tail to less extremely smart people - +3SD say. These people don't get such a boost to their wealth from their intelligence, but there should be a lot more of them (if 10 at +4SD,
around 500 at +3SD), this means one should expect more variation in conscientiousness - it is much less surprising to find someone +3SD in intelligence and also +2SD in conscientiousness, and in the
world where these things were equally important, they would 'beat' someone +4SD in intelligence but average in conscientiousness. Although a +4SD intelligence person will likely be better than a
given +3SD intelligence person (the mean conscientiousness in both populations is 0SD, and so the average wealth of the +4SD intelligence population is 1SD higher than the 3SD intelligence people),
the wealthiest of the +4SDs will not be as good as the best of the much larger number of +3SDs. The same sort of story emerges when we look at larger numbers of factors, and in cases where the
factors contribute unequally to the outcome of interest.
When looking at a factor known to be predictive of an outcome, the largest outcome values will occur with sub-maximal factor values, as the larger population increases the chances of 'getting lucky'
with the other factors:
So that's why the tails diverge.
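A quick simulation makes the toy model concrete (a minimal sketch in Python; both factors standard normal and equally weighted, as stipulated above, with an arbitrary sample size):
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
intelligence = rng.standard_normal(n)
conscientiousness = rng.standard_normal(n)
wealth = intelligence + conscientiousness          # equal weights, no hidden trade-offs

richest = np.argmax(wealth)
smartest = np.argmax(intelligence)
print(richest == smartest)                                   # usually False for large n
print(int((intelligence > intelligence[richest]).sum()))     # people smarter than the richest person
print(int((wealth > wealth[smartest]).sum()))                # people richer than the smartest person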
A parallel geometric explanation
There's also a geometric explanation. The correlation between two (mean-centred) sets of data is the same as the cosine of the angle between them when presented as vectors in N-dimensional space
(explanations, derivations, and elaborations here, here, and here). (5) So here's another intuitive handle for tail divergence:
Grant a factor correlated with an outcome, which we represent with two vectors at an angle theta, the cosine of which equals the correlation. Reading off the expected outcome given a factor score is
just moving along the factor vector and multiplying by cos theta to get the distance along the outcome vector. As cos theta is never greater than 1, we see regression to the mean. The geometrical
analogue to the tails coming apart is that the absolute difference in length along factor versus length along outcome|factor scales with the length along the factor; the gap between extreme values of a
factor and the less extreme values of the outcome grows linearly as the factor value gets more extreme. For concreteness (and granting normality), a correlation of 0.5 (corresponding to an angle of
sixty degrees) means that +4SD (~1/15000) on a factor will be expected to be 'merely' +2SD (~1/40) in the outcome - and a correlation of 0.5 is remarkably strong in the social sciences, implying the factor
accounts for a quarter of the variance (R² = 0.25).(6) The reverse - extreme outliers on outcome are not expected to be so extreme an outlier on a given contributing factor - follows by symmetry.
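A small numerical check of that mapping in Python (using the standard library's normal distribution; the tail areas here are one-sided):
from statistics import NormalDist

r = 0.5                     # correlation between factor and outcome
z_factor = 4.0              # +4 SD on the factor
z_outcome = r * z_factor    # expected outcome z-score under (bivariate) normality

upper_tail = lambda z: 1 - NormalDist().cdf(z)
print(z_outcome)                              # 2.0
print(round(1 / upper_tail(z_factor)))        # about 1 in 32,000 are at or above +4 SD
print(round(1 / upper_tail(z_outcome)))       # about 1 in 44 are at or above +2 SD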
Endnote: EA relevance
I think this is interesting in and of itself, but it has relevance to Effective Altruism, given it generally focuses on the right tail of various things (What are the most effective charities? What
is the best career? etc.) It generally vindicates worries about regression to the mean or winner's curse, and suggests that these will be pretty insoluble in all cases where the populations are
large: even if you have really good means of assessing the best charities or the best careers so that your assessments correlate really strongly with what ones actually are the best, the very best
ones you identify are unlikely to be actually the very best, as the tails will diverge.
This probably has limited practical relevance. Although you might expect that one of the 'not estimated as the very best' charities is in fact better than your estimated-to-be-best charity, you don't
know which one, and your best bet remains your estimate (in the same way - at least in the toy model above - you should bet a 6'11" person is better at basketball than someone who is 6'4".)
There may be spread betting or portfolio scenarios where this factor comes into play - perhaps instead of funding AMF to diminishing returns when its marginal effectiveness dips below charity #2, we
should be willing to spread funds sooner.(7) Mainly, though, it should lead us to be less self-confident.
1. Given income isn't normally distributed, using SDs might be misleading. But non-parametric ranking to get a similar picture: if Bill Gates is ~+4SD in intelligence, despite being the richest man
in america, he is 'merely' in the smartest tens of thousands. Looking the other way, one might look at the generally modest achievements of people in high-IQ societies, but there are worries about
adverse selection.
2. As nshepperd notes below, this depends on something like multivariate CLT. I'm pretty sure this can be weakened: all that is needed, by the lights of my graphical intuition, is that the envelope
be concave. It is also worth clarifying the 'envelope' is only meant to illustrate the shape of the distribution, rather than some boundary that contains the entire probability density: as suggested
by homunq: it is a 'pdf isobar' where probability density is higher inside the line than outside it.
3. One needs a large enough sample to 'fill in' the elliptical population density envelope, and the tighter the correlation, the larger the sample needed to fill in the sub-maximal bulges. The old
faithful case is an example where actually you do get a 'point', although it is likely an outlier.
4. It's clear that this model is fairly easy to extend to >2 factor cases, but it is worth noting that in cases where the factors are positively correlated, one would need to take whatever component
of the factors which are independent of one another.
5. My intuition is that in cartesian coordinates the R-square between correlated X and Y is actually also the cosine of the angle between the regression lines of X on Y and Y on X. But I can't see an
obvious derivation, and I'm too lazy to demonstrate it myself. Sorry!
6. Another intuitive dividend is that this makes it clear why you can multiply by the correlation coefficient to move between z-scores of correlated normal variables, which wasn't straightforwardly obvious to me.
7. I'd intuit, but again I can't demonstrate, the case for this becomes stronger with highly skewed interventions where almost all the impact is focused in relatively low probability channels, like
averting a very specified existential risk.
102 comments
Comments sorted by top scores.
comment by StuartBuck · 2014-07-28T03:58:16.973Z · LW(p) · GW(p)
It's not just that the tails stop being correlated, it's that there can be a spurious negative correlation. In any of your scatterplots, you could slice off the top right corner (with a diagonal line
running downwards to the right), and what was left above the line would look like a negative correlation. This is sometimes known as Berkson's paradox.
Replies from: Thrasymachus, owencb
↑ comment by Thrasymachus · 2014-08-02T02:07:38.775Z · LW(p) · GW(p)
There's also a related problem in that population substructures can give you multiple negatively correlated associations stacked beside each other in a positively correlated way (think of it like
several diagonal lines going downwards to the right, parallel to each other), giving an 'ecological fallacy' when you switch between levels of analysis.
(A real-world case of this is religiosity and health. Internationally, countries which are less religious tend to be healthier, but often within first world countries, religion confers a survival
Replies from: tjohnson314
↑ comment by tjohnson314 · 2015-06-09T20:45:12.661Z · LW(p) · GW(p)
Another example I've heard is SAT scores. At any given school, the math and verbal scores are negatively correlated, because schools tend to select people who have around the same total score. But
overall, math and verbal scores are positively correlated.
↑ comment by owencb · 2014-08-14T13:10:17.574Z · LW(p) · GW(p)
Looks like you can get this if you cut the corner off in a box shape too, which may be more surprising.
comment by Squark · 2014-07-28T06:40:35.237Z · LW(p) · GW(p)
IMO this should be in main
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-08-01T22:40:22.789Z · LW(p) · GW(p)
The upvoters have spoken. Moving to Main and promoting.
Replies from: ishi
↑ comment by ishi · 2014-08-03T21:52:09.041Z · LW(p) · GW(p)
1. The idea that iq predicts income, life expectancy, criminal justice record, etc. depends on what you mean by 'predicts' (eg conjunction fallacy). I and many others suggest these are correlations,
and many argue instead things like income (of parents), social environment, etc predict iq, crime, health, etc. (of children, via a kind of markov process). (Also, if you look at income/iq
correlations, I wouldn't be surprised that it is quite different for different kinds of income---those who made money via IT or genomics, versus those who made it via Walmart, or sports. One may
actually have a mixture distribution which only appears 'normal' because of sufficiently large size. )
2. The scatter plots are interesting, and remind me of S J Gould's (widely criticized) discussion of attempts to define G, a measure of general intelligence, using factor analyses.
I think the general conclusion before the analyses is the right one---there are multiple factors. I would say many of the 'smartest' people (as measured by say, iq) end up in academic fields
in math/science/technology rather than in business with the aim of making money. There are so many factors. Some academics later on do go into business, either working in finance or genomics
industries, but many don't. One reason academic economics is criticized is because it follows the pattern of this post---it starts with general observations, comes up with tentative
conclusions, and then goes into highly detailed, mathematical analyses which doesn't really add much more insight, though its an interesting excercize.
comment by ShardPhoenix · 2014-07-27T01:07:38.497Z · LW(p) · GW(p)
So in other words, it's not that the strongest can't also be the tallest (etc), but that someone getting that lucky twice more or less never happens. And if you need multiple factors to be good at
something, getting pretty lucky on several factors is more likely than getting extremely lucky on one and pretty lucky on the rest.
I enjoyed this post - very clear.
Replies from: Thrasymachus, Vulture
↑ comment by Vulture · 2014-08-08T02:51:30.574Z · LW(p) · GW(p)
Should the first "pretty" there be "very", or am I misunderstanding the point?
Replies from: ShardPhoenix
↑ comment by ShardPhoenix · 2014-08-08T03:58:28.176Z · LW(p) · GW(p)
To put it more simply, there's no causal reason why the tallest shouldn't also be the strongest - it's just unlikely in practice for anyone to be both at the same time, because both traits
(super-height and super-strength) are rare and (sufficiently) independent.
comment by Unnamed · 2019-06-18T04:56:47.544Z · LW(p) · GW(p)
One angle for thinking about why the tails come apart (which seems worth highlighting even more than it was highlighted in the OP) is that the farther out you go in the tail on some variable, the
smaller the set of people you're dealing with.
Which is better, the best basketball team that you can put together from people born in Pennsylvania or the best basketball team that you can put together from people born in Delaware? Probably the
Pennsylvania team, since there are about 13x as many people in that state so you get to draw from a larger pool. If there were no other relevant differences between the states then you'd expect 13 of
the best 14 players to be Pennsylvanians, and probably the two neighboring states are similar enough so that Delaware can't overcome that population gap.
Now, imagine you're picking the best 10 basketball players from the 1,000 tallest basketball-aged Americans (20-34 year-olds), and you're putting together another group consisting of the best 10
basketball players from the next 100,000 tallest basketball-aged Americans. Which is a better group of basketball players? In this case it's not obvious - getting to pick from a pool of 100x as many
people is an obvious advantage, but that height advantage could matter a lot too. That's the tails coming apart - the very tallest don't necessarily give you the very best basketball players, because
"the very tallest" is a much smaller set than the "also really tall but not quite as tall".
(I ran some numbers and estimate that the two teams are pretty similar in basketball ability. Which is a remarkable sign of how important height is for basketball - one pool has about a 4 inch height
advantage on average, the other pool has 100x as many people, and those factors roughly balance out. If you want the example to more definitively show the tails coming apart, you can expand the
larger pool by another factor of 30x and then they'll clearly be better.)
Similarly, who has higher arm strength: the one person in our sample who has the highest grip strength, or the most arm-strong person out of the next ten people who rank 2-11 in grip strength? Grip
strength is closely related to arm strength, but you get to pick the best from a 10x larger pool if you give up a little bit of grip strength. In the graph in the OP, the person who was 6th (or maybe
5th) in grip strength had the highest arm strength, so getting to pick from a pool of 10 was more important. (The average arm strength of the people ranked 2-11 in grip strength was lower than the
arm strength of the #1 gripper, but we get to pick out the strongest arm of the ten rather than averaging them.)
So: the tails come apart because most of the people aren't way out on the tail. And you usually won't find the very best person at something if you're looking in a tiny pool, even if that's a pretty
well selected pool.
Thrasymachus's intuitive explanation covered this - having a smaller pool to pick from hurts because there are other variables that matter, and the smaller the pool the less you get to select for
people who do well on those other variables. But his explanation highlighted the "other variables matter" part of this more than the pool size part of it, and both of these points of emphasis seem
helpful for getting an intuitive grasp of the statistics in these types of situations, so I figured I'd add this comment.
comment by Cyan · 2014-07-27T20:04:55.303Z · LW(p) · GW(p)
Just as markets are anti-inductive, it turns out that markets reverse the "tails come apart" phenomenon found elsewhere. When times are "ordinary", performance in different sectors is largely
uncorrelated, but when things go to shit, they go to shit all together, a phenomenon termed "tail dependence".
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2014-08-02T02:15:08.156Z · LW(p) · GW(p)
Interesting: Is there a story as to why that is the case? One guess that springs to mind is that market performance in sectors is always correlated, but you don't see it in well functioning markets
due to range restriction/tails-come-apart reasons, but you do see it when things go badly wrong as it reveals more of the range.
Replies from: Cyan
↑ comment by Cyan · 2014-08-02T11:39:15.421Z · LW(p) · GW(p)
market performance in sectors is always correlated, but you don't see it
The problem is the word "always". If I interpret it to mean "over all possible time scales" then the claim is basically false; if I interpret it to mean "over the longest time scales" then the claim
is true, but trivially so given that sector performances are sometimes correlated.
We won't get to an explanation by just thinking about probability measures on stochastic processes. What's needed here is a causal graph. The basic causal graph has the financial sector internally
highly connected, with the vast majority of the connections between lenders/investors and debtors/investees passing through it. That, I think, is sufficient to explain the stylized fact in the
grandparent (although of course financial researchers can and do find more to say).
comment by homunq · 2014-08-02T17:58:39.590Z · LW(p) · GW(p)
Great article overall. Regression to the mean is a key fact of statistics, and far too few people incorporate it into their intuition.
But there's a key misunderstanding in the second-to-last graph (the one with the drawn-in blue and red "outcome" and "factor"). The black line, indicating a correlation of 1, corresponds to nothing
in reality. The true correlation is the line from the vertical tangent point at the right (marked) to the vertical tangent point at the left (unmarked). If causality indeed runs from "factor"
(height) to "outcome" (skill), that's how much extra skill an extra helping of height will give you. Thus, the diagonal red line should follow this direction, not be parallel to the 45 degree black
line. If you draw this line, you'll notice that each point on it has equal vertical distance to the top and bottom of the elliptical "envelope" (which is, of course, not a true envelope for all the
probability mass, just an indication that probability density is higher for any point inside than any point outside).
Things are a little more complex if the correlation is due to a mutual cause, "reverse" causation (from "outcome" to "factor"), or if "factor" is imperfectly measured. In that case, the line
connecting the vertical tangents may not correspond to anything in reality, though it's still what you should follow to get the "right" (minimum expected squared error) answer.
This may seem to be a nitpick, but to me, this kind of precision is key to getting your intuition right.
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2014-08-03T21:34:44.943Z · LW(p) · GW(p)
Thanks for this important spot - I don't think it is a nitpick at all. I'm switching jobs at the moment, but I'll revise the post (and diagrams) in light of this. It might be a week though, sorry!
Replies from: homunq
↑ comment by homunq · 2014-08-22T16:04:38.881Z · LW(p) · GW(p)
(I realize you're busy, this is just a friendly reminder.)
Also, I added one clause to my comment above: the bit about "imperfectly measured", which is of course usually the case in the real world.
Replies from: Thrasymachus
comment by othercriteria · 2014-07-28T01:16:59.916Z · LW(p) · GW(p)
This looks cool. My biggest caution would be that this effect may be tied to the specific class of data generating processes you're looking at.
Your framing seems to be that you look at the world as being filled with entities whose features under any conceivable measurements are distributed as independent multivariate normals. The predictive
factor is a feature and so is the outcome. Then using extreme order statistics of the predictive factor to make inferences about the extreme order statistics of the outcome is informative but
unreliable, as you illustrated. Playing around in R, reliability seems better for thin-tailed distributions (e.g., uniform) and worse for heavy-tailed distributions (e.g., Cauchy). Fixing the
distributions and letting the number of observations vary, I agree with you that the probability of picking exactly the greatest outcome goes to zero. But I'd conjecture that the probability that the
observation with the greatest factor is in some fixed percentile of the greatest outcomes will go to one, at least in the thin-tailed case and maybe in the normal case.
But consider another data generating process. If you carry out the following little experiment in R
fac <- rcauchy(1000)
out <- fac + rnorm(1000)
plot(rank(fac), rank(out))
it looks like extreme factors are great predictors of extreme outcomes, even though the factors are only unreliable predictors of outcomes overall. I wouldn't be surprised if the probability of the
greatest factor picking the greatest outcome goes to one as the number of observations grows.
Informally (and too evocatively) stated, what seems to be happening is that as long as new observations are expanding the space of factors seen, extreme factors pick out extreme outcomes. When new
observations mostly duplicate already observed factors, all of the duplicates would predict the most extreme outcome and only one of them can be right.
Replies from: Thrasymachus, Lumifer
↑ comment by Thrasymachus · 2014-08-02T02:45:17.936Z · LW(p) · GW(p)
Thanks for doing what I should have done and actually run some data!
I ran your code in R. I think what is going on in the Cauchy case is that the variance on fac is way higher than the normal noise being added (I think the SD is set to 1 by default, whilst the Cauchy
is ranging over some orders of magnitude). If you plot(fac, out), you get a virtually straight line, which might explain the lack of divergence between top ranked fac and out.
I don't have any analytic results to offer, but playing with R suggests in the normal case the probability of the greatest factor score picking out the greatest outcome goes down as N increases - to
see this for yourself, replace rcauchy with runf or rnorm, and increase the N to 10000 or 100000. In the normal case, it is still unlikely that max(fax) picks out max(out) with random noise, but this
probability seems to be sample size invariant - the rank of the maximum factor remains in the same sort of percentile as you increase the sample size.
I can intuit why this is the case: in the bivariate normal case, the distribution should be elliptical, and so the limit case with N -> infinity will be steadily reducing density of observations
moving out from the ellipse. So as N increases, you are more likely to 'fill in' the bulges on the ellipse at the right tail that gives you the divergence, if the N is smaller, this is less likely.
(I find the uniform result more confusing - the 'N to infinity case' should be a parallelogram, so you should just be picking out the top right corner, so I'd guess the probability of picking out the
max factor might be invariant to sample size... not sure.)
↑ comment by Lumifer · 2014-07-28T16:54:39.161Z · LW(p) · GW(p)
Another issue is that real-life processes are, generally speaking, not stationary (in the statistical sense) -- outside of physics, that is.
When you see an extreme event in reality it might be that the underlying process has heavier tails than you thought it does, or it might be that the whole underlying distribution switched and all
your old estimates just went out of the window...
Replies from: othercriteria
↑ comment by othercriteria · 2014-07-28T17:32:30.975Z · LW(p) · GW(p)
Good point. When I introduced that toy example with Cauchy factors, it was the easiest way to get factors that, informally, don't fill in their observed support. Letting the distribution of the
factors drift would be a more realistic way to achieve this.
the whole underlying distribution switched and all your old estimates just went out of the window...
I like to hope (and should probably endeavor to ensure) that I don't find myself in situations like that. A system that generatively (what the joint distribution of factor X and outcome Y looks
like) evolves over time, might be discriminatively (what the conditional distribution of Y looks like given X) stationary. Even if we have to throw out our information about what new X's will look
like, we may be able to keep saying useful things about Y once we see the corresponding new X.
Replies from: Lumifer
↑ comment by Lumifer · 2014-07-28T17:54:14.536Z · LW(p) · GW(p)
I like to hope (and should probably endeavor to ensure) that I don't find myself in situations like that.
It comes with certain territories. For example, any time you see the financial press talk about a six-sigma event you can be pretty sure the underlying distribution ain't what it used to be :-/
comment by Ricardo Vieira (ricardo-vieira) · 2018-09-26T14:06:38.121Z · LW(p) · GW(p)
I ran some simulations in Python, and (if I did this correctly), it seems that if r > 0.95, you should expect the most extreme data-point of one variable to be the same in the other variable over 50%
of the time (even more if sample size <= 100)
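One way to check a claim like this (here sketched in R rather than Python; the sample size, replication count, and r values are arbitrary, and the 50% threshold itself is not verified here):
p_shared_max <- function(r, n = 100, reps = 5000) {
  mean(replicate(reps, {
    x <- rnorm(n)
    y <- r * x + sqrt(1 - r^2) * rnorm(n)   # cor(x, y) is approximately r by construction
    which.max(x) == which.max(y)            # same observation is the maximum on both variables
  }))
}
set.seed(1)
sapply(c(0.5, 0.8, 0.95, 0.99), p_shared_max)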
Replies from: gwern
↑ comment by gwern · 2019-12-11T16:09:56.117Z · LW(p) · GW(p)
You can simulate it out easily, yeah, but the exact answer seems more elusive. I asked on CrossValidated whether anyone knew the formula for 'probability of the maximum on both variables given a r
and n', since it seems like something that order statistics researchers would've solved long ago because it's interesting and relevant to contests/competitions/searches/screening, but no one's given
an answer yet.
Replies from: gwern, gjm
↑ comment by gwern · 2019-12-25T21:49:50.482Z · LW(p) · GW(p)
I have found something interesting in the 'asymptotic independence' order statistics literature: apparently it's been proven since 1960 that the extremes of two correlated distributions are
asymptotically independent (obviously when r != 1 or -1). So as you increase n, the probability of double-maxima decreases to the lower bound of 1/n.
The intuition here seems to be that n increases faster than increased deviation for any r, which functions as a constant-factor boost; so if you make n arbitrarily large, you can arbitrarily erode
away the constant-factor boost of any r, and thus decrease the max-probability.
I suspected as much from my Monte Carlo simulations (Figure 2), but nice to have it proven for the maxima and minima. (I didn't understand the more general papers, so I'm not sure what other order
statistics are asymptotically independent: it seems like it should be all of them? But some papers need to deal with multiple classes of order statistics, so I dunno - are there order statistics,
like maybe the median, where the probability of being the same order in both samples doesn't converge on 1/n?)
↑ comment by gjm · 2019-12-11T21:09:49.094Z · LW(p) · GW(p)
I can do n=1 (the probability is 1, obviously) and n=2 (the probability is , not so obviously). n=3 and up seem harder, and my pattern-spotting skills are not sufficient to intuit the general case
from those two :-).
Replies from: gwern
↑ comment by gwern · 2019-12-11T21:37:51.343Z · LW(p) · GW(p)
Heh. I've sometimes thought it'd be nice to have a copy of Eureqa or the other symbolic tools, to feed the Monte Carlo results into and see if I could deduce any exact formula given their hints. I
don't need exact formulas often but it's nice to have them. I've noticed people can do apparently magical things with Mathematica in this vein. All proprietary AFAIK, though.
Replies from: gwern
comment by KnaveOfAllTrades · 2014-07-28T00:52:11.537Z · LW(p) · GW(p)
Upvoted. I really like the explanation.
In the spirit of Don't Explain Falsehoods, it would be nice to test the ubiquity of this phenomenon by specifying a measure of this phenomenon (e.g. correlation) on some representative
randomly-chosen pairs. But I don't mean to suggest that you should have done that before posting this.
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2014-08-02T02:13:07.588Z · LW(p) · GW(p)
I was a little too lazy to knock this up in R. Sorry! I am planning on some followups when I've levelled up more in mathematics and programming, although my thought would be that quant finance etc. would
have a large literature on this, as I'd intuit these sorts of effects are pretty important when picking stocks etc.
comment by Candide III · 2021-05-18T09:34:06.528Z · LW(p) · GW(p)
I must quibble with some of the epistemological terminology here. Both "graphical explanation" and "geometric explanation" are not properly speaking explanations. They merely restate the original
empirical observation in graphical or statistical terms, but do not explain why the tails-coming-apart phenomenon occurs. The "intuitive explanation" on the other hand does explain why the phenomenon
occurs (i.e. the distributions of other factors influencing the outcome come into play). Similarly, "regression to the mean" is merely a restatement of the empirical observations in statistical
language, which does nothing to elucidate the causes of the phenomenon. It should be obvious that the causes of regression to the mean in a series of coin flips and in inheritance of continuous
characters are different: coins do not reproduce and we can't select them for better tails-to-heads ratio.
comment by IlyaShpitser · 2014-07-27T02:20:03.721Z · LW(p) · GW(p)
Good post.
If the ellipse is very narrow, things are indeed well-modeled by a linear relationship, and the biggest Y coordinate for a point is likely to also have close to biggest X coordinate.
If the ellipse is not narrow, that could be for two reasons. Either the underlying truth is indeed linear, but your data is very noisy. Or the underlying truth is not linear, and you should not use a
linear model. (Or both, naturally).
If the underlying truth is linear, but your data is very noisy, then what happens to the X coordinate of points with given Y values is mostly determined by the noise.
If the underlying truth is not linear, why should we expect sensible answers from a linear model?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-07-28T18:31:39.541Z · LW(p) · GW(p)
If the underlying truth is not linear, why should we expect sensible answers from a linear model?
Because in many fields, linear models (even poor ones) are the best we're going to get, with more complex models losing to overfitting.
Replies from: IlyaShpitser, Lumifer, gwern
↑ comment by IlyaShpitser · 2014-07-28T19:53:10.310Z · LW(p) · GW(p)
I don't follow you. Overfitting happens when your model has too many parameters, relative to the amount of data you have. It is true that linear models may have few parameters compared to some
non-linear models (for example linear regression models vs regression models with extra interaction parameters). But surely, we can have sparsely parameterized non-linear models as well.
All I am saying is that if things are surprising it is either due to "noise" (variance) or "getting the truth wrong" (bias). Or both.
I agree that "models we can quickly and easily use while under publish-or-perish pressure" is an important class of models in practice :). Moreover, linear models are often in this class, while a ton
of very interesting non-linear models in stats are not, and thus are rarely used. It is a pity.
Replies from: henry4k2PH4, army1987, Stuart_Armstrong
↑ comment by henry4k2PH4 · 2014-08-04T00:40:26.878Z · LW(p) · GW(p)
A technical difficulty with saying that overfitting happens when there are "too many parameters" is that the parameters may do arbitrarily complicated things. For example they may encode C functions,
in which case a model with a single (infinite-precision) real parameter can fit anything very well! Functions that are linear in their parameters and inputs do not suffer from this problem; the
number of parameters summarizes their overfitting capacity well. The same is not true of some nonlinear functions.
To avoid confusion it may be helpful to define overfitting more precisely. The gist of any reasonable definition of overfitting is: If I randomly perturb the desired outputs of my function, how well
can I find new parameters to fit the new outputs? I can't do a good job of giving more detail than that in a short comment, but if you feel confused about overfitting, here's a good (and famous)
article about frequentist learning theory by Vladimir Vapnik that may be useful:
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2014-08-04T16:48:38.496Z · LW(p) · GW(p)
This is about "reasonable encoding" not "linearity," though. That is, linear functions of parameters encode reasonably, but not all reasonable encodings are linear. We can define a parameter to be
precisely one bit of information, and then ask for the minimum of bits needed.
I don't understand why people are so hung up on linearity.
↑ comment by A1987dM (army1987) · 2014-08-01T11:39:14.111Z · LW(p) · GW(p)
I don't follow you. Overfitting happens when your model has too many parameters, relative to the amount of data you have. It is true that linear models may have few parameters compared to some
non-linear models (for example linear regression models vs regression models with extra interaction parameters). But surely, we can have sparsely parameterized non-linear models as well.
Sure, technically if Alice fits a small noisy data set as y(x) = a*x+b and Bob fits it as y(x) = c*Ai(d*x) (where Ai is the Airy function) they've used the same number of parameters, but that won't
stop me from rolling my eyes at the latter unless he has a good first-principle reason to privilege the hypothesis.
↑ comment by Stuart_Armstrong · 2014-07-29T09:44:42.593Z · LW(p) · GW(p)
The problem is more practical than theoretical (don't have the links to hand. but you can find some in my silos of expertise post). Statisticians do not adjust properly for extra degrees of freedom,
so among some category of published models, the linear ones will be best. Also, it seems that linear models are very good for modelling human expertise - we might think we're complex, but we behave
pretty linearly.
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2014-07-29T18:48:11.814Z · LW(p) · GW(p)
"Statisticians" is a pretty large set.
I still don't understand your original "because." I am talking about modeling the truth, not modeling what humans do. If the truth is not linear and humans use a linear modeling algorithm, well then
they aren't a very good role model are they?
[ edit: did not downvote. ]
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-07-30T09:42:22.571Z · LW(p) · GW(p)
Because human flaws creep in in the process of modelling as well. Taking non linear relationships into account (unless there is a causal reason to do so) is asking for statistical trouble unless you
very carefully account for how many models you have tested and tried (which almost nobody does).
Replies from: None, IlyaShpitser, Lumifer
↑ comment by [deleted] · 2014-07-31T21:20:28.231Z · LW(p) · GW(p)
How do I account for how many models I've tested? No, really, I don't know what that'd even be called in the statistics literature, and it seems like if a general technique for doing this were known
the big data people would be all over it.
Replies from: Stuart_Armstrong, Stuart_Armstrong, Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-08-08T11:11:06.016Z · LW(p) · GW(p)
What we're doing at the FHI is acting like a machine learning problem: splitting the data into a training and a testing set, checking as much as we want on the training set, formulating the
hypotheses, then testing them on the testing set.
↑ comment by Stuart_Armstrong · 2014-08-07T16:25:21.217Z · LW(p) · GW(p)
Another approach seems to be stepwise regression: http://en.wikipedia.org/wiki/Stepwise_regression
Replies from: EHeller, None
↑ comment by EHeller · 2014-08-07T17:14:24.375Z · LW(p) · GW(p)
I see a lot of stepwise regression being used by non-statisticians, but I think statisticians themselves think it's something of a joke. If you have more predictors than you can fit coefficients for,
and want an understandable linear model you are better off with something like LASSO.
Edit: Don't just take my word for it, google found this blog post for me: http://andrewgelman.com/2014/06/02/hate-stepwise-regression/
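For what it's worth, a minimal sketch of the LASSO alternative using the glmnet package (assuming it is installed; the data below is simulated purely for illustration):
library(glmnet)
set.seed(1)
n <- 100; p <- 50                      # more candidate predictors than we would want to keep
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - 2 * X[, 2] + rnorm(n)    # only the first two predictors actually matter
fit <- cv.glmnet(X, y, alpha = 1)      # alpha = 1 is the lasso penalty, with cross-validated lambda
coef(fit, s = "lambda.1se")            # a sparse coefficient vector; most entries end up exactly zero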
Replies from: Lumifer
↑ comment by Lumifer · 2014-08-07T17:38:45.584Z · LW(p) · GW(p)
I concur. Stepwise regression is a very crude technique.
I find it useful as an initial filter if I have to dig through a LOT of potential predictors, but you can't rely on it to produce a decent model.
↑ comment by [deleted] · 2014-08-07T16:30:25.800Z · LW(p) · GW(p)
So it wasn't as clear with the previous link, but it seems to me that the nth step of this method doesn't condition on the fact that the last n-1 steps failed.
↑ comment by IlyaShpitser · 2014-07-31T20:46:02.498Z · LW(p) · GW(p)
If you array the full might of statistics/machine learning/knowledge representation in AI/math/signal processing, and took the very best, I am very sure they could beat a linear model for a
non-linear ground truth very easily. If so, maybe the right thing to do here is to emulate those people when doing data analysis, and not use the model we know to be wrong.
Replies from: Stuart_Armstrong
↑ comment by Lumifer · 2014-07-30T14:41:34.168Z · LW(p) · GW(p)
Taking non linear relationships into account (unless there is a causal reason to do so) is asking for statistical trouble unless you very carefully account for how many models you have tested and
tried (which almost nobody does).
First, the structure of your model should be driven by the structure you're observing in your data. If you are observing nonlinearities, you'd better model nonlinearities.
Second, I don't buy that going beyond linear models is asking for statistical trouble. It just ain't so. People who overfit can (and actually do, all the time) stuff a ton of variables into a linear
model and successfully overfit this way.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-07-30T16:09:43.078Z · LW(p) · GW(p)
And the number of terms explode when you add non linearities.
5 independent variables with quadratic terms give you 21 values to play with (1 constant + 5 linear + 15 quadratic); it's much easier to justify conceptually "lets look at quadratic terms" than "lets
add in 15 extra variables" even though the effect on degrees of freedom is the same.
Replies from: Lumifer
↑ comment by Lumifer · 2014-07-30T16:46:43.114Z · LW(p) · GW(p)
And the number of terms explode when you add non linearities
No, they don't. You control the number of degrees of freedom in your models. If you don't, linear models won't help you much, and if you do linearity does not matter.
5 independent variables with quadratic terms give you 21 values to play with
I think you're confusing quadratic terms and interaction terms. It also seems that you're thinking of linear models solely as linear regressions. Do you consider, e.g. GLMs to be "linear" models?
What about transformations of input variables, are they disallowed in your understanding of linear models?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-07-31T08:39:31.462Z · LW(p) · GW(p)
I'm talking about practice, not theory. And most of the practical results that I've seen are that regression models are full of overfitting if they aren't linear. Even beyond human error, it
seems that in many social science areas the data quality is poor enough that adding non-linearities can be seen, a priori, to be a bad thing to do.
Except of course if there is a firm reason to add a particular non-linearity to the problem.
I'm not familiar with the whole spectrum of models (regression models, beta distributions, some conjugate prior distributions, and some machine learning techniques is about all I know), so I can't
confidently speak about the general case. But, extrapolating from what I've seen and known biases and incentives, I'm quite confident in predicting that generic models are much more likely to be
overfitted than to have too few degrees of freedom.
Replies from: Lumifer, othercriteria
↑ comment by Lumifer · 2014-07-31T14:46:23.092Z · LW(p) · GW(p)
I'm quite confident in predicting that generic models are much more likely to be overfitted than to have too few degrees of freedom.
Oh, I agree completely with that. However there are a bunch of forces which make it so starting with the publication bias. Restricting the allowed classes of models isn't going to fix the problem.
It's like observing that teenagers overuse makeup and deciding that a good way to deal with that would be to sell lipstick only in three colors -- black, brown, and red. Not only it's not a solution,
it's not even wrong :-/
the data quality is poor enough that adding non-linearities can be seen, a priori, to be a bad thing to do.
Why do you believe that a straight-line fit should be the a priori default instead of e.g. a log or a power-law line fit?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-07-31T15:17:28.146Z · LW(p) · GW(p)
Restricting the allowed classes of model isn't going to fix the problem.
I disagree; it would help at the very least. I would require linear models only, unless a) there is a justification for non-linear terms or b) there is enough data that the result is still
significant even if we inserted all the degrees of freedom that the degree of non-linearities would allow.
Why do you believe that a straight-line fit should be the a priori default instead of e.g. a log or a power-law line fit?
In most cases I've seen in the social science, the direction of the effect is of paramount importance, the other factor less so. It would probably be perfectly fine to restrict to only linear, only
log, or only power-law; it's the mixing of different approaches that explodes the degrees of freedom. And in practice letting people have one or the other just allows them to test all three before
reporting the best fit. So I'd say pick one class and stick with it.
Replies from: Lumifer
↑ comment by Lumifer · 2014-07-31T15:40:21.526Z · LW(p) · GW(p)
there is enough data that the result is still significant even if we inserted all the degrees of freedom that the degree of non-linearities would allow.
I think this translates to "Calculate the significance correctly" which I'm all for, linear models included :-)
Otherwise, I still think you're confused between the model class and the model complexity (= degrees of freedom), but we've set out our positions and it's fine that we continue to disagree.
↑ comment by othercriteria · 2014-07-31T14:40:07.182Z · LW(p) · GW(p)
I'm quite confident in predicting that generic models are much more likely to be overfitted than to have too few degrees of freedom.
It's easy to regularize estimation in a model class that's too rich for your data. You can't "unregularize" a model class that's restrictive enough not to contain an adequate approximation to the
truth of what you're modeling.
↑ comment by Lumifer · 2014-07-28T18:52:45.845Z · LW(p) · GW(p)
Because in many fields, linear models (even poor ones) are the best we're going to get, with more complex models losing to overfitting.
That's privileging a particular class of models just because they historically were easy to calculate.
If you're concerned about overfitting you need to be careful with how many parameters are you using, but that does not translate into an automatic advantage of a linear model over, say, a log one.
The article you linked to goes to pre-(personal)computer times when dealing with non-linear models was often just impractical.
↑ comment by gwern · 2014-07-28T18:52:44.307Z · LW(p) · GW(p)
Because in many fields, linear models (even poor ones) are the best we're going to get, with more complex models losing to overfitting.
I don't think that's true. What fields show optimal performance from linear models where better predictions can't be gotten from other techniques like decision trees or neural nets or ensembles of all of the above?
Showing that crude linear models, with no form of regularization or priors, beat human clinical judgement doesn't show your previous claim.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-07-29T09:42:16.770Z · LW(p) · GW(p)
Modelling human clinical judgement is best done with linear models, for instance.
Replies from: gwern
↑ comment by gwern · 2014-07-29T16:07:32.754Z · LW(p) · GW(p)
Best done? Better than, say, decision trees or expert systems or Bayesian belief networks? Citation needed.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-07-29T17:20:12.528Z · LW(p) · GW(p)
Goldberg, Lewis R. "Simple models or simple processes? Some research on clinical judgments." American Psychologist 23.7 (1968): 483.
Replies from: gwern
↑ comment by gwern · 2014-07-29T17:40:18.815Z · LW(p) · GW(p)
1968? Seriously?
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-07-30T09:40:13.632Z · LW(p) · GW(p)
Well there's Goldberg, Lewis R. "Five models of clinical judgment: An empirical comparison between linear and nonlinear representations of the human inference process." Organizational Behavior and
Human Performance 6.4 (1971): 458-479.
The main thing is that these old papers seem to still be considered valid, see eg Shanteau, James. "How much information does an expert use? Is it relevant?." Acta Psychologica 81.1 (1992): 75-86.
Replies from: gwern
↑ comment by gwern · 2014-08-05T20:38:53.311Z · LW(p) · GW(p)
(It would be nice if you would link fulltext instead of providing citations; if you don't have access to the fulltext, it's a bad idea to cite it, and if you do, you should provide it for other
people who are trying to evaluate your claims and whether the paper is relevant or wrong.)
I've put up the first paper at https://dl.dropboxusercontent.com/u/85192141/1971-goldberg.pdf / https://pdf.yt/d/Ux7RZXbo0n374dUU I don't think this is particularly relevant: it only shows that 2
very specific equations (pg4, #3 & #4) did not outperform the linear model on a particular dataset. Too bad for Einhorn 1971.
Your second paper doesn't support the claims:
A third possibility is that incorrect methods were used to measure the amount of information in experts’ judgments; use of the “correct” measurement method might support the Information-Use
Hypothesis. In the studies reported here, four techniques were used to measure information use: protocol analysis, multiple regression analysis, analysis of variance, and self-ratings by judges.
Despite differences in measurement methods, comparable results were reported. Other methodological issues might be raised, but the studies seem varied enough to rule out any artifactual
These aren't very good methods for extracting the full measure of information.
So to summarize: reality isn't entirely linear, so nonlinear methods frequently excel with modern developments to regularize and avoid overfitting (we can see this in the low prevalence of linear
methods in demanding AI tasks like image recognition, or more generally, competitions like Kaggle on all sorts of domains); to the extent that humans are good predictors and classifiers too of
reality, their predictions/classifications will be better mimicked by nonlinear methods; research showing the contrary typically does not compare very good methods and much more recent research may
do much better (for example, parole/recidivism predictions by parole boards may be bad and easily improved on by linear models, but does that mean algorithms can't do even better?), and to the extent
linear methods succeed, it may reflect the lack of relevant data or inherent randomness of results for a particular cherrypicked task.
To show your original claim ("in many fields, linear models (even poor ones) are the best we're going to get, with more complex models losing to overfitting"), I would want to see linear models
steadily beat all comers, from random forests to deep neural networks to ensembles of all of the above, on a wide variety of large datasets. I don't think you can show that.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2014-08-06T11:28:11.348Z · LW(p) · GW(p)
I tend to agree with you about models, once overfitting is sorted.
to the extent that humans are good predictors and classifiers too of reality, their predictions/classifications will be better mimicked by nonlinear methods
This I've still seen no evidence for.
comment by Fyrius · 2014-08-20T17:39:26.400Z · LW(p) · GW(p)
Interesting read! That makes sense.
One little side note, though.
So, ceritus paribus,
Did you mean ceteris paribus?
(Ha, finally a chance for me as a language geek to contribute something to all the math talk. :P )
comment by Unnamed · 2014-08-02T01:26:57.363Z · LW(p) · GW(p)
I'd say that this is regression to the mean. If two variables are correlated with |r| < 1, then extreme values on one variable will be associated with somewhat less extreme values on the other
variable. So people who are +4 SD in height will tend to be less than +4 SD in basketball ability, and people who are +4 SD in basketball ability will tend to be less than +4 SD in height.
comment by MehmetKoseoglu · 2014-08-03T11:26:14.109Z · LW(p) · GW(p)
Thank you for pointing out that the high IQ problem is probably a statistical effect rather than a "too much of a good thing" effect. That was very interesting.
Let me attempt the problem from a simple mathematical point of view.
Let basketball playing ability, Z, be just the sum of height, X, and agility, Y. Both X and Y are Gaussian distributed with mean 0 and variance 1. Assume X and Y are independent.
So, if we know that Z>4, what is the most probable combination of X and Y?
The probability of X>2 and Y>2 is: P(X>2)P(Y>2)=5.2e-4
The probability of X>3 and Y>1 is: P(X>3)P(Y>1)=2.1e-4
So it is more than two times more likely for both abilities to be +2Std than for one of them to be +3Std and the other +1Std.
I think it can be shown rigorously that the most probable combination is Z/N for each component if there are N independent identically distributed components of an ability.
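Those two tail probabilities are easy to confirm directly, for example in R:
pnorm(2, lower.tail = FALSE)^2                               # P(X>2) P(Y>2), about 5.2e-4
pnorm(3, lower.tail = FALSE) * pnorm(1, lower.tail = FALSE)  # P(X>3) P(Y>1), about 2.1e-4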
comment by Douglas_Knight · 2014-08-03T00:32:57.911Z · LW(p) · GW(p)
Given a correlation, the envelope of the distribution should form some sort of ellipse
That isn't an explanation, but a stronger claim. Why should it form an ellipse?
A model of an independent factor or noise is an explanation of the ellipse, and thus of the main point. But people may find this middle section a stumbling block, with its assertion that we should
expect ellipses. Also, regression to the mean and the tails coming apart are much more general than ellipses, but ellipses are pretty common.
It generally vindicates worries about regression to the mean
It is regression to the mean, as you yourself say elsewhere. I'm not sure what you are trying to say here; maybe that people's vague worries about regression to the mean are using the technical
concept correctly?
Replies from: nshepperd
↑ comment by nshepperd · 2014-08-03T01:46:28.665Z · LW(p) · GW(p)
Why should it form an ellipse?
Multivariate CLT perhaps? The precondition seems like it might be a bit less common than the regular central limit theorem, but still plausible, if you assume x and y are correlated by being affected
by a third factor, z, which controls the terms that sum together to make x and y.
Once you have a multivariate normal distribution, you're good, since they always have (hyper-)elliptical envelopes.
comment by [deleted] · 2014-07-27T16:23:43.259Z · LW(p) · GW(p)
Isn't the far simpler and more likely scenario that you never have just one variable accounting for all of an outcome? If other variables are not perfectly correlated with the variable you are
graphing you will get noise. Why is it surprising that that noise also exists in the most extreme points?
EDIT: misunderstood last few paragraphs.
comment by MakerOfErrors · 2018-12-01T16:35:42.271Z · LW(p) · GW(p)
This post has been a core part of how I think about Goodhart's Law. However, when I went to search for it just now, I couldn't find it, because I was using Goodhart's Law as a search term, but it
doesn't appear anywhere in the text or in the comments.
So, I thought I'd mention the connection, to make this post easier for my future self and others to find. Also, other forms of this include:
Maybe it would be useful to map out as many of the forms of Goodhart's Law as possible, Turchin style.
comment by moridinamael · 2014-07-28T20:11:06.462Z · LW(p) · GW(p)
Following on your Toy Model concept, let's say the important factors in being (for example) a successful entrepreneur are Personality, Intelligence, Physical Health, and Luck.
If a given person has excellent (+3SD) in all but one of the categories, but only average or poor in the final category, they're probably not going to succeed. Poor health, or bad luck, or bad people
skills, or lack of intelligence can keep an entrepreneur at mediocrity for their productive career.
Really any competitive venue can be subject to this analysis. What are the important skills? Does it make sense to treat them as semi-independent, and semi-multiplicative in arriving at the final outcome?
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2014-08-02T03:05:48.249Z · LW(p) · GW(p)
It might give a useful heuristic in fields where success is strongly multifactorial - if you aren't at least doing well at each sub-factor, don't bother entering. It might not work so well when
there's a case that success almost wholly loads on one factor and there might be more 'thresholds' for others (e.g. to do theoretical physics, you basically need to be extremely clever, but also
sufficiently mentally healthy and able to communicate with others).
I'm interested in the distribution of human ability into the extreme range, and I plan to write more on it. My current (very tentative) model is that the factors are commonly additive, not
multiplicative. A proof for this is alas too long for this combox to contain, etc. etc. ;)
comment by b1shop · 2014-08-05T14:41:01.477Z · LW(p) · GW(p)
Statistical point: the variance of forecast error for correctly specified simple regression problems is equal to:
Sigma^2(1 + 1/N + (x_o - x_mean)^2 / (Sigma ( x_i - x_mean) ^2))
So forecast error increases as x_o moves away from x_mean, especially when the variance of x is low by comparison.
Edit: Sub notation was apparently indenting things. I'm going to take a picture from my stats book tonight. Should be more readable.
Edit: Here's a more readable link. http://i.imgur.com/pu8lg0Wh.jpg
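A quick way to see the effect in R (simulated data with arbitrary coefficients; the point is that the prediction interval widens as x_0 moves away from the mean of x):
set.seed(1)
x <- rnorm(50)
y <- 2 * x + rnorm(50)
fit <- lm(y ~ x)
newx <- data.frame(x = c(0, 2, 4))           # near mean(x), and progressively farther from it
predict(fit, newx, interval = "prediction")  # the lwr/upr bounds spread out with distance from mean(x)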
Replies from: Douglas_Knight, Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2015-05-18T21:24:00.423Z · LW(p) · GW(p)
To get the image
$\textit{var}(f) = \sigma ^ 2 \left ( 1+\frac{1}{N} + \frac{(x_0 - \bar{x})^2}{\sum (x_i-\bar{x})^2} \right )$
use the following code in your comment:
See Comment formatting/Using LaTeX to render mathematics on the wiki for more details. I've used codecogs editor and fixed an issue in the URL manually; there are other options listed on the wiki.
The LaTeX code for the codecogs editor is this:
\textit{var}(f) = \sigma ^ 2 \left ( 1+\frac{1}{N} + \frac{(x_0 - \bar{x})^2}{\sum (x_i-\bar{x})^2} \right )
comment by drethelin · 2014-07-28T16:03:18.456Z · LW(p) · GW(p)
For business in particular I think network size and effects are the reason that the very top end of earners are much more deviant in earnings than in intellect. The fact that you can capture entire
billions of dollars markets because modern society allows a single product to be distributed worldwide will multiply the value of the "top" product by a lot more than its quality might justify.
comment by byrnema · 2014-07-27T09:42:41.792Z · LW(p) · GW(p)
Interesting post. Well thought out, with an original angle.
In the direction of constructive feedback, consider that the concept of sample size -- while it seems to help with the heuristic explanation -- likely just muddies the water. (We'd still have the
effect even if there were plenty of points at all values.)
For example, suppose there were so many people with extreme height that some of them also had extreme agility (with infinite sample size, we'd even reliably have that the best players were also the
tallest.) So: some of the tallest people are also the best basketball players. However, as you argued, most of the tallest won't be the most agile also, so most of the tallest are not the best
(contrary to what would be predicted by their height alone).
In contrast, if average height correlates with average basketball ability, the other necessary condition for a basketball player with average height to have average ability is to have average agility
-- but this is easy to satisfy. So most people with average height fit the prediction of average ability.
Likewise, the shortest people aren't likely to have the lowest agility, so the correlation prediction fails at that tail too.
Some of the 'math' is that it is easy to be average in all variables ( say, (.65)^n where n is the number of variables) but the probability of being standard deviations extreme in all variables is
hard (say, (.05)^n to be in the top 5 percent.) Other math can be used to find the theoretic shape for these assumptions (e. g., is it an ellipse?).
Replies from: philh
↑ comment by philh · 2014-07-28T12:25:20.245Z · LW(p) · GW(p)
We'd still have the effect even if there were plenty of points at all values.
Are you talking about relative sample sizes, or absolute? The effect requires that as you go from +4sd to +3sd to +2sd, your population increases sufficiently fast. As long as that holds, it doesn't
go away if the total population grows. (But that's because if you get lots of points at +4sd, then you have a smaller number at +5sd. So you don't have "plenty of points at all values".)
If you have equal numbers at +4 and +3 and +2, then most of the +4 still may not be the best, but the best is likely to be +4.
(Warning: I did not actually do the math.)
Replies from: byrnema
↑ comment by byrnema · 2014-07-30T15:55:06.635Z · LW(p) · GW(p)
I don't believe we disagree on anything. For example, I agree with this:
If you have equal numbers at +4 and +3 and +2, then most of the +4 still may not be the best, but the best is likely to be +4.
Are you talking about relative sample sizes, or absolute?
By 'plenty of points'... I was imagining that we are taking a finite sample from a theoretically infinite population. A person decides on a density that represents 'plenty of points' and then keeps
adding to the sample until they have that density up to a certain specified sd.
comment by Vasco Grilo (vascoamaralgrilo) · 2023-07-11T10:25:50.122Z · LW(p) · GW(p)
Great post!
The R-square measure of correlation between two sets of data is the same as the cosine of the angle between them when presented as vectors in N-dimensional space
Not R-square, just R:
comment by qbolec · 2022-12-28T19:24:57.827Z · LW(p) · GW(p)
I've made a visualization tool for that:
It generates an elliptical cloud of white points where X is distributed normally, and Y=normal + X*0.3, so the two are correlated. Then you can define a green range on X and Y axis, and the tool
computes the correlation in a sample (red points) restricted to that (green) range.
So, the correlation in the general population (white points) should be positive (~0.29). But if I restrict attention to upper right corner, then it is much lower, and often negative.
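A rough R analogue of that setup (the 0.3 coefficient follows the description above; the sample size and the quantile cut-offs standing in for the "green range" are arbitrary):
set.seed(1)
x <- rnorm(10000)
y <- rnorm(10000) + 0.3 * x
cor(x, y)                                        # overall correlation, around 0.29
corner <- x > quantile(x, 0.9) & y > quantile(y, 0.9)
cor(x[corner], y[corner])                        # restricted to the upper-right corner: much weaker, often near zero or negative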
comment by rebellionkid · 2014-08-07T07:52:04.702Z · LW(p) · GW(p)
Fantastic, I wish I'd had this back when almost everyone in LW/EA circles I met was reading the biography of everyone in the 'Fortune 400' and trying to spot the common factors. A surprisingly common
strategy that's likely not to work for exactly these reasons.
comment by algekalipso · 2014-08-05T06:28:01.711Z · LW(p) · GW(p)
My guess is that there are several variables that are indeed positively correlated throughout the entire range, but are particularly highly correlated at the very top. Why not? I'm pretty sure we can
come up with a list.
comment by ChristianKl · 2014-07-27T00:35:14.838Z · LW(p) · GW(p)
What is interesting is the strength of these relationships appear to deteriorate as you advance far along the right tail.
I read that claim as saying that if you sample the 45% to 55% percentile you will get a stronger correlation than if you sample the 90% to 100% percentile. Is that what you are arguing?
Replies from: Thrasymachus, Luke_A_Somers
↑ comment by Thrasymachus · 2014-08-02T03:07:33.096Z · LW(p) · GW(p)
This was badly written, especially as it offers confusion with range restriction. Sorry! I should just have said "what is interesting is that extreme values of the predictors seldom pick
out the most extreme outcomes".
Replies from: ChristianKl
↑ comment by Luke_A_Somers · 2014-07-27T02:00:37.225Z · LW(p) · GW(p)
45% to 55% of what measure? Part of the point of this is that how you cut your sample will change these things.
If you take it as 45% to 55% of one of the other contributing factors, then the correlation should be much stronger!
Replies from: ChristianKl
comment by 110phil · 2014-08-02T00:02:24.472Z · LW(p) · GW(p)
I don't think there's anything special about the tails.
Take a sheet of paper, and cover up the left 9/10 of the high-correlation graph. That leaves the right tail of the X variable. The remaining datapoints have a much less linear shape.
But: take two sheets of paper, and cover up (say) the left 4/10, and the right 5/10. You get the same shape left over! It has nothing to do with the tail -- it just has to do with compressing the
range of X values.
The correlation, roughly speaking, tells you what percentage of the variation is not caused by random error. When you compress the X, you compress the "real" variation, but leave the "error"
variation as is. So the correlation drops.
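A small R illustration of that point (simulated data with an arbitrary correlation; the cut points are arbitrary too):
set.seed(1)
x <- rnorm(100000)
y <- 0.9 * x + sqrt(1 - 0.9^2) * rnorm(100000)   # overall r is about 0.9
cor(x, y)
right <- x > quantile(x, 0.9)                    # keep only the right tail of x
cor(x[right], y[right])                          # the correlation drops
mid <- x > quantile(x, 0.4) & x < quantile(x, 0.5)
cor(x[mid], y[mid])                              # and it drops in a middle band as well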
Replies from: Thrasymachus
↑ comment by Thrasymachus · 2014-08-02T03:01:45.880Z · LW(p) · GW(p)
I agree that range restriction is important, and I think a range-restriction story can become basically isomorphic to my post (e.g. "even if something is really strongly correlated, range restricting
to the top 1% of this distribution, this correlation is lost in the noise, so it should not surprise us that the biggest X isn't the biggest Y.")
My post might be slightly better for people who tend to visualize things, and I suppose it might have a slight advantage as it might provide an explanation why you are more likely to see this as the
number of observations increases, which isn't so obvious when talking about a loss of correlation.
Replies from: AnneOminous
↑ comment by AnneOminous · 2014-09-17T04:56:53.205Z · LW(p) · GW(p)
"At the extremes, other factors may weigh more."
Nothing that hasn't been said before, and in my opinion better.
I don't particularly like your "ellipse" generalization, either, because it's just wrong. We already know a perfect correlation would be linear. We already know a lesser correlation is "fatter".
Bringing ellipses into the issue is just an intuitive, illustrative fiction, which I really don't appreciate very much because it's not particularly informative and it isn't scientifically sound at all.
Please don't misunderstand me: I do think it is illustrative, and I do think it has its place. In the newby section maybe.
Understand, I am aware that may come across as overly harsh, but it isn't meant that way. I'm not trying to be impolite. It's just my opinion and I honestly don't know a better way to express it
right now without being dishonest.
Replies from: Lumifer, Richard_Kennaway
↑ comment by Lumifer · 2014-09-17T14:39:06.793Z · LW(p) · GW(p)
I don't particularly like your "ellipse" generalization, either, because it's just wrong. ... Bringing ellipses into the issue is just an intuitive, illustrative fiction, which I really don't
appreciate very much because it's not particularly informative and it isn't scientifically sound at all.
I think you're mistaken about that. An ellipse is the shape of a multivariate normal distribution, for example. In fact, there is the entire family of elliptical distributions which are, to quote
Wikipedia, "a broad family of probability distributions that generalize the multivariate normal distribution. Intuitively, in the simplified two and three dimensional case, the joint distribution
forms an ellipse and an ellipsoid, respectively, in iso-density plots."
a perfect correlation would be linear
That's a meaningless phrase, correlation is linear by definition. Moreover, it's a particular measure of dependency which can be misleading.
↑ comment by Richard_Kennaway · 2014-09-17T11:08:21.704Z · LW(p) · GW(p)
It's just my opinion and I honestly don't know a better way to express it right now without being dishonest.
A better way would be to make the criticisms more concrete. What does "not particularly informative and it isn't scientifically sound at all" mean? You might, for example, have said something to the
effect that the ellipses are contours of the bivariate normal distribution with the same correlation, and pointed out that not all bivariate distributions are normal. But on the other hand the
scatterplots presented aren't so far away from normal that the ellipses are misleading. The ellipses are indeed intuitive and illustrative; but calling them "just fiction" is another way of
expressing criticism too vague to respond to. The point masses and frictionless pulleys of school physics problems are also fictions, but none the worse for that.
This is also vague:
Nothing that hasn't been said before, and in my opinion better.
(Where, and what did they say? We cannot know what better resources you know of unless you tell us.)
And this:
I do think it is illustrative, and I do think it has its place. In the newby section maybe.
There is no "newby section" on LessWrong.
Besides, you're talking there about something you previously called "just wrong". First it's "just wrong", then it's "not particularly informative", then it's "illustrative", then "it has its place
in the newby section". It reminds me of the old adage about the stages of truth, with the entire sequence here compressed into a single comment.
Replies from: AnneOminous
↑ comment by AnneOminous · 2014-10-17T21:30:31.573Z · LW(p) · GW(p)
A better way would be to make the criticisms more concrete.
What isn't "concrete" about it? I think the whole article is an exercise in stating the obvious, to those who have had basic education in statistics. Stricter correlations tend to be more linear. A
broader spectrum of data points is pretty much by definition "fatter". I don't see how this is actually very instructive. And to be honest, I don't see how I could be much more specific.
Where, and what did they say? We cannot know what better resources you know of unless you tell us.
You mean you've never had a statistics class? Honestly? I'm not trying to be snide, just asking.
Extreme data points are often called "outliers" for a reason. Since (again, almost -- but not quite -- by definition, it depends on circumstances) they do not generally show as strong a correlation,
"other factors may weigh more". This is a not a revelation. I don't disagree with it, I'm simply saying it's rather elementary logic.
Which brings us back to the main point I was making: I did not feel this was particularly instructive.
Besides, you're talking there about something you previously called "just wrong".
Wrong in the sense that I don't see any actual demonstrated relationship between his ellipses and the data, except for simple, rather intuitive observation. It's merely an illustrative tool. More
So this offers an explanation why divergence at the tails is ubiquitous. Providing the sample size is largeish, and the correlation not to tight (the tighter the correlation, the larger the
sample size required), one will observe the ellipses with the bulging sides of the distribution (2).
This is an incorrect statement. What he is offering is a way to describe how data at the extreme ends may vary from correlation. Not "why". There is nothing here establishing causation.
If we are to be "less wrong", then we should endeavor to not make confused comments like that. | {"url":"https://lw2.issarice.com/posts/dC7mP5nSwvpL65Qu5/why-the-tails-come-apart","timestamp":"2024-11-07T22:44:21Z","content_type":"text/html","content_length":"224373","record_id":"<urn:uuid:8d62bc42-cf0b-434e-811a-c2ddcb69d99e>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00825.warc.gz"} |
Markov logic networks provide a simple way for combining two seemingly rather different descriptions of data - first order logic and probability. While logical systems are a compact way to represent
knowledge, they are too rigid for many real world applications. When trying to describe a dataset through logical relations between different features, a single data point violating them is enough to
render the description incorrect. At the same time, it is not obvious how logical relations could be incorporated into a more flexible probabilistic framework. The idea behind Markov logic networks
is to use a first order knowledge base for creating a probabilistic model in the form of a Markov network, thereby benefiting from the flexibility of probabilistic descriptions and the usefulness of
rules. We will begin the talk with a review of Markov networks and first order logic. We will then discuss the motivation behind Markov logic networks, their construction and an overview of different
techniques for learning and inference, followed by highlighting possible applications of the theory. | {"url":"https://transferlab.ai/seminar/2020/markov-logic-networks-construction-learning-and-inference/","timestamp":"2024-11-13T14:43:42Z","content_type":"text/html","content_length":"12783","record_id":"<urn:uuid:419bfdff-f1cf-46c0-a9d6-907a0bc4f0c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00253.warc.gz"} |
Some main topics of my studies of mathematics and computer science were algebra, homological algebra, differential equations, mathematical physics, programming (Pascal, Ada and Java), software
engineering and artificial intelligence. A graphics related to algebraic topology - the singular chain complex and its homology:
CorelDRAW-file - GIF-file
My diploma thesis in the area of quantum statistical mechanics and functional analysis with Prof. K.-H. Fichtner / Universität Jena (in German):
Groups of unitary operators and complete orthonormal systems
- A teleportation model in spaces of qubits -
The medium of teleportation is an entangled state: A coherent laser beam (or one single particle - with photons, teleportation experiments were successful) is split into two parts, e.g. by a
partly translucent mirror. Then the two parts remain interconnected to each other; a state measurement of one half beam immediately determines the measurement of the second. The speed of light is not
relevant to this effect! (But for information transfer by teleportation, you need a supplementary message transmitted at no more than 300,000 km/s.)
With "Canon 1 a 2 cancrizans" of J. S. Bach's Musical Offering, I found an analogy to this remarkable entangled state: The main theme and its contrapunctus are mirrored in the middle of the canon and
running backwards. So if Bach had changed one note anywhere, he would in the same thought have been forced to change its mirror image. See the notes and listen to the music - repeat it not
infinitely, but as often as you want! (A difference: the splitting takes place at the beginning of the physical entangled state.)
After my diploma I kept contact with the university of Jena. I concentrated on algebra studies - (reflection) groups and their representations, rings, modules, (Hecke) algebras, separability.
Thereafter I wanted to learn more about applications: machine learning, data mining, algorithms of pattern recognition and AI, numerics and statistics.
Finally I rediscovered formal concept analysis, where clear and beautiful mathematics are connected to rich applications: algebraic structures of complete lattices are to be discovered in questions
from knowledge representation, data mining, software engineering or semantic web - as well as in bioinformatics / systems biology. There my current research project is the modelling of gene
regulatory networks.
back to the homepage site map | {"url":"http://jwollbold.de/mathematics.htm","timestamp":"2024-11-09T17:34:50Z","content_type":"text/html","content_length":"4463","record_id":"<urn:uuid:85408ca9-273a-4ae5-abf9-a6b764728c50>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00511.warc.gz"} |
Represents an arbitrarily-shaped neighborhood (filter support) in an arbitrary number of dimensions.
The PixelTable is an array of pixel runs, where each run is encoded by start coordinates and a length (number of pixels). The runs all go along the same dimension, given by the processing dimension.
It is simple to create a pixel table for unit circles (spheres) in different norms, and for straight lines. And any other shape can be created through a binary image.
The processing dimension defines the dimension along which the pixel runs are taken. By default it is dimension 0, but it could be beneficial to set it to the dimension in which there would be fewer runs.
Two ways can be used to walk through the pixel table:
1. dip::PixelTable::Runs returns a std::vector with all the runs, which are encoded by the coordinates of the first pixel and a run length. Visiting each run is an efficient way to process the whole
neighborhood. For example, the filter dip::Uniform, which computes the average over all pixels within the neighborhood, only needs to subtract the pixels on the start of each run, shift the
neighborhood by one pixel, then add the pixels on the end of each run. See the example in the section Applying an arbitrary neighborhood filter.
2. dip::PixelTable::begin returns an iterator to the first pixel in the table, incrementing the iterator successively visits each of the pixels in the run. Dereferencing this iterator yields the
offset to a neighbor pixel. This makes for a simple way to visit every single pixel within the neighborhood.
The pixel table can optionally contain a weight for each pixel. These can be accessed only by retrieving the array containing all weights. This array is meant to be used by taking its begin iterator,
and using that iterator in conjunction with the pixel table’s iterator. Taken together, they provide both the location and the weight of each pixel in the neighborhood. For example, modified from
the function dip::GeneralConvolution:
sfloat* in = ... // pointer to the current pixel in the input image
sfloat* out = ... // pointer to the current pixel in the output image
sfloat sum = 0;
auto ito = pixelTable.begin(); // pixelTable is our neighborhood
auto itw = pixelTable.Weights().begin();
while( !ito.IsAtEnd() ) {
    sum += in[ *ito ] * static_cast< sfloat >( *itw );
    ++ito;   // advance the offset iterator and the weight iterator in lockstep
    ++itw;
}
*out = sum;
struct PixelRun
The pixel table is formed of pixel runs, represented by this structure.
class iterator
An iterator that visits each of the neighborhood’s pixels in turn.
auto Runs() const -> std::vector<PixelRun> const&
Returns the vector of runs.
auto Dimensionality() const -> dip::uint
Returns the dimensionality of the neighborhood.
auto Sizes() const -> dip::UnsignedArray const&
Returns the size of the bounding box of the neighborhood.
auto Origin() const -> dip::IntegerArray const&
Returns the coordinates of the top-left corner of the bounding box w.r.t. the origin.
auto Boundary() const -> dip::UnsignedArray
Returns the size of the boundary extension along each dimension that is necessary to accommodate the neighborhood on the edge pixels of the image.
void ShiftOrigin(dip::IntegerArray const& shift)
Shifts the origin of the neighborhood by the given amount.
void MirrorOrigin()
Shifts the origin of the neighborhood by one pixel to the left for even-sized dimensions. This is useful for neighborhoods with their origin in the default location that have been mirrored.
void Mirror()
Mirrors the neighborhood.
auto NumberOfPixels() const -> dip::uint
Returns the number of pixels in the neighborhood.
auto ProcessingDimension() const -> dip::uint
Returns the processing dimension, the dimension along which pixel runs are laid out.
auto begin() const -> dip::PixelTable::iterator
A const iterator to the first pixel in the neighborhood.
auto end() const -> dip::PixelTable::iterator
A const iterator to one past the last pixel in the neighborhood.
auto AsImage() const -> dip::Image
Creates a binary image representing the neighborhood, or a dfloat one if there are weights associated.
void AsImage(dip::Image& out) const
Same as previous overload, but writing into the given image.
auto Prepare(dip::Image const& image) const -> dip::PixelTableOffsets
Prepare the pixel table to be applied to a specific image.
void AddWeights(dip::Image const& image)
Add weights to each pixel in the neighborhood, taken from an image. The image must be of the same sizes as the PixelTable’s bounding box (i.e. the image used to construct the pixel table),
scalar, and not binary (i.e. integer, float or complex).
void AddDistanceToOriginAsWeights()
Add weights to each pixel in the neighborhood, using the Euclidean distance to the origin as the weight. This is useful for algorithms that need to, for example, sort the pixels in the
neighborhood by distance to the origin.
auto HasWeights() const -> bool
Tests if there are weights associated to each pixel in the neighborhood.
auto WeightsAreComplex() const -> bool
Tests if the weights associated to each pixel, if any, are complex-valued.
auto Weights() const -> std::vector<dfloat> const&
Returns a const reference to the weights array.
Class documentation
The pixel table is formed of pixel runs, represented by this structure.
dip::IntegerArray coordinates The coordinates of the first pixel in a run, w.r.t. the origin.
dip::uint length The length of the run, expected to always be larger than 0.
Function documentation
Construct a pixel table for default filter shapes.
The known default shapes are "rectangular", "elliptic", and "diamond", which correspond to a unit circle in the L^∞, L^2 and L^1 norms; and "line", which is a single-pixel thick line.
The size array determines the size and dimensionality. For unit circles, it gives the diameter of the neighborhood (not the radius!); the neighborhood contains all pixels at a distance equal or
smaller than half the diameter from the origin. This means that non-integer sizes can be meaningful. The exception is for the "rectangular" shape, where the sizes are rounded down to the nearest
integer, yielding rectangle sides that are either even or odd in length. For even sizes, one can imagine that the origin is shifted by half a pixel to accommodate the requested size (though the
origin is set to the pixel that is right of the center). For the "diamond" and "elliptic" shapes, the bounding box always has odd sizes, and the origin is always centered on one pixel. To accomplish
the same for the “rectangular” shape, simply round the sizes array to an odd integer:
size[ ii ] = std::floor( size[ ii ] / 2 ) * 2 + 1
For the line, the size array gives the size of the bounding box (rounded to the nearest integer), as well as the direction of the line. A negative value for one dimension means that the line runs
from high to low along that dimension. The line will always run from one corner of the bounding box to the opposite corner, and run through the origin.
procDim indicates the processing dimension.
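As a usage sketch of the constructor just described (this snippet is not part of the library documentation; the header names and the way the size array is built are assumptions), a two-dimensional elliptic neighborhood could be constructed and inspected with the methods listed above:
#include "diplib.h"              // assumed umbrella header providing dip::Image, dip::FloatArray, dip::uint
#include "diplib/pixel_table.h"  // assumed header declaring dip::PixelTable
void Example() {
   // Elliptic neighborhood with a diameter of 7 pixels in each of two dimensions,
   // with pixel runs laid out along dimension 0.
   dip::FloatArray sizes{ 7.0, 7.0 };
   dip::PixelTable pt( "elliptic", sizes, 0 );
   dip::uint n = pt.NumberOfPixels();         // number of pixels in the neighborhood
   dip::uint dim = pt.ProcessingDimension();  // dimension along which runs are laid out
   dip::Image shape = pt.AsImage();           // binary image depicting the neighborhood
   ( void )n; ( void )dim; ( void )shape;     // silence unused-variable warnings in this sketch
}
The resulting object could then be passed to Prepare to obtain a dip::PixelTableOffsets with offsets tailored to a specific image, as described further below.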
Construct a pixel table for an arbitrary shape defined by a binary image.
Set pixels in mask indicate pixels that belong to the neighborhood.
origin gives the coordinates of the pixel in the image that will be placed at the origin (i.e. have coordinates {0,0,0}). If origin is an empty array, the origin is set to the middle pixel, as given
by mask.Sizes() / 2. That is, for odd-sized dimensions, the origin is the exact middle pixel, and for even-sized dimensions the origin is the pixel to the right of the exact middle.
procDim indicates the processing dimension.
Prepare the pixel table to be applied to a specific image.
The resulting object is identical to this, but has knowledge of the image’s strides and thus directly gives offsets rather than coordinates to the neighbors. | {"url":"https://diplib.org/diplib-docs/dip-PixelTable.html","timestamp":"2024-11-13T07:37:48Z","content_type":"text/html","content_length":"32695","record_id":"<urn:uuid:d795526b-373d-4bfa-8d48-eb9aaf0be2ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00223.warc.gz"} |
Understanding Momentum Units: Kg·m/s - A Comprehensive Guide for Physics and Engineering
The unit of measurement for momentum is the kilogram meter per second (kg·m/s), which encompasses the concepts of mass (kilogram), length (meter), and time (second). This SI unit describes the
momentum of an object as the product of its mass and velocity. Momentum is a fundamental quantity in physics, representing the motion of an object and its resistance to changes in motion.
Understanding the unit of momentum is crucial for accurate calculations and analysis in scientific and engineering applications.
Momentum: The Driving Force of Motion
In the realm of physics, momentum reigns supreme as a fundamental concept that governs the behavior of moving objects. It is the quantity of motion an object carries, and it keeps the object moving as it is until an opposing force acts upon it.
Momentum is defined as the product of an object’s mass and its velocity. Mass, the resistance of an object to acceleration, determines how much force is required to set it in motion. Velocity, on the
other hand, describes the rate at which an object moves, encompassing both speed and direction.
By combining these two factors, momentum becomes a measure of how difficult it is to stop a moving object. The greater the momentum, the more force is required to bring it to a halt. This is why a
speeding car has a higher momentum than a slow-moving bicycle, making it much harder to stop.
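As a rough numerical illustration (the figures here are invented for the example rather than taken from measurements): momentum is p = m × v, so a 1,200 kg car travelling at 20 m/s carries p = 1,200 kg × 20 m/s = 24,000 kg·m/s, while a 90 kg bicycle-plus-rider moving at 5 m/s carries only 90 kg × 5 m/s = 450 kg·m/s. Stopping the car means removing roughly fifty times as much momentum as stopping the bicycle.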
Significance of Momentum
Momentum is not merely a theoretical concept; it has profound implications in understanding object motion. In the absence of external forces, momentum is conserved. This means that the total momentum
of a closed system remains constant. When two or more objects interact, their momentum is exchanged, but the overall momentum of the system remains unchanged.
This principle is essential in understanding collisions and other dynamic interactions. By analyzing the momentum of objects before and after a collision, scientists can determine the forces involved
and predict the outcomes. Momentum also plays a crucial role in rocketry, where the expulsion of propellant generates momentum that propels the rocket forward.
Momentum is a fundamental concept in physics that provides a powerful tool for understanding object motion. Its importance extends far beyond the classroom, with applications in fields such as
engineering, rocketry, and collision analysis. By comprehending the concept of momentum, we gain a deeper appreciation for the forces that govern our world and the dynamic interactions that occur
within it.
Delving into the Cornerstones of Momentum: Impulse, Mass, and Velocity
In the realm of physics, momentum reigns supreme as a fundamental concept that underpins the motion of objects. To truly grasp this elusive force, we must venture into the interconnected web of
impulse, mass, and velocity.
Impulse: The Catalyst for Change
Imagine a force acting upon an object over a brief period, like the kick of a soccer ball. This fleeting interaction, known as impulse, sets the stage for a change in the object’s momentum. The
greater the force applied or the longer its duration, the more pronounced this change will be. Impulse serves as the spark that ignites momentum, causing objects to accelerate or decelerate.
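Written out, this relationship is the impulse-momentum theorem: impulse J = F × Δt = Δp = m × Δv. Doubling either the applied force or the time over which it acts doubles the resulting change in momentum.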
Mass: Steeling Against Acceleration
Mass, the intrinsic property of an object, reflects its innate reluctance to alter its motion. Think of a boulder atop a hill that stubbornly resists any attempt to budge. The greater the mass of an
object, the stronger its resistance to acceleration. Momentum, a vector quantity, accounts for both the mass and the velocity of an object.
Velocity: Mapping the Journey
Velocity, a vector that marries both speed and direction, offers a complete picture of an object’s motion. Speed, the magnitude of velocity, measures how rapidly an object travels, while direction
pinpoints the path it treads. Velocity, in conjunction with mass, determines the magnitude and orientation of momentum, providing a detailed account of an object’s journey through space.
By understanding the intricate interplay of impulse, mass, and velocity, we unlock the secrets of momentum. From the gentle push of a child’s tricycle to the thunderous collision of cosmic bodies,
momentum holds sway, dictating the dance of motion in our universe.
Understanding the Unit of Momentum: A Comprehensive Guide
Momentum, a fundamental concept in physics, describes the motion of an object and its potential to cause a change in the object’s environment. Its unit of measurement, the kilogram meter per second
(kg·m/s), encompasses three distinct physical quantities: mass, length, and time.
Mass: The Essence of Inertia
Mass, measured in kilograms (kg), represents the quantity of matter in an object. It determines an object’s resistance to acceleration. A more massive object will have a greater inertia, making it
more difficult to change its speed or direction of motion.
Length: The Measure of Spatial Extent
Length, measured in meters (m), describes the spatial extent or distance traveled by an object. It is a fundamental property used to calculate velocity and displacement. The greater the length an
object travels in a given time, the greater its velocity will be.
Time: The Fabric of Events
Time, measured in seconds (s), is the duration of an event or an interval between events. It is an essential parameter for describing the rate of change in an object’s motion. A shorter time interval
implies a higher rate of acceleration or deceleration.
The Interplay of Mass, Length, and Time
The unit of momentum, kg·m/s, reflects the inseparable relationship between mass, length, and time. It highlights that momentum is a vector quantity with both magnitude and direction. The magnitude
of momentum is directly proportional to the object’s mass and velocity, while the direction is determined by the object’s velocity vector.
Comprehending the unit of momentum is crucial for understanding the concept of momentum itself. The kilogram meter per second (kg·m/s) encapsulates the fundamental physical quantities of mass,
length, and time, providing a quantitative measure of an object’s motion. By grasping the significance of this unit, we gain a deeper appreciation of the interplay between mass, velocity, and time in
the realm of physics.
Components of the Unit of Momentum: Breaking Down the SI Unit
Momentum, a fundamental concept in physics, is the measure of an object’s movement and is calculated by multiplying its mass by its velocity. The unit of momentum in the International System of Units
(SI) is the kilogram meter per second (kg·m/s). Let’s delve deeper into the components of this unit:
Kilogram (kg): The Measure of Mass
The kilogram is the SI unit of mass, representing the quantity of matter in an object. It is a fundamental property that determines an object’s resistance to acceleration. In momentum calculations,
mass plays a crucial role as it reflects the amount of matter in motion.
Meter (m): Describing Distance Traveled
The meter is the SI unit of length, describing the spatial extent or distance traveled. In the context of momentum, it captures the distance an object moves per unit of time. This value is
particularly important when determining the velocity of an object, which influences its momentum.
Second (s): Measuring Time Intervals
The second is the SI unit of time, measuring the duration of an event or interval. In momentum calculations, time represents the duration over which an object’s momentum changes. This duration is
essential for determining the impulse applied to the object, which is force applied over a specific time period.
SI Unit: A Coherent and Consistent System
The kilogram meter per second (kg·m/s) as the SI unit of momentum is part of the coherent and consistent SI system. This system ensures that all units are derived from a set of base units, enabling
seamless conversions between different quantities. The SI system is widely adopted in scientific and engineering applications for its accuracy, precision, and international recognition.
International System of Units: The Language of Momentum
In the realm of physics, momentum stands as a pivotal concept, capturing the essence of an object’s mass in motion. To fully grasp this fundamental principle, it is crucial to understand its unit of
measurement – the kilogram meter per second (kg·m/s). This unit resides within the broader International System of Units (SI), a coherent and consistent system that serves as the backbone of
scientific and engineering endeavors worldwide.
The SI system is not merely a collection of units; it is a finely orchestrated framework that ensures the seamless interaction of these units. Each unit is carefully defined to relate to the others,
allowing for precise measurements and calculations across various disciplines. The kilogram meter per second, as the unit of momentum, is no exception.
At its core, the unit encompasses three fundamental quantities:
• Kilogram (kg): The unit of mass, representing the amount of matter an object contains.
• Meter (m): The unit of length, describing the spatial extent or distance traveled.
• Second (s): The unit of time, measuring the duration of an event or interval.
The interplay of these units within the kilogram meter per second unit elegantly captures the essence of an object's momentum. By multiplying mass by velocity (length traveled per unit of time), we obtain a measure of the object's motion and of how difficult it is to change that motion.
In the scientific community, the SI system holds immense significance. It provides a common language for scientists and engineers to communicate and collaborate effectively. The consistent use of
standardized units ensures that research findings and technological advancements can be seamlessly shared and compared across borders and disciplines.
Understanding the SI unit of momentum is not just an academic exercise; it is essential for comprehending the behavior of objects in motion. From the smallest of particles to the celestial bodies
that grace our night sky, the concept of momentum governs their every movement. By mastering this unit, we gain a deeper appreciation for the captivating dance of physics that unfolds around us.
Leave a Comment | {"url":"https://www.bootstrep.org/momentum-units-kgms/","timestamp":"2024-11-03T23:28:28Z","content_type":"text/html","content_length":"151295","record_id":"<urn:uuid:9ad49d6e-b33d-4ce3-9cda-3dd382f8eb9a>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00184.warc.gz"} |
Notation Visualizer
What is Infix, Prefix, and Postfix Notation?
Operators vs Operands
Operators are symbols like +, -, *, /, ^; anything that operates on operands.
Operands are values like 8 and 21; things to be operated on.
The Three Notations
When we say 9+10, that is infix notation; that is, the operator is between the operands. It's the one we're all familiar with.
Prefix notation, also known as Polish notation, is where the operator sits before the operands.
Postfix notation, also known as Reverse Polish Notation or RPN, is where the operator sits after the operands.
Why not stick with Infix notation?
Notice how in the second example, 8 - 2 * 3 + 7, we had to evaluate the multiplication before the subtraction and finally the addition? This is inconvenient for computers to evaluate as parentheses
need to be applied according to the order of operations. However, prefix and postfix notation don't suffer from this ambiguity and follow an easy and consistent set of instructions to evaluate,
making it very suitable for computers!
What's an Expression Tree?
Expression Trees are binary trees whose parent nodes are operators and whose child nodes are the operands on which those operators execute. Refer to the Expression Tree Visualizer for the Expression
Tree representation of the expression (8 - 2 * 3 + 7).
What's so special about it?
There are three basic ways to traverse binary trees: Preorder, Inorder, and Postorder. Sound familiar? Yup, for expression trees, preorder traversal outputs prefix notation, inorder outputs infix,
postorder outputs postfix!
How do these three traversals work?
All these traversals use recursion, which is straightforward for a computer but often difficult for humans to grasp. Hence, I've provided buttons to help you visualize each traversal in action!
Preorder:
1. Put current node in result
2. Go to left child, repeat
3. Go to right child, repeat
Inorder:
1. Go to left child, repeat
2. Put current node in result
3. Go to right child, repeat
Postorder:
1. Go to left child, repeat
2. Go to right child, repeat
3. Put current node in result
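A minimal Python sketch of the three traversals described above (the Node class and the example tree are constructed here purely for illustration):
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value   # operator or operand
        self.left = left
        self.right = right

def preorder(node, out):     # node, then left subtree, then right subtree
    if node:
        out.append(node.value)
        preorder(node.left, out)
        preorder(node.right, out)

def inorder(node, out):      # left subtree, then node, then right subtree
    if node:
        inorder(node.left, out)
        out.append(node.value)
        inorder(node.right, out)

def postorder(node, out):    # left subtree, then right subtree, then node
    if node:
        postorder(node.left, out)
        postorder(node.right, out)
        out.append(node.value)

# Expression tree for 8 - 2 * 3 + 7, i.e. ((8 - (2 * 3)) + 7)
tree = Node('+', Node('-', Node('8'), Node('*', Node('2'), Node('3'))), Node('7'))
out = []; preorder(tree, out)   # ['+', '-', '8', '*', '2', '3', '7']  -> prefix
out = []; inorder(tree, out)    # ['8', '-', '2', '*', '3', '+', '7']  -> infix (without parentheses)
out = []; postorder(tree, out)  # ['8', '2', '3', '*', '-', '7', '+']  -> postfix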
Expression Tree Visualizer
How do you evaluate Prefix/Postfix notation?
What is a Stack?
Stacks are an abstract data type with two primary operations:
1. Push: Add an element to the top of the stack
2. Pop: Remove an element from the top of the stack
A stack behaves just like a stack of plates: you push plates to the top and also pop plates from the top.
A stack follows LIFO, or Last in First Out.
How to evaluate Postfix and Prefix expressions with a stack?
Note: Prefix is basically evaluated the same way as Postfix but backwards! Just reverse the input expression or read it backwards! Also note that since it is reversed, the left and right operands are
flipped too!
Postfix Pseudocode:
for each token in input {
    if token is operand
        push to stack
    else if token is operator
        pop two operands from stack
        (first popped is right operand)
        (second popped is left operand)
        perform operation on the operands
        push result of operation to stack
}
stack should only have one item: answer
Prefix Pseudocode:
for each token in reversed input {
    if token is operand
        push to stack
    else if token is operator
        pop two operands from stack
        (first popped is left operand)
        (second popped is right operand)
        perform operation on the operands
        push result of operation to stack
}
stack should only have one item: answer
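A direct Python translation of the postfix pseudocode above (limited here to the four basic arithmetic operators for illustration):
def evaluate_postfix(tokens):
    """Evaluate a postfix expression given as a list of tokens."""
    ops = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b,
    }
    stack = []
    for token in tokens:
        if token in ops:
            right = stack.pop()          # first popped is the right operand
            left = stack.pop()           # second popped is the left operand
            stack.append(ops[token](left, right))
        else:
            stack.append(float(token))   # operand: push onto the stack
    return stack[0]                      # the stack should hold exactly one item: the answer

# 8 - 2 * 3 + 7 in postfix is "8 2 3 * - 7 +"
print(evaluate_postfix("8 2 3 * - 7 +".split()))   # prints 9.0
Evaluating prefix works the same way on the reversed token list, with the left and right operand roles swapped, as noted above.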
Stack Evaluation Visualizer
For more information
I filmed an explainer video using this website:
While reviewing information about this topic, I primarily used UCSB Lecturer Mike Costanza's slides for CS12 linked below. Sadly he retired in 2018 so I won't be able to meet him in college :( | {"url":"https://notation-visualizer.ajayliu.com/","timestamp":"2024-11-07T13:32:32Z","content_type":"text/html","content_length":"33441","record_id":"<urn:uuid:c7a42108-f31a-43df-88a0-182745e03f21>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00218.warc.gz"} |
An open elevator is ascending with constant speed v=10 m/s.A ba... | Filo
An open elevator is ascending with constant speed. A ball is thrown vertically up by a boy on the lift when he is at a given height from the ground; the velocity of projection is specified with respect to the elevator.
Find (a) the maximum height attained by the ball, (b) the time taken by the ball to meet the elevator again, and (c) the time taken by the ball to reach the ground after crossing the elevator.
(a) The absolute (ground-frame) velocity of the ball is the projection velocity plus the elevator's upward velocity; the maximum height above the ground follows from this absolute velocity and the initial height.
(b) The ball will meet the elevator again when the displacement of the lift equals the displacement of the ball.
(c) Let the total time taken by the ball to reach the ground be found from the ball's displacement equation; subtracting the time found in (b) gives the time taken by the ball to reach the ground after crossing the elevator.
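In symbols, with $v$ the elevator's constant upward speed, $u$ the projection speed relative to the elevator, $h$ the boy's height above the ground at the moment of the throw, and $g$ the acceleration due to gravity (these symbols stand in for the problem's specific numbers), the same steps read:
(a) Absolute initial velocity of the ball: $u_{\text{abs}} = u + v$; maximum height above the ground: $H = h + u_{\text{abs}}^{2}/(2g)$.
(b) Equal displacements of ball and lift: $v\,t = u_{\text{abs}}\,t - \tfrac{1}{2} g t^{2}$, hence $t = 2(u_{\text{abs}} - v)/g = 2u/g$.
(c) The total flight time $t_{0}$ satisfies $u_{\text{abs}}\,t_{0} - \tfrac{1}{2} g t_{0}^{2} = -h$; the time to reach the ground after crossing the elevator is $t_{0} - t$.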
Updated On Sep 28, 2022
Topic Motion in Straight Line
Subject Physics
Class Class 11
Answer Type Text solution:1 Video solution: 5
Upvotes 507
Avg. Video 10 min | {"url":"https://askfilo.com/physics-question-answers/an-open-elevator-is-ascending-with-constant-speed-v10-mathrm~m-mathrms-a-ball-is","timestamp":"2024-11-14T18:38:40Z","content_type":"text/html","content_length":"398538","record_id":"<urn:uuid:a7bfefa3-43fa-4cd1-85c1-9cc2f946c753>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00562.warc.gz"} |
A Changing Paradigm in High School Mathematics
In the United States, the prevailing high school mathematics course sequence begins with a year of Algebra I, followed by a year of geometry and a year of Algebra II. Educators and others have raised
concerns about the extent to which this sequence, which prioritizes the mastery of algebra, is appropriate for the longer term education and career goals of students who do not intend to pursue STEM
degrees in college. These concerns have impelled educators and policymakers to reexamine the prominence of algebra in high school mathematics curricula and to consider new approaches that provide
students with more mathematics course options better aligned with their academic and career goals.
This paper explores existing approaches to high school mathematics curricula as well as new developments in the field. The authors discuss a range of high school mathematics course sequences and look
at some of the systemic challenges embedded within the traditional paradigm. They also examine federal and state changes to the provision of high school mathematics in the early 21st century, the
influence of postsecondary institutions on high school math curricula, and innovative high school math reforms occurring in Ohio, California, Oregon, Texas, and Washington. The paper concludes with a
discussion of how the Charles A. Dana Center’s new initiative, Launch Years, works to reimagine high school mathematics and its relationship to postsecondary education and careers.
• changing-paradigm-high-school-mathematics.pdf (application/pdf, 400 KB)
More About This Work
Academic Units
CCRC Working Papers, 125
Published Here
January 21, 2021 | {"url":"https://academiccommons.columbia.edu/doi/10.7916/d8-kn3j-qj56","timestamp":"2024-11-12T00:44:24Z","content_type":"text/html","content_length":"19947","record_id":"<urn:uuid:4c124951-f6b4-4d7c-aa90-d39050e5b521>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00774.warc.gz"} |
Calculate and Plot a Correlation Matrix in Python and Pandas • datagy
In this tutorial, you’ll learn how to calculate a correlation matrix in Python and how to plot it as a heat map. You’ll learn what a correlation matrix is and how to interpret it, as well as a short
review of what the coefficient of correlation is.
You’ll then learn how to calculate a correlation matrix with the pandas library. Then, you’ll learn how to plot the heat map correlation matrix using Seaborn. Finally, you’ll learn how to customize
these heat maps to include certain values.
The Quick Answer: Use Pandas’ df.corr() to Calculate a Correlation Matrix in Python
# Calculating a Correlation Matrix with Pandas
import pandas as pd
matrix = df.corr()
# Returns:
# b_len b_dep f_len f_dep
# b_len 1.000000 -0.235053 0.656181 0.595110
# b_dep -0.235053 1.000000 -0.583851 -0.471916
# f_len 0.656181 -0.583851 1.000000 0.871202
# f_dep 0.595110 -0.471916 0.871202 1.000000
What a Correlation Matrix is and How to Interpret it
A correlation matrix is a common tool used to compare the coefficients of correlation between different features (or attributes) in a dataset. It allows us to visualize how much (or how little)
correlation exists between different variables.
This is an important step in pre-processing machine learning pipelines. Since the correlation matrix allows us to identify variables that have high degrees of correlation, they allow us to reduce the
number of features we may have in a dataset.
This is often referred to as dimensionality reduction and can be used to improve the runtime and effectiveness of our models.
That’s the theory of our correlation matrix. But what does it actually look like? A correlation matrix has the same number of rows and columns as our dataset has columns.
This means that if we have a dataset with 10 columns, then our matrix will have ten rows and ten columns. Each row and column represents a variable (or column) in our dataset and the value in the
matrix is the coefficient of correlation between the corresponding row and column.
What is a Correlation Coefficient? A coefficient of correlation is a value between -1 and +1 that denotes both the strength and directionality of a relationship between two variables.
• The closer the value is to 1 (or -1), the stronger a relationship.
• The closer a number is to 0, the weaker the relationship.
A negative coefficient will tell us that the relationship is negative, meaning that as one value increases, the other decreases. Similarly, a positive coefficient indicates that as one value
increases, so does the other.
Let’s see what a correlation matrix looks like when we map it as a heat map. Here, we have a simple 4×4 matrix, meaning that we have 4 columns and 4 rows.
A sample correlation matrix visualized as a heat map
The values in our matrix are the correlation coefficients between the pairs of features. We can see that we have a diagonal line of the values of 1. This is because these values represent the
correlation between a column and itself. Because these values are, of course, always the same they will always be 1.
If you have a keen eye, you’ll notice that the values in the top right are the mirrored image of the bottom left of the matrix. This is because the relationship between the two variables in the
row-column pairs will always be the same. It’s common practice to remove these from a heat map matrix in order to better visualize the data. This is something you’ll learn in later sections of the tutorial.
Calculate a Correlation Matrix in Python with Pandas
Pandas makes it incredibly easy to create a correlation matrix using the DataFrame method, .corr(). The method takes a number of parameters. Let’s explore them before diving into an example:
matrix = df.corr(
method = 'pearson', # The method of correlation
min_periods = 1 # Min number of observations required
)
By default, the corr method will use the Pearson coefficient of correlation, though you can select the Kendall or spearman methods as well. Similarly, you can limit the number of observations
required in order to produce a result.
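For example, if you suspect the relationships are monotonic rather than linear, a rank-based coefficient may be preferable; the snippet below is illustrative rather than part of the original tutorial:
# Using Spearman's rank correlation instead of the default Pearson coefficient
matrix_spearman = df.corr(method='spearman')
# Using Kendall's tau
matrix_kendall = df.corr(method='kendall')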
Loading a Sample Pandas Dataframe
Now that you have an understanding of how the method works, let’s load a sample Pandas Dataframe. For this, we’ll use the Seaborn load_dataset function, which allows us to generate some datasets
based on real-world data. We’ll load the penguins dataset. Seaborn allows us to create very useful Python visualizations, providing an easy-to-use high-level wrapper on Matplotlib.
# Loading a sample Pandas dataframe
import pandas as pd
import seaborn as sns
df = sns.load_dataset('penguins')
# We're renaming columns to make them print nicer
df.columns = ['species', 'island', 'b_len', 'b_dep', 'f_len', 'f_dep', 'sex']
# Returns:
# species island b_len b_dep f_len f_dep sex
# 0 Adelie Torgersen 39.1 18.7 181.0 3750.0 Male
# 1 Adelie Torgersen 39.5 17.4 186.0 3800.0 Female
# 2 Adelie Torgersen 40.3 18.0 195.0 3250.0 Female
# 3 Adelie Torgersen NaN NaN NaN NaN NaN
# 4 Adelie Torgersen 36.7 19.3 193.0 3450.0 Female
Let’s break down what we’ve done here:
• We loaded the Pandas library using the alias pd. We also loaded the Seaborn library using the alias sns.
• We then created a DataFrame, df, using the load_dataset function and passing in 'penguins' as the argument.
• Finally, we printed the first five rows of the DataFrame using the .head() method
We can see that our DataFrame has 7 columns. Some of these columns are numeric and others are strings.
Calculating a Correlation Matrix with Pandas
Now that we have our Pandas DataFrame loaded, let’s use the corr method to calculate our correlation matrix. We’ll simply apply the method directly to the entire DataFrame:
# Calculating a Correlation Matrix with Pandas
matrix = df.corr()
# Returns:
# b_len b_dep f_len f_dep
# b_len 1.000000 -0.235053 0.656181 0.595110
# b_dep -0.235053 1.000000 -0.583851 -0.471916
# f_len 0.656181 -0.583851 1.000000 0.871202
# f_dep 0.595110 -0.471916 0.871202 1.000000
We can see that while our original dataframe had seven columns, Pandas only calculated the matrix using numerical columns. We can see that four of our columns were turned into column row pairs,
denoting the relationship between two columns.
For example, we can see that the coefficient of correlation between the body_mass_g and flipper_length_mm variables is 0.87. This indicates that there is a relatively strong, positive relationship
between the two variables.
Rounding our Correlation Matrix Values with Pandas
We can round the values in our matrix to two digits to make them easier to read. The matrix that’s returned is actually a Pandas Dataframe. This means that we can actually apply different DataFrame
methods to the matrix itself. We can use the Pandas round method to round our values.
matrix = df.corr().round(2)
# Returns:
# b_len b_dep f_len f_dep
# b_len 1.00 -0.24 0.66 0.60
# b_dep -0.24 1.00 -0.58 -0.47
# f_len 0.66 -0.58 1.00 0.87
# f_dep 0.60 -0.47 0.87 1.00
While we lose a bit of precision doing this, it does make the relationships easier to read.
In the next section, you’ll learn how to use the Seaborn library to plot a heat map based on the matrix.
How to Plot a Heat map Correlation Matrix with Seaborn
In many cases, you’ll want to visualize a correlation matrix. This is easily done in a heat map format where we can display values that we can better understand visually. The Seaborn library makes
creating a heat map very easy, using the heatmap function.
Let’s now import pyplot from matplotlib in order to visualize our data. While we’ll actually be using Seaborn to visualize the data, Seaborn relies heavily on matplotlib for its visualizations.
# Visualizing a Pandas Correlation Matrix Using Seaborn
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = sns.load_dataset('penguins')
matrix = df.corr().round(2)
sns.heatmap(matrix, annot=True)
Here, we have imported the pyplot library as plt, which allows us to display our data. We then used the sns.heatmap() function, passing in our matrix and asking the library to annotate our heat map
with the values using the annot= parameter. This returned the following graph:
Visualizing a correlation matrix with mostly default parameters
We can see that a number of odd things have happened here. Firstly, we know that a correlation coefficient can take the values from -1 through +1. Our graph currently only shows values from roughly
-0.5 through +1. Because of this, unless we’re careful, we may infer that negative relationships are stronger than they actually are.
Further, the data isn’t shown in a divergent manner. We want our colors to be strongest as relationships become stronger. Instead, the colors simply weaken as the values get closer to +1.
We can modify a few additional parameters here:
1. vmin=, vmax= are used to anchor the colormap. If none are passed, the values are inferred, which is why the color scale did not extend below roughly -0.5. Since we know that the coefficients of correlation should be anchored at +1 and -1, we can pass these in.
2. center= specifies the value at which to center the colormap when we plot divergent data. Since we want the colors to diverge from 0, we should specify 0 as the argument here.
3. cmap= allows us to pass in a different color map. Because we want the colors to be stronger at either end of the divergence, we can pass in vlag as the argument, which runs from blue, through white, to red.
Let’s try this again, passing in these three new arguments:
# Visualizing a Pandas Correlation Matrix Using Seaborn
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = sns.load_dataset('penguins')
matrix = df.corr().round(2)
sns.heatmap(matrix, annot=True, vmax=1, vmin=-1, center=0, cmap='vlag')
This returns the following matrix. It diverges from -1 to +1 and the colors conveniently darken at either pole.
A properly formatted heat map with divergent colours
In this section, you learned how to format a heat map generated using Seaborn to better visualize relationships between columns.
Looking for other uses of the Seaborn heatmap function? I have a complete guide on calculating and plotting a confusion matrix for evaluating classification machine learning problems.
Plot Only the Lower Half of a Correlation Matrix with Seaborn
One thing that you’ll notice is how redundant it is to show both the upper and lower half of a correlation matrix. Our minds can only interpret so much – because of this, it may be helpful to only
show the bottom half of our visualization. Similarly, it can make sense to remove the diagonal line of 1s, since this has no real value.
In order to accomplish this, we can use the numpy triu function, which creates a triangle of a matrix. Let’s begin by importing numpy and adding a mask variable to our function. We can then pass this
mask into our Seaborn function, asking the heat map to mask only the values we want to see:
# Showing only the bottom half of our correlation matrix
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
df = sns.load_dataset('penguins')
matrix = df.corr().round(2)
mask = np.triu(np.ones_like(matrix, dtype=bool))
sns.heatmap(matrix, annot=True, vmax=1, vmin=-1, center=0, cmap='vlag', mask=mask)
This returns the following image:
Displaying only the bottom half of a matrix using a numpy mask
We can see how much easier it is to understand the strength of our dataset’s relationships here. Because we’ve removed a significant amount of visual clutter (over half!), we can much better
interpret the meaning behind the visualization.
How to Save a Correlation Matrix to a File in Python
There may be times when you want to actually save the correlation matrix programmatically. So far, we have used the plt.show() function to display our graph. You can then, of course, manually save
the result to your computer. But matplotlib makes it easy to save the graph programmatically, using the savefig() function.
The function allows us to pass in a file path to indicate where we want to save the file. Say we wanted to save it in the directory where the script is running; we can pass in a relative path like below:
# Saving a Heatmap
plt.savefig('heatmap.png')
In the code shown above, we will save the file as a png file with the name heatmap. The file will be saved in the directory where the script is running.
Selecting Only Strong Correlations in a Correlation Matrix
In some cases, you may only want to select strong correlations in a matrix. Generally, a correlation is considered to be strong when the absolute value is greater than or equal to 0.7. Since the
matrix that gets returned is a Pandas Dataframe, we can use Pandas filtering methods to filter our dataframe.
Since we want to select strong relationships, we need to be able to select values greater than or equal to 0.7 or less than or equal to -0.7. Since this would make our selection statement more
complicated, we can simply filter on the absolute value of our correlation coefficient.
Let’s take a look at how we can do this:
matrix = df.corr()
matrix = matrix.unstack()
matrix = matrix[abs(matrix) >= 0.7]
# Returns:
# bill_length_mm bill_length_mm 1.000000
# bill_depth_mm bill_depth_mm 1.000000
# flipper_length_mm flipper_length_mm 1.000000
# body_mass_g 0.871202
# body_mass_g flipper_length_mm 0.871202
# body_mass_g 1.000000
Here, we first take our matrix and apply the unstack method, which converts the matrix into a 1-dimensional series of values, with a multi-index. This means that each index indicates both the row and
column or the previous matrix. We can then filter the series based on the absolute value.
Selecting Only Positive / Negative Correlations in a Correlation Matrix
In some cases, you may want to select only positive correlations in a dataset or only negative correlations. We can, again, do this by first unstacking the dataframe and then selecting either only
positive or negative relationships.
Let’s first see how we can select only positive relationships:
matrix = df.corr()
matrix = matrix.unstack()
matrix = matrix[matrix > 0]
# Returns:
# bill_length_mm bill_length_mm 1.000000
# flipper_length_mm 0.656181
# body_mass_g 0.595110
# bill_depth_mm bill_depth_mm 1.000000
# flipper_length_mm bill_length_mm 0.656181
# flipper_length_mm 1.000000
# body_mass_g 0.871202
# body_mass_g bill_length_mm 0.595110
# flipper_length_mm 0.871202
# body_mass_g 1.000000
We can see here that this process is nearly the same as selecting only strong relationships. We simply change our filter of the series to only include relationships where the coefficient is greater
than zero.
Similarly, if we wanted to select on negative relationships, we only need to change one character. We can change the > to a < comparison:
matrix = matrix[matrix < 0]
This is a helpful tool, allowing us to see which relationships go in either direction. We can even combine these and select only strong positive relationships or strong negative relationships, as in the sketch below.
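For instance, to keep only the strong relationships in each direction while dropping the trivial self-correlations of exactly 1 (an illustrative snippet, not part of the original tutorial):
matrix = df.corr().unstack()
# Strong positive relationships only, excluding each variable's correlation with itself
strong_positive = matrix[(matrix >= 0.7) & (matrix < 1)]
# Strong negative relationships only
strong_negative = matrix[matrix <= -0.7]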
In this tutorial, you learned how to use Python and Pandas to calculate a correlation matrix. You learned, briefly, what a correlation matrix is and how to interpret it. You then learned how to use
the Pandas corr method to calculate a correlation matrix and how to filter it based on different criteria. You also learned how to use the Seaborn library to visualize a matrix using the heatmap
function, allowing you to better visualize and understand the data at a glance.
To learn more about the Pandas .corr() dataframe method, check out the official documentation here.
Additional Resources
To learn about related topics, check out the articles listed below: | {"url":"https://datagy.io/python-correlation-matrix/","timestamp":"2024-11-13T01:19:39Z","content_type":"text/html","content_length":"165320","record_id":"<urn:uuid:679c1061-0da9-4ec3-8767-bb74a0a9946f>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00493.warc.gz"} |
Possibilities of a Structure - Eduardo Terrazas
Possibilities of a Structure
Early in the 1970s, Terrazas concentrated on finding a solution to undertake work with a serial logic in which he could explore aspects of unity and diversity. The intention was to develop a series
based on a solution or direct structure, with a strong visual impact and easy recognition, in sum, an iconic structure.
The design base of Terrazas’s new series, which he called Possibilities of a Structure, can refer to different spaces, orders of experience, and symbolic reference points. His iconic graphic solution
holds a multiplicity of meanings.
Another distinctive aspect of this series is the way he has exploited this solution to create different structural configurations or subseries such as Nine Circles, Grid, Diagonals, Barcode, and the
most extensive, robust, and complex of all, Cosmos. These subseries contain specific formal inquiries as well as potential meanings. Nine Circles combined geometric order with circles, vertical,
horizontal, and diagonal lines. Diagonals allows the artist to explore the intertwining of lines in his designs, almost like weaving. Grid, on the other hand, can be seen in relation to the use of this visual
structure as a projective tool in the fields of architecture and design. Finally, Barcode can be associated with the discourse of the time on programming and cybernetics, as well as its technologies
of visual representation. Through a play of parallel lines, these works by Terrazas refer to bar codes, commercially introduced in the late 1960s.
Cosmos subseries merits further examination. Unlike the other four subseries, it is the group of works that investigates the solution to the series Possibilities of a Structure most holistically. It
has also become more complex in its meaning over time. As for the understanding of this structure, for Terrazas today the “two lines drawn horizontally and vertically are the X and Y coordinates.”
The circle spanning these two lines is a symbol of the universe, while a smaller circle represents planet Earth. A square rotated 45 degrees located within the large circle represents “a structural
frame for the universe.” Finally, “the two diagonal lines that cross the image and intersect beyond the cosmic horizon, represent the infinite in the Cosmos.” | {"url":"https://eduardoterrazas.com/index.php/possibilities-of-a-structure/","timestamp":"2024-11-08T19:09:51Z","content_type":"text/html","content_length":"143646","record_id":"<urn:uuid:efb3daf1-15f8-4e18-bad6-9ebfca49e5d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00069.warc.gz"} |
In mathematics, the quasi-dihedral groups, also called semi-dihedral groups, are certain non-abelian groups of order a power of 2. For every positive integer n greater than or equal to 4, there are
exactly four isomorphism classes of non-abelian groups of order 2^n which have a cyclic subgroup of index 2. Two are well known, the generalized quaternion group and the dihedral group. One of the
remaining two groups is often considered particularly important, since it is an example of a 2-group of maximal nilpotency class. In Bertram Huppert's text Endliche Gruppen, this group is called a
"Quasidiedergruppe". In Daniel Gorenstein's text, Finite Groups, this group is called the "semidihedral group". Dummit and Foote refer to it as the "quasidihedral group"; we adopt that name in this
article. All give the same presentation for this group:
Cayley graph of the quasidihedral group of order 16
Cayley graph of the modular maximal-cyclic group of order 16
Cayley graph of the dihedral group of order 16
${\displaystyle \langle r,s\mid r^{2^{n-1}}=s^{2}=1,\ srs=r^{2^{n-2}-1}\rangle \,\!}$.
The other non-abelian 2-group with cyclic subgroup of index 2 is not given a special name in either text, but referred to as just G or M[m](2). When this group has order 16, Dummit and Foote refer to
this group as the "modular group of order 16", as its lattice of subgroups is modular. In this article this group will be called the modular maximal-cyclic group of order ${\displaystyle 2^{n}}$. Its
presentation is:
${\displaystyle \langle r,s\mid r^{2^{n-1}}=s^{2}=1,\ srs=r^{2^{n-2}+1}\rangle \,\!}$.
Both these two groups and the dihedral group are semidirect products of a cyclic group <r> of order 2^(n−1) with a cyclic group <s> of order 2. Such a non-abelian semidirect product is uniquely
determined by an element of order 2 in the group of units of the ring ${\displaystyle \mathbb {Z} /2^{n-1}\mathbb {Z} }$ and there are precisely three such elements, ${\displaystyle 2^{n-1}-1}$, ${\
displaystyle 2^{n-2}-1}$, and ${\displaystyle 2^{n-2}+1}$, corresponding to the dihedral group, the quasidihedral, and the modular maximal-cyclic group.
The generalized quaternion group, the dihedral group, and the quasidihedral group of order 2^n all have nilpotency class n − 1, and are the only isomorphism classes of groups of order 2^n with
nilpotency class n − 1. The groups of order p^n and nilpotency class n − 1 were the beginning of the classification of all p-groups via coclass. The modular maximal-cyclic group of order 2^n always
has nilpotency class 2. This makes the modular maximal-cyclic group less interesting, since most groups of order p^n for large n have nilpotency class 2 and have proven difficult to understand.
The generalized quaternion, the dihedral, and the quasidihedral group are the only 2-groups whose derived subgroup has index 4. The Alperin–Brauer–Gorenstein theorem classifies the simple groups, and
to a degree the finite groups, with quasidihedral Sylow 2-subgroups.
The Sylow 2-subgroups of the following groups are quasidihedral:
• PSL[3](F[q]) for q ≡ 3 mod 4,
• PSU[3](F[q]) for q ≡ 1 mod 4,
• the Mathieu group M[11],
• GL[2](F[q]) for q ≡ 3 mod 4.
• Dummit, D. S.; Foote, R. (2004). Abstract Algebra (3 ed.). Wiley. pp. 71–72. ISBN 9780471433347.
• Huppert, B. (1967). Endliche Gruppen. Springer. pp. 90–93. MR 0224703.
• Gorenstein, D. (1980). Finite Groups. Chelsea. pp. 188–195. ISBN 0-8284-0301-5. MR 0569209. | {"url":"https://www.knowpia.com/knowpedia/Quasidihedral_group","timestamp":"2024-11-08T04:46:28Z","content_type":"text/html","content_length":"82125","record_id":"<urn:uuid:ef9ef3bb-4f0a-456b-ac48-432f49320483>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00597.warc.gz"} |
Dark Buzz
New paper
Petkov, Vesselin (2023) The Quadruple Scientific Tragedy involved in the Discovery of Spacetime Physics. The Origin of Spacetime Physics (2nd ed.). pp. 257-276. ...
The advent of spacetime physics came at the price of four different scientific tragedies involving Hendrik Lorentz, Henri Poincaré, Albert Einstein and Hermann Minkowski, whose work essentially
laid the foundations of spacetime physics. Lorentz' and Poincaré's scientific tragedies had the same cause - both Lorentz and Poincaré regarded the new theoretical entities they introduced in
physics as pure mathematical abstractions that did not represent anything in the physical world. Einstein's rather subtle scientific tragedy has to do with his unclear and, in some cases, even
incorrect views on a number of subjects that might have led to confusions and misconceptions some of which still persist.
Four men made some brilliant discoveries that changed our fundamental understanding of Physics. It is hard to understand what is tragic about this.
His statement about "pure mathematical abstractions" is just a misunderstanding. Lorentz and Poincare had physical interpretations for everything they did. Some people argue that Lorentz did not have
a way of tying his concept of "local time" to the time of local clocks, but they all agree that Poincare did, and that Poincare credited Lorentz with a similar understanding. Poincare even nominated
Lorentz to get a Nobel Prize in 1902 for his theory of local time. Of course they understood local time represented something in the physical world. Lorentz and Poincare wrote papers on how their
relativity theory explained the Michelson-Morley and other experiments. It was Einstein who took the more abstract approach of relating the formulas to his postulates, instead of experiments.
It is just a slander on mathematicians when non-mathematicians complain they are just doing mathematics, as if that makes it not real.
Einstein is widely credited with geometrizing spacetime. Many say that was his most profound and important discovery. Petkov explains that Einstein did not do that, and even disagreed with it:
After Minkowski’s 1908 world-view-changing lecture “Space and Time” Einstein had apparently had difficulty realizing the depth of Minkowski’s ideas and his reaction to the developed by Minkowski
four-dimensional physics had been rather hostile. Sommerfeld’s recollection of what Einstein said on one occasion provides an indication of Einstein’s initial attitude towards the work of his
mathematics professor on the foundations of spacetime physics [6, p. 102]:
Since the mathematicians have invaded the relativity theory, I do not under- stand it myself any more.
However, later, in order to develop his general relativity, Einstein had to adopt Minkowski’s four-dimensional physics but it appears that the adoption has not been fully successful since he did
not truly employ Minkowski’s program of geometrizing physics. ...
In a letter to Reichenbach from April 8, 1926 Einstein wrote [25]:
It is wrong to think that “geometrization” is something essential. It is only a kind of crutch for the discovery of numerical laws. Whether one links “geometrical” intuitions with a theory is
an inessential private matter.
Twenty-two years later, on June 19, 1948, in a letter to Lincoln Barnett Einstein reiterated his (mis)understanding of his own theory [26]:
I do not agree with the idea that the general theory of relativity is geometrizing Physics or the gravitational field.
... if Einstein did not believe that spacetime represented a real four-dimensional world (and were nothing more than a mathematical space), then, clearly, gravitational phenomena could not be
manifestations of the curvature of something that does not exist. So it seems even in 1948 Einstein seriously doubted whether spacetime represented a real four-dimensional world. ...
Einstein seems to have never been able to eliminate entirely his negative attitude towards the discovered by Minkowski spacetime structure of the world, which ultimately prevented him from
accepting the most counter-intuitive result of his own general relativity – that gravitation is not a physical interaction since it is nothing more than a manifestation of the non-Euclidean
geometry of spacetime.
This is the most startling thing I discovered about Einstein. Everybody knows he refused to credit his sources, and used ideas and formulas that had been previously published by others. But they
always argue that Einstein had a superior understanding, and primarily a geometric view which came to dominate 20th century Physics. But in fact he rejected that view. We got that view from others.
Petkov is particularly critical of Einstein explaining the Ehrenfest paradox incorrectly. Consider a rotating disk of radius 1. The circumference is Lorentz contracted, and so has length less than
2π. Einstein argued that it is really bigger than 2π because the measuring rods will be contracted, so more rods will be needed to measure the circumference. He wrote about this several times over
many years, and never seemed to accept that space itself is contracting, in the view of another frame. Time cannot be synchronized over all the frames.
This article has more info on the history of the paradox. The paradox was first published in 1909, and it convinced a lot of people that non-Euclidean geometry was needed for general relativity.
Petkov argues that Minkowski independently discovered some relativity ideas that are credited to others, but was slow to publish. This is possible, but the evidence for it is weak. Minkowski cited
Poincare's big 1905 relativity paper, and seems to use a lot of ideas from it. Minkowski died soon after publishing, and that is tragic, and we do not know what he might have done with the theory.
The main evidence is that (1) Minkowski's 1907 paper has so many original ideas in it that it was probably several years of work; and (2) Max Born recounts taking a relativity course from Minkowski
in 1905, before the Poincare and Einstein 1905 papers were published. Regardless, it is clear that Minkowski made a huge contribution.
Slashdot reports:
Science educator Sabine Hossenfelder is a research fellow at the Frankfurt Institute for Advanced Studies. But Hossenfelder's latest YouTube video expounds upon the sorry state of particle
physics, and in the process also has some interesting sidenotes on dark matter.
Hossenfelder criticises what has become the standard operating procedure of particle physicists, whereby they routinely predict the existence of particles that violate the Standard Model.
Eventually, the postulated particles are experimentally falsified, at which time physicists move on to even more fanciful predictions.
Hossenfelder is pessimistic about the future of the field if particle physicists continue to behave in the same manner going forward. Hossenfelder also notes that in the past 50 years, only a
handful of predictions have been validated, and all these were necessary elements of the Standard Model.
She talks a lot about models that expand the symmetry to some larger, possibly broken, symmetry. For example the grand unified theories combine the weak and strong interactions into a larger
group. Supersymmetry also adds many symmetries, and so does string theory.
There is an argument for such theories that goes like this. The history of Physics is in finding broader theories that unify others. Newton's gravity unified terrestrial and celestial gravitation.
Maxwell's theory unified electricity and magnetism. They were truly unified in that a moving electric field would generate a magnetic field, and vice versa.
So it seems conceptually desirable to unify strong and weak forces, with a larger symmetry group.
But it is not. All these theories cause drastic increases in complexity, and in unknown parameters needed to define the theory. Having more symmetries does not reduce the complexity because the
symmetries are broken.
With electromagnetism, the symmetry is real, and you can do away with magnetism, and treat it as a relativistic effect of electricity. With the grand unified theories, there is no advantage to the
extra symmetry at all. It does not make the theory more elegant.
All of this would be irrelevant if there were experimental evidence for the unified theories. As Hossenfelder explains, many billions of dollars have been spent looking, and none found.
Jeffrey Bub reviews some recent popular books on quantum mechanics.
John Bell’s status in our field has the same [like Isaac Newton, James Watson, and Linus Pauling] mythic quality. Before him there was nothing, only the philosophical disputes between famous old
men. He showed that the field contained physics, experimental physics, and nothing has been the same since.
Some do say this, but it is crazy. All Bell did was to show that the predictions of quantum mechanics differ from a classical theory of local hidden variables, which is what everyone believed anyway.
In several places Becker invokes the quote, ‘there is no quantum world,’ commonly attributed to Bohr (Becker, p. 14):
What does quantum physics tell us about the world? According to the Copenhagen interpretation this question has a very simple answer: quan- tum mechanics tells us nothing whatsoever about the
world. . . . According to Bohr, there isn’t a story about the quantum world because ‘there is no quantum world. There is only an abstract quantum physical description.’
The ‘no quantum world’ comment is actually a quote from Bohr’s assistant Aage Petersen,17 who recounts Bohr saying this sort of thing. Bohr probably did make provocative statements along these
lines in discussion, but he certainly did not mean that there is simply nothing there, as Becker seems to suggest.
What could Bohr have meant? Here’s my take on it. Quantum mechanics replaces the commutative algebra of physical quantities of a classical system with a noncommutative algebra of ‘observables.’
This is an extraordinary move, quite unprecedented in the history of physics, and arguably requires us to re-think what counts as an acceptable explanation in physics.
Aage Petersen, ‘The Philosophy of Niels Bohr,’ Bulletin of the Atomic Scientists 19, 8–14 (1963). The quote is on p. 12: ‘When asked whether the algorithm of quantum mechanics could be
considered as some- how mirroring an underlying quantum world, Bohr would answer, “There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that
the task of physics is to find out how nature is. Physics concerns what we can say about nature.”
Maybe Bohr meant that quantum mechanics is not a big disguise for a theory of hidden variables, as those pushing "realism" often suggest.
As Bell points out,20 Bohm’s theory involves action at a distance at the level of the hidden variables: ‘an explicit causal mechanism exists whereby the disposition of one piece of apparatus
affects the results obtained with a distant piece,’ so that ‘the Einstein-Podolsky-Rosen paradox is resolved in the way which Einstein would have liked least.’ The problem of making sense of
probability in an Everettian universe, where everything that can happen does happen in some world, is still a contentious issue.
Yes, these are fatal flaws to the Bohm and Everett theories.
the question of completeness dominated the debates between Bohr and Einstein. What Einstein had in mind was that something was left out of the quantum theory, which, if added to the theory, would
restore the sort of ‘Anschaulichkeit’ characteristic of classical theories. ...
The question of ‘Anschaulichkeit’ morphed into a debate about the possibility of a realist interpretation of quantum mechanics, with the dissidents accusing the Copenhagenists of the sin of
positivism or instrumentalism, which by the 1960s had lost much of its appeal among philosophers.
As the review explains, when Einstein said completeness, he really meant commutativity, not determinism.
Yes, philosophers abandoned positivism for silly reasons, but why did physicists? Quantum mechanics is best understood as a positivist theory. So is relativity and other Physics theories. Quantum
mechanics was explicitly positivist, before Bohm, Einstein, Everett, Bell, and others ruined it.
A recent 15-author paper begins:
Championing inclusive terminology in ecology and evolution
Amid a growing disciplinary commitment to inclusion in ecology and evolutionary biology (EEB), it is critical to consider how the use of scientific language can harm members of our research
community. ...
In recent years, events such as the coronavirus disease 2019 (COVID-19) pandemic and waves of anti-Black violence have highlighted the need for leaders in EEB to adopt inclusive and equitable
practices in research, collaboration, teaching, and mentoring [1., 2., 3.].
Really, is that a fact? What are those waves of anti-Black violence?
There are references, so I checked them. One says:
Our non-Black colleagues must fight anti-Black racism and white supremacy within the academy to authentically promote Black excellence. Amplifying Black excellence in ecology and evolution is the
antidote for white supremacy in the academy. ...
Black scholars in the life sciences are grieving, traumatized, exhausted, infuriated, frustrated and experiencing many other disparaging emotions4,12. As we attempt to operate in a system that
presents extraordinary barriers to our success, we also watch our white counterparts thrive in a system equipped with the resources made for them7.
So it says the life sciences block Blacks, and make it easy for Whites. The reference points to a Black Ivy League professor telling this story:
The officer asked for my license and registration. After he did whatever they do when they take your information back to their car, he came back and asked what I did for a living. I told him I
was about to start a job as a professor, and that led to a long conversation about my life story. Once satisfied, he said I was free to go.
Before I drove off, I couldn't resist asking him why he pulled me over. "Your license plate is dirty," he responded. "You should get your car washed." If that was the true reason he pulled me
over, then I'm not sure why he needed to know so much about my life history.
So he decided to wash his car more often, something a White professor might not have had to do!
Here is his only other gripe:
I was trained as a social psychologist to do basic research. ... senior faculty members told me that if I wanted to get tenure I would need to prioritize my basic research and set aside my
"disparities stuff" until after tenure.
This is seriously delusional. White drivers also get pulled over by cops. Cops often ask nosy questions for a lot of reasons. Often they are just making conversation while they assess whether you are
drunk. White researchers are also told to do basic research to get tenure.
The other references are no better.
This now passes for scholarship in today's scientific journals. Papers with a leftist agenda can present nonsense as facts. The above paper is filled with statements like:
Scientific terms used in EEB can also reinforce oppressive systems, discriminatory tropes, and offensive terms. For example, anti-trans language has been used to describe male snakes that engage
in female mimicry, and phrases such as ‘sneaky mating strategy’ can normalize problematic male sexual behavior [6].
Next they will be objecting to terms like "male snakes".
Nature reports:
Mathematics has the potential to be a great equalizer. Compared with other scientific and technical fields, it requires few expensive physical resources. Sometimes, a whiteboard and a marker are
all that’s needed.
However, maths is one of the least diverse of the STEM disciplines of science, technology, engineering and mathematics. For instance, the Survey of Earned Doctorates conducted by the US National Science Foundation showed that, of all 1,915 doctorates awarded in mathematics and statistics in the United States in 2021, none went to people identifying as American Indian or Alaska Native.
Just 28 (1.5%) were awarded to Black or African American mathematicians or statisticians, and 33 (1.7%) to researchers who identify as belonging to more than one race.
Maths is built on a modern history of elevating the achievements of one group of people: white men. “Theorems or techniques have names associated to them and most of the time, those names are of
nineteenth-century French or German men,” such as Georg Cantor, Henri Poincaré and Carl Friedrich Gauss, all of whom were white, says John Parker, head of the mathematical sciences department at
Durham University, UK. This means that the accomplishments of people of other genders and races have often been pushed aside, preventing maths from being a level playing field. ...
Mathematicians leading decolonization efforts say that building knowledge-sharing partnerships with communities is key. ...
The institute is a system of five centres of excellence in Cameroon, Ghana, Senegal, South Africa and Rwanda that are designed to deliver the next generation of leading mathematical thinkers on
the continent. AIMS’s five centres award fully funded master’s degrees and doctorates, preparing students for jobs in academia and in industry. AIMS is built around the motto “We believe the next
Einstein will be African”.
Einstein was not a mathematician.
This is just embarrassing.
The Davos folks have a new video with predictions:
Quantum Computing is On Track for 2025.
Back in 2022 Arvind Krishna, the Chairman and Chief Executive Officer of IBM Corporation, surprised the audience, and many viewers of this channel by asserting that we will have quantum computing
by 2025. A year later, again during Davos the IBM CEO confirmed that IBM is respecting the timeline and we will have quantum computing by 2025. It will be powerful enough to create a major
breakthrough in science, but also dangerous enough to make some of the worst fears come true. Others seem to agree.
In the cryptology world, there is a hot debate about what to do about this possible collapse of the technological underpinning of all our secure communications. The claim is that the Chinese are
intercepting and recording encrypted transmissions so they can crack them in 2025 or 2035.
There are several issues. Public key agreements, signatures, hashes, and ciphers. The popular hashes and ciphers are safe. The signatures could be forged by a quantum computer, but that cannot cause
any trouble unless it is an active attack. Signatures are used to verify a piece of data, and then discarded. There is no harm in continuing to use RSA or elliptic curve signatures until the
million-qubit quantum computer is operational.
That leaves the only concern about public key agreements on data that is to be secret for 10 years. The padlock icon on your browser was largely invented to assure consumers that their credit card
numbers would not be stolen if they ordered a product from Ebay or Amazon.
Even if quantum computers are invented, I am pretty confident that no one will use them to steal credit card numbers. There are too many easier ways to get them.
I guess I will revisit these predictions in two years on this blog. I doubt that we will see any significant advances. | {"url":"http://blog.darkbuzz.com/2023/02/","timestamp":"2024-11-07T16:28:58Z","content_type":"text/html","content_length":"129455","record_id":"<urn:uuid:fa081f30-0e9f-4e57-8738-b404a77feae8>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00049.warc.gz"} |
How to Get P Value in Excel?
Are you looking to determine the statistical significance of your data? One of the best tools to help you do that is Excel. Excel can provide you with the p-value, which is a numerical measure of the
probability that the results of your data analysis are due to random chance. In this article, we will discuss how to get the p-value in Excel and how to use it to evaluate the significance of your
data. Read on to learn more!
To get a P value in Excel, follow the steps below:
1. Open the Excel spreadsheet and enter your data.
2. Navigate to the “Data” tab and select “Data Analysis.”
3. Choose the type of statistical test you want to use.
4. Check the “Labels in first row” box if your data has variable names.
5. Click OK and a new window will appear.
6. Enter the input range and the output range.
7. Check the “P-value” box and click OK.
The P value will be displayed in the output range you specified.
What is P-Value in Excel?
P-value is a statistical measure that helps determine whether the results from a statistical hypothesis test are significant. It is the probability of obtaining results equal to or more extreme than
those observed, given that the null hypothesis (H0) is true. In other words, it is the likelihood that the results are not due to chance. P-value is heavily used in hypothesis testing and data analysis.
In Excel, P-value can be calculated using the T-Test and Z-Test functions. It is used to evaluate the null hypothesis in a statistical test. The lower the P-value, the more likely the observed
results are significant.
How to Calculate P-Value in Excel?
There are two main functions in Excel for calculating P-value – the T-Test and the Z-Test. The T-Test is used to evaluate the null hypothesis in a statistical test. It is used to determine if two
samples are significantly different from each other. To use this function, first select the two samples of data. Then, in the formula bar, enter the T-Test function. The syntax for the T-Test is “=
T.TEST(data1,data2,tails,type)”. The tails argument determines the number of tails to use for the test (1 for a one-tailed test and 2 for a two-tailed test). The type argument determines which type
of T-Test to use (1 for a paired sample test and 2 for an unpaired sample test).
The Z-Test is used to determine if a sample is significantly different from a known mean. To use this function, select the data and enter the Z-Test function into the formula bar. The syntax for the
Z-Test is “=Z.TEST(data,mean,sd,tails)”. The tails argument determines the number of tails to use for the test (1 for a one-tailed test and 2 for a two-tailed test).
Examples of Calculating P-Value in Excel
To calculate the P-value for a two-sample T-Test, start by entering the two samples of data into two columns in Excel. Then, in the formula bar, enter the T-Test function. The syntax for the T-Test
is “=T.TEST(data1,data2,tails,type)”. The tails argument should be set to 2 for a two-tailed test. The type argument should be set to 1 for a paired sample test. The P-value can then be found in the
output of the T-Test function.
For example, if the data for the two samples is in columns A and B, the T-Test function should be entered as “=T.TEST(A2:A11,B2:B11,2,1)”. The P-value can then be found in the output of the T-Test function.
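As a quick cross-check outside of Excel, the same paired, two-tailed test can be run in Python with SciPy. This is only an illustrative sketch: the sample values below are made up, and in practice they would be the contents of ranges A2:A11 and B2:B11.
from scipy import stats

# Made-up paired samples standing in for worksheet ranges A2:A11 and B2:B11
sample_a = [23, 25, 28, 30, 21, 26, 27, 29, 24, 22]
sample_b = [24, 27, 27, 32, 23, 25, 29, 31, 25, 24]

# Paired, two-tailed t-test, the equivalent of =T.TEST(A2:A11,B2:B11,2,1)
t_stat, p_value = stats.ttest_rel(sample_a, sample_b)
print(f"t = {t_stat:.4f}, p-value = {p_value:.4f}")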
Calculating P-Value with Confidence Intervals
The P-value can also be calculated by finding the confidence intervals of the data. To calculate the confidence intervals, start by entering the data into one column in Excel. Then, in the formula
bar, enter the CONFIDENCE.T function. The syntax for the CONFIDENCE.T function is “=CONFIDENCE.T(alpha,standard_dev,size)”. The alpha argument is the confidence level (e.g. 0.05 for a 95% confidence
interval). The standard_dev argument is the standard deviation of the data. The size argument is the number of observations in the sample.
The P-value cannot be read directly from this function: CONFIDENCE.T returns the margin of error of the confidence interval at the chosen significance level, not a probability. For example, if the data is in column A and you want a 95% confidence interval, the function should be entered as “=CONFIDENCE.T(0.05,STDEV.S(A2:A11),COUNT(A2:A11))”. If a hypothesized mean falls outside the resulting interval, the corresponding P-value is below the chosen alpha of 0.05; if it falls inside, the P-value is above 0.05.
Using Excel to Interpret P-Values
Once the P-value has been calculated, it can be used to interpret the results of the data. Generally, if the P-value is less than 0.05, the results are considered to be statistically significant. If
the P-value is greater than 0.05, the results are considered to be not statistically significant.
Limitations of Using Excel to Calculate P-Values
Excel is a powerful tool for data analysis, but it is not always the best tool for calculating P-values. Excel does not always provide accurate results. It is often better to use a more specialized
statistical software for calculating P-values.
P-value is an important statistical measure that helps determine whether the results from a statistical hypothesis test are significant. Excel provides two functions for calculating P-value – the
T-Test and the Z-Test. The P-value can also be calculated by finding the confidence intervals of the data. Once the P-value has been calculated, it can be used to interpret the results of the data.
However, it is often better to use a more specialized statistical software for calculating P-values.
Frequently Asked Questions
1. What is a P Value?
A P value is a statistical measure that helps researchers determine whether the results of a study are statistically significant. It is the probability that the results of a study are due to chance.
For example, if the P value is 0.05, there is a 5% chance that the results of the study are due to chance. If the P value is less than 0.05, the results are considered statistically significant and
the hypothesis can be accepted.
2. What does P Value tell us?
The P value tells us the probability that the results of a study are due to chance. If the P value is 0.05 or less, it tells us that the results are statistically significant and the hypothesis can
be accepted. If the P value is higher than 0.05, it tells us that the results are not statistically significant and the hypothesis should not be accepted.
3. How do I get a P Value in Excel?
In Excel, there are a few ways to get a P value. You can use the T-Test data analysis tool to calculate the P value of a sample set. You can also use the Excel Analysis Toolpak to calculate the P
value of a sample set. Finally, you can use the F-Test data analysis tool to calculate the P value of a sample set.
4. What other data analysis tools can I use to get a P Value?
In addition to the T-Test and F-Test data analysis tools in Excel, there are a few other data analysis tools you can use to calculate a P value. These include the Chi-Square Test, the Z-Test, and the
Chi-Square Goodness of Fit Test. Each of these tools can be used to calculate the P value of a sample set.
5. How do I interpret a P Value?
A P value is typically interpreted in the context of a hypothesis test. If the P value is less than 0.05, then the results of the study are considered statistically significant and the hypothesis can
be accepted. If the P value is greater than 0.05, then the results of the study are not considered statistically significant and the hypothesis should not be accepted.
6. What is the difference between a P Value and a Confidence Interval?
A P value and a confidence interval are two different measures of statistical significance. A P value is a measure of the probability that the results of a study are due to chance. A confidence
interval is a range of values that is likely to contain the true population parameter. A confidence interval is typically expressed as a percentage, such as 95%.
How to Calculate Probability Value (P-Value) in Excel | P-Value in Statistical Hypothesis Tests
Knowing how to get p value in Excel is a valuable skill for anyone working with data. Whether you are running a statistical analysis or simply creating a chart, having the ability to quickly and
accurately calculate the p value in Excel can save you a lot of time and effort. With a few simple steps, you can calculate the p value in Excel and gain insights into your data that you may have
otherwise missed. | {"url":"https://keys.direct/blogs/blog/how-to-get-p-value-in-excel","timestamp":"2024-11-03T22:11:48Z","content_type":"text/html","content_length":"360145","record_id":"<urn:uuid:87cd5e12-eee1-4fbd-a6a8-717db49eb267>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00453.warc.gz"} |
Using the update function during variable selection
When fitting statistical models to data where there are multiple variables we are often interested in adding or removing terms from our model and in cases where there are a large number of terms it
can be quicker to use the update function to start with a formula from a model that we have already fitted and to specify the terms that we want to add or remove as opposed to a copy and paste and
manually editing the formula to our needs.
Consider the oil-bearing rocks data set that is available with the R software which is used extensively as an example by many authors. One model that can be used as a starting point is a linear model
with additive terms for the three variables:
> rock.mod1 = lm(log(perm) ~ area + peri + shape, data = rock)
> summary(rock.mod1)
lm(formula = log(perm) ~ area + peri + shape, data = rock)
Min 1Q Median 3Q Max
-1.8092 -0.5413 0.1735 0.6493 1.4788
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.333e+00 5.487e-01 9.720 1.59e-12 ***
area 4.850e-04 8.657e-05 5.602 1.29e-06 ***
peri -1.527e-03 1.770e-04 -8.623 5.24e-11 ***
shape 1.757e+00 1.756e+00 1.000 0.323
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.8521 on 44 degrees of freedom
Multiple R-squared: 0.7483, Adjusted R-squared: 0.7311
F-statistic: 43.6 on 3 and 44 DF, p-value: 3.094e-13
Given this model, saved as an object rock.mod1, we might be interested in considering adding an interaction term between the area and perimeter measurements. The update function has various options
and the simplest case is to specify a model object and a new formula. The new formula can use the period as shorthand for keeping everything on either the left or right hand side of the formula, and the plus or minus signs are used to add or remove terms from the model. In the case of adding an interaction term our call would be:
> rock.mod2 = update(rock.mod1, . ~ . + area:peri)
The first function argument is the name of the model we fitted previously and the periods indicate that we want to use the same response variable and to start with the whole formula but add an
interaction term between area and perimeter – the colon is used to specify an interaction term by itself. This fitted model is now:
> summary(rock.mod2)
lm(formula = log(perm) ~ area + peri + shape + area:peri, data = rock)
Min 1Q Median 3Q Max
-1.7255 -0.4760 0.1256 0.6539 1.4269
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.567e+00 8.533e-01 7.696 1.28e-09 ***
area 3.769e-04 1.025e-04 3.678 0.00065 ***
peri -2.141e-03 3.734e-04 -5.733 8.94e-07 ***
shape 4.022e-01 1.859e+00 0.216 0.82974
area:peri 6.641e-08 3.583e-08 1.854 0.07065 .
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.8295 on 43 degrees of freedom
Multiple R-squared: 0.7669, Adjusted R-squared: 0.7452
F-statistic: 35.37 on 4 and 43 DF, p-value: 4.404e-13
The update function can also be used to change other aspects of the linear model, and in fact many other types of model are set up to respond sensibly to this function.
Related posts: | {"url":"https://www.r-bloggers.com/2010/05/using-the-update-function-during-variable-selection/","timestamp":"2024-11-03T08:54:57Z","content_type":"text/html","content_length":"91351","record_id":"<urn:uuid:6c8815ac-df6c-46a4-9e8e-691046208e43>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00551.warc.gz"} |
Guillem DOMÈNECH | Emmy Noether Group Leader | Doctor of Science | Leibniz Universität Hannover, Hannover | Institute of Theoretical Physics | Research profile
I am currently leading an Emmy Noether research group at the institute for theoretical physics at the Leibniz University in Hannover. The group is supported by the German Research Foundation (DFG)
under grant no. DO 2574/1-1. My main research interests include theories of gravity and physics of the early universe. I am known for my contributions to the indirect generation of gravitational
waves from primordial fluctuations. Check my personal website at: https://domenechcosmo.netlify.app | {"url":"https://www.researchgate.net/profile/Guillem-Domenech-4","timestamp":"2024-11-10T18:38:42Z","content_type":"text/html","content_length":"1041957","record_id":"<urn:uuid:763d681a-b9f8-42ff-a284-3022f6b87b3a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00853.warc.gz"} |
Frequency distribution tables and graphs
A frequency distribution table in Excel gives you a snapshot of how your data is spread out. It’s usual to pair a frequency distribution table with a histogram. A histogram gives you a graph to go
with the table. In order to make a frequency distribution table in Excel with a histogram, you must have the Data Analysis Toolpak installed. Frequency distribution tables give you a snapshot of the
data to allow you to find patterns. A quick look at the above frequency distribution table tells you the majority of teens don't use any birth control at all. CHAPTER 1 Introduction to Statistics and Frequency Distributions 5. In many disciplines people make decisions that have the potential to improve people's lives, and these decisions should be informed by data. For example, a psychologist may conduct a study to determine if a new treatment reduces the symptoms of depression.
Frequency Distribution Table in Excel. A frequency distribution table in Excel gives you a snapshot of how your data is spread out. It’s usual to pair a frequency distribution table with a histogram.
A histogram gives you a graph to go with the table. In order to make a frequency distribution table in Excel with a histogram, you must have the Data Analysis Toolpak installed. (1) a vertical
rectangle represents each interval and the height of the rectangle equals the frequency recorded for each interval, (2) each rectangle represents the frequency of all scores in a distribution, (3)
each rectangle touches adjacent rectangles at the boundaries of each interval Worksheets are Frequency distribution work, Ch 2 frequency distributions and graphs, Introduction to statistics and
frequency distributions, Chapter 2 frequency distributions and graphs or making, Mat 142 college mathematics module 3 statistics terri miller, Tally charts and frequency tables, 15a 15b 15c 15d 15e
relative frequency and probability, Chapter 2: Frequency Distribution and Graphs 1. Chapter 2
Frequency Distributions
and Graphs
2. A frequency distribution is the organization of raw data in table form, using classes and frequencies.
3. The number of miles that the employees of a large department store traveled to work each day
Frequency tables, pie charts, and bar charts can be used to display the distribution of a single categorical variable. These displays show all possible values of
19 Dec 2018 Tables and graphs commonly express frequency and relative frequency. Constructing a Frequency Distribution Table. Frequency distribution Choose a Column table, and a column scatter
graph. If you are not ready to enter your own data, choose the sample data set: Frequency distribution data and 12 Oct 2018 Frequency distribution can be defined as the list, graph or table that is
able to display frequency of the different outcomes that are a part of the raw data in a table form, using classes and frequencies. □ Types of frequency distributions are categorical frequency
distribution, ungrouped frequency A frequency distribution is shown in a table in which information or data is arranged such as histograms, frequency curves and cumulative frequency graphs. The
graph can be created as an addition to the cumulative frequency distribution table. It can be easily done using Microsoft Excel. The creation of the cumulative A frequency distribution can be
structured either a graph or a table. It has two elements: The set of categories that make up the original measurement scale.
frequency distribution table. In Statistics the frequency of an event xi is the number fi of times the event occurred in the experiment or the study. We use the term
This page is about frequency distribution. Prepare a frequency-distribution table. 15,16,16,14,17,17,16,15,15,16,16,17,15,16 Double Bar Graph • Histogram A frequency distribution in the form of a
table or graph is used to organize and illustrate scores for a sample by showing the frequency of each score (or range of Example: The frequency distribution of the previous example is the table To
enable the researcher to draw charts and graphs for the presentation of data. 5. Frequency Distribution and Data: Types, Tables, and Graphs Data. Any bit of information that is expressed in a value
or numerical number is data. Frequency. The frequency of any value is the number of times that value appears in a data set. Frequency Distribution. Many times it is not easy or Frequency distribution
tables give you a snapshot of the data to allow you to find patterns. A quick look at the above frequency distribution table tells you the majority of teens don’t use any birth control at all. The
frequency was 2 on Saturday, 1 on Thursday and 3 for the whole week. Frequency Distribution By counting frequencies we can make a Frequency Distribution table.
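If you would rather build the same kind of frequency distribution table outside of Excel, here is a small Python sketch using pandas and the example scores quoted above (15, 16, 16, 14, ...); the class-interval edges are arbitrary choices made only for illustration.
import pandas as pd

# The example scores quoted above
scores = [15, 16, 16, 14, 17, 17, 16, 15, 15, 16, 16, 17, 15, 16]

# Ungrouped frequency distribution: how often each value occurs
ungrouped = pd.Series(scores).value_counts().sort_index()
print(ungrouped)

# Grouped frequency distribution: bin the values into class intervals (edges chosen arbitrarily)
grouped = pd.cut(pd.Series(scores), bins=[13.5, 15.5, 17.5]).value_counts().sort_index()
print(grouped)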
30 Aug 2016 This module shows how to summarize and interpret data using tables and graphs ; how to construct frequency distributions, relative frequency
As for categorical variables, frequency distributions may be presented in a table or a graph, including bar charts and pie or sector charts. The term frequency distribution has a specific meaning,
referring to the way observations of a given variable behave in terms of their absolute, relative or cumulative frequencies.
Frequency Distribution Table Now, imagine how difficult and cumbersome this process would get if there were a larger number of observations. If we were to include the test scores of all 20 students
in this class, it would be very difficult to understand and interpret such data unless it is ‘organized’. • Create and interpret frequency distribution tables, bar graphs, histograms, and line graphs
• Explain when to use a bar graph, histogram, and line graph • Enter data into SPSS and generate frequency distribution tables and graphs. HOW TO BE SUCCESSFUL IN THIS COURSE. Have you ever read a
few pages of a textbook and realized Chapter 2: Frequency Distribution and Graphs 1. Chapter 2
Frequency Distributions
and Graphs
2. A frequency distribution is the organization of raw data in table form, using classes and frequencies.
3. The number of miles that the employees of a large department store traveled to work each day
4. A frequency distribution table in Excel gives you a snapshot of how your data is spread out. It’s usual to pair a frequency distribution table with a histogram. A histogram gives you a graph to go
with the table. In order to make a frequency distribution table in Excel with a histogram, you must have the Data Analysis Toolpak installed.
A frequency distribution is shown in a table in which information or data is arranged such as histograms, frequency curves and cumulative frequency graphs. The graph can be created as an addition to
the cumulative frequency distribution table. It can be easily done using Microsoft Excel. The creation of the cumulative A frequency distribution can be structured either a graph or a table. It has
two elements: The set of categories that make up the original measurement scale. The following table gives the frequency distribution of marks obtained by 28 how to construct the cumulative frequency
table and cumulative frequency graph. 14 Jan 2020 In the field of statistics, a frequency distribution is a data set or graph that is organized to show frequencies of the occurrence outcomes.
frequency distribution table. In Statistics the frequency of an event xi is the number fi of times the event occurred in the experiment or the study. We use the term | {"url":"https://digoptionehsivre.netlify.app/storton15003kuha/frequency-distribution-tables-and-graphs-kop.html","timestamp":"2024-11-07T03:06:10Z","content_type":"text/html","content_length":"36280","record_id":"<urn:uuid:df865258-6571-4738-b4f7-72c7a55502eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00435.warc.gz"} |
How to Type Fractions in Microsoft Word & Mac Version | Techwalla
Most font types have symbols for common fractions, such as 1/2. This is true for Mac and Windows fonts. On a Windows system, Microsoft Word will automatically correct these common fractions as you
type. Even though common fractions are available in many Mac fonts, Apple computers do not provide the fraction auto-correct functionality in the Mac version of Word. To type additional fractions on
a Windows system, or to type any fractions on a Mac, you must insert the fraction as an equation in the document. The process is the same in both versions of Word.
Step 1
Click the "Insert" tab in Word and click the "Equation" icon.
Step 2
Click "Fraction" and select the layout of the fraction you want to create. Choose from a stacked, skewed, linear or small fraction.
Step 3
Enter the numerator and the denominator of the fraction in the small text boxes that appear. Click outside the squares to save the fraction and type the rest of your document. | {"url":"https://www.techwalla.com/articles/how-to-type-fractions-in-microsoft-word-mac-version","timestamp":"2024-11-07T23:37:34Z","content_type":"text/html","content_length":"312999","record_id":"<urn:uuid:e60b046e-2b06-41ea-8052-6fa9681f4da9>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00372.warc.gz"} |
AlaskaLinuxUser's Scratchpad
Doesn’t sound like a very useful tool to me, but it was the app I was required to build as part of my course. Here’s the download if you want to give it a try:
While not very complicated for the seasoned programmer, it was the most difficult app to build that I have built so far, which is good, since the course should progress and force me into harder and
harder to solve problems.
Overall, the complicated part was the math involved with checking for a square root or not. Fortunately, I found an article on StackOverflow to answer that. I even added a note about it in my app, so
that others would know where I got the math function from. Then there was the series of if/then statements, which whittled the answer down for the appropriate toast pop up. I learned a lot, and here
is what I did:
package com.mycompany.sotn;

import android.app.*;
import android.os.*;
import java.util.*;
import android.widget.*;
import android.view.*;
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.ImageView;

public class MainActivity extends Activity
{
    @Override
    protected void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        // The original listing omitted this method body; the layout resource name here is assumed.
        setContentView(R.layout.main);
    }

    class MyNumber {
        int chosenNumber;

        public boolean zeroNumber() {
            if (chosenNumber == 0) {
                return true;
            } else {
                return false;
            }
        }

        public boolean squareNumber() {
            // found method to check for square from Jaskaranbir Singh on StackOverflow
            double sqrt = Math.sqrt(chosenNumber);
            int x = (int) sqrt;
            if (Math.pow(sqrt, 2) == Math.pow(x, 2)) {
                return true;
            } else {
                return false;
            }
        }

        public boolean triNumber() {
            // If 8x + 1 is a square number then the number is triangular, per Wikipedia.
            // So let's get that number.
            int mathNumber = (chosenNumber * 8 + 1);
            // And check if it is a square number.
            double sqrt = Math.sqrt(mathNumber);
            int x = (int) sqrt;
            if (Math.pow(sqrt, 2) == Math.pow(x, 2)) {
                return true;
            } else {
                return false;
            }
        }
    }

    // Referenced from the layout via android:onClick on the button.
    public void onClick(View v) {
        EditText userQuestion = (EditText) findViewById(R.id.userNumberField);
        if (userQuestion.getText().toString().equals("")) {
            Toast toast = Toast.makeText(getApplicationContext(), "Please enter a number!", Toast.LENGTH_SHORT);
            toast.setGravity(Gravity.CENTER_VERTICAL | Gravity.CENTER, 0, 0);
            toast.show();
        } else {
            MyNumber thisTime = new MyNumber();
            int myNewNumber = Integer.parseInt(userQuestion.getText().toString());
            thisTime.chosenNumber = myNewNumber;
            if (thisTime.zeroNumber()) {
                Toast toast = Toast.makeText(getApplicationContext(), thisTime.chosenNumber + " is not a Triangular number, but is a Square number.", Toast.LENGTH_SHORT);
                toast.setGravity(Gravity.CENTER_VERTICAL | Gravity.CENTER, 0, 0);
                toast.show();
            } else if ((thisTime.squareNumber()) && (thisTime.triNumber())) {
                Toast toast = Toast.makeText(getApplicationContext(), thisTime.chosenNumber + " is a Triangular and a Square number.", Toast.LENGTH_SHORT);
                toast.setGravity(Gravity.CENTER_VERTICAL | Gravity.CENTER, 0, 0);
                toast.show();
            } else if (thisTime.squareNumber()) {
                Toast toast = Toast.makeText(getApplicationContext(), thisTime.chosenNumber + " is a Square number.", Toast.LENGTH_SHORT);
                toast.setGravity(Gravity.CENTER_VERTICAL | Gravity.CENTER, 0, 0);
                toast.show();
            } else if (thisTime.triNumber()) {
                Toast toast = Toast.makeText(getApplicationContext(), thisTime.chosenNumber + " is a Triangular number.", Toast.LENGTH_SHORT);
                toast.setGravity(Gravity.CENTER_VERTICAL | Gravity.CENTER, 0, 0);
                toast.show();
            } else {
                Toast toast = Toast.makeText(getApplicationContext(), thisTime.chosenNumber + " is neither a Triangular number, nor a Square number.", Toast.LENGTH_SHORT);
                toast.setGravity(Gravity.CENTER_VERTICAL | Gravity.CENTER, 0, 0);
                toast.show();
            }
        }
    }
}
First, I practiced relentlessly the if/then statements on http://www.browxy.com/ to make sure that the java code worked. It is a web browser based application that lets you input java code and run
it. With this website, I could run dozens of iterations without having to compile my app every time. It worked rather well for the quick check, then I just needed to add that code to AIDE, my Android
IDE compiler, and build the “app” around the code.
Be sure to give it a try! However, I think only math wiz’s will find this tool remotely useful. I did double check the math from 1 to 36, which seems to be accurate. I am not sure if this will work
on really large numbers or not, but I did try a few 6 digit numbers from a truth table I found online, and it was accurate. I was wondering how large a number would be too large, so I Googled it.
Java int numbers can go up to 2,147,483,647, which the app says is neither a triangular nor square number, and adding 1 to that value forces the app to crash. Interesting.
Linux – keep it simple. | {"url":"https://alaskalinuxuser3.ddns.net/2017/01/10/a-square-or-triangular-number/","timestamp":"2024-11-13T07:59:24Z","content_type":"text/html","content_length":"46256","record_id":"<urn:uuid:d668b9b0-6c6d-4a9b-a590-3940c57cff0c>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00687.warc.gz"} |
Multiplication Chart 1-250 2024 - Multiplication Chart Printable
Multiplication Chart 1-250
Multiplication Chart 1-250 – You can get a blank Multiplication Chart if you are looking for a fun way to teach your child the multiplication facts. This lets your youngster fill in the facts on their own. You can get blank multiplication charts for a variety of product ranges, including 1-9, 10-12, and 15 products. If you want to make your chart more exciting, you can add a game to it. Here are some suggestions to get your kid started: Multiplication Chart 1-250.
Multiplication Maps
You can use multiplication charts as part of your child's student binder to help them memorize math facts. Although many children can remember their arithmetic facts naturally, it takes many others more time to do so. Multiplication charts are an ideal way to reinforce their learning and boost their confidence. In addition to being educational, these charts may be laminated for added durability. Here are some helpful ways to use multiplication charts. You can also check out websites like these for useful multiplication fact resources.
This lesson covers the basic principles of the multiplication table. Together with understanding the rules for multiplying, students will grasp the concept of factors and patterns. By understanding how the factors work, students will be able to recall basic facts like five times four. They will also be able to use the properties of one and zero to solve more complex products. By the end of the lesson, students should be able to recognize patterns in multiplication chart 1.
Along with the standard multiplication chart, pupils might need to produce a chart with more or fewer factors. To create a multiplication chart with more factors, students need to create 12 tables, each with twelve rows and three columns. All 12 tables have to fit on one sheet of paper. Lines should be drawn with a ruler. Graph paper is ideal for this project. If graph paper is not an option, students can use spreadsheet programs to make their own tables.
Game ideas
Whether you are teaching a beginner multiplication course or working on mastery of the multiplication table, you can put together exciting and engaging activity ideas for Multiplication Chart 1. A few enjoyable suggestions follow. This game requires the students to work in pairs on the same problem. Then, they all hold up their cards and share their answer at the same moment. They win if they get it right!
When you're teaching children about multiplication, one of the best tools you can give them is a printable multiplication chart. These printable sheets come in a range of styles and can be printed on one page or many. Kids can learn their multiplication facts by copying them from the chart and memorizing them. A multiplication chart can be helpful for several reasons, from helping them learn their math facts to teaching them how to use a calculator.
Gallery of Multiplication Chart 1-250
Table Of 250 Learn 250 Times Table Multiplication Table Of 250
Free Printable Multiplication Table Chart 1 To 25 Template
Buy ABCKEY Multiplication Chart For Classroom With 250 Cards Set Of 2
Leave a Comment | {"url":"https://www.multiplicationchartprintable.com/multiplication-chart-1-250/","timestamp":"2024-11-06T11:32:20Z","content_type":"text/html","content_length":"52917","record_id":"<urn:uuid:173837d3-1231-46a3-b0ca-3526b00f38d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00642.warc.gz"} |
popping in the exhaust
I have an 04 ultraclassic with EFI. Lately when riding the bike feels "rough" and there is popping coming from the exhaust. I was thinking that the fuel mixture is too rich but am not sure.
Additionally now the bike vibrates more then it did in the past. 1) is there a way to manually adjust the fuel mixture if it is too rich without taking it to the shop. Also, I have a trip coming up
on Sunday that is roughly 2500 miles. Will this deteriorate the engine at all and is is safe to ride.
Looks like Hobbit has it covered.
Just a few more things, make sure the plug wires are all the way on and that the spark plug tops are screwed on tight. A new set of plugs wouldn't hurt either.
Also with popping out the tail, the exhaust gaskets may need some attention or the pipe flanges could be loose too.
Sounds like I had a similiar problem for a little while. Sputtering popping and backfiring when reducing rpm's. The main cause was filling up before the tank was too low. The gas stations I went to
had the three fuel selection on one pump. The problem with selecting premium is that you are putting in a few litres of regular before the pump starts putting out the premium. If you top up the tank
regularly you end up with more regular, low octane in the tank. The regular gas was in the hose all the way back to the pump intake. Now I have a few stations that have a designated premium pump and
have not experienced a problem since. Love the Chevron 94 down south, extra power, or so it seems.
Just learning a little, day by day, year by year, still smiling
Sounds like I had a similiar problem for a little while. Sputtering popping and backfiring when reducing rpm's. The main cause was filling up before the tank was too low. The gas stations I went
to had the three fuel selection on one pump. The problem with selecting premium is that you are putting in a few litres of regular before the pump starts putting out the premium. If you top up
the tank regularly you end up with more regular, low octane in the tank. The regular gas was in the hose all the way back to the pump intake. Now I have a few stations that have a designated
premium pump and have not experienced a problem since. Love the Chevron 94 down south, extra power, or so it seems.
Just learning a little, day by day, year by year, still smiling
Some food for thought...
Assume a gas hose is 3/4” diameter hose by 10’. That makes a gas hose a really long and skinny cylinder. We want to know the volume of that cylinder. It’s easier to do the math if you have like
units, so, in inches that’s 0.75” x 120”.
We know that cylinder volume = pi * radius squared * height. Or, if you like using the diameter, 1/4 * pi * diameter squared * height.
I used the radius formula. Of course we know radius is 1/2 the diameter so 0.75” / 2 = 0.375”.
Up with me so far?:cheers
So, 3.14 * 0.375 * 0.375 * 120 ≈ 53.0 cubic inches.
Now, 1 cubic inch = 0.554112552 US fluid ounces.
So, 53.0 cubic inches * 0.554112552 ounces per cubic inch ≈ 29.4 ounces in that 3/4" x 10’ hose. Let’s round that off to 29 ounces.
We know that there are 128 ounces per gallon, and 32 ounces per quart. Assuming the hose was completely full, at most, you are looking at about 29 ounces of unknown octane rated gas.
That is because 29.4 / 128 = .23 (rounded), so roughly 23% of one gallon, or a little less than a quart. In the context of drinks, that's about one large bottle of Gatorade with the last swallow left in the bottle.
But this assumes there is a maximum volume of gas in the hose.
In actuality, the gas valves incorporate a vacuum relief valve and there is likely no gas in the hose except for residual wetness which is of little to no consequence.
There ya have it!
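If anyone wants to sanity-check the arithmetic, a few lines of Python reproduce the cylinder-volume calculation; the 3/4 inch by 10 foot hose dimensions are just the assumed figures from above.
import math

diameter_in = 0.75            # assumed hose inner diameter, inches
length_in = 10 * 12           # assumed hose length, inches (10 feet)
OZ_PER_CUBIC_INCH = 0.554112552

radius_in = diameter_in / 2
volume_in3 = math.pi * radius_in ** 2 * length_in
volume_oz = volume_in3 * OZ_PER_CUBIC_INCH

print(f"{volume_in3:.1f} cubic inches = {volume_oz:.1f} fluid ounces")
print(f"= {volume_oz / 128:.0%} of a gallon, or {volume_oz / 32:.2f} quarts")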
Last edited:
Yea but which hole did the rabbit go in? :shock
There you go and oil in the right!!
You must have owned an Indian at one time or another :s | {"url":"https://www.hdtimeline.com/threads/popping-in-the-exhaust.1688/","timestamp":"2024-11-04T11:44:17Z","content_type":"text/html","content_length":"72004","record_id":"<urn:uuid:43ab0ee4-7f76-4170-8ed9-ad3477eb5c11>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00150.warc.gz"} |
Frontiers | Volume Prediction With Neural Networks
• Department of Mathematics, Bar-Ilan University, Ramat Gan, Israel
Changes in intraday trading volume are integral to any algorithmic trading strategy. Accordingly, forecasting the change in trading volume is paramount to better understanding the financial markets.
This paper introduces a new method to forecast the log change in trading volume, leveraging the power of Long Short Term Memory (LSTM) networks in conjunction with Support Vector Regression (SVR) and
Autoregressive (AR) models. We show that LSTM contributes to a more accurate forecast, particularly when constructed as part of a hybrid model with AR. The algorithm is extended to include data about
the time of day, helping the model associate the log change in trading volume with the current hour, which yields the best performance of all trials.
1. Introduction
In recent years, deep learning became the subject of a growing body of research in many disciplines, including applications in finance (Dixon et al., 2017). Despite its popularity, only a handful of
studies have been done on leveraging deep learning methods in volume prediction (Árpád Szűcs, 2017).
As a result of the growth in deep learning applications, neural networks and specifically Long Short Term Memory networks (LSTM) became popular. LSTM networks in particular demonstrated success in
natural language processing as well as in predicting the next element in a sequence or even the entire sequence. This ability can also be applied to prediction of financial trends, including change
in trading volume of stocks—a subject with high significance as it can be applied to assist in solving a wide variety of financial problems. For example, an algorithmic trader might use the
prediction of the trading volume to determine the size of a position on a certain security. Predicting the change in trading volume has applications for risk management, as well. For instance, a
trader may decide to limit intraday exposure, e.g., exposure throughout the trading day, in accordance with changes in trading volume. This area of research may also have some applications in
regulatory settings. A model that can predict the change in trading volume may be useful in recognizing irregular activity, such as a sharp increase in volume when a decline would be expected.
Despite its importance, thus far, only a limited number of papers have been published on this topic (Árpád Szűcs, 2017). Thus, the prediction of trading volume, and particularly the intraday change
in trading volume, is still an open subject with very limited research. This scarcity is even more pronounced when focusing on the use of deep learning methods and specifically LSTM in forecasting,
as well as combining LSTM with other algorithms to create hybrid models.
In this contribution, we leveraged the power of LSTM to predict the change in trading volume of S&P 500 ETF (NYSE:SPY) over the course of the trading day. We implemented LSTM on its own as well as a
hybrid model where we combined LSTM with other algorithms. Our results show that LSTM contributes to a superior prediction of the change in volume.
We also used a method called Support Vector Regression (SVR), a type of Support Vector Machine (SVM) first introduced in 1995 by Cortes and Vapnik (1995) and more thoroughly explored in Smola and
Schölkopf (2004). SVR works similarly to SVM, generating the predictions by finding a hyper-plane that is then used for the regression. As explained further below, we leveraged SVR in conjunction
with other algorithms to create several hybrid models. Our goal was to compare the performance of different approaches and discern whether combining such different approaches together yields any
improvement over using these same models individually.
2. Literature Review
Compared to price, on which plenty has been written, only a small number of articles have been published on predicting volume (Árpád Szűcs, 2017). Still, predicting and generally better understanding
volume remains important because many market players and traders are affected by the trading volume. In addition, price and volume are known to be positively correlated, a phenomenon that has been
studied at length, particularly during the 1980s by Karpoff (1987). These works focused on finding the long-term correlation between volume and delta price squared, defined as the square of the
change in price.
Studies show that the change in intraday trading volume may be affected by a variety of factors, including patterns in the opening, closing, auctions, news releases, and market microstructures, as
well as numerous other factors (Kissell, 2014). On the other hand, volume may also be used to predict market volatility, as shown by Fleming et al. (2008), Wagner and Marsh (2004), and Lamoureux and
Lastrapes (1990). Thus, forecasting volume is a complex task. This paper seeks to explore the usefulness of LSTM in predicting the change in overall intraday trading volume as well as compare the
performance of LSTM in conjunction with other models.
Several recent examples of attempts to predict volume behavior include Alvim et al. (2010) and Chen et al. (2016). In Alvim et al. (2010), the authors tried to predict volume using Partial Least
Squares (PLS) and Support Vector Regression (SVR). Both methods outperformed the benchmark, an approach based on the trading volume of the previous time intervals.
A second article is (Chen et al., 2016), where the authors used the Kalman Filter approach in order to predict intraday volume and Volume Weighted Average Price (VWAP), which is calculated by summing
the intraday number of shares multiplied by their price and divided by the daily total number of shares. The authors introduced a closed-form expectation-maximization in order to calibrate their
model. This forecasting approach outperformed their two benchmarks: (1) Moving Average (MA) and (2) Component Multiplicative Error Model.
While some limited work can be found on the prediction of actual volume and VWAP (Volume Weighted Average Price), papers that attempt to predict the change in volume are extremely rare. One
noteworthy article is (Podobnik et al., 2009), where the authors were successful in finding a cross-correlation between the change in trading volume, calculated as log of the daily difference in
volume, and the price.
Other than this study, to the best of our knowledge no other work has been published on studying the change in volume. This is surprising, because change in volume can be extremely useful for market
makers in their decision-making, especially when dealing with intraday intervals. For instance, certain algorithmic trading strategies might only succeed when trading activity is expected to increase
in the next few minutes. For such strategies, long term volume predictions would not be useful. Our research addresses this issue by comparing a few learning algorithms that focus on predicting the
next time stamp volume change based on the trading information from a relatively short window of recent activity.
Deep learning started to gain acceptance during the 1980s but recently grew in popularity due to the increase in parallel computation power and availability of massive amounts of data. This led to
the development of many types of neural networks, each geared toward solving a different problem. One of these, the Recurrent Neural Network (RNN), is intended for learning on sequential data x^1, x^2, …, x^n (Goldberg, 2017).
The following formulas explain the structure of the RNN network by showing what is happening in each layer:
$R(s_{i-1}, x_i) = f(s_{i-1}U + x_iW), \quad s_i = R(s_{i-1}, x_i), \quad y_i = O(s_i)$   (1)
Each layer produces two outputs: s[i] which is the information passed along the network and y[i], which is optional. We can choose a different structure that produces only one output at the last
layer. The s[i] vector serves as the network memory, which helps the network to keep track of previous inputs when producing the output. The function f is a non-linear function such as tanh, which is
applied element-wise. W and U are weight matrices that are learned using back propagation.
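The recurrence in equation (1) can be written out in a few lines of code. The sketch below is a minimal NumPy illustration rather than the paper's implementation; the vector sizes, the random weights, and the choice of tanh for the nonlinearity f are assumptions made only for the example.
import numpy as np

def rnn_step(s_prev, x, U, W):
    # s_i = f(s_{i-1} U + x_i W), with f = tanh applied element-wise
    return np.tanh(s_prev @ U + x @ W)

state_dim, input_dim = 8, 6                     # assumed sizes
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(state_dim, state_dim))
W = rng.normal(scale=0.1, size=(input_dim, state_dim))

s = np.zeros(state_dim)                         # initial state s_0
sequence = rng.normal(size=(5, input_dim))      # five input steps x_1 ... x_5
for x in sequence:
    s = rnn_step(s, x, U, W)                    # the same weights are reused at every step
print(s)                                        # final state; an output layer O(s) would read from this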
More recently, we witnessed the rise of Long Short Term Memory (LSTM) networks, which were introduced to address a basic flaw in the ability of RNN to deal with long term memory. LSTM networks are
able to handle the vanishing/exploding gradient problem, which was first introduced by Bengio et al. (1994) and further explored in Pascanu et al. (2013). In Hochreiter and Schmidhuber (1997), LSTM
networks employ multiplicative gate units to achieve this, adding a memory cell and gate units to the network. The idea is to provide an additional route for historical information to move through
the layers without being affected by the vanishing gradient phenomenon. In each layer t − 1 the output that passes on to the next layer consists of two vectors: c[t−1], which is the memory cell, and
s[t−1], which is similar to the information that is being passed in regular RNN networks. If we let “⊙” represent entry wise composition, then at layer t, the following algorithm is applied:
$c_t = f \odot c_{t-1} + i \odot z$   (2)
where $f = \sigma(x_t W^{xf} + s_{t-1} W^{sf})$ is a gate that is used to control the information that passes from the past through $f \odot c_{t-1}$, which is the information to retain from previous layers. The vector $i = \sigma(x_t W^{xi} + s_{t-1} W^{si})$ is a gate used to control the new information to add from the vector $z = \tanh(x_t W^{xz} + s_{t-1} W^{sz})$. The new information to add is determined by $i \odot z$. The weight matrices $W^{xf}, W^{sf}, W^{xi}, W^{si}, W^{xz}$, and $W^{sz}$ are all trained using back propagation. However, due to the paths created by the gates, the gradients do not vanish and the long memory can flow through the different layers.
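To make the gate equations concrete, here is a minimal NumPy sketch of one LSTM step. It is not the authors' TensorFlow code: the shapes, the random weights, the absence of bias terms, and the standard output-gate and hidden-state update (which the text above does not spell out) are all assumptions made for illustration.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(c_prev, s_prev, x, Wx, Ws):
    f = sigmoid(x @ Wx["f"] + s_prev @ Ws["f"])   # forget gate
    i = sigmoid(x @ Wx["i"] + s_prev @ Ws["i"])   # input gate
    z = np.tanh(x @ Wx["z"] + s_prev @ Ws["z"])   # candidate new information
    c = f * c_prev + i * z                        # equation (2): element-wise f*c_{t-1} + i*z
    o = sigmoid(x @ Wx["o"] + s_prev @ Ws["o"])   # output gate (standard LSTM formulation)
    s = o * np.tanh(c)                            # hidden state passed to the next layer
    return c, s

input_dim, state_dim = 6, 8                       # assumed sizes
rng = np.random.default_rng(1)
Wx = {g: rng.normal(scale=0.1, size=(input_dim, state_dim)) for g in "fizo"}
Ws = {g: rng.normal(scale=0.1, size=(state_dim, state_dim)) for g in "fizo"}

c = np.zeros(state_dim)                           # memory cell c_0
s = np.zeros(state_dim)                           # hidden state s_0
for x in rng.normal(size=(5, input_dim)):         # a five-step input window
    c, s = lstm_step(c, s, x, Wx, Ws)
print(s)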
In recent years, there has been a growing body of research claiming to achieve better forecasting results with hybrid models that combine multiple learning algorithms as compared to a single
algorithm model. Hybrid models have been successful in financial research applications, as detailed in Cavalcante et al. (2016). One example is (Liang et al., 2009), where the authors predicted
future options prices using conventional pricing techniques combined with two learning models: Neural Networks and Support Vector Regression. The authors used this hybrid model on empirical data from
the Hong Kong options market and showed that it returned results superior to standard methods used for option pricing. We experimented with hybrid models as well.
3. Methodology
For our research, we used minute price and volume trading data of the S&P 500 ETF (NYSE:SPY) between 2012 and 2015. The data was purchased from QuantQuote.
We divided this data into three sections: train, development, and test. The train dataset was from Jan 1, 2012 to December 31, 2013. The development dataset was from Jan 1, 2014 to April 30, 2014.
The test dataset was from May 1, 2014 to September 30, 2014. Table 1 below outlines several descriptive statistics metrics on the three different datasets.
Table 1. Descriptive statistics for the train, development, and test datasets (table not reproduced here).
We trained each of the algorithms described below on the train dataset but selected the best-performing parameters based on the lowest error we got on the development dataset. This was done to
achieve cross-validation, since the models are prone to overfitting on the train dataset. We used the parameters to evaluate performance on the test dataset and compared the results of each model. We
used the Tensorflow package to build and execute the LSTM algorithm as well as track the results.
We tested a total of nine methods to predict the change in trading volume of the S&P 500 ETF over the course of the trading day. These included LSTM and several other models explained further below.
In order to find the best way to predict change in log volume on a 10-min interval, we experimented with a few methods. The first method, labeled “AR,” was a simple Auto Regressive (AR) model on the
log of trading volume figures. We calculated AR using the following formula:
$\hat{v}_i = a v_{i-1} + b$   (3)
where $\hat{v}_i$ represents the predicted log of the trading volume and $v_i$ represents the log of the actual trading volume. The parameters a, b are fitted using the intraday volume data, e.g., from the beginning of the training up until the last known value i − 1. As the formula illustrates, each prediction is calculated as a linear combination of the last value. Lastly, we generated the prediction of the change in the log volume by calculating $\hat{y}_i = \hat{v}_i - v_{i-1}$.
Initially, we fitted the AR model on the log volume of the train dataset. Next, we evaluated the differences, e.g., ŷ[i], on the test dataset. This AR method served as our benchmark.
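A minimal sketch of such an AR(1) benchmark is given below. The synthetic log-volume series and the use of an ordinary least-squares fit via np.polyfit are illustrative assumptions; the paper does not specify its fitting routine.
import numpy as np

rng = np.random.default_rng(42)
log_volume = 12.0 + 0.01 * np.cumsum(rng.normal(size=200))   # stand-in for real 10-min log volumes

# Fit v_hat_i = a * v_{i-1} + b by ordinary least squares on the training part
train = log_volume[:150]
a, b = np.polyfit(train[:-1], train[1:], deg=1)

# One-step-ahead evaluation on the remaining part
test = log_volume[150:]
v_hat = a * test[:-1] + b                  # predicted next log volume
y_hat = v_hat - test[:-1]                  # predicted change in log volume
y_true = test[1:] - test[:-1]              # actual change in log volume

mae = np.mean(np.abs(y_hat - y_true))
print(f"a = {a:.3f}, b = {b:.3f}, MAE = {mae:.4f}")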
We ran two tests on the log 10-min volume data to ensure that AR is appropriate for our purpose. First, to check whether the data is stationary, we used the Augmented Dickey–Fuller test (Dickey and
Fuller, 1979), which returned a result that allowed us to reject the null hypothesis that the data is non-stationary. The test output can be seen in Table 2. Next, we performed a lag
analysis, which illustrates that the auto-correlation decreases with the lag. The results of this analysis can be seen in Figure 1. Taken together, these provide support for using AR with a lag of 1,
or AR(1).
Table 2. Augmented Dickey–Fuller test output on the 10-min log volume data (table not reproduced here).
Figure 1. Autocorrelation of the 10-min log volume as a function of lag (figure not reproduced here).
The second method, labeled “LSTM,” involved running LSTM where the feature vector was comprised of change in log price and log volume over a 50-min window (a sequence of five 10-min intervals). Here,
we attempted to predict the change in log volume for the next 10-min interval. Specifically, for each 10-min interval t we defined a window W[t] as:
$W_t^T = (\Delta v_{t-1}, \ldots, \Delta v_{t-5},\ \Delta h_{t-1}, \ldots, \Delta h_{t-5},\ \Delta l_{t-1}, \ldots, \Delta l_{t-5},\ \Delta c_{t-1}, \ldots, \Delta c_{t-5},\ \Delta o_{t-1}, \ldots, \Delta o_{t-5})$   (4)
where Δv[t] is the change in volume, Δh[t] is the change in high price, Δl[t] is the change in low price, Δc[t] is the change in close price, and Δo[t] is the change in open price, all for a 10-min
interval. We chose the window size of 5 after some early trial and error suggested that it may have the best prediction potential. However, optimization of the window size along with other model
parameters may require additional research.
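To illustrate how such 50-minute windows can be assembled into a feature matrix, here is a small NumPy sketch; the synthetic delta array and the flattened column layout are assumptions made for the example, not the authors' preprocessing code.
import numpy as np

WINDOW = 5   # five 10-minute intervals per feature window

def build_windows(deltas):
    # deltas has shape (T, 5): the columns hold (dv, dh, dl, dc, do) for each 10-min interval
    X, y = [], []
    for t in range(WINDOW, len(deltas)):
        X.append(deltas[t - WINDOW:t].flatten())   # 25 features: 5 intervals x 5 deltas
        y.append(deltas[t, 0])                     # target: the next interval's change in log volume
    return np.array(X), np.array(y)

rng = np.random.default_rng(7)
deltas = rng.normal(scale=0.05, size=(100, 5))     # stand-in for real log-differenced data
X, y = build_windows(deltas)
print(X.shape, y.shape)                            # (95, 25) (95,)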
For our third method, labeled “LSTM-AR,” we added the AR predictions for the log price and log volume into the LSTM feature vector. We accomplished this by calculating the AR prediction set of the
log prices and log volumes. The prediction set was comprised of the open, close, high, and low prices during any given 10-min interval. We chose to leverage AR to predict the figures, then calculated
the delta between the prediction and the latest actual data. For example, we used AR to predict the next open, then subtracted from it the last known open to arrive at the delta. This was repeated
for each 10-min interval in the 50-min window. These delta figures were then incorporated into the LSTM feature vector.
For our fourth method, labeled “LSTM-SVR,” we created a hybrid model combining the results from LSTM with SVR. This was achieved by using the LSTM output as the SVR feature vector.
For our fifth method, labeled “LSTM-AR-SVR,” we used the model from our “LSTM-AR” method, then fed the output into the SVR feature vector.
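A sketch of how the hybrid is wired is shown below; again this is illustrative, and the SVR kernel and hyperparameters are assumptions not stated in the text.

```python
# LSTM-SVR hybrid: the LSTM's outputs become the SVR's feature vector.
from sklearn.svm import SVR

# `model`, `X`, `y` are the trained LSTM and training data from the previous sketch
lstm_train_out = model.predict(X, verbose=0)      # shape (N, 1)
svr = SVR(kernel="rbf")
svr.fit(lstm_train_out, y)

# At prediction time, run new windows through the LSTM, then through the SVR
y_hat_hybrid = svr.predict(model.predict(X, verbose=0))
```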
One of the problems we encountered was that LSTM, by itself, could not capture the U-shape characteristic of the daily volume. This is because the LSTM can only look at a 50-min window, whereas the
U-shape typically becomes apparent when examining a longer period of time, spanning several hours or even an entire trading day. In an attempt to help LSTM better understand the daily trends in
volume, we decided to add the hour to the feature vector. We implemented this on the “LSTM,” “LSTM-AR,” “LSTM-SVR,” and “LSTM-AR-SVR” models, and labeled them “LSTM-HR,” “LSTM-AR-HR,” “LSTM-SVR-HR,”
“LSTM-AR-SVR-HR,” respectively.
The performance of the models was evaluated using three scores: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and the ability of the model to capture the correct direction of the change
(Correct Direction), i.e., whether the next timestamp change of log trading volume was positive or negative. We calculated each metric as shown in Table 3 below, where $\hat{y}_i$ represents the predicted
log change in the volume, $y_i$ represents the actual log change in the volume, and N is the number of data points.
TABLE 3
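Written out explicitly, the three scores can be computed as follows (an illustrative sketch of the definitions in Table 3):

```python
import numpy as np

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))

def correct_direction(y, y_hat):
    """Fraction of intervals where the predicted sign of the change matches the actual sign."""
    return np.mean(np.sign(y) == np.sign(y_hat))
```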
4. Results
The results of the experiments are shown in Table 4 below, sorted in descending order by the MAE value, i.e., the best result (lowest MAE) appears in the last row. The results are also displayed in
Figure 2.
TABLE 4
FIGURE 2
Figure 2. A comparison of the MAE, RMSE, and percentage of correct direction predictions from each trial. The hybrid model, combining LSTM with AR and hourly data, performed the best.
As can be seen from the table, LSTM-AR-HR gave the best performance of all models, with MAE of 0.7669 and correct direction of 0.7054. This represents a substantial improvement over the AR results
of 1.0493 MAE and 0.6350 correct direction. All of the algorithms resulted in an improvement over the AR trial, yielding both lower MAE and higher correct direction.
Interestingly, the LSTM-SVR model produced a slightly lower value of MAE error but performed significantly better in the ability to predict the correct direction of the log change in volume. This can
be explained by the SVR's margin, which gives it the ability to understand and learn overall trends in data—in this case, the change in log volume. On the flip side, this also means that the SVR
model is less able to capture smaller, more nuanced changes, particularly over shorter time periods.
From these experiments, it is evident that LSTM contributes to a prediction algorithm that is superior to AR. Furthermore, the addition of the hour information into the feature vector further helps
LSTM understand and model the data, more so than combining LSTM with other models. However, combining LSTM with SVR and/or AR also improves the model's performance, although SVR is superior to AR
when each is combined with LSTM individually. As explained above, the best results are achieved by adding the hour data and combining LSTM with AR.
Since the hour of the day played an important role in the prediction, we further analyzed its effect and whether it can be used by itself to predict the change in log volume on intraday data. First,
the importance of intraday time is evident from Figure 3, which shows the average volume by hour over a 1 year period. In this graph, we can easily notice the U-shape of the average volume, e.g., in
certain hours in the day such as mid-day, volume tends to decrease, while at others, such as the early morning and late afternoon, volume tends to increase, on average.
FIGURE 3
Figure 3. Average trading volumes by hour during 2013. Although daily data can deviate substantially from the average, the typical U-shape is clearly visible, resulting from higher activity in the
early and late hours of the trading day along with a dip around mid-day.
However, attempting to predict the intraday change in log volume based on this phenomenon yields results that are far less accurate than the other methods we deployed. We attempted to predict the
change in log volume in a few ways. First, we tried to use the expected average volume, which gave us MAE of 1.1086 and RMSE of 2.256. Next, we used LSTM with a window of five 10-min intervals, where
the only feature we sent was the hour—similar to the other algorithms we used in this study. This yielded MAE of 0.9165 and RMSE of 1.3907. In both experiments, our ability to forecast the right
direction of the change in the log volume dropped below 60%. This indicates that the time of day, by itself, does not perform well in attempting to predict the change in log volume. In other words,
attempting to predict the log change in volume based on whether we would expect the volume to increase or decrease according to the time of day is not a good strategy. This is because trading data
is volatile over individual days. Further, incorporating actual trading data brings additional information, and leveraging this data along with the power of LSTM is
valuable in improving predictions.
We also examined the errors generated by our best-performing model, “LSTM-AR-HR.” The errors were calculated as the difference between our model's prediction of the change and the actual change in
log volume for the interval. For our analysis, we checked for auto-correlations between the errors. The results, as can be seen in Figure 4, show that there is no auto-correlation between the errors
in the time series.
FIGURE 4
Figure 4. A graph of the error auto-correlation by lag. The thin band represents the 95% confidence interval.
5. Conclusion
In this paper, our goal was to test the performance of LSTM on its own as well as when combined with other models in predicting the log change in trading volume during the trading day. We compared
LSTM, LSTM combined with Support Vector Regression (SVR), LSTM combined with AR, and a combination of all three. We also added the hour into the feature vector, which proved helpful in
predicting the log change in volume. We attribute this improvement to the general trend in intraday trading volume, which typically resembles a U-shape with trading volume peaking during the early
and late trading hours in a day.
Predicting the change in volume is key in a variety of financial applications, including algorithmic trading, where knowing the change in trading volume can impact the trading strategy. In
particular, we focused on predicting the change in trading volume over a short timespan, which is helpful in adopting the most profitable strategy over the next few minutes. Future research can look
further into this topic by incorporating additional, newer models to improve predictions. It would also be interesting to explore the variation in the U-shape over the course of different trading
days to better understand and perhaps even predict the entire U-shape based on the U-shape of the preceding days, for example.
Data Availability Statement
The datasets for this manuscript are not publicly available because purchase is required—we purchased stock minute data from QuantQuote. Requests to access the datasets should be directed to
Author Contributions
As a primary researcher, DL was responsible for collecting and processing the relevant data, writing the code, experimenting with different algorithms, comparing results, and authoring most of the
paper. SH provided guidance throughout the project, debating and proposing additional methods to deploy. During the authoring stage, he brought up a multitude of thoughtful comments to help refine
the paper and ensure its fit with the institution's academic standards. MS supervised the project. She was instrumental in ideation, direction, and verifying all mathematical calculations. MS also
facilitated access to crucial resources, including the data sources as well as several colleagues that served as advisors and mentors throughout the project. She contributed valuable feedback
throughout the process that proved essential to obtaining quality results in a timely manner.
This research was based upon work supported by Google Cloud.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Alvim, L. G., dos Santos, C. N., and Milidiu, R. L. (2010). “Daily volume forecasting using high frequency predictors,” in Proceedings of the 10th IASTED International Conference, Vol. 674
(Innsbruck), 248.
Árpád Szűcs, B. (2017). Forecasting intraday volume: comparison of two early models. Finan. Res. Lett. 21, 249–258. doi: 10.1016/j.frl.2016.11.018
Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 5, 157–166. doi: 10.1109/72.279181
Cavalcante, R. C., Brasileiro, R. C., Souza, V. L., Nobrega, J. P., and Oliveira, A. L. (2016). Computational intelligence and financial markets: a survey and future directions. Expert Syst. Appl.
55, 194–211. doi: 10.1016/j.eswa.2016.02.006
Chen, R., Feng, Y., and Palomar, D. (2016). Forecasting intraday trading volume: a kalman filter approach. SSRN Electron. J. doi: 10.2139/ssrn.3101695
Cortes, C., and Vapnik, V. (1995). Support-vector networks. Mach. Learn. 20, 273–297. doi: 10.1007/BF00994018
Dickey, D. A., and Fuller, W. A. (1979). Distribution of the estimators for autoregressive time series with a unit root. J. Am. Stat. Assoc. 74, 427–431. doi: 10.1080/01621459.1979.10482531
Dixon, M., Polson, N., and Sokolov, V. (2017). Deep learning for spatio-temporal modeling: dynamic traffic flows and high frequency trading. arXiv preprint arXiv:1705.09851.
Fleming, J., Kirby, C., and Ostdiek, B. (2008). The specification of garch models with stochastic covariates. J. Futures Markets 28, 911–934. doi: 10.1002/fut.20340
Goldberg, Y. (2017). Neural network methods for natural language processing. Synth. Lect. Hum. Lang. Technol. 10, 1–309. doi: 10.2200/S00762ED1V01Y201703HLT037
Hochreiter, S., and Schmidhuber, J. (1997). Long short-term memory. Neural Comput. 9, 1735–1780. doi: 10.1162/neco.1997.9.8.1735
Karpoff, J. M. (1987). The relation between price changes and trading volume: a survey. J. Finan. Quant. Anal. 22, 109–126. doi: 10.2307/2330874
Kissell, R. (ed.). (2014). “Chapter 2 - market microstructure,” in The Science of Algorithmic Trading and Portfolio Management (San Diego, CA: Academic Press), 47–85.
Lamoureux, C. G., and Lastrapes, W. (1990). Heteroskedasticity in stock return data: volume versus garch effects. J. Finan. 45, 221–29. doi: 10.1111/j.1540-6261.1990.tb05088.x
Liang, X., Zhang, H., Xiao, J., and Chen, Y. (2009). Improving option price forecasts with neural networks and support vector regressions. Neurocomputing 72, 3055–3065. doi: 10.1016/
Pascanu, R., Mikolov, T., and Bengio, Y. (2013). “On the difficulty of training recurrent neural networks,” in International Conference on Machine Learning (Atlanta, GA), 1310–1318.
Podobnik, B., Horvatic, D., Petersen, A. M., and Stanley, H. E. (2009). Cross-correlations between volume change and price change. Proc. Natl. Acad. Sci. U.S.A. 106, 22079–22084. doi: 10.1073/
Smola, A. J., and Schölkopf, B. (2004). A tutorial on support vector regression. Stat. Comput. 14, 199–222. doi: 10.1023/B:STCO.0000035301.49549.88
Wagner, N., and Marsh, T. (2004). Surprise volume and heteroskedasticity in equity market returns. Quant. Fin. 5, 153–168. doi: 10.2139/ssrn.591206
Keywords: volume prediction, LSTM, neural networks, change in volume, finance, machine learning
Citation: Libman D, Haber S and Schaps M (2019) Volume Prediction With Neural Networks. Front. Artif. Intell. 2:21. doi: 10.3389/frai.2019.00021
Received: 27 March 2019; Accepted: 23 September 2019;
Published: 09 October 2019.
Copyright © 2019 Libman, Haber and Schaps. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in
other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic
practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Daniel Libman, daniel.libman@biu.ac.il | {"url":"https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2019.00021/full","timestamp":"2024-11-09T16:51:50Z","content_type":"text/html","content_length":"425988","record_id":"<urn:uuid:f19f74a4-1b7b-4806-af16-3dc697cdcd7f>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00791.warc.gz"} |
1. ^ Stillwell, John (2002), Mathematics and its history, Springer, p. 374, ISBN 978-0-387-95336-6
• Chandler, B.; Magnus, Wilhelm (December 1, 1982), The History of Combinatorial Group Theory: A Case Study in the History of Ideas, Studies in the History of Mathematics and Physical Sciences
(1st ed.), Springer, p. 234, ISBN 978-0-387-90749-9 | {"url":"https://www.knowpia.com/knowpedia/Combinatorial_group_theory","timestamp":"2024-11-08T12:05:37Z","content_type":"text/html","content_length":"69043","record_id":"<urn:uuid:270251c7-cfc8-46c3-ab8d-fb73518f05fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00689.warc.gz"} |
Good or bad Bayes?
My brother Andy pointed me to this discussion on Tamino’s Open Mind blog of Bayesian vs. frequentist statistical methods. It’s focused on a nice, clear-cut statistics problem from a textbook by
David MacKay, which can be viewed in either a frequentist or Bayesian way:
We are trying to reduce the incidence of an unpleasant disease called microsoftus. Two vaccinations, A and B, are tested on a group of volunteers. Vaccination B is a control treatment, a placebo
treatment with no active ingredients. Of the 40 subjects, 30 are randomly assigned to have treatment A and the other 10 are given the control treatment B. We observe the subjects for one year
after their vaccinations. Of the 30 in group A, one contracts microsoftus. Of the 10 in group B, three contract microsoftus. Is treatment A better than treatment B?
Tamino reproduces MacKay’s analysis and then proceeds to criticize it in strong terms. Tamino’s summary:
Let $\theta_A$ be the probability of getting "microsoftus" with treatment A, while $\theta_B$ is the probability with treatment B. He adopts a uniform prior, that all possible values of $\theta_A$ and $\theta_B$ are equally likely (a standard choice and a good one). "Possible" means between 0 and 1, as all probabilities must be.
He then uses the observed data to compute posterior probability distributions for $\theta_A,~\theta_B$. This makes it possible to compute the probability that $\theta_A < \theta_B$ (i.e., that you’re less likely to get the disease with treatment A than with B). He concludes that the probability is 0.990, so there’s a 99% chance that treatment A is superior to treatment B (the placebo).
Tamino has a number of objections to this analysis, which I think I agree with, although I’d express things a bit differently. To me, the problem with the above analysis is precisely the part that
Tamino says is “a standard choice and a good one”: the choice of prior.
MacKay’s choice of prior expresses the idea that, before looking at the data, we thought that all possible pairs of probabilities ($\theta_A,~\theta_B$) were equally likely. That prior is very
unlikely to be an accurate reflection of our actual prior state of belief regarding the drug. Before you looked at the data, you probably thought there was a non-negligible chance that the drug had
no significant effect at all — that is, that the two probabilities were exactly (or almost exactly) equal. So in fact your prior probability was surely not a constant function on the ($\theta_A,~\theta_B$) plane — it had a big ridge running down the line $\theta_A$ = $\theta_B$. An analysis that assumes a prior without such a ridge is an analysis that assumes from the beginning that the drug
has a significant effect with overwhelming probability. So the fact that he concludes the drug has an effect with high probability is not at all surprising — it was encoded in his prior from the start.
The nicest way to analyze a situation like this from a Bayesian point of view is to compare two different models: one where the drug has no effect and one where it has some effect. MacKay analyzes
the second one. Tamino goes on to analyze both cases and compare them. He concludes that the probability of getting the observed data is 0.00096 under the null model (drug has no effect) and
0.00293 under the alternative model (drug has an effect).
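For readers who want to check these numbers, here is a short sketch (my own verification with scipy, not from Tamino's post), using uniform priors on the rates and Beta-function integrals:

```python
# Marginal likelihood of the data under the "no effect" and "some effect" models.
from scipy.special import comb, beta

# Data: 1 of 30 sick under treatment A, 3 of 10 sick under placebo B.
p_alt  = comb(30, 1) * beta(2, 30) * comb(10, 3) * beta(4, 8)   # independent rates, uniform priors
p_null = comb(30, 1) * comb(10, 3) * beta(5, 37)                # one common rate, uniform prior

print(p_null, p_alt, p_alt / p_null)   # ~0.00096, ~0.00293, evidence ratio ~3
```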
How do you interpret these results? The ratio of these two probabilities is about 3. This ratio is sometimes called the Bayesian evidence ratio, and it tells you how to modify your prior
probability for the two models. To be specific,
Posterior probability ratio = Prior probability ratio x evidence ratio.
For instance, suppose that before looking at the data you thought that there was a 1 in 10 chance that the drug would have an effect. Then the prior probability ratio was (1/10) / (9/10), or 1/9.
After you look at the data, you “update” your prior probability ratio to get a posterior probability ratio of 1/9 x 3, or 1/3. So after looking at the data, you now think there’s a 1/4 chance that
the drug has an effect and a 3/4 chance that it doesn’t.
Of course, if you had a different prior probability, then you’d have a different posterior probability. The data can’t tell you what to believe; it can just tell you how to update your previous beliefs.
As Tamino says,
Perhaps the best we can say is that the data enhance the likelihood that the treatment is effective, increasing the odds ratio by about a factor of 3. But, the odds ratio after this increase
depends on the odds ratio before the increase — which is exactly the prior we don't really have much information on!
People often make statement like this as if they’re pointing out a flaw in the Bayesian analysis, but this isn’t a bug in the Bayesian analysis — it’s a feature! You shouldn’t expect the data to
tell you the posterior probabilities in a way that’s independent of the prior probabilities. That’s too much to ask. Your final state of belief will be determined by both the data and your prior
belief, and that’s the way it should be.
Incidentally, my research group’s most recent paper has to do with a problem very much like this situation: we’re considering whether a particular data set favors a simple model, with no free
parameters, or a more complicated one. We compute Bayesian evidence ratios just like this, in order to tell you how you should update your probabilities for the two hypotheses as a result of the
data. But we can’t tell you which theory to believe — just how much your belief in one should go up or down as a result of the data.
One thought on “Good or bad Bayes?”
1. In real model selection, of course, you have to go beyond any probabilistic method (such as Bayesian evidence): Probabilistic methods tell you how to *weight* different models in performing
integrals over the superset of all models. If you want to *choose* a model from among them–that is, make a hard decision–you must include not only probabilities but also utilities. So model
selection or choice goes way beyond any of this probabilistic discussion. | {"url":"https://blog.richmond.edu/physicsbunn/2010/04/03/good-or-bad-bayes/","timestamp":"2024-11-09T01:29:03Z","content_type":"text/html","content_length":"49829","record_id":"<urn:uuid:721f608a-25a4-4ee5-9374-5b58bd95089e>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00536.warc.gz"} |
My Imaginary Friend, Part 2
To complete this lesson, students need to know how to simplify the square root of a negative number and how to simplify i to a power.* This lesson is contained within an escape room. Students will
navigate through a spaceship and, along the way, will perform operations on complex numbers so they can make it to the escape pods in time. Students will see the real-world applications of these
operations through learning about electrical circuits. Students also will watch a video of an electrical engineer speaking about his career, describing how he uses electrical circuits, and explaining
how to divide complex numbers using complex conjugates. *See the "My Imaginary Friend, Part 1" lesson for the prerequisite content. This is a multimodality lesson, which means it includes
face-to-face, online, and hybrid versions of the lesson. The attachments also include a downloadable Common Cartridge file, which can be imported into a Learning Management System (LMS) such as
Canvas or eKadence. The cartridge includes interactive student activities and teacher's notes.
Essential Question(s)
Why do we perform operations on complex numbers?
Students read a story that sets the scene for the escape room in a spaceship. Students use their reasoning skills to turn given clues into a passcode as they learn how to navigate through the escape
Students are introduced to electrical circuits and perform calculations using electrical circuits. Students add and subtract complex numbers to find the total impedance, then multiply complex numbers
to find the voltage.
Students watch a video of an electrical engineer speaking about his career and describing how he uses electrical circuits. Students then learn about complex conjugates and how to use them to divide
complex numbers.
Students apply what they have learned about performing operations with complex numbers to progress through the escape room.
Students use the Fist to Five strategy to reflect on what they have learned during the lesson.
Instructional Formats
The term "Multimodality" refers to the ability of a lesson to be offered in more than one modality (i.e. face-to-face, online, blended). This lesson has been designed to be offered in multiple
formats, while still meeting the same standards and learning objectives. Though fundamentally the same lesson, you will notice that the different modalities may require the lesson to be approached
differently. Select the modality that you are interested in to be taken to the section of the course designed for that form of instruction.
• Lesson Slides (attached)
• Lesson Guide and Escape Room Script (attached; for teacher use; printed front/back)
• User Manual—Definitions handout (attached; one per student; printed front only)
• User Manual—Calculations handout (attached; one per student; printed front/back)
• User Manual—Calculations (Sample Responses) (attached; for teacher use)
• Power Control Hub handout (attached; one per student; printed front only)
• Cargo Bay handout (attached; one per student; printed front only)
• Pencils
• Student devices with internet access**
• Go With the Flow handout (optional; attached; one half-sheet per student; printed front only)
• Calibration Dial handout (optional; attached; creates 2 dials per document; printed front only)
**See the note in the Extend portion of this lesson for alternatives to having student devices.
Introduce the lesson using the attached Lesson Slides. Display slide 3 to share the lesson’s essential question. Display slide 4 to go over the lesson’s learning objectives. Review each of these with
students to the extent you feel necessary.
Display slide 5 and read aloud the directions to the class. Explain to students that they need to assemble a crew to escape successfully. Have students work in pairs throughout the lesson.
Display slide 6, which begins the escape room story. Read this screen aloud to the class. Transition through slides 7–8 and continue reading the story aloud.
Move to slide 9 and have students work with their partners to determine the three-letter passcode.
Give students a couple minutes to figure out the passcode before moving to the next slide. Display slide 10 to show students the correct passcode.
Transition through slides 11–12 and continue reading the story aloud. Display slide 13 and pass out the attached User Manual—Definitions handout. Give students a few minutes to read through the
Move to slide 14 and direct students’ attention to example 1 at the bottom of the handout. Then, transition through slides 15–17 to help students understand how to calculate the total impedance of an
electrical circuit.
Next, display slide 18. Have students work with their partners to find the total impedance of example 2, which is also located at the bottom of the User Manual—Definitions handout.
While students discuss, pass out the attached User Manual—Calculations handout. Then, ask for volunteers to explain why the total impedance is (30+3i) ohms.
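If a worked illustration helps at this point, one pair of impedances consistent with that total (a hypothetical example, not necessarily the one on the handout) is a series circuit with impedances (12 + 5i) ohms and (18 − 2i) ohms: adding the real and imaginary parts separately gives (12 + 18) + (5 − 2)i = (30 + 3i) ohms.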
Once students understand the two example problems, display slide 19. Ask students to work with their partners to find the total impedance for questions 1 and 2 on the User Manual—Calculations handout.
Display slide 22 and have students work with their partners to find the voltage for questions 3 and 4 on the handout.
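For reference, a made-up example of the voltage calculation (not a question from the handout): with current I = (2 − i) amps and total impedance Z = (30 + 3i) ohms, the voltage is V = I × Z = (2 − i)(30 + 3i) = 60 + 6i − 30i − 3i² = (63 − 24i) volts.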
Display slide 25 and have students try to answer question 5 from the User Manual—Calculations handout. Give students a few minutes to struggle with the problem. Then, ask the class what operation
they need to use with complex numbers in this scenario. Ask for volunteers to explain why they need to divide complex numbers.
Move to slide 26. Introduce the "My Imaginary Friend, Part 2" video, which features an electrical engineer talking about his career, describing how he uses complex numbers, and explaining how to
divide complex numbers. Play the video.
The video begins with the electrical engineer interview, which ends at the 12:45 mark. The rest of the video demonstrates how to divide complex numbers using a complex conjugate.
After the video, display slide 27 so students can see the entire worked-out solution for question 5. Ask students to discuss the problem with their partners and ask one another questions.
As a class, ask students what questions they still have. After addressing any questions, move to slide 28 and have students find the current for question 6 on the handout.
Transition through slides 30–31 and continue reading the story aloud. Then, display slide 32 and pass out the attached Power Control Hub handout. Remind students to read everything on the handout
before they begin working on it with their partners.
While students work, monitor student progress while walking around the room with the Lesson Guide and Escape Room Script on hand. Be sure not to immediately check answers or give assistance. It is
important for students to have a little time to struggle as they work through these challenging problems.
As students start the third question on the handout, consider staying at your desk or a location in which students can easily and comfortably reach you to check their three-letter code. As noted in
the Lesson Guide and Escape Room Script, if students have the correct Transfer Power Code, then you should exchange students’ Power Control Hub handout for the attached Cargo Bay handout.
Students may finish these handouts at staggering times, so having quick reference to the Lesson Guide and Escape Room Script can help you ensure the activity goes smoothly. When students finish the
Cargo Bay handout and come to you to check their Release Code, let students who are correct know that they’ve escaped successfully (and that they must keep the code a secret).
Use the Power Control Hub and Cargo Bay handouts to determine what misconceptions students may have regarding operations with complex numbers.
Once the whole class has escaped successfully, display slide 33.
Display slide 34 and have students use the Fist to Five strategy to reflect on what they’ve learned during the lesson. On slide 34, ask students to focus their reflection on their understanding of
adding and subtracting complex numbers.
Then, display slide 35 and have students reflect on their understanding of multiplying complex numbers. Finally, display slide 36 and have students reflect on their understanding of dividing complex | {"url":"https://learn.k20center.ou.edu/lesson/1539","timestamp":"2024-11-01T22:43:25Z","content_type":"text/html","content_length":"113319","record_id":"<urn:uuid:712b3517-3fa3-4366-b6c2-5b1cf7dfd80f>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00174.warc.gz"} |
How do you verify that the function f(x)=x/(x+6) satisfies the hypotheses of The Mean Value Theorem on the given interval [0,1] and then find the number c that satisfy the conclusion of The Mean Value Theorem? | Socratic
How do you verify that the function #f(x)=x/(x+6)# satisfies the hypotheses of The Mean Value Theorem on the given interval [0,1] and then find the number c that satisfy the conclusion of The Mean
Value Theorem?
1 Answer
The MVT states that if f(x) is continuous in [a,b] (it obviously is) and differentiable in (a,b) (it obviously is too), then $\exists$ at least one $c \in \left(a, b\right) : f\left(b\right) - f\left(a\right) = f'\left(c\right)\left(b - a\right)$
Notice the theorem doesn't give you the number of $c$s nor their values.
So we find them out:
$f \left(0\right) - f \left(1\right) = f ' \left(c\right) \left(0 - 1\right) \implies f ' \left(c\right) = \frac{1}{7}$
$\frac{1}{7} = \frac{\left(c + 6\right) - c}{\left(c + 6\right)^{2}} \implies {\left(c + 6\right)}^{2} = 42 \implies {c}_{1} = - 6 + \sqrt{42} , {c}_{2} = - 6 - \sqrt{42}$
We notice ${c}_{2} < 0$, so ${c}_{2}$ is not a root to be considered for MVT, the only choice we have left is ${c}_{1}$, and MVT assures us ${c}_{1} \in \left(0 , 1\right)$ without any kind of manual check.
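As a quick numerical sanity check (added here, not part of the original answer): $\sqrt{42} \approx 6.48$, so ${c}_{1} = -6 + \sqrt{42} \approx 0.48$, which indeed lies in $\left(0, 1\right)$.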
| {"url":"https://socratic.org/questions/how-do-you-verify-that-the-function-f-x-x-x-6-satisfies-the-hypotheses-of-the-me","timestamp":"2024-11-03T22:21:48Z","content_type":"text/html","content_length":"34759","record_id":"<urn:uuid:10c167b8-d472-4baf-98ca-9ccd5be95c4b>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00241.warc.gz"} |
John Nash’s Nobel Prize Sells for $735,000
Best known as the subject of “A Beautiful Mind,” Nash made pioneering advances in the study of game theory
published : 06 February 2024
By the time John Nash turned 30 in 1958, he was being hailed as a mathematical genius—a wunderkind who had made pioneering advances in the field of game theory. But in subsequent decades, a
devastating struggle with schizophrenia hampered the mathematician’s career; although his ideas remained important, writes biographer Sylvia Nasar, “Nash, the man, was all but forgotten.”
Then, in 1994, Nash won the Nobel Prize in Economics. The recognition spurred “a long period of renewed activity and confidence” that coincided with an improvement in the newly minted laureate’s
mental health, according to Princeton University. Now, the award that transformed Nash’s later life has sold for $735,000 at auction.
Engraved with Alfred Nobel’s profile and Nash’s name, the 18-karat gold prize sold at Christie’s Fine Printed Books & Manuscripts auction last week. As Charles Boothe reports for the Bluefield Daily
Telegraph, proceeds will go to the John Nash Trust, while funds raised by the separate sale of Nash’s personal papers will benefit the National Alliance on Mental Illness. Per Christie’s, the trove
of auctioned documents includes Nash’s 1951 doctoral thesis on game theory, a handwritten lecture he delivered at Princeton, and a note from a high school math teacher telling Nash he “will really go
places” if he can manage to “organize [his] work.”
Today, Nash is perhaps best known as the subject of A Beautiful Mind, a 2001 biopic featuring Russell Crowe as the troubled mathematician. But during the mid-20th century, Nash was a bonafide
academic celebrity in his own right. He discovered the idea that would lead to his Nobel Prize while still a university student, and his doctoral thesis, published in the Annals of Mathematics in
1951, “established the mathematical principles of game theory,” according to Encyclopedia Britannica.
Game theory is, in essence, a type of mathematics that looks at interactions between two or more “players” in scenarios with “set rules and outcomes.” In the words of the New York Times’ Kenneth
Chang, the field focuses on “how to come up with a winning strategy in the game of life—especially when you do not know what your competitors are doing and the choices do not always look promising.”
The core concept of game theory is the Nash equilibrium, which Chang broadly defines as “a stable state in which no player can gain advantage through a unilateral change of strategy assuming the
others do not change what they are doing.”
Front and back of Nash's 1994 Nobel Prize medal Christie's Images Ltd. 2019
A classic example of a Nash equilibrium is the “Prisoner’s Dilemma.” In this scenario, two criminal conspirators are arrested and held separately, each told that if they confess and testify against
the other, they will be released without penalty. Their collaborator, meanwhile, will receive a 10-year jail sentence. “If both stay quiet, the prosecutors cannot prove the more serious charges and
both would spend just a year behind bars for lesser crimes,” Chang writes. “If both confess, the prosecutors would not need their testimony, and both would get eight-year prison sentences.”
In theory, the smartest approach would be for both suspects to stay silent: This way, each is guaranteed a light sentence. But Nash’s equilibrium posits that both are likely to confess; the criminals
can’t communicate, and refusing to confess only works as a strategy if both are on board. By confessing, each suspect can either win their freedom or spend eight years in prison. Remaining quiet,
meanwhile, will result in a sentence of either one year or 10.
Game theory is mainly used in economics, but its principles apply to a range of settings, including nuclear strategy, which Christie’s notes was a matter of dire concern during the 1950s.
In an essay written upon receiving the Nobel Prize, Nash, who died in a car crash in 2015, described the “mental disturbances” that began plaguing him in 1959. Admitting he had “spent times of the
order of five to eight months in hospitals in New Jersey, always on an involuntary basis and always attempting a legal argument for release,” he also spoke of recovery and a future he felt was laden
with promise.
“Statistically, it would seem improbable that any mathematician or scientist, at the age of 66, would be able through continued research efforts, to add much to his or her previous achievements,”
he wrote. “However, I am still making the effort and it is conceivable that with the gap period of about 25 years of partially deluded thinking providing a sort of vacation my situation may be
atypical. Thus I have hopes of being able to achieve something of value through my current studies or with any new ideas that come in the future.” | {"url":"https://www.function-variation.com/article27","timestamp":"2024-11-06T11:34:50Z","content_type":"text/html","content_length":"19252","record_id":"<urn:uuid:53b5f0cd-c113-48cd-a10f-c8dc5d9db98a>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00192.warc.gz"} |
mp_arc 07-257
07-257 Bernhard Baumgartner, Heide Narnhofer, Walter Thirring
Analysis of quantum semigroups with GKS-Lindblad generators I. Simple generators (67K, LaTeX) Oct 30, 07
Abstract , Paper (src), View paper (auto. generated ps), Index of related papers
Abstract. Semigroups describing the time evolution of open quantum systems in finite dimensional spaces have generators of a special form, known as Lindblad generators. The simple generators,
characterized by only one operator, are analyzed. The complete set of all the stationary states is presented in detail, including a formula to calculate a stationary state from the generating
operator. Also the opposite task can be fulfilled, to construct an evolution leading to a prescribed stationary state.
Files: 07-257.src( 07-257.keywords , Lindblad1.tex ) | {"url":"http://kleine.mat.uniroma3.it/mp_arc-bin/mpa?yn=07-257","timestamp":"2024-11-06T17:57:02Z","content_type":"text/html","content_length":"1809","record_id":"<urn:uuid:c9f507fc-a7bf-4918-99c4-ce438d1da8fe>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00513.warc.gz"} |
QuBit Measurements
Measuring Qubits
The Importance of Measurement for Qubits
We can only get information about a qubit in an unknown state by measuring it. During a measurement, the qubit collapses into exactly one of two values. The original state can no longer be
reconstructed and we only learn a small part of what there is to know about the state. If $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ is an unknown state and the measurement result is $1$, the only thing that can be said with certainty about $|\psi\rangle$ is
that $\beta \neq 0$ must have been true. Further information about $|\psi\rangle$ can no longer be obtained, because the qubit has irreversibly lost its superposition as a result of the measurement.
Measurement of Individual Bits in a Register
We have already seen that we can also measure individual bits in a register, as this also has an effect on the measurement result for the other qubits.
How does the state of a register change by measuring a single qubit? We know that a register consisting of two qubits $q_1$ and $q_0$ is in the state
$|\psi\rangle = \alpha|00\rangle + \beta|01\rangle + \gamma|10\rangle + \delta|11\rangle$
for any $\alpha$, $\beta$, $\gamma$ and $\delta$ with $|\alpha|^2 + |\beta|^2 + |\gamma|^2 + |\delta|^2 = 1$. The probability of measuring $0$ on the first qubit is $p_0 = |\alpha|^2 + |\beta|^2$.
If we now measure $0$, the register is in the state
$|\psi'\rangle = \frac{\alpha|00\rangle + \beta|01\rangle}{\sqrt{|\alpha|^2 + |\beta|^2}}.$
The following applies: $\langle\psi'|\psi'\rangle = 1$.
This means: The amplitudes of $|00\rangle$ and $|01\rangle$ are retained proportionally, but they are normalized so that the measurement probabilities add up to 1 again.
Measurements in Bases other than the Computational Basis
So far, only $0$ and $1$ have been presented as possible measurement results. However, this approach ignores the fact that qubits are always measured with respect to a certain basis. Until now, this was the
so-called computational basis $\{|0\rangle, |1\rangle\}$.
Reminder: A basis of an $n$-dimensional vector space is a set of $n$ basis vectors for which the following applies: Each basis vector is $n$-dimensional and the vectors are linearly independent (i.e. no vector
can be written as a linear combination of the other vectors). If all basis vectors have length $1$ and are pairwise orthogonal, we speak of an orthonormal basis.
From school one may remember that a two-dimensional vector space can have different bases. Thus, not only is $\{(1,0)^T, (0,1)^T\}$ a basis of $\mathbb{R}^2$, but also, for example, the set $\{(1,1)^T, (1,-1)^T\}$. Both $(1,0)^T$ and $(0,1)^T$ can be represented as a linear combination of these vectors. They can be found as follows (example: $(1,0)^T$):
We are therefore looking for $\lambda_1, \lambda_2$ with $\lambda_1 (1,1)^T + \lambda_2 (1,-1)^T = (1,0)^T$.
This results in two equations:
I) $\lambda_1 + \lambda_2 = 1$   II) $\lambda_1 - \lambda_2 = 0$
Instead of representing a qubit in the computational basis (i.e. the z-axis of the Bloch sphere) as $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, we can also choose $\{|+\rangle, |-\rangle\}$ (the x-axis) or $\{|{+i}\rangle, |{-i}\rangle\}$ (the y-axis) as basis of the two-dimensional vector space:
$|\pm\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle \pm |1\rangle\right), \qquad |{\pm i}\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle \pm i|1\rangle\right).$
The bases are each normalized eigenvectors of the corresponding Pauli matrices with eigenvalues $+1$ and $-1$. This also applies to the standard Z-basis and the Pauli Z-matrix. The eigenvector for eigenvalue $+1$ is $|0\rangle$ and for $-1$ it is $|1\rangle$:
$Z|0\rangle = +|0\rangle, \qquad Z|1\rangle = -|1\rangle.$
In the other bases, the state then has the representation $|\psi\rangle = \alpha'|+\rangle + \beta'|-\rangle$. With $\alpha' = \langle +|\psi\rangle$ and $\beta' = \langle -|\psi\rangle$
we can compute $\alpha'$ and $\beta'$. The result is:
$\alpha' = \frac{\alpha + \beta}{\sqrt{2}}, \qquad \beta' = \frac{\alpha - \beta}{\sqrt{2}}.$
The state $|+\rangle$ is measured with probability $|\alpha'|^2$ and the state $|-\rangle$ with $|\beta'|^2$; likewise the state $|{+i}\rangle$ with probability $|\langle {+i}|\psi\rangle|^2$ and the state $|{-i}\rangle$ with $|\langle {-i}|\psi\rangle|^2$. Again, it applies: $|\alpha'|^2 + |\beta'|^2 = 1$.
Mathematically, a measurement with respect to a basis corresponds to a projection of the vector into the subspaces of the basis; in the figure, the subspaces are the axes of the coordinate system.
A measurement result then means that the vector was projected into the corresponding subspace. The probability of this corresponds to the squared length of the projected vector.
To ensure that the measurement results are mutually exclusive, bases are required for quantum systems in which two basis vectors $|x\rangle$ and $|y\rangle$ are orthogonal to each other; i.e.: $\langle x|y\rangle = 0$. Otherwise, the measurement
of one basis vector could result in the other basis vector with nonzero probability, which would not make sense. As a basis, only valid qubit vectors are possible and since their length is always $1$, every
basis for a quantum system is an orthonormal basis.
The Measurement Result
As a consequence, a qubit can not only be in superposition with respect to $|0\rangle$ and $|1\rangle$, but also with respect to another basis. When measuring, the superposition with respect to the measurement basis is
destroyed and one of the corresponding basis states is assumed. So this means:
1. When a qubit is measured, one bit of classical information is obtained.
2. The qubit assumes a subsequent state, which can be a superposition with respect to another basis.
3. If the original state is known, the measurement tells you which subsequent state the qubit is in. This can be used for further calculations.
A more general picture of measurement is described by the projective measurement with respect to an observable. Observables are Hermitian operators that describe the quantum physical properties of a
system. In a projective measurement, the observable is defined as $A = \sum_i \lambda_i P_i$, where $P_i$ is a projection onto the eigenspace of the observable $A$ belonging to the eigenvalue $\lambda_i$. The measurement provides an eigenvalue $\lambda_i$ of the observable and
transforms the system into the corresponding eigenvector $|\lambda_i\rangle$. The probability of measuring the eigenvalue $\lambda_i$ for a state $|\psi\rangle$ is $p(\lambda_i) = \langle\psi|P_i|\psi\rangle$. Typical observables are the Pauli matrices with possible eigenvectors ($|0\rangle, |1\rangle$; $|+\rangle, |-\rangle$; $|{+i}\rangle, |{-i}\rangle$) and
corresponding eigenvalues $\pm 1$.
For example, the matrix $Z$ can be composed of projections as follows: $Z = (+1)\,|0\rangle\langle 0| + (-1)\,|1\rangle\langle 1|$.
If we want to calculate the probability of measuring $+1$ for the state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, the result is $p(+1) = \langle\psi|0\rangle\langle 0|\psi\rangle = |\alpha|^2$.
We can also use the observable to calculate the expected value of the measurement of a state $|\psi\rangle$: $\langle A \rangle = \langle\psi|A|\psi\rangle = \sum_i \lambda_i\, p(\lambda_i)$.
Important: The expected value is calculated from the eigenvalues of the observable, not from the measurement probabilities.
As an example, we calculate the expected value of the measurement in the Z-basis for the state $|+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$: $\langle Z \rangle = \langle +|Z|+\rangle = \tfrac{1}{2}(+1) + \tfrac{1}{2}(-1) = 0$.
In principle, we can consider any Hermitian matrix as an observable. For example, we can combine bases and measure in the ZX-basis of the Bloch sphere, or use non-unitary matrices to evaluate states
with respect to a cost function (relevant for hybrid algorithms later in the lecture).
If we measure the state , e.g., with respect to , we obtain the expected value: | {"url":"https://photonq.org/docs/quantum-computing-and-quantum-information/qubit-measurements/","timestamp":"2024-11-04T23:17:55Z","content_type":"text/html","content_length":"846905","record_id":"<urn:uuid:e6186b6e-9e27-499e-8651-6246c7be48f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00698.warc.gz"} |
Representations of \(q\)-Schur superalgebras at a root of unity
SMS scnews item created by John Enyang at Thu 28 Feb 2013 1649
Type: Seminar
Modified: Thu 28 Feb 2013 1652; Thu 28 Feb 2013 1701
Distribution: World
Expiry: 9 Mar 2013
Calendar1: 8 Mar 2013 1205-1255
CalLoc1: Carslaw 373
Auth: enyang@penyang.pc (assumed)
Algebra Seminar
Representations of \(q\)-Schur superalgebras at a root of unity
Friday 8th March, 12:05-12:55pm, Carslaw 373
Jie Du (UNSW)
Representations of \(q\)-Schur superalgebras at a root of unity
I will report on a classification of irreducible representations over the \(q\)-Schur superalgebra at a root of unity. We simply apply the relative norm map introduced by P. Hoefsmit and L. Scott in
1977. This map is the \(q\)-analogue of the usual trace map which has many important properties related to Mackey decomposition, Frobenius reciprocity, Nakayama relation, Higman's criterion, and so
on. By describing a basis for the \(q\)-Schur superalgebra in terms of relative norms, we may filter the algebra with a linear sequence of ideals associated with \(l\)-parabolic subgroups. In this
way, we may attach a defect group to a primitive idempotent. Primitive idempotents with the trivial defect group can be classified by \(l\)-regular partitions, and others can be classified via Brauer
This is joint work with H. Gu and J. Wang.
We will take the speaker to lunch after the talk.
See the Algebra Seminar web page for information about other seminars in the series.
John Enyang John.Enyang@sydney.edu.au
| {"url":"https://www.maths.usyd.edu.au/s/scnitm/enyang-AlgebraSeminar-Du-Represe","timestamp":"2024-11-09T09:36:32Z","content_type":"text/html","content_length":"3641","record_id":"<urn:uuid:2abbe25e-fac2-46e5-ba73-98b6cb211cd4>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00870.warc.gz"} |
rnbo_splineinterp~ Reference - Max 8 Documentation
w [auto]
Sets the value for the 'w' control point. This is the leftmost value of the spline.
x [auto]
Sets the value for the 'x' control point. This is the second value of the spline. The output curve is the segment between the middle two points (x and y).
y [auto]
Sets the value for the 'y' control point. This is the third value of the spline. The output curve is the segment between the middle two points (x and y).
z [auto]
Sets the value for the 'z' control point. This is the rightmost value of the spline.
a [auto]
In the leftmost inlet, this is the input value to be interpolated.
w [auto]
Sets the value for the 'w' control point. This is the leftmost value of the spline.
x [auto]
Sets the value for the 'x' control point. This is the second value of the spline. The output curve is the segment between the middle two points (x and y).
y [auto]
Sets the value for the 'y' control point. This is the third value of the spline. The output curve is the segment between the middle two points (x and y).
z [auto]
Sets the value for the 'z' control point. This is the rightmost value of the spline.
out1 [signal]
The output curve is the segment between the middle two points (x and y).
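The exact interpolation formula rnbo_splineinterp~ uses between x and y is not spelled out on this page; a common four-point cubic (Catmull-Rom) interpolation is sketched below as an approximation of the behaviour described above.

```python
def spline_interp(a, w, x, y, z):
    """Catmull-Rom-style interpolation between control points x and y for a in [0, 1],
    with w and z shaping the curve on either side."""
    return 0.5 * (
        2 * x
        + (-w + y) * a
        + (2 * w - 5 * x + 4 * y - z) * a ** 2
        + (-w + 3 * x - 3 * y + z) * a ** 3
    )

print(spline_interp(0.0, 0.0, 1.0, 2.0, 3.0))  # 1.0 (a=0 lands on x)
print(spline_interp(1.0, 0.0, 1.0, 2.0, 3.0))  # 2.0 (a=1 lands on y)
```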
Dynamic Attributes
These attributes can be modified in the code during execution using the set object
a [auto]
In the leftmost inlet, this is the input value to be interpolated.
reset [bang] (default: 0)
Banging this attribute will reset the object to its default state.
w [auto]
Sets the value for the 'w' control point. This is the leftmost value of the spline.
x [auto]
Sets the value for the 'x' control point. This is the second value of the spline. The output curve is the segment between the middle two points (x and y).
y [auto]
Sets the value for the 'y' control point. This is the third value of the spline. The output curve is the segment between the middle two points (x and y).
z [auto]
Sets the value for the 'z' control point. This is the rightmost value of the spline. | {"url":"https://docs.cycling74.com/legacy/max8/refpages/rnbo_splineinterp~","timestamp":"2024-11-03T06:49:38Z","content_type":"text/html","content_length":"54146","record_id":"<urn:uuid:31d7fa24-5fd5-49d7-8d04-30ffb83a9331>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00773.warc.gz"} |
Hamiltonian Mechanics
Explanations in this section should contain no formulas, but instead colloquial things like you would hear them during a coffee break or at a cocktail party.
The best book on Hamiltonian mechanics is The Lazy Universe by Coopersmith
The Hamiltonian function is defined on the cotangent bundle $T^\star(C)$, which is called phase space.
In contrast, the Lagrangian function is defined on the tangent bundle $T(C)$ of the configuration space $C$.
The map from $T^\star(C) \leftrightarrow T(C)$ is called Legendre transformation.
The phase space is endowed with a symplectic structure, called Poisson Bracket. The Poisson Bracket is an operation that eats two scalar fields $\Phi$, $\Psi$ on the manifold and spits out another
scalar field $\theta $:
$$ \theta = \{ \Phi,\Psi \}= \frac{\partial \Phi}{\partial p_a}\frac{\partial \Psi}{\partial q^a}-\frac{\partial \Phi}{\partial q^a}\frac{\partial \Psi}{\partial p_a}.$$
If we leave the $\Psi$ slot blank, we can use the Poisson bracket to define a differential operator $\{\Phi,\ \}$. This is a vector field and when in acts on $\Psi$, we get $\{\Phi, \Psi \}$. If we
use instead of $\Phi$, the Hamiltonian $H$, we get an differential operator $\{H,\ \}$ that 'points along' the trajectories on in phase space $T^\star(C)$ and describes exactly the evolution that we
get from Hamilton's equations.
In this sense, the dynamical evolution of a given system is completely described by the Hamiltonian (= a scalar function).
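As a concrete example (an added illustration, using the bracket convention above): for a harmonic oscillator with Hamiltonian $H = \frac{p^2}{2m} + \frac{1}{2}kq^2$, the operator $\{H,\ \}$ applied to the coordinates reproduces Hamilton's equations:

$$ \dot q = \{H,q\} = \frac{\partial H}{\partial p} = \frac{p}{m}, \qquad \dot p = \{H,p\} = -\frac{\partial H}{\partial q} = -kq.$$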
a ‘Hamiltonian’ $$H : T^* Q \to \mathbb{R}$$ or a ‘Lagrangian’ $$L : T Q \to \mathbb{R}$$ Instead, we started with Hamilton’s principal function $$S : Q \to \mathbb{R}$$ where $Q$ is not the usual
configuration space describing possible positions for a particle, but the ‘extended’ configuration space, which also includes time. Only this way do Hamilton’s equations, like the Maxwell relations,
become a trivial consequence of the fact that partial derivatives commute. https://johncarlosbaez.wordpress.com/2012/01/23/classical-mechanics-versus-thermodynamics-part-2/
| {"url":"https://physicstravelguide.com/theories/classical_mechanics/hamiltonian","timestamp":"2024-11-11T22:51:09Z","content_type":"text/html","content_length":"78230","record_id":"<urn:uuid:3249a74f-05f6-4ffb-8c1d-4a616d2da6c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00701.warc.gz"} |
In solid mechanics, what is the correct vector form of the equations of motion for a plane elasticity problem?
Right answer is (a) D*σ+f=ρü
For explanation I would say: For plane elasticity problems, the equations of motion are one of the governing equations. The vector form of equations of motion is D*σ+f=ρü, where f denotes body force
vector, σ is the stress vector, u is the displacement vector, D is a matrix of differential operator and ρ is the density. | {"url":"https://qna.carrieradda.com/7392/solid-mechanics-what-the-correct-vector-form-equations-motion-plane-elasticity-problem","timestamp":"2024-11-08T07:37:50Z","content_type":"text/html","content_length":"93671","record_id":"<urn:uuid:c3724728-2d40-42e5-af75-a548a2d71bbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00138.warc.gz"} |
QFT (v0.44) | IBM Quantum Documentation
class qiskit.circuit.library.QFT(num_qubits=None, approximation_degree=0, do_swaps=True, inverse=False, insert_barriers=False, name=None)
Bases: BlueprintCircuit
Quantum Fourier Transform Circuit.
The Quantum Fourier Transform (QFT) on $n$ qubits is the operation
$|j\rangle \mapsto \frac{1}{2^{n/2}} \sum_{k=0}^{2^n - 1} e^{2\pi ijk / 2^n} |k\rangle$
The circuit that implements this transformation can be implemented using Hadamard gates on each qubit, a series of controlled-U1 (or Z, depending on the phase) gates and a layer of Swap gates. The
layer of Swap gates can in principle be dropped if the QFT appears at the end of the circuit, since then the re-ordering can be done classically. They can be turned off using the do_swaps attribute.
For 4 qubits, the circuit that implements this transformation is:
The inverse QFT can be obtained by calling the inverse method on this class. The respective circuit diagram is:
One method to reduce circuit depth is to implement the QFT approximately by ignoring controlled-phase rotations where the angle is beneath a threshold. This is discussed in more detail in https://
arxiv.org/abs/quant-ph/9601018 or https://arxiv.org/abs/quant-ph/0403071.
Here, this can be adjusted using the approximation_degree attribute: the smallest approximation_degree rotation angles are dropped from the QFT. For instance, a QFT on 5 qubits with approximation
degree 2 yields (the barriers are dropped in this example):
Construct a new QFT circuit.
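A minimal usage sketch (not part of the original reference page; only the constructor arguments shown in the class signature above are used):

```python
from qiskit.circuit.library import QFT

# 5-qubit QFT, dropping the 2 smallest controlled-phase rotations
qft = QFT(num_qubits=5, approximation_degree=2, do_swaps=True, insert_barriers=True)

# The inverse transform can be obtained via the inverse() method
iqft = QFT(num_qubits=5).inverse()

print(qft.decompose().draw())
```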
Returns a list of ancilla bits in the order that the registers were added.
The approximation degree of the QFT.
The currently set approximation degree.
Return calibration dictionary.
The custom pulse definition of a given gate is of the form {'gate_name': {(qubits, params): schedule}}
Returns a list of classical bits in the order that the registers were added.
Whether the final swaps of the QFT are applied or not.
True, if the final swaps are applied, False if not.
Default value: 'include "qelib1.inc";'
Return the global phase of the circuit in radians.
Default value: 'OPENQASM 2.0;'
Whether barriers are inserted for better visualization or not.
True, if barriers are inserted, False if not.
Return any associated layout information about the circuit
This attribute contains an optional TranspileLayout object. This is typically set on the output from transpile() or PassManager.run() to retain information about the permutations caused on the input
circuit by transpilation.
There are two types of permutations caused by the transpile() function, an initial layout which permutes the qubits based on the selected physical qubits on the Target, and a final layout which is an
output permutation caused by SwapGates inserted during routing.
The user provided metadata associated with the circuit.
The metadata for the circuit is a user provided dict of metadata for the circuit. It will not be used to influence the execution or operation of the circuit, but it is expected to be passed between
all transforms of the circuit (ie transpilation) and that providers will associate any circuit metadata with the results it returns from execution of that circuit.
Return the number of ancilla qubits.
Return number of classical bits.
The number of qubits in the QFT circuit.
The number of qubits in the circuit.
Return a list of operation start times.
This attribute is enabled once one of scheduling analysis passes runs on the quantum circuit.
List of integers representing instruction start times. The index corresponds to the index of instruction in QuantumCircuit.data.
AttributeError – When circuit is not scheduled.
Type: list[QuantumRegister]
A list of the quantum registers associated with the circuit.
Returns a list of quantum bits in the order that the registers were added.
Invert this circuit.
The inverted circuit.
Return type
Whether the inverse Fourier transform is implemented.
True, if the inverse Fourier transform is implemented, False otherwise.
Return type | {"url":"https://docs.quantum.ibm.com/api/qiskit/0.44/qiskit.circuit.library.QFT","timestamp":"2024-11-15T02:42:24Z","content_type":"text/html","content_length":"231728","record_id":"<urn:uuid:dc3c9f43-4be9-447d-86e9-281d604bb499>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00348.warc.gz"} |
Real-Time Load Monitoring of Logistics Delivery Vehicles Using Deep Learning-Based Image Analysis
In real-time load monitoring of logistics delivery vehicles, accurately obtaining load information is a key aspect of achieving precise monitoring and intelligent management. The subpixel edge
detection technology in 2D images plays a crucial role in this process. Due to the typically complex shape and distribution of vehicle loads, traditional image processing methods struggle to
accurately locate the edges of objects at the pixel level, thereby affecting the accuracy of load calculations. Through subpixel edge detection technology, it is possible to enhance the precision of
load measurement and reduce errors caused by edge blurring or misjudgment. This is particularly important in dynamic and complex logistics delivery environments, where accurate edge detection helps
to address challenges such as lighting variations and occlusions.
The spline fitting interpolation problem for edge points in 2D images in real-time load monitoring of logistics delivery vehicles involves generating continuous and smooth edge curves from the
results of edge detection. Specifically, due to the potential discontinuities or noise interference in the actual edge points obtained from images, directly using these points for load calculations
can result in insufficient accuracy. Therefore, spline fitting interpolation technology is introduced to generate smooth curves between these discrete edge points, making the edge information more
complete and accurate. The core of this problem lies in selecting the appropriate spline function and performing effective interpolation, ensuring the continuity and smoothness of the edge curve in
the dynamic and complex logistics environment, thereby providing reliable foundational data for subsequent load calculations. Specifically, given the function d(a) with function values b[0],b[1],...,
b[v] at v + 1 nodes a[0],a[1],...,a[v], the task is to find a cubic spline function t(a) that satisfies:
$t\left(a_k\right)=b_k, \quad k=0,1, \cdots, v$ (1)
In the spline fitting interpolation problem for edge points in 2D images in real-time load monitoring of logistics delivery vehicles, the choice of boundary conditions directly affects the smoothness
and accuracy of the edge curve. Considering the specific requirements of this study, the following boundary conditions are adopted:
(1) Fixed boundary conditions: At the two boundary nodes of the image, the first derivative of the edge curve is set to a fixed value, i.e., t′(a[0]) = l[0] and t′(a[v]) = l[v]. This condition is
suitable for scenarios in logistics vehicle load monitoring images where the direction and trend of the edge points are clearly defined. By specifying fixed derivative values, the slope of the edge
curve at the boundary is ensured to be consistent with the actual physical characteristics, thereby accurately reflecting the edge changes of the vehicle load.
(2) Second-type boundary conditions: This boundary condition sets the second derivative at the boundary nodes to a fixed value, i.e., t′′(a[0]) = l[0] and t′′(a[v]) = l[v]. In particular, when l[0] and l
[v] are zero, it is referred to as the natural boundary condition. This condition is particularly common in logistics delivery vehicle load monitoring, as in practical applications, the edge curve at
the boundary usually tends to flatten, with the second derivative being zero. This natural boundary condition helps to generate a smooth edge curve, avoiding unnatural bends at the boundary, thereby
improving the accuracy of load monitoring.
(3) Periodic boundary conditions: This condition requires that the values of the first and second derivatives at the starting and ending boundaries of the image are equal, i.e., t′(a[0]) = t′(a[v])
and t′′(a[0]) = t′′(a[v]). For logistics delivery vehicle load monitoring images, this condition can be applied to scenarios where the load distribution is periodic or symmetric, ensuring consistency
of the edge curve at the starting and ending points of the image, thereby reducing calculation errors and discontinuities.
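As a brief illustration (not part of the original method), these three boundary-condition choices map directly onto the options of SciPy's cubic-spline interpolator; the node positions, values, and derivative values below are made-up placeholders:

import numpy as np
from scipy.interpolate import CubicSpline

a = np.array([0.0, 1.0, 2.5, 4.0, 5.0])   # node positions a[0..v] (hypothetical)
b = np.array([0.2, 1.1, 0.9, 1.8, 0.2])   # edge values b[0..v] (hypothetical)

fixed    = CubicSpline(a, b, bc_type=((1, 0.5), (1, -0.3)))  # fixed first derivatives l[0], l[v]
natural  = CubicSpline(a, b, bc_type='natural')              # second derivatives set to zero at both ends
periodic = CubicSpline(a, b, bc_type='periodic')             # matching derivatives at both ends (needs b[0] == b[-1])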
The process of determining the cubic spline function expression mainly includes the following key steps:
The extraction of edge points is fundamental. Through subpixel edge detection technology, the coordinates of the edge points in the load images of logistics delivery vehicles are accurately obtained.
These edge points are usually discrete and may contain some noise, requiring smoothing in subsequent processing. Specifically, in the given interval [a, b], there are:
$a=a_0<a_1<\cdots<a_V=b$ (2)
Given constants b[0],b[1],...,b[V], assuming the cubic spline function is represented by T[3], a function can be constructed to satisfy:
$T(a) \in T_3\left(a_1, a_2, \cdots, a_V\right)$ (3)
The constructed function satisfies the following interpolation conditions:
$T\left(a_k\right)=b_k, k=0,1, \ldots, V$ (4)
For each pair of adjacent edge points, a cubic polynomial needs to be constructed. These polynomials are spliced together at the edge points to generate a continuous and smooth curve. Specifically, let L[k]
represent T″(a[k]) (k = 0,1,...,V). Since T(a) is a piecewise cubic polynomial between the interval nodes, T″(a) changes linearly within the segment interval [a[k-1], a[k]]. Let g[k] = a[k] - a[k-1]; then
the linear interpolation function can be obtained from the two points (a[k-1], L[k-1]) and (a[k], L[k]):
$T^{\prime \prime}(a)=L_{k-1} \frac{a_k-a}{g_k}+L_k \frac{a-a_{k-1}}{g_k}\left(a_{k-1} \leq a \leq a_k\right)$ (5)
Integrating the above equation twice, and fixing the constants of integration with the interpolation conditions, yields the expression for T(a) within the segment interval [a[k-1], a[k]]. When a ∈ [a[k-1], a[k]], we have:
$\begin{aligned} T(a)= & L_{k-1} \frac{\left(a_k-a\right)^3}{6 g_k}+L_k \frac{\left(a-a_{k-1}\right)^3}{6 g_k} \\ & +\left(b_{k-1}-\frac{L_{k-1} g_k^2}{6}\right) \frac{a_k-a}{g_k}+\left(b_k-\frac{L_k g_k^2}{6}\right) \frac{a-a_{k-1}}{g_k}, \\ T^{\prime}(a) & =-L_{k-1} \frac{\left(a_k-a\right)^2}{2 g_k}+L_k \frac{\left(a-a_{k-1}\right)^2}{2 g_k} \\ & +\frac{b_k-b_{k-1}}{g_k}-\frac{L_k-L_{k-1}}{6} g_k\end{aligned}$ (6)
Based on the above equation, to obtain T(a), it is necessary to determine each L[k](k=0,1,...,V) according to the continuity and smoothness conditions at the spline nodes, namely:
$T^{\prime}\left(a_k-0\right)=T^{\prime}\left(a_k+0\right)$ (7)
According to the above equation, we have:
$\begin{aligned} & T^{\prime}\left(a_k-0\right)=\frac{g_k}{6} L_{k-1}+\frac{g_k}{3} L_k+\frac{b_k-b_{k-1}}{g_k}, \\ & T^{\prime}\left(a_k+0\right)=-\frac{g_{k+1}}{3} L_k-\frac{g_{k+1}}{6} L_{k+1}+\frac{b_{k+1}-b_k}{g_{k+1}}\end{aligned}$ (8)
Further, the following equation can be derived:
$\begin{aligned} & \frac{g_k}{6} L_{k-1}+\frac{g_k+g_{k+1}}{3} L_k+\frac{g_{k+1}}{6} L_{k+1} \\ & =\frac{b_{k+1}-b_k}{g_{k+1}}-\frac{b_k-b_{k-1}}{g_k}(k=1, \cdots, V-1)\end{aligned}$ (9)
The above equation provides V-1 equations containing V + 1 unknowns L[k] (k=0,1,...,V). However, to uniquely determine each L[k] (k=0,1,...,V), it is necessary to supplement the conditions of the
natural spline function. First, the value of the spline function at each edge point should be consistent with the original data points. Second, ensure that the first and second derivatives of each
polynomial are continuous at the boundary points. Third, set the values of the first or second derivatives at the first and last nodes according to the chosen boundary conditions. By these
conditions, a set of linear equations is established as follows:
$\left\{\begin{array}{l}2 L_0+\eta_0 L_1=f_0 \\ \omega_V L_{V-1}+2 L_V=f_V\end{array}\right.$ (10)
The linear equation system determined based on the above equation is:
$\left[\begin{array}{ccccccc}2 & \eta_0 & & & & & \\ \omega_1 & 2 & \eta_1 & & & & \\ & \omega_2 & 2 & \eta_2 & & & \\ & & \ddots & \ddots & \ddots & & \\ & & & \omega_{V-2} & 2 & \eta_{V-2} & \\ & & & & \omega_{V-1} & 2 & \eta_{V-1} \\ & & & & & \omega_V & 2\end{array}\right]\left[\begin{array}{c}L_0 \\ L_1 \\ L_2 \\ \vdots \\ L_{V-2} \\ L_{V-1} \\ L_V\end{array}\right]=\left[\begin{array}{c}f_0 \\ f_1 \\ f_2 \\ \vdots \\ f_{V-2} \\ f_{V-1} \\ f_V\end{array}\right]$ (11)
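As an illustration (not from the paper), a sketch of assembling and solving a tridiagonal system of the form (11) in Python; the η, ω and f values below are hypothetical placeholders:

import numpy as np

V = 4
eta   = [0.5, 0.4, 0.6, 0.5]            # eta_0 .. eta_{V-1} (hypothetical)
omega = [0.6, 0.4, 0.5, 0.5]            # omega_1 .. omega_V (hypothetical)
f     = np.array([0.1, 0.3, -0.2, 0.4, 0.0])

A = 2.0 * np.eye(V + 1)                 # main diagonal of 2's
for k in range(V):
    A[k, k + 1] = eta[k]                # super-diagonal entry eta_k
    A[k + 1, k] = omega[k]              # sub-diagonal entry omega_{k+1}
L = np.linalg.solve(A, f)               # the second-derivative values L_0 .. L_V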
Solving the above equation yields the expression for the cubic spline interpolation function. | {"url":"https://www.iieta.org/journals/ts/paper/10.18280/ts.410408","timestamp":"2024-11-10T12:18:38Z","content_type":"text/html","content_length":"104913","record_id":"<urn:uuid:ddd6f642-77d5-4955-a30a-9de72c34fd38>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00739.warc.gz"} |
Chapter 16 Quadrilaterals Set 16.2
Question 1.
Draw □XYZW and answer the following:
i. The pairs of opposite angles.
ii. The pairs of opposite sides.
iii. The pairs of adjacent sides.
iv. The pairs of adjacent angles.
v. The diagonals of the quadrilateral.
vi. The name of the quadrilateral in different ways.
i. a. ∠XYZ and ∠XWZ
b. ∠YXW and ∠YZW
ii. a. side XY and side WZ
b. side XW and side YZ
iii. a. side XY and side XW
b. side WX and side WZ
c. side ZW and side ZY
d. side YZ and side YX
iv. a. ∠XYZ and ∠YZW
b. ∠YZW and ∠ZWX
c. ∠ZWX and ∠WXY
d. ∠WXY and ∠XYZ
v. Seg XZ and seg YW
vi. □XYZW
Question 2.
In the table below, write the number of sides the polygon has.
Names Quadrilateral Octagon Pentagon Heptagon Hexagon
Number of sides
Names Quadrilateral Octagon Pentagon Heptagon Hexagon
Number of sides 4 8 5 7 6
Question 3.
Look for examples of polygons in your surroundings. Draw them.
Question 4.
We see polygons when we join the tips of the petals of various flowers. Draw these polygons and write down the number of sides of each polygon.
Question 5.
Draw any polygon and divide it into triangular parts as shown here. Thus work out the sum of the measures of the angles of the polygon.
Hexagon ABCDEF can be divided into 4 triangles, namely ∆BAF, ∆BFE, ∆BED and ∆BCD.
Sum of the measures of the angles of a triangle = 180°
∴ Sum of measures of the angles of the polygon ABCDEF = Sum of the measures of all the four triangles
= 180° + 180° + 180°+ 180°
= 720°
∴ The sum of the measures of the angles of the given polygon (hexagon) is 720°. In general, a polygon with n sides can be divided into (n − 2) triangles, so the sum of the measures of its angles is (n − 2) × 180°.
Intext Questions and Activities
Question 1.
From your compass boxes, collect set squares of the same shapes and place them side by side in all possible different ways. What figures do you get? Write their names. (Textbook pg. no. 85)
a. Two set squares
b. Three set squares
c. four set squares
Question 2.
Kaprekar Number. (Textbook pg. no. 86)
i. Take any 4-digit number in which all the digits are not the same.
ii. Obtain a new 4-digit number by arranging the digits in descending order.
iii. Obtain another 4-digit number by arranging the digits of the new number in ascending order.
iv. Subtract the smaller of these two new numbers from the bigger number. The difference obtained will be a 4-digit number. If it is a 3-digit number, put a 0 in the thousands place. Repeat the above
steps with the difference obtained as a result of the subtraction.
v. After some repetitions, you will get the number 6174. If you continue to repeat the same steps you will get the number 6174 every time. Let us begin with the number 8531.
8531 → 7173 → 6354 → 3087 → 8352 → 6174 → 6174
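A small sketch (not from the textbook) of the routine described in the steps above, written in Python:

def kaprekar_step(n):
    digits = f"{n:04d}"                               # keep 4 digits, padding with zeros
    big = int("".join(sorted(digits, reverse=True)))  # digits in descending order
    small = int("".join(sorted(digits)))              # digits in ascending order
    return big - small

n = 8531
while n != 6174:
    n = kaprekar_step(n)
    print(n)                                          # 7173, 6354, 3087, 8352, 6174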
This discovery was made by the mathematician, Dattatreya Ramchandra Kaprekar. That is why the number 6174 was named the Kaprekar number. | {"url":"https://mhboardsolutions.xyz/chapter-16-quadrilaterals-set-16-2/","timestamp":"2024-11-06T11:12:31Z","content_type":"text/html","content_length":"155352","record_id":"<urn:uuid:6d75dfb5-3cc9-4601-be7f-995f35276629>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00574.warc.gz"} |
Math problems unanswerable due to physics paradox?
Or physics problems unanswerable due to a math paradox?
From Nature:
In 1931, Austrian-born mathematician Kurt Gödel shook the academic world when he announced that some statements are ‘undecidable’, meaning that it is impossible to prove them either true or
false. Three researchers have now found that the same principle makes it impossible to calculate an important property of a material — the gaps between the lowest energy levels of its electrons —
from an idealized model of its atoms.
The result also raises the possibility that a related problem in particle physics — which has a US$1-million prize attached to it — could be similarly unsolvable, says Toby Cubitt, a
quantum-information theorist at University College London and one of the authors of the study.
The finding, published on 9 December in Nature, and in a longer, 140-page version on the arXiv preprint server2, is “genuinely shocking, and probably a big surprise for almost everybody working
on condensed-matter theory”, says Christian Gogolin, a quantum information theorist at the Institute of Photonic Sciences in Barcelona, Spain. More.
Here’s the abstract:
We show that the spectral gap problem is undecidable. Specifically, we construct families of translationally-invariant, nearest-neighbour Hamiltonians on a 2D square lattice of d-level quantum
systems (d constant), for which determining whether the system is gapped or gapless is an undecidable problem. This is true even with the promise that each Hamiltonian is either gapped or gapless
in the strongest sense: it is promised to either have continuous spectrum above the ground state in the thermodynamic limit, or its spectral gap is lower-bounded by a constant in the
thermodynamic limit. Moreover, this constant can be taken equal to the local interaction strength of the Hamiltonian.
This implies that it is logically impossible to say in general whether a quantum many-body model is gapped or gapless. Our results imply that for any consistent, recursive axiomatisation of
mathematics, there exist specific Hamiltonians for which the presence or absence of a spectral gap is independent of the axioms.
These results have a number of important implications for condensed matter and many-body quantum theory. (Public access) – Toby Cubitt, David Perez-Garcia, Michael M. Wolf
Early overheard comments say that the issue turns on undecidability and could have big implications for naturalism.
Note: Posting light until later this evening, due to O’Leary for News’ alternate day job.
Follow UD News at Twitter!
mohammadnursyamsu @ 2 -
Peter Rowlands and Bernard Diaz have found a way to circumvent Godel’s theorem.
They don't provide any evidence (like a proof) for this. They don't even get started on this, and in mathematics just stating a claim isn't enough.
— Bob O'H, December 17, 2015, 02:12 AM
I like physorg's write up of the paper:
Quantum physics problem proved unsolvable: Godel and Turing enter quantum physics - December 9, 2015 Excerpt: A mathematical problem underlying fundamental questions in particle and quantum
physics is provably unsolvable,,, It is the first major problem in physics for which such a fundamental limitation could be proven. The findings are important because they show that even a
perfect and complete description of the microscopic properties of a material is not enough to predict its macroscopic behaviour.,,, A small spectral gap - the energy needed to transfer an
electron from a low-energy state to an excited state - is the central property of semiconductors. In a similar way, the spectral gap plays an important role for many other materials.,,, Using
sophisticated mathematics, the authors proved that, even with a complete microscopic description of a quantum material, determining whether it has a spectral gap is, in fact, an undecidable
question.,,, "We knew about the possibility of problems that are undecidable in principle since the works of Turing and Gödel in the 1930s," added Co-author Professor Michael Wolf from
Technical University of Munich. "So far, however, this only concerned the very abstract corners of theoretical computer science and mathematical logic. No one had seriously contemplated this
as a possibility right in the heart of theoretical physics before. But our results change this picture. From a more philosophical perspective, they also challenge the reductionists' point of
view, as the insurmountable difficulty lies precisely in the derivation of macroscopic properties from a microscopic description." per physorg
In other words, this is bad news for reductive materialists who would like to describe everything in the universe, as well as the universe itself (i.e. inflation), in a bottom up fashion. Around
the 13:20 minute mark of the following video Pastor Joe Boot comments on the self-defeating nature of the atheistic/materialistic worldview in regards to providing an overarching ‘design plan’
"If you have no God, then you have no design plan for the universe. You have no prexisting structure to the universe.,, As the ancient Greeks held, like Democritus and others, the universe is
flux. It's just matter in motion. Now on that basis all you are confronted with is innumerable brute facts that are unrelated pieces of data. They have no meaningful connection to each other
because there is no overall structure. There's no design plan. It's like my kids do 'join the dots' puzzles. It's just dots, but when you join the dots there is a structure, and a picture
emerges. Well, the atheists is without that (final picture). There is no preestablished pattern (to connect the facts given atheism)." Pastor Joe Boot - Defending the Christian Faith – video
A few more notes: Georg Cantor’s part in incompleteness is briefly discussed here in this excerpt from the preceding video
Georg Cantor - The Mathematics Of Infinity – video http://www.disclose.tv/action/viewvideo/66285/George_Cantor_The_Mathematics_of_Infinity/
Kurt Godel's part in bringing the incompleteness theorem to fruition can be picked up here in this excerpt:
Kurt Gödel - Incompleteness Theorem - video http://www.metacafe.com/w/8462821
A bit more solid connection between Cantor and Godel’s work is illuminated here:
Naming and Diagonalization, from Cantor to Godel to Kleene - 2006 Excerpt: The first part of the paper is a historical reconstruction of the way Godel probably derived his proof from Cantor's
diagonalization, through the semantic version of Richard. The incompleteness proof-including the fixed point construction-result from a natural line of thought, thereby dispelling the
appearance of a "magic trick". The analysis goes on to show how Kleene's recursion theorem is obtained along the same lines. http://www.citeulike.org/group/3214/article/1001747
An overview of how Godel’s incompleteness applies to computers is briefly discussed in the following excerpt of the video:
Alan Turing & Kurt Gödel - Incompleteness Theorem and Human Intuition - video http://www.metacafe.com/watch/8516356/ Kurt Gödel published On Formally Undecidable Propositions of Principia
Mathematica and Related Systems (1931), showing that in any sufficiently strong axiomatic system there are true statements which cannot be proved in the system. This topic was further
developed in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other gave concrete examples for
undecidable questions. - wiki
As to the implications of his incompleteness theorem, Godel stated this:
"Either mathematics is too big for the human mind, or the human mind is more than a machine." Kurt Gödel As quoted in Topoi : The Categorial Analysis of Logic (1979) by Robert Goldblatt, p.
Notes as to how Godel's work relates to ID
"In an elegant mathematical proof, introduced to the world by the great mathematician and computer scientist John von Neumann in September 1930, Gödel demonstrated that mathematics was
intrinsically incomplete. Gödel was reportedly concerned that he might have inadvertently proved the existence of God, a faux pas in his Viennese and Princeton circle. It was one of the
famously paranoid Gödel's more reasonable fears." George Gilder, in Knowledge and Power : The Information Theory of Capitalism and How it is Revolutionizing our World (2013), Ch. 10: Romer's
Recipes and Their Limits Conservation of information, evolution, etc - Sept. 30, 2014 Excerpt: Kurt Gödel’s logical objection to Darwinian evolution: "The formation in geological time of the
human body by the laws of physics (or any other laws of similar nature), starting from a random distribution of elementary particles and the field is as unlikely as the separation of the
atmosphere into its components. The complexity of the living things has to be present within the material [from which they are derived] or in the laws [governing their formation]." Gödel - As
quoted in H. Wang. “On `computabilism’ and physicalism: Some Problems.” in Nature’s Imagination, J. Cornwall, Ed, pp.161-189, Oxford University Press (1995). Gödel’s argument is that if
evolution is unfolding from an initial state by mathematical laws of physics, it cannot generate any information not inherent from the start – and in his view, neither the primaeval
environment nor the laws are information-rich enough.,,, More recently this led him (Dembski) to postulate a Law of Conservation of Information, or actually to consolidate the idea, first put
forward by Nobel-prizewinner Peter Medawar in the 1980s. Medawar had shown, as others before him, that in mathematical and computational operations, no new information can be created, but new
findings are always implicit in the original starting points – laws and axioms. http://potiphar.jongarvey.co.uk/2014/09/30/conservation-of-information-evolution-etc/ Evolutionary Computing:
The Invisible Hand of Intelligence - June 17, 2015 Excerpt: William Dembski and Robert Marks have shown that no evolutionary algorithm is superior to blind search -- unless information is
added from an intelligent cause, which means it is not, in the Darwinian sense, an evolutionary algorithm after all. This mathematically proven law, based on the accepted No Free Lunch
Theorems, seems to be lost on the champions of evolutionary computing. Researchers keep confusing an evolutionary algorithm (a form of artificial selection) with "natural evolution." ,,,
Marks and Dembski account for the invisible hand required in evolutionary computing. The Lab's website states, "The principal theme of the lab's research is teasing apart the respective roles
of internally generated and externally applied information in the performance of evolutionary systems." So yes, systems can evolve, but when they appear to solve a problem (such as generating
complex specified information or reaching a sufficiently narrow predefined target), intelligence can be shown to be active. Any internally generated information is conserved or degraded by
the law of Conservation of Information.,,, What Marks and Dembski prove is as scientifically valid and relevant as Gödel's Incompleteness Theorem in mathematics. You can't prove a system of
mathematics from within the system, and you can't derive an information-rich pattern from within the pattern.,,, http://www.evolutionnews.org/2015/06/evolutionary_co_1096931.html What Does
"Life's Conservation Law" Actually Say? - Winston Ewert - December 3, 2015 Excerpt: All information must eventually derive from a source external to the universe, http://www.evolutionnews.org
Verse and Music:
John1:1 "In the beginning was the Word, and the Word was with God, and the Word was God." of note: ‘the Word’ in John1:1 is translated from ‘Logos’ in Greek. Logos is also the root word from
which we derive our modern word logic Joy Williams - 2000 Decembers ago https://www.youtube.com/watch?v=4W8K3OhxVSw
December 14, 2015
04:31 PM
Peter Rowlands and Bernard Diaz have found a way to circumvent Godel's theorem. It's based on deriving maths from rewriting. This means that 1 is a rewrite ("copy") of 0. So the 1 is derived from
0, rather than the 1 is obtained by counting. The 0 and 1 therefore have a boolean interchangeable relationship. This way all the mathematical operators and numbers are derived. I think pure
mathematics is the obvious theory of everything. There will never be any theory of physics which is not mathematical. http://arxiv.org/ftp/cs/papers/0209/0209026.pdf "Mathematics can be shown to
be constructible using this mechanism, with an order which is more coherent than one produced by starting with integers. By rejecting the ‘loaded information’ that the integers represent, and
basing our mathematics on an immediate zero totality, we believe that we are able to produce a mathematical structure which has the potential of avoiding the incompleteness indicated by Gödel’s
theorem. (Conventional approaches, based on the primacy of the number system, havenecessarily led to the discovery that a more primitive structure cannot be recovered than the one initially
assumed.) From this mathematical structure, we have been ableto develop an insight into how physics works, and using this to suggest a process that leads naturally to a formulation for quantum
December 14, 2015
03:14 PM
I think it's a physics problem that is unanswerable due to a mathematical "paradox".
— daveS, December 14, 2015, 09:30 AM
You must be logged in to post a comment. | {"url":"https://uncommondescent.com/intelligent-design/math-problems-unanswerable-due-to-physics-paradox/","timestamp":"2024-11-06T08:25:16Z","content_type":"text/html","content_length":"92343","record_id":"<urn:uuid:22f2df06-eac1-468a-b24d-dea9c4103d4b>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00874.warc.gz"} |
Stochastic Processes in context of Financial Mathematics
28 Aug 2024
Stochastic Processes in Financial Mathematics: Understanding Randomness in Finance
In financial mathematics, stochastic processes play a crucial role in modeling and analyzing the behavior of financial markets. A stochastic process is a mathematical object that describes a random
phenomenon over time or space. In this article, we will delve into the world of stochastic processes and explore their applications in finance.
What are Stochastic Processes?
A stochastic process is a sequence of random variables indexed by time or space. It is characterized by its probability distribution, which specifies the likelihood of different outcomes at each
point in time or space. In financial mathematics, stochastic processes are used to model the behavior of assets, such as stock prices, interest rates, and exchange rates.
Types of Stochastic Processes
There are several types of stochastic processes, including:
1. Discrete-time Markov Chain: A discrete-time Markov chain is a stochastic process that takes values in a finite or countable set. It is characterized by its transition matrix, which specifies the
probability of moving from one state to another.
2. Continuous-time Markov Process: A continuous-time Markov process is a stochastic process that takes values in a finite or countable set and evolves continuously over time. It is characterized by
its intensity function, which specifies the rate at which transitions occur.
3. Brownian Motion: Brownian motion is a stochastic process that models the random movement of particles suspended in a fluid. It is characterized by its drift coefficient, which specifies the
average direction of movement, and its diffusion coefficient, which specifies the variance of the movement.
Applications of Stochastic Processes in Finance
Stochastic processes have numerous applications in finance, including:
1. Options Pricing: Stochastic processes are used to model the behavior of underlying assets, such as stock prices or interest rates, in options pricing models.
2. Risk Management: Stochastic processes are used to quantify and manage risk in financial portfolios.
3. Portfolio Optimization: Stochastic processes are used to optimize portfolio performance by selecting the optimal mix of assets.
Here are some key formulas related to stochastic processes:
1. Transition Matrix: The transition matrix for a discrete-time Markov chain is given by:
P(i, j) = P(X_{t+1} = j | X_t = i)
where P(i, j) is the probability of moving from state i to state j.
2. Intensity Function: The intensity function for a continuous-time Markov process is given by:
λ(x, t) = λ(x) * h(t)
where λ(x, t) is the rate at which transitions occur from state x at time t, λ(x) is the stationary intensity, and h(t) is the hazard function.
3. Drift Coefficient: The drift coefficient for Brownian motion with drift is given by:
E[X_t] = X_0 + μt
where μ is the drift coefficient (the expected displacement per unit time), X_t is the value of the process at time t, X_0 is the starting value, and E[.] is the expected value.
4. Diffusion Coefficient: The diffusion coefficient for Brownian motion is given by:
Var(X_t) = σ^2 t
where σ^2 is the diffusion coefficient (the variance accumulated per unit time), X_t is the value of the process at time t, and Var[.] is the variance.
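As a quick illustrative sketch (not part of the original article), the last two formulas can be checked by simulating Brownian motion with drift in Python; all parameter values below are made up:

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, T, n_steps, n_paths = 0.3, 0.8, 1.0, 1000, 50_000
dt = T / n_steps

# each increment is N(mu*dt, sigma^2*dt); summing over steps gives X_T with X_0 = 0
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
X_T = increments.sum(axis=1)

print(X_T.mean())   # close to mu * T = 0.3
print(X_T.var())    # close to sigma^2 * T = 0.64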
Stochastic processes are a fundamental concept in financial mathematics, allowing us to model and analyze the behavior of financial markets. By understanding the different types of stochastic
processes and their applications, we can better quantify and manage risk in financial portfolios.
Calculators for ‘Financial Mathematics’ | {"url":"https://blog.truegeometry.com/tutorials/education/68ef92e5290fb700c0f7b8e4f8864ed4/JSON_TO_ARTCL_Stochastic_Processes_in_context_of_Financial_Mathematics.html","timestamp":"2024-11-08T10:49:53Z","content_type":"text/html","content_length":"22159","record_id":"<urn:uuid:900266ad-24e2-4d86-a923-9658f12efe87>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00551.warc.gz"} |
MathCelebrity - Online Calculators
Build Confidence
Learn at your own pace and eliminate math anxiety for good...
Learn Step by Step
We'll walk you through how to easily solve math problems | {"url":"https://www.mathcelebrity.com/online-math-tutor.php","timestamp":"2024-11-05T18:50:59Z","content_type":"application/xhtml+xml","content_length":"48272","record_id":"<urn:uuid:82007434-73ae-40d3-98bf-1cb1ed7c7299>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00297.warc.gz"} |
An illustration of permutation testing in GWAS
Without loss of generality, consider a genetic association study with 10 SNPs weakly associated with a quantitative trait. We would like to assess the significance of these SNPs after multiple
testing correction.
10 SNPs are independent
Let’s simulate the genotype data first, assuming the SNPs are independent, and a sample size of $N=1,000$ unrelated individuals,
# set random seed (any fixed value; chosen here only to make the simulation reproducible)
set.seed(1)
# Sample from a multivariate normal distribution for SNP matrix X, as if the SNP data has been normalized,
n <- 1000
p <- 10
R <- diag(p)
mu <- rep(0,p)
X <- MASS::mvrnorm(n, mu = mu, Sigma = R)
Now we simulate the 10 SNP effects, assuming 0.5% heritability in each SNP,
sigma2 <- 0.005
b <- rnorm(p, 0, sqrt(sigma2))
then simulate phenotype vector $y$ as $y = Xb + e$ where $e$ is a vector of $N(0, \sqrt{1-\sigma^2})$,
y <- X %*% b + rnorm(n, 0, sqrt(1-sigma2)) # error term has length n (one per individual), not p
And perform association analysis,
test_assoc = function(X,y) {
  res = susieR::univariate_regression(X,y)
  # two-sided p-value from the z-score betahat / se(betahat)
  p = 2 * pnorm(-abs(res$betahat / res$sebetahat))
  return(p)
}
Permutation testing scheme 1 – equivalence to analytical p-value
Here we perform permutation testing on the data as follows:
1. Permute the phenotype vector
2. Perform association tests and get the permuted p-value for each SNP
3. Compare the permuted p-value with the original p-value for each SNP. If the permutated p-value is smaller than the original p-value we call it a “success” and otherwise “failure”
4. Repeat 1 - 3 numerous times. Empirical p-value is defined by the total number of success divided by the total number of permutations
This scheme should result in a similar p-value as the original analytical p-value. Let’s use 10,000 permutations for this problem:
test_perm_1 = function(X,y,K=10000) {
p_orig = test_assoc(X,y)
res = matrix(0, K, ncol(X))
  for (i in 1:K) {
    p_perm = test_assoc(X, sample(y))
    res[i,] = p_orig >= p_perm
  }
  return(apply(res, 2, mean))
}
p_perm_1 = test_perm_1(X,y)
In this case, the permutation p-values closely match the analytical p-values.
Permutation testing scheme 2 – minimum p-value
Here we perform permutation testing on the data as follows:
1. Permute the phenotype vector
2. Perform association tests and get the minimum p-value among the 10 permuted tests
3. Compare the minimum p-value with each of the 10 SNPs’ original p-value. If the minimum p-value is smaller than the original p-value we call it a “success” and otherwise “failure”
4. Repeat 1 - 3 numerous times. Empirical p-value is defined by the total number of success divided by the total number of permutations
Let’s use 10,000 permutations for this problem:
test_perm_2 = function(X,y,K=10000) {
p_orig = test_assoc(X,y)
res = matrix(0, K, ncol(X))
  for (i in 1:K) {
    p_perm = test_assoc(X, sample(y))
    min_p = min(p_perm)
    res[i,] = sapply(p_orig, function(x) x >= min_p)
  }
  return(apply(res, 2, mean))
}
p_perm_2 = test_perm_2(X,y)
If this permutation scheme is correct, we would expect it to be comparable to a Bonferroni-style correction for testing 10 independent hypotheses:
bonferroni_correct = function(p) 1 - (1 - p) ^ length(p) # 1-(1-p)^m, the exact family-wise correction for m independent tests
p_orig = test_assoc(X, y) # recompute the original p-values in the global environment for plotting
plot(bonferroni_correct(p_orig), p_perm_2)
The Bonferroni correction and this permutation scheme give the same result because, by definition, both assess whether the most significant association is real. By using the minimum
p-value, permutation test gives the empirical distribution of the most extreme statistic. | {"url":"https://wanggroup.org/computing_tutorial/multiple_testing_permutation","timestamp":"2024-11-09T04:23:03Z","content_type":"text/html","content_length":"42304","record_id":"<urn:uuid:0c001ccd-59c4-46e3-aef9-9c88d69085d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00793.warc.gz"} |
The Berkelmans–Pries dependency function: A generic measure of dependence between random variables
1. Introduction
As early as 1958, Kruskal [Reference Kruskal14] stated that ‘There are infinitely many possible measures of association, and it sometimes seems that almost as many have been proposed at one time or
another’. Many years later, even more dependency measures have been suggested. Yet, and rather surprisingly, there still does not exist consensus on a general dependency function. Often the statement
‘Y is dependent on X’ means that Y is not independent of X. However, there are different levels of dependency. For example, random variable (RV) Y can be fully determined by RV X (i.e. $Y(\omega)=f(X
(\omega))$ for all $\omega \in \Omega$ and for a measurable function f), or only partially.
But how should we quantify how much Y is dependent on X? Intuitively, and assuming that the dependency measure is normalized to the interval [0, 1], we would say that if Y is fully determined by X
then the dependency of Y with respect to X is as strong as possible, and so the dependency measure should be 1. On the other side of the spectrum, if X and Y are independent, then the dependency
measure should be 0; and, vice versa, it is desirable that dependence 0 implies that X and Y are stochastically independent. Note that the commonly used Pearson correlation coefficient does not meet
these requirements. In fact, many examples exist where Y is fully determined by X while the correlation is zero.
Taking a step back, why is it actually useful to examine dependencies in a dataset? Measuring dependencies between the variables can lead to critical insights, which will lead to improved data
analysis. First of all, it can reveal important explanatory relationships. How do certain variables interact? If catching a specific disease is highly dependent on the feature value of variable X,
research should be done to investigate if this information can be exploited to reduce the number of patients with this disease. For example, if hospitalization time is dependent on a healthy
lifestyle, measures can be taken to try to improve the overall fitness of a population. Dependencies can therefore function as an actionable steering rod. It is, however, important to keep in mind
that dependency does not always mean causality. Dependency relations can also occur due to mere coincidence or as a by-product of another process.
Dependencies can also be used for dimensionality reduction. If Y is highly dependent on X, not much information is lost when only X is used in the dataset. In this way, redundant variables or
variables that provide little additional information can be removed to reduce the dimensionality of the dataset. With fewer dimensions, models can be trained more efficiently.
In these situations a dependency function can be very useful. However, finding the proper dependency function can be hard, as many attempts have already been made. In fact, most of us have a ‘gut
feeling’ for what a dependency function should entail. To make this feeling more mathematically sound, Rényi [Reference Rényi18] proposed a list of ideal properties for a dependency function. A long
list of follow-up papers (see the references in Table 1) use this list as the basis for a wish list, making only minor changes to it, adding or removing some properties.
In view of the above, the contribution of this paper is threefold:
• We determine a new list of ideal properties for a dependency function.
• We present a new dependency function and show that it fulfills all requirements.
• We provide Python code to determine the dependency function for the discrete and continuous case (https://github.com/joris-pries/BP-Dependency).
The remainder of this paper is organized as follows. In Section 2, we summarize which ideal properties have been stated in previous literature. By critically assessing these properties, we derive a
new list of ideal properties for a dependency function (see Table 2) that lays the foundation for a new search for a general-purpose dependency function. In Section 3, the properties are checked for
existing methods, and we conclude that there does not yet exist a dependency function that has all the desired properties. Faced by this, in Section 4 we define a new dependency function and show in
Section 5 that this function meets all the desired properties. Finally, Section 6 outlines the general findings and addresses possible future research opportunities.
2. Desired properties of a dependency function
What properties should an ideal dependency function have? In this section we summarize previously suggested properties. Often, these characteristics are posed without much argumentation. Therefore,
we analyze and discuss which properties are actually ideal and which properties are believed to be not relevant, or even wrong.
In Table 1, a summary is given of 22 ‘ideal properties’ found in previous literature, grouped into five different categories. These properties are denoted by I.1–22. From these properties we derive a
new set of desirable properties denoted by II.1–8; see Table 2. Next, we discuss the properties suggested in previous literature and how the new list is derived from them.
Desired property II.1. (Asymmetry.) At first glance, it seems obvious that a dependency function should adhere to property I.13 and be symmetric. However, this is a common misconception for the
dependency function. Y can be fully dependent on X, but this does not mean that X is fully dependent on Y. Lancaster [Reference Lancaster15] indirectly touched upon this same point by defining mutual
complete dependence. First it is stated that Y is completely dependent on X if $Y=f(X)$ . X and Y are called mutually completely dependent if X is completely dependent on Y and vice versa. Thus, this
indirectly shows that dependence should not necessarily be symmetric, otherwise the extra definition would be redundant. In [Reference Lancaster15] the following great asymmetric example was given.
Example 2.1. Let $X\sim \mathcal{U}(0,1)$ be uniformly distributed and let $Y = -1$ if $X\leq \frac{1}{2}$ and $Y= 1$ if $X > \frac{1}{2}$ . Here, Y is fully dependent on X, but not vice versa.
To drive the point home even more, we give another asymmetric example.
Example 2.2. X is uniformly randomly drawn from $\{1,2,3,4\}$ , and ${Y \,:\!=\, X \,(\mathrm{mod}\,2)}$ . Y is fully dependent on X, because given X the value of Y is deterministically known. On the
other hand, X is not completely known given Y. Note that ${Y=1}$ still leaves the possibility for ${X=1}$ or ${X=3}$ . Thus, when assessing the dependency between variable X and variable Y, Y is
fully dependent on X, whereas X is not fully dependent on Y. In other words, ${\text{Dep} (X, Y) \neq \text{Dep} (Y, X)}$ .
In conclusion, an ideal dependency function should not always be symmetric. To emphasize this point even further, we change the notation of the dependency function. Instead of $\text{Dep} (X, Y)$ ,
we will write $\text{Dep} (Y \mid X)$ for how much Y is dependent on X. Based on this, property I.13 is changed into II.1.
Desired property II.2. (Range.) An ideal dependency function should be scaled to the interval [0, 1]. Otherwise, it can be very hard to draw meaningful conclusions from a dependency score without a
known maximum or minimum. What would a score of 4.23 mean without any information about the possible range? Therefore, property I.1 is retained. A special note on the range for the well-known Pearson
correlation coefficient [Reference Press, Teukolsky, Vetterling and Flannery17], which is $[{-}1,1]$ : The negative or positive sign denotes the direction of the linear correlation. When examining
more complex relationships, it is unclear what ‘direction’ entails. We believe that a dependency function should measure by how much variable Y is dependent on X, and not necessarily in which way. In
summary, we require $0\leq\text{Dep} (Y \mid X)\leq 1$ .
Desired property II.3. (Independence and dependency 0.) If Y is independent of X, it should hold that the dependency achieves the lowest possible value, namely zero. Otherwise, it is vague what a
dependency score lower than the dependency between two independent variables means. A major issue of the commonly used Pearson correlation coefficient is that zero correlation does not imply
independence. This makes it complicated to derive conclusions from a correlation score. Furthermore, note that if Y is independent of X, it should automatically hold that X is also independent of Y.
In this case, X and Y are independent, because otherwise some dependency relation should exist. Thus, we require $\text{Dep} (Y \mid X)=0\iff X\text{ and }Y\text{ are independent}$ .
Desired property II.4. (Functional dependence and dependency 1.) If Y is strictly dependent on X (and thus fully determined by X), the highest possible value should be attained. It is otherwise
unclear what a higher dependency would mean. However, it is too restrictive to demand that the dependency is only 1 if Y is strictly dependent on X. Rényi [Reference Rényi18] stated ‘It seems at the
first sight natural to postulate that $\delta(\xi, \eta) = 1$ only if there is a strict dependence of the mentioned type between $\xi$ and $\eta$ , but this condition is rather restrictive, and it is
better to leave it out’. Take, for example, $Y\sim \mathcal{U}(-1,1)$ and $X\,:\!=\, Y^2$ . Knowing X reduces the infinite set of possible values for Y to only two ( $\pm\sqrt{X}$ ), whereas it would
reduce to one if Y was fully determined by X. It would be very restrictive to enforce $\text{Dep} (Y \mid X) < 1$ , as there is only an infinitesimal difference compared to the strictly dependent
case. Summarizing, we require $Y=f(X)\rightarrow \text{Dep} (Y \mid X)=1$ .
Desired property II.5. (Unambiguity.) Kruskal [Reference Kruskal14] stated ‘It is important to recognize that the question “Which single measure of association should I use?” is often unimportant.
There may be no reason why two or more measures should not be used; the point I stress is that, whichever ones are used, they should have clear-cut population interpretations.’ It is very important
that a dependency score leaves no room for ambiguity. The results should meet our natural expectations. Therefore, we introduce a new requirement based on a simple example. Suppose we have a number
of independent RVs and observe one of these at random. The dependency of each random variable on the observed variable should be equal to the probability it is picked. More formally, let ${Y_1,Y_2,\
dots,Y_N, \text{and}\ S}$ be independent variables, with S a selection variable such that ${\mathbb{P}(S=i)=p_i}$ and ${\sum_{i=1}^N p_i=1}$ . When X is defined as ${X = \sum_{i = 1}^{N} \textbf{1}_
{S = i} \cdot Y_i}$ , it should hold that ${\text{Dep} (Y_i \mid X) = p_i}$ for all ${i\in \{1,\dots, N\}}$ . Stated simply, the dependency function should give the desired results in specific
situations where we can argue what the outcome should be. This is one of these cases.
Desired property II.6. (Generally applicable.) Our aim is to find a general dependency function, which we denote by $Dep(X\mid Y)$ . This function must be able to handle all kinds of variables:
continuous, discrete, and categorical (even nominal). These types of variables occur frequently in a dataset. A general dependency function should be able to measure the dependency of a categorical
variable Y on a continuous variable X. Stricter than I.9–12, we want a single dependency function that is applicable to any combination of these variables.
There is one exception to this generality. In the case that Y is almost surely constant, it is completely independent as well as completely determined by X. Arguing what the value of a dependency
function should be in this case is similar to arguing about the value of $\frac{0}{0}$ . Therefore, we argue that in this case it should be either undefined or return some value that represents the
fact that Y is almost surely constant (for example $-1$ , since this cannot be normally attained).
Desired property II.7. (Invariance under isomorphisms.) Properties I.14–20 discuss when the dependency function should be invariant. Most are only meant for variables with an ordering, as ‘strictly
increasing’, ‘translation’, and ‘scaling’ are otherwise ill-defined. As the dependency function should be able to handle nominal variables, we assume that the dependency is invariant under
isomorphisms, see II.7. Note that this is a stronger assumption than I.14–20. Compare Example 2.2 with Example 2.3. It should hold that $\text{Dep} (Y \mid X) = \text{Dep} (Y^{\prime} \mid X^{\
prime})$ and $\text{Dep} (X \mid Y) = \text{Dep} (X^{\prime} \mid Y^{\prime})$ , as the relationship between the variables is the same (only altered using isomorphisms). So, for any isomorphisms f
and g we require $\text{Dep} (g(Y) \mid f(X))=$ $\text{Dep} (Y \mid X)$ .
Example 2.3. Let X ′ be uniformly randomly drawn from $\{\circ, \triangle, \square, \lozenge \}$ , and $Y^{\prime} = \clubsuit$ if $X^{\prime} \in \{\circ, \square\}$ and $Y^{\prime} = \spadesuit$ if
$X^{\prime} \in \{\triangle, \lozenge\}$ .
Desired property II.8. (Non-increasing under functions of X.) Additionally, $\text{Dep} (Y \mid X)$ should not increase if a measurable function f is applied to X since any dependence on f(X)
corresponds to a dependence on X (but not necessarily the other way around). The information gained from knowing X can only be reduced, never increased by applying a function.
However, though it might be natural to expect the same for functions applied to Y, consider once again Example 2.2 (but with X and Y switched around) and the following two functions: $f_1(Y)\,:\!=\,Y
\,(\mathrm{mod}\,2)$ and $f_2(Y)\,:\!=\, \lceil({Y}/{2}) \rceil$ . Then $f_1(Y)$ is completely predicted by X and should therefore have a dependency of 1, while $f_2(Y)$ is independent of X and
should therefore have a dependency of 0. So the dependency should be free to increase or decrease for functions applied to Y. To conclude, for any measurable function f we require $\text{Dep} (Y \mid
f(X))\leq$ $\text{Dep} (Y \mid X)$ .
2.1. Exclusion of Pearson correlation coefficient as a special case
According to properties I.21 and I.22, when X and Y are normally distributed the dependency function should coincide with or be a function of the Pearson correlation coefficient. However, these
properties lack good reasoning for why this would be ideal. It is not obvious why this would be a necessary condition. Moreover, there are many known problems and pitfalls with the correlation
coefficient [Reference Embrechts, McNeil and Straumann4, Reference Janse, Hoekstra, Jager, Zoccali, Tripepi, Dekker and van Diepen11], so it seems undesirable to force an ideal dependency function to
reduce to a function of the correlation coefficient when the variables are normally distributed. This is why we exclude these properties.
3. Assessment of the desired properties for existing dependency measures
In this section we assess whether existing dependency functions have the properties listed above. In doing so, we limit ourselves to the most commonly used dependency measures. Table 3 shows which
properties each investigated measure adheres to.
Although the desired properties listed in Table 2 seem not too restrictive, many dependency measures fail to have many of these properties. One of the most commonly used dependency measures, the
Pearson correlation coefficient, does not even satisfy one of the desirable properties. Furthermore, almost all measures are not asymmetric. The one measure that comes closest to fulfilling all the
requirements is the uncertainty coefficient [Reference Press, Teukolsky, Vetterling and Flannery17]. This is a normalized asymmetric variant of the mutual information measure [Reference Press,
Teukolsky, Vetterling and Flannery17], where the discrete variant is defined as
\begin{align*}C_{XY}&= \frac{I(X,Y)}{H(Y)} = \frac{\sum_{x,y}p_{X,Y}(x,y) \log \left (\frac{p_{X,Y}(x,y)}{p_X(x) \cdot p_Y(y)} \right )}{- \sum_{y}p_Y(y) \log (p_Y(y))},\end{align*}
where H(Y) is the entropy of Y and I(X, Y) is the mutual information of X and Y. Note that throughout the paper we use the following notation: ${p_{X}(x)=\mathbb{P}(X=x)}$ , ${p_{Y}(y)=\mathbb{P}(Y=
y)}$ , and ${p_{X,Y}(x,y)=\mathbb{P}(X=x,Y=y)}$ . In addition, for a set H we define ${p_{X}(H)=\mathbb{P}(X\in H)}$ (and similarly for $p_Y$ and $p_{X,Y}$ ).
However, the uncertainty coefficient does not satisfy properties II.5 and II.6. For example, if $Y\sim \mathcal{U}(0,1)$ is uniformly drawn, the entropy of Y becomes
\begin{align*} H(Y) = - \int_{0}^{1} f_Y(y)\ln(f_Y(y))\,\mathrm{d} y = - \int_{0}^{1} 1 \cdot \ln(1)\,\mathrm{d} y = 0.\end{align*}
Thus, for any X, the uncertainty coefficient is now undefined (division by zero). Therefore, the uncertainty coefficient is not as generally applicable as property II.6 requires.
Two other measures that satisfy many (but not all) properties are mutual dependence [Reference Agarwal, Sacre and Sarma1] and maximal correlation [Reference Gebelein5]. Mutual dependence is defined
as the Hellinger distance [Reference Hellinger9] $d_h$ between the joint distribution and the product of the marginal distributions, defined as (cf. [Reference Agarwal, Sacre and Sarma1])
(3.1) d(X,Y) \triangleq d_h(f_{XY}(x,y), f_X(x)\cdot f_Y(y)).
Maximal correlation is defined as (cf. [Reference Rényi18])
(3.2) S(X, Y)= \sup_{f,g} R(f(X), g(Y)),
where R is the Pearson correlation coefficient, and where f, g are Borel-measurable functions such that R(f(X), g(Y)) is defined [Reference Rényi18].
Clearly, (3.1) and (3.2) are symmetric. The joint distribution and the product of the marginal distributions does not change by switching X and Y. Furthermore, the Pearson correlation coefficient is
symmetric, making the maximal correlation also symmetric. Therefore, neither measure has property II.1.
There are two more measures (one of which is a variation of the other) which satisfy many (but not all) properties, and additionally closely resemble the measure we intend to propose. Namely, the
strong mixing coefficient [Reference Bradley2],
\begin{align*} \alpha(X,Y)=\sup_{A\in\mathcal{E}_X,B\in\mathcal{E}_Y} \{ |\mu_{X,Y}(A\times B)-\mu_X(A)\mu_Y(B)| \},\end{align*}
and its relaxation, the $\beta$ -mixing coefficient [Reference Bradley2],
\begin{align*} \beta(X,Y)=\sup \Bigg \{\frac12\sum_{i=1}^I\sum_{j=1}^J|(\mu_{X,Y}(A_i\times B_j)-\mu_X(A_i)\mu_Y(B_j))| \Bigg \},\end{align*}
where the supremum is taken over all finite partitions $(A_1,A_2,\dots,A_I)$ and $(B_1,B_2,\dots,B_J)$ of $E_X$ and $E_Y$ with $A_i\in\mathcal{E}_X$ and $B_j\in\mathcal{E}_Y$ . However, these
measures fail the properties II.1, II.4, and II.5.
4. The Berkelmans–Pries dependency function
After devising a new list of ideal properties (see Table 2) and showing that these properties are not fulfilled by existing dependency functions (see Table 3), we will now introduce a new dependency
function that will meet all requirements. Throughout, we refer to this function as the Berkelmans–Pries (BP) dependency function.
The key question surely is: What is dependency? Although this question deserves an elaborate philosophical study, we believe that measuring the dependency of Y on X is essentially measuring how much
the distribution of Y changes on average based on the knowledge of X, divided by the maximum possible change. This is the key insight on which the BP dependency function is based. To measure this, we
first have to determine the difference between the distribution of Y with and without conditioning on the value of X times the probability that X takes on this value (Section 4.1). Secondly, we have
to measure what the maximum possible change in probability mass is, which is used to properly scale the dependency function and make it asymmetric (see Section 4.2).
4.1. Definition of the expected absolute change in distribution
We start by measuring the expected absolute change in distribution (UD), which is the difference between the distribution of Y with and without conditioning on the value of X times the probability
that X takes on this value. For discrete RVs, we obtain the following definition.
Definition 4.1. (Discrete UD.) For any discrete RVs X and Y,
\begin{equation*} \text{UD} (X, Y)\,{:\!=}\,\sum_{x} p_X(x) \cdot \sum_{y} \vert p_{Y\vert X=x}(y) - p_Y(y)\vert . \end{equation*}
More explicit formulations of UD for specific combinations of RVs are given in Appendix B. For example, when X and Y remain discrete and take values in $E_X$ and $E_Y$ , respectively, it can
equivalently be defined as
\begin{equation*} \text{UD} (X, Y)\,{:\!=}\,2\sup_{A\subset E_X \times E_Y} \Bigg \{\sum_{(x,y)\in A}(p_{X,Y}(x,y)-p_X(x) \cdot p_Y(y)) \Bigg \}.\end{equation*}
Similarly, for continuous RVs, we obtain the following definition for UD.
Definition 4.2. (Continuous UD.) For any continuous RVs X and Y,
\begin{equation*} \text{UD} (X, Y)\,{:\!=}\,\int_{\mathbb{R}}\int_{\mathbb{R}}\vert f_{X,Y}(x,y)-f_X(x)f_Y(y)\vert \,\mathrm{d} y \,\mathrm{d} x. \end{equation*}
Note that this is the same as $\Delta_{L_1}$ [Reference Capitani, Bagnato and Punzo3].
In the general case, UD is defined in the following manner.
Definition 4.3. (General UD.) For any ${X\,:\, (\Omega,\mathcal{F},\mu)\rightarrow (E_X,\mathcal{E}(X))}$ and $Y\,:\, (\Omega,\mathcal{F},\mu)\rightarrow (E_Y,\mathcal{E}(Y))$ , UD is defined as
\begin{equation*} \text{UD} (X, Y) \,{:\!=}\,2\sup_{A\in\mathcal{E}(X)\bigotimes\mathcal{E}(Y)} \{ \mu_{(X,Y)}(A)-(\mu_X {\times}\mu_Y)(A) \},\end{equation*}
where ${\mathcal{E}(X)\bigotimes\mathcal{E}(Y)}$ is the $\sigma$ -algebra generated by the sets ${C\times D}$ with ${C\in\mathcal{E}(X)}$ and ${D\in\mathcal{E}(Y)}$ . Furthermore, $\mu_{(X,Y)}$
denotes the joint probability measure on ${\mathcal{E}(X)\bigotimes\mathcal{E}(Y)}$ , and ${\mu_X \times \mu_Y}$ is the product measure.
4.2. Maximum UD given Y
Next, we have to determine the maximum of UD for a fixed Y in order to scale the dependency function to [0, 1]. To this end, we prove that, for a given Y: if X fully determines Y, then $\text{UD} (X, Y) \geq \text{UD} (X^{\prime}, Y)$ for any RV X ′.
The full proof for the general case is given in Appendix C.4, which uses the upper bound determined in Appendix C.3. However, we show the discrete case here to give some intuition about the proof.
Let ${C_y= \{x\mid p_{X,Y}(x,y)\geq p_X(x)\cdot p_Y(y) \}}$ ; then
\begin{align*} \text{UD} (X, Y) & = 2\sum_y (p_{X,Y}(C_y\times\{y\})-p_X(C_y) \cdot p_Y(y) )\\ & \leq 2\sum_y ({\min}\{p_X(C_y), p_Y(y)\}-p_X(C_y)\cdot p_Y(y) ) \\ & = 2\sum_y ({\min} \{p_X(C_y) \
cdot (1-p_Y(y)), (1-p_X(C_y)) \cdot p_Y(y) \} )\\ & \leq 2\sum_y (p_Y(y) \cdot (1-p_Y(y)))\\ & =2\sum_y (p_Y(y)-p_Y(y)^2) = 2 \Bigg(1-\sum_y p_Y(y)^2 \Bigg), \end{align*}
with equality if and only if both inequalities are equalities, which occurs if and only if $p_{X,Y} (C_y\times\{y\} )=p_X(C_y)=p_Y(y)$ for all y. So we have equality when, for all y, the set $C_y$
has the property that $x\in C_y$ if and only if $Y=y$ , or equivalently, when $Y=f(X)$ for some function f. Thus,
\begin{equation*} UD(X,Y)\leq 2\Bigg(1-\sum_y p_Y(y)^2 \Bigg),\end{equation*}
with equality if and only if $Y=f(X)$ for some function f.
Note that this holds for every X that fully determines Y. In particular, for ${X\,:\!=\,Y}$ it now follows that $\text{UD} (Y, Y) = 2\cdot \big(1-\sum_y p_Y(y)^2\big)\geq \text{UD} (X^{\prime}, Y)$
for any RV X ′.
4.3. Definition of the Berkelmans–Pries dependency function
Finally, we can define the BP dependency function to measure how much Y is dependent on X.
Definition 4.4. (BP dependency function.) For any RVs X and Y, the Berkelmans–Pries dependency function is defined as
\begin{align*} \text{Dep} (Y \mid X)\,:\!=\, \left \{ \begin{array}{c@{\quad}l} \frac{\text{UD} (X, Y)}{\text{UD} (Y, Y)} & \text{if} \, {Y} \, \text{is not a.s. constant,} \\[4pt] \text{undefined} &
\text{if} \, {Y} \, \text{is trivial (has an atom of size 1).} \\ \end{array}\right . \end{align*}
This is the difference between the distribution of Y with and without conditioning on the value of X, weighted by the probability that X takes on this value, and divided by the largest possible such difference for an arbitrary $X^{\prime}$. Note that $\text{UD} (Y, Y)=0$ if and only if Y is almost surely constant (see Appendix C.4), which would lead to division by zero. However, we previously argued in Section 2 that if Y is almost surely constant, it is both completely independent of and completely determined by X. The dependency should therefore be undefined in this case.
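To make the definition concrete, the following minimal sketch estimates $\text{Dep} (Y \mid X)$ from two discrete samples using the discrete formulations of UD; the helper name and interface are only illustrative and are not prescribed by the definition.
from collections import Counter
def bp_dependency(x, y):
    # Empirical Dep(Y | X) for two discrete samples, using
    # UD(X, Y) = sum_{x,y} |p_XY(x,y) - p_X(x) p_Y(y)| and
    # UD(Y, Y) = 2 * (1 - sum_y p_Y(y)^2).
    n = len(x)
    p_xy = Counter(zip(x, y))
    p_x = Counter(x)
    p_y = Counter(y)
    ud_xy = sum(abs(c / n - (p_x[xi] / n) * (p_y[yi] / n)) for (xi, yi), c in p_xy.items())
    # pairs with zero joint probability still contribute p_X(x) * p_Y(y)
    ud_xy += sum((cx / n) * (cy / n) for xi, cx in p_x.items() for yi, cy in p_y.items() if (xi, yi) not in p_xy)
    ud_yy = 2.0 * (1.0 - sum((c / n) ** 2 for c in p_y.values()))
    return ud_xy / ud_yy if ud_yy > 0 else float("nan")
For a non-constant discrete sample x, bp_dependency(x, x) returns exactly 1, while for two independently drawn samples the value tends to 0 as the sample size grows.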
5. Properties of the Berkelmans–Pries dependency function
Next, we show that our new BP dependency function satisfies all the requirements from Table 2. To this end, we use properties of UD (see Appendix C) to derive properties II.1–8.
5.1. Property II.1 (Asymmetry)
In Example 2.1 we have ${\text{UD} (X, Y) = 1}$ , ${\text{UD} (X, X) = 2}$ , and ${\text{UD} (Y, Y) = 1}$ . Thus,
\begin{align*} \text{Dep} (Y \mid X) &= \frac{\text{UD} (X, Y)}{\text{UD} (Y, Y)} = 1, \\ \text{Dep} (X \mid Y) &= \frac{\text{UD} (X, Y)}{\text{UD} (X, X)} = \frac{1}{2}.\end{align*}
Therefore, we see that ${\text{Dep} (Y \mid X) \neq \text{Dep} (X \mid Y)}$ for this example, thus making the BP dependency asymmetric.
5.2. Property II.2 (Range)
In Appendix C.2, we show that, for every X, Y, ${\text{UD} (X, Y) \geq 0}$ . Furthermore, in Appendix C.3 we prove that $\text{UD} (X, Y)\leq 2\big (1-\sum_{y\in d_Y} \mu_Y(\{y\})^2\big )$ for all
RVs X. In Appendix C.4 we show for almost all cases that this bound is tight for $\text{UD} (Y, Y)$ . Thus, it must hold that $0\leq \text{UD} (X, Y)\leq \text{UD} (Y, Y)$ , and it then immediately
follows that ${0\leq \text{Dep} (Y \mid X)\leq 1}$ .
5.3. Property II.3 (Independence and dependency 0)
In Appendix C.2 we prove that $\text{UD} (X, Y) = 0 \Leftrightarrow X$ and Y are independent. Furthermore, note that ${\text{Dep} (Y \mid X)=0}$ if and only if ${\text{UD} (X, Y) = 0}$ . Thus, $\text
{Dep} (Y \mid X) = 0 \Leftrightarrow X$ and Y are independent.
5.4. Property II.4 (Functional dependence and dependency 1)
In Appendix C.4, we show that if X fully determines Y, and $X^{\prime}$ is any RV, we have $\text{UD} (X, Y)\geq\text{UD} (X^{\prime}, Y)$ . This holds in particular for $X\,:\!=\,Y$ . Thus, if X fully
determines Y it follows that $\text{UD} (X, Y)=\text{UD} (Y, Y)$ , so $\text{Dep} (Y \mid X) = {\text{UD} (X, Y)}/{\text{UD} (Y, Y)} = 1$ . In conclusion, if there exists a measurable function f such
that $Y=f(X)$ , then $\text{Dep} (Y \mid X)=1$ .
5.5. Property II.5 (Unambiguity)
We show the result for discrete RVs here; for the proof of the general case see Appendix C.5. Let E be the range of the independent ${Y_1,Y_2,\dots,Y_N}$ . By definition, $\mathbb{P}(X=x)=\sum_j\mathbb{P}(Y_j=x) \cdot \mathbb{P}(S=j)$ , so, for all ${i\in \{1, \dots N\}}$ ,
\begin{align*} \text{UD} (X, Y_i) & = 2\sup_{A\subset E\times E}\Bigg \{\sum_{(x,y)\in A} (\mathbb{P}(X = x,Y_i = y)-\mathbb{P}(X = x)\mathbb{P}(Y_i = y)) \Bigg \}\\
& =2\sup_{A\subset E\times E} \Bigg \{ \sum_{(x,y)\in A}\Bigg ( \sum_j\mathbb{P}(Y_j = x,Y_i = y,S = j)-\mathbb{P}(X = x)\mathbb{P}(Y_i = y)\Bigg )\Bigg \}\\
& =2\sup_{A\subset E\times E} \Bigg \{ \sum_{(x,y)\in A}\Bigg ( \sum_{j\neq i}\mathbb{P}(Y_j = x)\mathbb{P}(Y_i = y)\mathbb{P}(S = j)\\
& \quad +\mathbb{P}(Y_i = x,Y_i = y)\mathbb{P}(S = i)-\sum_j\mathbb{P}(Y_j = x)\mathbb{P}(S = j)\mathbb{P}(Y_i = y) \Bigg ) \Bigg\}\\
& = 2\sup_{A\subset E\times E} \Bigg\{ \sum_{(x,y)\in A}\left ( p_i\mathbb{P}(Y_i = x,Y_i = y)-p_i\mathbb{P}(Y_i = x)\mathbb{P}(Y_i = y)\right ) \Bigg\}\\
& =p_i\cdot\text{UD} (Y_i, Y_i).\end{align*}
This leads to
\begin{align*} \text{Dep} (Y_i \mid X)&=\frac{\text{UD} (X, Y_i)}{\text{UD} (Y_i, Y_i)} = \frac{p_i\cdot\text{UD} (Y_i, Y_i)}{\text{UD} (Y_i, Y_i)}=p_i.\end{align*}
Therefore, we can conclude that property II.5 holds.
5.6. Property II.6 (Generally applicable)
The BP dependency measure can be applied for any combination of continuous, discrete, and categorical variables. It can handle arbitrarily many RVs as input by combining them. Thus, the BP dependency
function is generally applicable.
5.7. Property II.7 (Invariance under isomorphisms)
In Appendix C.6 we prove that applying a measurable function to X or Y does not increase UD. Thus, it must hold for all isomorphisms f, g that
\begin{equation*}\text{UD} (X, Y)=\text{UD} (f^{-1}(f(X)), g^{-1}(g(Y))) \leq \text{UD} (f(X), g(Y)) \leq \text{UD} (X, Y).\end{equation*}
Therefore, all inequalities are actually equalities. In other words, $\text{UD} (f(X), g(Y))=\text{UD} (X, Y)$ .
It now immediately follows for the BP dependency measure that
\begin{equation*}\text{Dep} (g(Y) \mid f(X))=\frac{\text{UD} (f(X), g(Y))}{\text{UD} (g(Y), g(Y))}=\frac{\text{UD} (X, Y)}{\text{UD} (Y, Y)}=\text{Dep} (Y \mid X),\end{equation*}
and thus property II.7 is satisfied.
5.8. Property II.8 (Non-increasing under functions of X)
In Appendix C.6 we prove that transforming X or Y using a measurable function does not increase UD. In other words, for any measurable function f, $\text{UD} (f(X), Y) \leq \text{UD} (X, Y)$ .
Consequently, property II.8 holds for the BP dependency function, as
\begin{equation*} \text{Dep} (Y \mid f(X))=\frac{\text{UD} (f(X), Y)}{\text{UD} (Y, Y)}\leq\frac{\text{UD} (X, Y)}{\text{UD} (Y, Y)}=\text{Dep} (Y \mid X).\end{equation*}
6. Discussion and further research
Motivated by the need to measure and quantify the level of dependence between random variables, we have proposed a general-purpose dependency function. The function meets an extensive list of important
and desired properties, and can be viewed as a powerful alternative to the classical Pearson correlation coefficient, which is often used by data analysts today.
While we recommend using the new dependency function, it is important to understand its limitations and potential pitfalls; we now discuss these aspects.
The underlying probability density function of an RV is often unknown in practice; instead, a set of outcomes is observed. These samples can then be used (in a simple manner) to approximate any
discrete distribution. However, this is generally not the case for continuous variables. There are two main categories for dealing with continuous variables: either (i) the observed samples are combined using kernel functions into a continuous function (kernel density estimation [Reference Gramacki6]), or (ii) the continuous variable is reduced to a discrete variable using data binning. The
new dependency measure can be applied thereafter.
A main issue is that the dependency measure is dependent on the parameter choices of either kernel density estimation or data binning. To illustrate this, we conduct the following experiment. Let ${X
\sim\mathcal{U}(0,1)}$ , and define ${Y = X + \epsilon}$ with ${\epsilon\sim\mathcal{N}(0,0.1)}$ . Next, we draw 5000 samples of X and $\epsilon$ and determine each corresponding Y. For kernel
density estimation we use Gaussian kernels with constant bandwidth. The result of varying the bandwidth on the dependency score can be seen in Figure 1(a). With data binning, both X and Y are binned
using bins with fixed size. Increasing or decreasing the number of bins changes the size of the bins. The impact of changing the number of bins on the dependency score can be seen in Figure 1(b).
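A minimal sketch of such a binning-based estimate is given below (illustrative only; the helper name is ours, and the exact implementation used for Figure 1 may differ).
import numpy as np
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 5000)
y = x + rng.normal(0, 0.1, 5000)
def dep_binned(x, y, bins):
    # Bin both variables, then apply the discrete formulas
    # UD(X, Y) = sum |p_xy - p_x p_y| and UD(Y, Y) = 2 * (1 - sum p_y^2).
    h, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = h / h.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    ud_xy = np.abs(p_xy - p_x * p_y).sum()
    ud_yy = 2.0 * (1.0 - (p_y ** 2).sum())
    return ud_xy / ud_yy
for bins in (5, 20, 50, 200, 1000):
    print(bins, round(dep_binned(x, y, bins), 3))
As in Figure 1(b), the resulting estimate depends strongly on the chosen number of bins.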
The main observation from Figures 1(a) and 1(b) is that the selection of the parameters is important. In the case of kernel density estimation, we see the traditional trade-off between over-fitting
when the bandwidth is too small and under-fitting when the bandwidth is too large. On the other hand, with data binning, we see different behavior: having too few bins seems to overestimate the
dependency score, and as the number of bins increases the estimator of the dependency score decreases up to a certain point, after which it starts increasing again. The bottom of the curve seems to
be marginally higher than the true dependency score of 0.621.
This observation raises a range of interesting questions for future research. For example, are the dependency scores estimated by binning consistently higher than the true dependency? Is there a
correction that can be applied to get an unbiased estimator? Is the minimum of this curve an asymptotically consistent estimator? Which binning algorithms give the closest approximation to the true dependency score?
An interesting observation with respect to kernel density estimation is that it appears that at a bandwidth of 0.1 the estimator of the dependency score is close to the true dependency score of
approximately 0.621. However, this parameter choice could only be made if the underlying probability process was known a priori.
Yet, there is another challenge with kernel density estimation, when X consists of many variables or feature values. Each time Y is conditioned on a different value of X, either the density needs to
be estimated again or the estimation of the joint distribution needs to be integrated. Both can rapidly become very time-consuming. When using data binning, it suffices to bin the data once.
Furthermore, no integration is required, making it much faster. Therefore, our current recommendation would be to bin the data and not use kernel density estimation.
Another exciting research avenue would be to fundamentally explore the set of functions that satisfy all desired dependency properties. Is the BP dependency the only measure that fulfills all
conditions? If two solutions exist, can we derive a new solution by smartly combining them? Without property II.5, any order-preserving bijection of [0, 1] with itself would preserve all properties
when applied to a solution. However, property II.5 does restrict the solution space. It remains an open problem whether this is restrictive enough to result in a unique solution: the BP dependency.
Appendix A. Notation
The following general notation is used throughout the appendices. Let $X\,:\,(\Omega,\mathcal{F},\mathbb{P})\to (E_X,\mathcal{E}_X)$ and ${Y\,:\,(\Omega,\mathcal{F},\mathbb{P})\to (E_Y,\mathcal{E}_Y)}$ be RVs. Secondly, let ${\mu_{X}(A)=\mathbb{P}(X^{-1}(A))}$ , ${\mu_{Y}(A)=\mathbb{P}(Y^{-1}(A))}$ be measures induced by X and Y on $(E_X,\mathcal{E}_X)$ and $(E_Y,\mathcal{E}_Y)$ , respectively. Furthermore, ${\mu_{X,Y}(A)=\mathbb{P}(\{\omega\in\Omega\mid (X(\omega),Y(\omega))\in A\})}$ is the joint measure and ${\mu_X \times \mu_Y}$ is the product measure on ${(E_X\times E_Y,\mathcal{E}_X\bigotimes\mathcal{E}_Y)}$ generated by $(\mu_X\times\mu_Y)(A\times B)=\mu_X(A)\mu_Y(B)$ .
Appendix B. Formulations of UD
In this appendix we give multiple formulations of UD. Depending on the type of RVs, the following formulations can be used.
B.1. General case
For any X, Y, UD is defined as
(B.1) \begin{align*} \text{UD} (X, Y) & \,:\!=\, \sup_{A\in\mathcal{E}(X)\bigotimes\mathcal{E}(Y)} \{ \mu_{(X,Y)}(A)-(\mu_X \times\mu_Y)(A) \} \\
& \quad + \sup_{B\in\mathcal{E}(X)\bigotimes\mathcal{E}(Y)} \{ (\mu_X \times\mu_Y)(B)-\mu_{(X,Y)}(B) \} \\
& = 2\sup_{A\in\mathcal{E}(X)\bigotimes\mathcal{E}(Y)} \{ \mu_{(X,Y)}(A)-(\mu_X \times\mu_Y)(A) \}. \end{align*}
B.2. Discrete RVs only
When X, Y are discrete RVs, (B.1) simplifies into
\begin{equation*} \text{UD} (X, Y) \,:\!=\, \sum_{x,y} \vert p_{X,Y}(x,y)-p_X(x)\cdot p_Y(y) \vert,\end{equation*}
or, equivalently,
\begin{equation*} \text{UD} (X, Y)\,{:\!=}\,\sum_{x} p_X(x) \cdot \sum_{y} \vert p_{Y\vert X=x}(y) - p_Y(y)\vert .\end{equation*}
Similarly, when X and Y take values in $E_X$ and $E_Y$ , respectively, (B.1) becomes
\begin{align*} \text{UD} (X, Y) & \,:\!=\, \sup_{A\subset E_X \times E_Y} \Bigg \{\sum_{(x,y)\in A}(p_{X,Y}(x,y)-p_X(x)p_Y(y)) \Bigg \} \\ & \quad + \sup_{A\subset E_X \times E_Y} \Bigg \{\sum_{(x,y)
\in A}(p_X(x)p_Y(y)-p_{X,Y}(x,y)) \Bigg\} \\ & = 2\sup_{A\subset E_X \times E_Y} \Bigg\{\sum_{(x,y)\in A}(p_{X,Y}(x,y)-p_X(x)p_Y(y))\Bigg \}.\end{align*}
B.3. Continuous RVs only
When X, Y are continuous RVs, (B.1) becomes
\begin{equation*} \text{UD} (X, Y)\,{:\!=}\,\int_{\mathbb{R}}\int_{\mathbb{R}}\vert f_{X,Y}(x,y)-f_X(x)f_Y(y)\vert \,\mathrm{d} y \,\mathrm{d} x,\end{equation*}
or, equivalently,
\begin{equation*} \text{UD} (X, Y) \,{:\!=}\,\int_{\mathbb{R}} f_X(x)\int_{\mathbb{R}}\vert f_{Y\vert X=x}(y)-f_Y(y)\vert \, \mathrm{d} y \, \mathrm{d} x.\end{equation*}
Another formulation (more measure theoretical) would be:
\begin{equation*} \text{UD} (X, Y)\,{:\!=}\,2\cdot\sup_{A\in \mathcal{B}(\mathbb{R}^2)} \bigg \{ \int_{A}(f_{X,Y}(x,y)-f_X(x)f_Y(y))\,\mathrm{d} y \,\mathrm{d} x \bigg \} .\end{equation*}
B.4. Mix of discrete and continuous
When X is discrete and Y is continuous, (B.1) reduces to
\begin{equation*}\text{UD} (X, Y) \,{:\!=}\, \sum_{x} p_X(x)\int_{y}\vert f_{Y\vert X = x}(y)-f_Y(y)\vert \,\mathrm{d} y.\end{equation*}
Vice versa, if X is continuous and Y is discrete, (B.1) becomes
\begin{equation*}\text{UD} (X, Y) \,{:\!=}\, \sum_{y} p_Y(y)\int_{x}\vert f_{X\vert Y = y}(x)-f_X(x)\vert \,\mathrm{d} x.\end{equation*}
Appendix C. UD Properties
In this appendix we prove properties of UD that are used in Section 5 to show that the BP dependency measure satisfies all the properties in Table 2.
C.1. Symmetry
For the proofs below it is useful to show that $\text{UD} (X, Y)$ is symmetric, i.e. $\text{UD} (X, Y) = \text{UD} (Y, X)$ for every X, Y. This directly follows from the definition, as
\begin{align*} \text{UD} (X, Y) & = 2 \sup_{A\in\mathcal{E}_X\bigotimes\mathcal{E}_Y} \{ \mu_{(X,Y)}(A)-(\mu_X\times\mu_Y)(A) \}\\& = 2 \sup_{A\in\mathcal{E}_Y\bigotimes\mathcal{E}_X} \{\mu_{(Y,X)}
(A)-(\mu_Y\times\mu_X)(A) \}\\& =\text{UD} (Y, X).\end{align*}
C.2. Independence and UD $= 0$
Since we are considering a measure of dependence, it is useful to know what the conditions for independence are. Below we show that X and Y are independent if and only if $\text{UD} (X, Y)=0$.
Note that
\begin{align*}\text{UD} (X, Y) & =\sup_{A\in\mathcal{E}_X\bigotimes\mathcal{E}_Y} \{\mu_{(X,Y)}(A)-(\mu_X \times\mu_Y)(A) \}\\
& \quad +\sup_{B\in\mathcal{E}_X\bigotimes\mathcal{E}_Y} \{(\mu_X \times\mu_Y)(B)-\mu_{(X,Y)}(B) \}\\
& \geq (\mu_{(X,Y)}(E_X\times E_Y)-(\mu_X \times\mu_Y)(E_X\times E_Y) )\\
& \quad +((\mu_X \times\mu_Y)(E_X\times E_Y)-\mu_{(X,Y)}(E_X\times E_Y) )\\
& =0,\end{align*}
with equality if and only if $\mu_{(X,Y)}=\mu_X\times\mu_Y$ on $\mathcal{E}_X\bigotimes\mathcal{E}_Y$ , so if and only if X and Y are independent. So, in conclusion, the following properties are equivalent:
• X and Y are independent random variables.
• $\text{UD} (X, Y)=0$ .
C.3. Upper bound for a given Y
To scale the dependency function it is useful to know what the range of $\text{UD} (X, Y)$ is for a given random variable Y. We already know it is bounded below by 0 (see Appendix C.2). However, we
have not yet established an upper bound. What follows is a derivation of the upper bound.
A $\mu_Y$ -atom A is a set such that $\mu_Y(A)>0$ and, for any $B\subset A$ , $\mu_Y(B)\in\{0,\mu_Y(A)\}$ . Consider the equivalence relation $\sim$ on $\mu_Y$ -atoms characterized by $S\sim T$ if
and only if $\mu_Y(S\triangle T)=0$ . Then let I be a set containing exactly one representative from each equivalence class. Note that I is countable, so we can enumerate the elements $A_1,A_2,A_3,\dots$. Additionally, for any $A,B\in I$ we have $\mu_Y(A\cap B)=0$ .
Next, we define $B_i\,:\!=\,A_i\setminus\bigcup_{j=1}^{i-1}A_j$ to obtain a set of disjoint $\mu_Y$ -atoms. In what follows we assume I to be infinite, but the proof works exactly the same for finite
I when you replace $\infty$ with $|I|$ .
Let $E_Y^*\,:\!=\,E_Y\setminus\bigcup_{j=1}^\infty B_j$ , so that the $B_j$ and the $E^*_Y$ form a partition of $E_Y$ . Furthermore, let $b_j\,:\!=\,\mu_Y(B_j)$ be the probabilities of being in the
individual atoms in I (and therefore the sizes corresponding to the equivalence classes of atoms). We now have, for any RV X,
(C.1) \begin{align*} \text{UD} (X, Y) & = 2\sup_{A\in\mathcal{E}_X\bigotimes\mathcal{E}_Y} \{ \mu_{X,Y}(A)-(\mu_X\times\mu_Y)(A) \}\\
& \leq 2\sup_{A\in\mathcal{E}_X\bigotimes\mathcal{E}_Y} \{ \mu_{X,Y}(A\cap (E_X\times E^*_Y) ) - (\mu_X\times\mu_Y) (A\cap (E_X\times E^*_Y)) \}\\
& \quad + 2\sup_{A\in\mathcal{E}_X\bigotimes\mathcal{E}_Y} \Bigg \{ \sum_{j=1}^{\infty} (\mu_{X,Y} (A\cap (E_X\times B_j)) -(\mu_X\times\mu_Y) (A\cap (E_X\times B_j) ) ) \Bigg \}. \end{align*}
Now note that the first term is at most $\mu_Y(E^*_Y)=1-\sum_{i=1}^\infty b_i$ . To bound the second term, we examine each individual term of the summation. First, we note that the set of finite
unions of ‘rectangles’ (Cartesian products of elements in $\mathcal{E}_X$ and $\mathcal{E}_Y$ )
\begin{equation*} R \,:\!=\, \Bigg \{ C \in \mathcal{E}_X \bigotimes \mathcal{E}_Y \mid \text{there exists } k \in \mathbb{N} \text{ such that } C = \bigcup_{i=1}^{k} (A_i \times B_i), \text{ with, for all } i,\ A_i \in \mathcal{E}_X \wedge B_i \in \mathcal{E}_Y \Bigg \} \end{equation*}
is an algebra. Therefore, for any $D\in\mathcal{E}_X\bigotimes\mathcal{E}_Y$ and $\epsilon>0$ , there exists a $D_{\epsilon}\in R$ such that $\nu(D_\epsilon\triangle D)<\epsilon$ , where $\nu\,:\!=\,\mu_{X,Y}+(\mu_X\times\mu_Y)$ . Specifically, for $A \cap (E_X\times B_j)$ and $\epsilon>0$ there exists a $B_{j,\epsilon}\in R$ such that ${\nu(B_{j,\epsilon}\triangle A\cap (E_X\times B_j))<\epsilon}$ and $B_{j,\epsilon}\subset E_X\times B_j$ holds, since intersecting with this set only decreases the expression while remaining in R.
Thus, we have
\begin{equation*} |\mu_{X,Y}(A\cap (E_X\times B_j))-\mu_{X,Y}(B_{j,\epsilon})|+|(\mu_X\times\mu_Y)(A\cap (E_X\times B_j))-(\mu_X\times\mu_Y)(B_{j,\epsilon})|<\epsilon.\end{equation*}
Therefore, it must hold that
\begin{equation*}\mu_{X,Y}(A\cap (E_X\times B_j))-(\mu_X\times\mu_Y)(A\cap (E_X\times B_j)) \leq \mu_{X,Y}(B_{j,\epsilon})-(\mu_X\times\mu_Y)(B_{j,\epsilon})+\epsilon.\end{equation*}
Since $B_{j,\epsilon}$ is a finite union of ‘rectangles’, we can also write it as a finite union of k disjoint ‘rectangles’ such that $B_{j,\epsilon}=\bigcup_{i=1}^k S_i\times T_i$ with $S_i\in\mathcal{E}_X$ and $T_i\in\mathcal{E}_Y$ for all i. It now follows that
\begin{equation*}\mu_{X,Y}(B_{j,\epsilon})-(\mu_X\times\mu_Y)(B_{j,\epsilon})+\epsilon=\epsilon+\sum_{i=1}^k \mu_{X,Y}(S_i\times T_i)-(\mu_X\times\mu_Y)(S_i\times T_i).\end{equation*}
For all i we have $T_i\subset B_j$ such that either $\mu_Y(T_i)=0$ or $\mu_Y(T_i)=b_j$ , since $B_j$ is an atom of size $b_j$ . This allows us to separate the sum:
\begin{align*} \epsilon + \sum_{i=1}^k \mu_{X,Y} (S_i\times T_i)-(\mu_X\times\mu_Y)(S_i\times T_i) & = \epsilon \\
& \quad + \sum_{i:\mu_Y(T_i)=0} (\mu_{X,Y}(S_i\times T_i)-\mu_X(S_i)\cdot \mu_Y(T_i)) \\
& \quad + \sum_{i:\mu_Y(T_i)=b_j} (\mu_{X,Y}(S_i\times T_i)-\mu_X(S_i)\cdot \mu_Y(T_i)) \\
& = \star.\end{align*}
The first sum is equal to zero, since $\mu_{X,Y}(S_i\times T_i)\leq \mu_Y(T_i)=0$ . The second sum is bounded above by $\mu_{X,Y}(S_i\times T_i)\leq \mu_{X,Y}(S_i\times B_j)$ . By defining $S^{\
prime}=\bigcup_{i:\mu_Y(T_i)=b_j}S_i$ , we obtain
\begin{align*}\star& \leq \epsilon+0+\sum_{i:\mu_Y(T_i)=b_j}(\mu_{X,Y}(S_i\times B_j)-b_j \cdot \mu_X(S_i))\\
& =\epsilon+\mu_{X,Y}(S^{\prime}\times B_j)-b_j \cdot \mu_X(S^{\prime})\\
& \leq \epsilon + \min \left \{ (1-b_j) \cdot \mu_X(S^{\prime}),\ b_j\cdot (1-\mu_X(S^{\prime}))\right \}\\
& \leq \epsilon +b_j-b_j^2.\end{align*}
But, since this is true for any $\epsilon > 0$ , we have
\begin{equation*}\mu_{X,Y}(A\cap (E_X\times B_j))-(\mu_X\times\mu_Y)(A\cap (E_X\times B_j))\leq b_j-b_j^2.\end{equation*}
Plugging this back into (C.1) gives
\begin{align*} \text{UD} (X, Y) & \leq 2\sup_{A\in\mathcal{E}_X\bigotimes\mathcal{E}_Y} \{ \mu_{X,Y}(A\cap (E_X\times E^*_Y))-(\mu_X\times\mu_Y)(A\cap (E_X\times E^*_Y)) \} \\
& \quad + 2\sup_{A\in\mathcal{E}_X\bigotimes\mathcal{E}_Y} \Bigg \{ \sum_{j=1}^\infty (\mu_{X,Y}(A\cap (E_X\times B_j))-(\mu_X\times\mu_Y)(A\cap (E_X\times B_j)) ) \Bigg \} \\
& \leq 2\Bigg (1-\sum_{i=1}^\infty b_i \Bigg )+2\cdot \sum_{j=1}^\infty (b_j-b_j^2) \\
& = 2\Bigg (1-\sum_{i=1}^\infty b_i^2 \Bigg ).\end{align*}
Note that in the continuous case the summation is equal to 0, so the upper bound simply becomes 2. In the discrete case, where $E_Y$ is the set in which Y takes its values, the expression becomes $\text{UD} (X, Y)\leq 2 \big (1-\sum_{i\in E_Y}\mathbb{P}(Y=i)^2 \big )$ .
C.4. Functional dependence attains maximum UD
Since we established an upper bound in Appendix C.3, the next step is to check whether this bound is actually attainable. What follows is a proof that this bound is achieved for any random variable X
for which $Y=f(X)$ for some measurable function f.
Let $Y=f(X)$ for some measurable function f; then $\mu_X(f^{-1}(C))=\mu_Y(C)$ for all $C\in\mathcal{E}_Y$ . Let the $\mu_Y$ -atoms $B_j$ and $E_Y^*$ be the same as in Appendix C.3. Since $E_Y^*$
contains no atoms, for every $\epsilon>0$ there exists a partition $T_1,\dots,T_k$ for some $k\in\mathbb{N}$ such that $\mu_Y(T_i)<\epsilon$ for all i. Then, consider the set $K=\big(\bigcup_i (f^
{-1}(T_i)\times T_i)\big)\cup\big(\bigcup_j (f^{-1}(B_j)\times B_j)\big)$ . It now follows that
\begin{align*}\text{UD} (X, Y) & = 2\sup_{A\in\mathcal{E}_X\bigotimes\mathcal{E}_Y}\{\mu_{X,Y}(A)-(\mu_X\times\mu_Y)(A)\}\\
& \geq 2(\mu_{X,Y}(K)-(\mu_X\times\mu_Y)(K)) \\
& = 2\Bigg (\sum_i(\mu_{X,Y}(f^{-1}(T_i)\times T_i)-\mu_X(f^{-1}(T_i))\mu_Y(T_i))\\
& \quad +\sum_j(\mu_{X,Y}(f^{-1}(B_j)\times B_j)-\mu_X(f^{-1}(B_j))\mu_Y(B_j)) \Bigg )\\
& \geq 2 \Bigg (\sum_i(\mu_Y(T_i)-\epsilon\cdot\mu_Y(T_i))+\sum_j(b_j-b_j^2)\Bigg )\\
& = 2\Bigg (\Bigg(1-\sum_j b_j\Bigg)-\epsilon\Bigg(1-\sum_j b_j\Bigg)+\sum_j (b_j-b_j^2) \Bigg ).\end{align*}
But, since this holds for any $\epsilon>0$ , we have $\text{UD} (X, Y)\geq 2\big(1-\sum_j b_j^2\big)$ . As this is also the upper bound from Appendix C.3, equality must hold. Thus, we can conclude
that $\text{UD} (X, Y)$ is maximal for Y if $Y=f(X)$ (so, in particular, if $X=Y$ ). As a result, for any RVs $X_1,X_2,Y$ with $Y=f(X_1)$ for some measurable function f, we have $\text{UD} (X_1, Y)\
geq\text{UD} (X_2, Y)$ . Note that a corollary of this proof is that $\text{UD} (Y, Y)=0$ if and only if there exists a $\mu_Y$ -atom $B_i$ with $\mu_Y(B_i)=1$ .
C.5. Unambiguity
In Section 5, we show for discrete RVs that property II.5 holds. In this section, we prove the general case. Let $Y_1,\dots,Y_N$ and S be independent RVs where S takes values in $1,\dots,N$ with $\mathbb{P}(S=i)=p_i$ . Finally, define $X\,:\!=\,Y_S$ . Then we will show that $\text{Dep} (Y_i \mid X)=p_i$ .
Let $\mathcal{E}$ be the $\sigma$ -algebra on which the independent $Y_i$ are defined. Then we have $\mu_{X,Y_i,S}(A\times\{j\})=\mu_{Y_j,Y_i}(A)\mu_S(\{j\})=p_j\mu_{Y_j,Y_i}(A)$ for all j.
Additionally, we have $\mu_X(A)=\sum_j p_j\mu_{Y_j}(A)$ . Lastly, due to independence for $i\neq j$ , we have $\mu_{Y_j,Y_i}=\mu_{Y_j}\times\mu_{Y_i}$ . Combining all this gives
\begin{align*} \text{UD} (X, Y_i) & = 2\sup_{A\in\mathcal{E}\times\mathcal{E}}\left \{\mu_{X,Y_i}(A)-(\mu_X\times\mu_{Y_i})(A)\right \}\\
& =2\sup_{A\in\mathcal{E}\times\mathcal{E}}\Bigg \{\sum_{j}\mu_{X,Y_i,S}(A\times\{j\})-\sum_j p_j(\mu_{Y_j}\times\mu_{Y_i})(A)\Bigg \}\\
& = 2\sup_{A\in\mathcal{E}\times\mathcal{E}} \Bigg \{ \sum_j p_j (\mu_{Y_j,Y_i}(A)-(\mu_{Y_j}\times\mu_{Y_i})(A))\Bigg\}\\
& = 2\sup_{A\in\mathcal{E}\times\mathcal{E}} \{p_i (\mu_{Y_i,Y_i}(A)-(\mu_{Y_i}\times\mu_{Y_i})(A) ) \}\\
& = p_i\cdot\text{UD} (Y_i, Y_i).\end{align*}
C.6. Measurable functions never increase UD
Next, we prove another useful property of UD: applying a measurable function to one of the variables does not increase the UD. Let ${f\,:\,(E_X,\mathcal{E}_X)\to (E_{X^{\prime}},\mathcal{E}_{X^{\prime}})}$ be a measurable function. Then ${h\,:\, E_X\times E_Y\to E_{X^{\prime}}\times E_{Y}}$ with ${h(x,y)=(f(x),y)}$ is measurable. Now it follows that
\begin{align*}\text{UD} (f(X), Y) & =2\sup_{A\in\mathcal{E}_{X^{\prime}}\bigotimes\mathcal{E}_Y} \{\mu_{(f(X),Y)}(A)-(\mu_{f(X)} \times\mu_{Y})(A) \}\\& =2\sup_{A\in\mathcal{E}_{X^{\prime}}\bigotimes
\mathcal{E}_Y} \{\mu_{(X,Y)}(h^{-1}(A))-(\mu_{X} \times\mu_{Y})(h^{-1}(A)) \},\end{align*}
with ${h^{-1}(A)\in \mathcal{E}_X\bigotimes\mathcal{E}_Y}$ . Thus,
\begin{align*}\text{UD} (f(X), Y) & \leq 2\sup_{A\in\mathcal{E}_X\bigotimes\mathcal{E}_Y}(\mu_{(X,Y)}(A)-(\mu_X \times\mu_Y)(A))\\& =\text{UD} (X, Y).\end{align*}
Appendix C.1 proved that UD is symmetric. Therefore, for ${g\,:\, E_Y\to E_{Y^{\prime}}}$ , $\text{UD} (X, g(Y))\leq \text{UD} (X, Y)$ .
The authors wish to thank the anonymous referees for their useful comments, which led to a significant improvement of the readability and quality of the paper.
Funding information
There are no funding bodies to thank relating to the creation of this article.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article. | {"url":"https://core-cms.prod.aop.cambridge.org/core/journals/journal-of-applied-probability/article/berkelmanspries-dependency-function-a-generic-measure-of-dependence-between-random-variables/35623DB91C22CA64F3878EB7F134EC27","timestamp":"2024-11-07T10:05:46Z","content_type":"text/html","content_length":"1049976","record_id":"<urn:uuid:65940f59-3e40-482f-a519-c2f92094d2d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00247.warc.gz"} |
Python Plotting With Matplotlib (Overview)
A picture is worth a thousand words, and with Python’s matplotlib library, it fortunately takes far less than a thousand words of code to create a production-quality graphic.
However, matplotlib is also a massive library, and getting a plot to look just right is often achieved through trial and error. Using one-liners to generate basic plots in matplotlib is relatively
simple, but skillfully commanding the remaining 98% of the library can be daunting.
In this beginner-friendly course, you’ll learn about plotting in Python with matplotlib by looking at the theory and following along with practical examples. While learning by example can be
tremendously insightful, it helps to have even just a surface-level understanding of the library’s inner workings and layout as well.
By the end of this course, you’ll:
• Know the differences between PyLab and Pyplot
• Grasp the key concepts in the design of matplotlib
• Understand plt.subplots()
• Visualize arrays with matplotlib
• Plot by combining pandas and matplotlib
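Below is a minimal, illustrative sketch (not taken from the course materials; the toy data and styling choices are only assumptions) of the ideas listed above: creating a Figure and Axes with plt.subplots() and visualizing arrays generated with numpy.random.
import numpy as np
import matplotlib.pyplot as plt
rng = np.random.default_rng(seed=0)
data = rng.normal(loc=0.0, scale=1.0, size=1000)   # "toy" sample from a normal distribution
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(8, 3))
ax1.hist(data, bins=30)            # distribution of the sample
ax1.set_title("Histogram")
ax2.plot(np.cumsum(data))          # a random-walk style line plot
ax2.set_title("Cumulative sum")
fig.tight_layout()
plt.show()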
This course assumes you know a tiny bit of NumPy. You’ll mainly use the numpy.random module to generate “toy” data, drawing samples from different statistical distributions. If you don’t already have
matplotlib installed, see the documentation for a walkthrough before proceeding. | {"url":"https://realpython.com/lessons/python-plotting-matplotlib-overview/","timestamp":"2024-11-05T13:12:59Z","content_type":"text/html","content_length":"48544","record_id":"<urn:uuid:c5fac08e-457a-4e48-83b6-c91bf27cc62b>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00736.warc.gz"} |
How can pairwise correlations be calculated in R for two different matrices with partially matched features?
I would like to request a more advanced case scenario for analyzing high-throughput biological data to estimate pairwise correlation coefficients. The input data consist of two numeric matrices with
identical columns. The first matrix, named mat1, contains phosphoproteomic IDs as feature names, structured as "X1_Protein_1", "X2_Protein_2", "X56_Protein_3", etc. The part before the first "_" separator signifies a distinct phosphositeID, while the part after it represents the proteinID, potentially leading to multiple features for the same proteinID.
Conversely, the second matrix, mat2, features row names as proteinIDs in the format “Protein_1”, “Protein_2”, “Protein_46”, etc. The rownames in mat2 partially match the second part of the feature
names in mat1.
I attempted to illustrate a correlation analysis using a sample example provided below. | {"url":"https://solutionfall.com/question/how-can-pairwise-correlations-be-calculated-in-r-for-two-different-matrices-with-partially-matched-features/","timestamp":"2024-11-08T14:33:06Z","content_type":"text/html","content_length":"100245","record_id":"<urn:uuid:c0bae215-7592-4fdd-b954-01ab8cd0c715>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00343.warc.gz"} |
A circle's center is at #(8 ,1 )# and it passes through #(2 ,5 )#. What is the length of an arc covering #( pi ) /6 # radians on the circle?
Answer 1
Arc length: s = r · θ ≈ 1.2π ≈ 3.77 units
Given: centre (8, 1), point on circumference (2, 5), θ = π/6
Answer 2
To find the length of an arc covering π/6 radians on the circle, you first need to find the radius of the circle using the given points. Then, you can use the formula for the length of an arc:
Length of arc = radius * angle
Given points: Center of the circle: (8, 1) Point on the circle: (2, 5)
Using the distance formula, the distance between these two points is the radius of the circle, since it runs from the center to a point on the circle.
Radius = √((8 - 2)^2 + (1 - 5)^2) = √((6)^2 + (-4)^2) = √(36 + 16) = √52 = 2√13
Now, you can find the length of the arc covering π/6 radians:
Length of arc = 2√13 * (π/6) = (π/3) * √13 ≈ 3.78
So, the length of the arc covering π/6 radians on the circle is (π/3)√13 ≈ 3.78, consistent with Answer 1.
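As a quick numerical check (a small Python sketch, not part of either tutor answer; the variable names are just for illustration):
import math
center = (8, 1)
point = (2, 5)
radius = math.dist(center, point)   # sqrt(6**2 + 4**2) = sqrt(52) ≈ 7.211
theta = math.pi / 6                 # arc angle in radians
arc_length = radius * theta         # s = r * theta
print(radius, arc_length)           # ≈ 7.211, ≈ 3.776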
IET Electronics Letters Template
IET Electronics Letters LaTeX template/sample downloaded from Author guide - Electronics Letters.
\documentclass[twocolumn]{el-author} %\usepackage[...]{...} This has been commented out as we are not using any additional packages here. On the whole, they should be unnecessary. \newcommand{\hH}{\
hat{H}} \newcommand{\D}{^\dagger} \newcommand{\ua}{\uparrow} \newcommand{\nc}{\newcommand} \nc{\da}{\downarrow} \nc{\hc}{\hat{c}} \nc{\hS}{\hat{S}} \nc{\bra}{\langle} \nc{\ket}{\rangle} \nc{\eq}
{equation (\ref} \nc{\h}{\hat} \nc{\hT}{\h{T}}\nc{\be}{\begin{eqnarray}} \nc{\ee}{\end{eqnarray}}\nc{\rd}{\textrm{d}}\nc{\e}{eqnarray}\nc{\hR}{\hat{R}}\nc{\Tr}{\mathrm{Tr}} \nc{\tS}{\tilde{S}}\nc{\
tr}{\mathrm{tr}}\nc{\8}{\infty}\nc{\lgs}{\bra\ua,\phi|}\nc{\rgs}{|\ua,\phi\ket} \nc{\hU}{\hat{U}}\nc{\lfs}{\bra\phi|}\nc{\rfs}{|\phi\ket}\nc{\hZ}{\hat{Z}}\nc{\hd}{\hat{d}}\nc{\mD}{\mathcal{D}} \nc{\
bd}{\bar{d}}\nc{\bc}{\bar{c}}\nc{\mc}{\mathcal}\nc{\ea}{eqnarray}\nc{\mG}{\mathcal{G}}\nc{\bce}{\begin{center}} \nc{\ece}{\end{center}} \date{12th December 2011} \begin{document} \title{Instructions
and example template for \LaTeX{} submissions to \emph{Electronics Letters}} \author{J. Smith and A. N. Other} \abstract{This document describes how to use the el-author.cls file and how to format
your \LaTeX submissions correctly for \emph{Electronics Letters}. It also serves as a template, so that you can simply copy the text from this example .tex file and replace it with your own. We have
tried to cover the basic tools and commands you might need, but there may be some more unusual fields, etc, not described. Do not hesitate to contact us if you encounter any problems. The structure
is as follows: we introduce the basic notations and preamble, and then provide some example text, followed by the references. For simplicity we have left the source code out of this document and
refer the reader to the sample.tex file itself, from which to copy and paste.} \maketitle \section{Introduction} \verb"el-author.cls" is used in a similar fashion to the standard \verb"article.cls"
file. However, the \verb"el-author.cls" file must be copied into the same directory as the .tex file you wish to compile for submission. Most of the preamble needed for including packages for
mathematics or for displaying images is included within the \verb".cls" file itself, whereas more exotic packages will have to be included manually. If you prefer to review your document in single
column format or double spaced you can include this in the options of the document class with the command - inside the square brackets - \verb"[doublespace, onecolumn]". Tables are straightforward to
include (check the \verb".tex" file for details), and will format automatically: \begin{table}[h] \processtable{Coefficients and remainders for distribution KK ($k = 0.05$, $v = 3$, $c_{1} = 1.5$,
$c_{2} = 4.5$)} {\begin{tabular}{|l|l|l|}\hline $n$ & $a_{n}^{2}$ & $r_{k}(1)$\\\hline 0 & 3.602576748428 & 1.493719547999\\\hline 1 & 1.384791111989 & 0.108928436101\\\hline 2 & 0.108600438794 &
0.000327997399\\\hline 3 & 0.000275794597 & 0.000052202814\\\hline 4 & 0.000027616892 & 0.000024585922\\\hline 5 & 0.000018178621 & 0.000006407300\\\hline \end{tabular}}{} \end{table} Note that we
used \verb"[h]" after the \verb"\begin{table}" command to force the table to be included exactly at that location. The same can be done for all tables and figures: \begin{figure}[h] \centering{\
includegraphics[width=60mm]{imagefile1a}} \caption{The Keldysh contour before extension of the real axis to infinity \source{}} \end{figure} In the next section we provide a short example manuscript,
which includes images and their captions. In \verb"sample.tex" we have added some comments explaining how to use \verb"\source{...}" to include subcaptions, and how to format equations over more than
one line. For more information on submitting and \emph{Electronics Letters} house style, see the author guide at http://www.theiet.org/resources/journals/eletters/authors.cfm. \section{Kondo effect
in new places} With poor man's scaling \cite{1} and the success of the Bethe ansatz, the equilibrium Kondo effect has become something of a solved problem (Anderson's withering remarks concerning the
Bethe ansatz notwithstanding). However, there are two situations where it is \emph{not} properly understood. The Kondo lattice is of particular interest in the heavy fermion compounds, and is far
beyond the scope of the current work, and we refer the reader to \cite{2} and references therein. Similarly, the case where the Kondo impurity is not in a metal but a superconductor, is not dealt
with in the present work. Of interest are non-equilibrium effects, and this is typically realized in quantum dots. In the typical quantum dot set-up, as described in the introduction, the dot weakly
connects together two electron seas, the leads. It is understood that the phenomenon of the Coulomb blockade limits conductance through the dot unless the charge induced on the dot by the gate is Q=\
left(N+\frac{1}{2}\right)e Consequently, we find sharp peaks in the conductance of the dot at these degeneracy points. However, for $T<T_{K}$ new behaviour is observed, as in Fig. \ref
{kondodotresistance}. The original conduction peaks of figure of the classic Coulomb blockade exist when the occupancy is effectively half integer. Hence, we expect at integer occupancy \emph
{suppression} of the conductance. This is indeed observed if $N$ is even. However, for $T<T_{K}$ and $N$ odd, we see that the conductance is not \emph{fully} suppressed. The difference is clear: for
even occupancy, the spin of the dot will be zero, as there will be as many up as down electrons. However, for odd filling, the $N+1$th electron will contribute a spin-half, causing the dot to behave
as a Kondo-like impurity. We will discuss what consequences the Kondo-nature of the dot has, but first we will explain exactly \emph{how} it acquires this nature. \section{The Schrieffer-Wolff
transformation} To set up the non-equilibrium Kondo problem - formally - we introduce the two-channel Anderson Hamiltonian H_{2C}=&\sum_{\alpha k\sigma}\epsilon_{\alpha k}\hat{c}^{\dagger}_{\alpha k\
sigma}\hat{c}_{\alpha k\sigma}+U\hat{d}^{\dagger}_{\uparrow}\hat{d}_{\uparrow}\hat{d}^{\dagger}_{\downarrow}\hat{d}_{\downarrow}\nonumber\\ &+\sum_{\sigma}\epsilon_{d}\hat{d}^{\dagger}_{\sigma}\hat
{d}_{\sigma}+\sum_{\alpha k\sigma}[t_{\alpha}\hat{c}^{\dagger}_{\alpha k\sigma}\hat{d}_{\sigma}+h.c.] The subscript $\alpha$ is the channel label, for the dot case left and right. The physical idea
is that the dot is already at half-integer occupancy. The Hubbard $U$ is recognized as the charging energy (the energy required to add another electron) which we assume to be much larger than the
mean level spacing in the dot, so that we may consider only one level, $\epsilon_{d}$. The hybridization, $t_{\alpha}$ is the tunneling energy through the potential barriers connecting the dot to the
leads, and is assumed to be point like. It is clear that the dot behaves exactly as the original Anderson impurity model, with the addition of lead indices, and this Hamiltonian has been studied \
emph{perturbatively}. However, the Schrieffer-Wolff transformation can be performed exactly as before: H_{2K}=\sum_{\alpha k\sigma}\epsilon_{\alpha k}\hat{c}^{\dagger}_{\alpha k\sigma}\hat{c}_{\alpha
k\sigma}+\sum_{\alpha\beta\sigma\tau}\underbrace{\frac{t^{*}_{\alpha}t_{\beta}}{U}}_{J_{\alpha\beta}}\hat{c}^{\dagger}_{\alpha\sigma}(r=0)\sigma^{a}_{\sigma\tau}\hat{c}_{\beta\tau}(r=0)S^{a} In the
following we will assume that the coupling to the left and right leads is identical, $J_{\alpha\beta}$, we may perform the sum over leads, giving \label{2lead} H_{2K}=\sum_{\alpha k\sigma}\epsilon_{\
alpha k}\hat{c}^{\dagger}_{\alpha k\sigma}\hat{c}_{\alpha k\sigma}+J\{[\hat{c}_{L\sigma}^{\dagger}(r=0)+\hat{c}_{R\sigma}^{\dagger}(r=0)]\nonumber\\ \times\sigma^{a}_{\sigma\tau}[\hat{c}_{L\tau}(r=0)
+\hat{c}_{R\tau}(r=0)]\}S^{a} \begin{figure} \centering{\includegraphics[width=60mm]{imagefile2a}} \caption{Quantum dot resistance for $T\ll T_{K}$ and $T\gg T_{K}$ \source{For high temperatures
(dashed line) the Coulomb blockade remains} \source{For lower temperatures (solid line) the Coulomb blockade is overcome}}\label{kondodotresistance} \end{figure} \section{Some Simple Results for Two
Leads} If we assume that the kinetic term takes the same form for the left lead as the right lead (in equilibrium), a simple (Bogoliubov) rotation of basis will transform $H_{2K}$ into the standard
one channel Kondo model: \label{kondolies} \hH_{2K}^{1}=\sum_{k}\varepsilon_{k}\hat{c}_{\alpha k\sigma}\D\hat{c}_{\alpha k\sigma}+ J\sum_{\alpha}\hat{s}(0)\cdot\hat{S} such that the new Hamiltonian
is diagonal in the lead index, and so $\alpha$ behaves as an additional degeneracy. This procedure is justified if we wish to \emph{perturbatively} analyze the conductance of the dot. For bias
voltages, $V$ much lower than the Kondo temperature, this seems reasonable, as the only true energy scale for the Kondo model is $T_{K}$. Given that $\hH_{2K}^{1}$ is diagonal in lead index, the same
techniques as are used for the equilibrium case apply. Indeed, performing the poor man's scaling procedure and using Fermi's golden rule, it is straightforward to recover the result, for $T\gg T_{K}
$: &G_{1}\sim G_{0}\nu_{0}J\nonumber\\ &G_{0}\sim{\ln^{2}(T/T_{k})} A full and more careful treatment, recovers the numerical factors: G_{1}&=\frac{2e^{2}}{h}\frac{4\Gamma_{L}\Gamma_{R}}{(\Gamma_{L}+
\Gamma_{R})^{2}}\frac{3\pi^{2}/16}{\ln^{2}(T/T_{k})}\nonumber\\ &\equiv G_{0}\frac{3\pi^{2}/16}{\ln^{2}(T/T_{k})} We emphasize that this is valid \emph{only} for $T\gg T_{K}\gg V$. At temperatures
below $T_{K}$, the coupling diverges, so that the dominant term in equation (\ref{kondolies}) is \hH_{coup}=J\sum_{\alpha}\hat{s}(0)\cdot\hat{S} As was discussed for the one channel problem, the
ground state is a singlet, with zero spin, and we expect the scattering in the dot to be suppressed, and so to leading order, the the conductance reduces to $G_{2}=G_{0}$. Perturbative corrections
have been found, \cite{3}, which yield G_{2}=G_{0}\left[1-\left(\frac{\pi T}{T_{K}}\right)^{2}\right] Thus, we can define two regions for the conductance, both for $V\ll T_{K}$: \begin{array}{cc} G_
{1}=G_{0}\frac{3\pi^{2}/16}{\ln^{2}(T/T_{k})}, & \;\;\;\;T\gg T_{K} \\ G_{2}=G_{0}, & \;\;\;\;T\ll T_{K} \end{array} So, we see that as we lower the temperature below $T_{K}$, for an odd-integer
Coulomb blockade valley, the conductance is no longer exponentially suppressed. As we have stressed, these results are valid \emph{only} for $V\ll T_{K}$. The next step is to introduce an arbitrary
voltage via the kinetic term in equation (\ref{kondolies}): \hH_{2K}^{1}=\sum_{k\sigma}(\varepsilon_{k}-eV)\hat{c}_{L k\sigma}\D\hat{c}_{L k\sigma}+ \sum_{k\sigma}\varepsilon_{k}\hat{c}_{R k\sigma}\D
\hat{c}_{R k\sigma}+ J\sum_{\alpha}\hat{s}(0)\cdot\hat{S} If we assume that $V\gg T$, we can again divide into two regions: \tilde{G}=\left\{\begin{array}{cc} \tilde{G}_{1}, & \;\;\;\;\;\;\;V\gg T_
{K} \\ \tilde{G}_{2}, & \;\;\;\;\;\;\;V\ll T_{K} \end{array}\right. Here, the previous work is based on the idea that $eV$ now plays the same role as temperature. That is, in the R.G. flow, we cut at
$eV$, and the perturbation analysis of \cite{3} is now for low voltage, so we find &\tilde{G}_{1}=G_{0}\frac{3\pi^{2}/16}{\ln^{2}(eV/T_{k})}\nonumber\\ &\tilde{G}_{2}=G_{0}\left[1-\left(\frac{\pi eV}
{T_{K}}\right)^{2}\right] However, we argue that the approximations used are not entirely reasonable. From the work of N. d'Ambrumenil and B. Muzykantskii, on the non-equilibrium x-ray problem (to
which the Kondo problem can be related), it is \emph{not} sufficient to decouple the leads, rotate basis and then simply reintroduce the voltage. It is clear (in the very least as a precaution), that
a full treatment of the \emph{true} two lead Kondo Hamiltonian of equation (\ref{2lead}) is required. With this in mind, in the next chapter, we will be following the calculation of Anderson, Yuval
and Hamann, in which they map the Kondo Hamiltonian onto a two dimensional Coulomb gas. %\vfill\pagebreak \section{Conclusion} We have derived some results for the two lead Kondo problem in various
limits. We have shown the suppression of the Coulomb blockade, and observed that this suppression can be viewed as a kind of delocalisation caused by the Kondo singlet across the dot. The above
treatment required us to neglect the bias potential and then to rotate our two lead problem to a diagonal basis. However, it is not clear that this is a controlled or reasonable approach. In fact,
the presence of voltage in the non-diagonal Green's function of related x-ray problems implies that the voltage cannot be treated perturbatively, and that a generalisation of non-eequilibrium
Riemann-Hilbert techniques may be necessary. \vskip3pt \ack{This work has been supported by The IET} \vskip5pt \noindent J. Smith and A. N. Other (\textit{The IET, Stevenage, UK}) \vskip3pt \noindent
E-mail: jsmith@theiet.org \begin{thebibliography}{} \bibitem{1} Anderson, P.: `A poor man's derivation of scaling laws for the Kondo problem', \textit{J. Phys. C.}, 1960, \textbf{3}, p. 2436 \bibitem
{2} Coleman, P.: `1/N expansion for the Kondo lattice', \textit{Phys. Rev. B}, 1983, \textbf{28}, pp. 5255-5262 \bibitem{3} Ludwig, I. and Ludwig A. W. W.: `Kondo effect induced by a magnetic field',
\textit{Phys. Rev. B}, 2001, \textbf{64}, p. 045328 \end{thebibliography} \end{document} %\begin{table}[b] %\processtable{Coefficients and remainders for distribution KK ($k = 0.05$, %$v = 3$, $c_{1}
= 1.5$, $c_{2} = 4.5$)} %{\begin{tabular}{|l|l|l|}\hline %$n$ & $a_{n}^{2}$ & $r_{k}(1)$\\\hline %0 & 3.602576748428 & 1.493719547999\\\hline %1 & 1.384791111989 & 0.108928436101\\\hline %2 &
0.108600438794 & 0.000327997399\\\hline %3 & 0.000275794597 & 0.000052202814\\\hline %4 & 0.000027616892 & 0.000024585922\\\hline %5 & 0.000018178621 & 0.000006407300\\\hline %\end{tabular}}{} %\end
{table} % %So, the basic preamble and main body will be: %\verb"\documentclass[twocolumn]{el-author}"\\ %\verb"\usepackage[...]{packages}"\\ %\verb"\date{12 December 2012}"\\ %\verb"\title{...}"\\ %\
verb"\author{...}"\\ %\verb"\abstract{...}"\\ %\verb"\maketitle{...}"\\ %\verb"\begin{document}"\\ %\verb"..."\\ %\verb"\section{...}"\\ %\verb"..."\\ %\verb"\section{..}"\\ %\verb"..."\\ %\verb"\end | {"url":"https://tr.overleaf.com/latex/templates/iet-electronics-letters-template/dhznhpbsnwxd","timestamp":"2024-11-12T09:52:47Z","content_type":"text/html","content_length":"51684","record_id":"<urn:uuid:ab06319f-bc51-4971-b437-8abb329bdc22>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00679.warc.gz"} |
PLVECT(3plplot) PLplot API PLVECT(3plplot)
plvect - Vector plot
plvect(u, v, nx, ny, scale, pltr, pltr_data)
Draws a plot of vector data contained in the matrices (u[nx][ny], v[nx][ny]). The scaling factor for the vectors is given by scale. A transformation routine pointed to by pltr, with a pointer pltr_data for any additional data it requires, is used to map indices within the matrices to world coordinates. The style of the vector arrow may be set using plsvect(3plplot).
Redacted form: plvect(u, v, scale, pltr, pltr_data) where (see above discussion) the pltr, pltr_data callback arguments are sometimes replaced by a tr vector with 6 elements, or xg and yg array
arguments with either one or two dimensions.
This function is used in example 22.
u, v: A pair of matrices containing the x and y components of the vector data to be plotted.
nx, ny: Dimensions of the matrices u and v.
scale: Parameter to control the scaling factor of the vectors for plotting. If scale = 0 then the scaling factor is automatically calculated for the data. If scale < 0 then the scaling factor is
automatically calculated for the data and then multiplied by -scale. If scale > 0 then the scaling factor is set to scale.
pltr: A callback function that defines the transformation between the zero-based indices of the matrices u and v and world coordinates. For the C case, transformation functions are provided in the
PLplot library: pltr0(3plplot) for the identity mapping, and pltr1(3plplot) and pltr2(3plplot) for arbitrary mappings respectively defined by vectors and matrices. In addition, C callback
routines for the transformation can be supplied by the user such as the mypltr function in examples/c/x09c.c which provides a general linear transformation between index coordinates and world
coordinates.For languages other than C you should consult the PLplot documentation for the details concerning how PLTRANSFORM_callback(3plplot) arguments are interfaced. However, in general, a
particular pattern of callback-associated arguments such as a tr vector with 6 elements; xg and yg vectors; or xg and yg matrices are respectively interfaced to a linear-transformation routine
similar to the above mypltr function; pltr1(3plplot); and pltr2(3plplot). Furthermore, some of our more sophisticated bindings (see, e.g., the PLplot documentation) support native language
callbacks for handling index to world-coordinate transformations. Examples of these various approaches are given in examples/<language>x09*, examples/<language>x16*, examples/<language>x20*,
examples/<language>x21*, and examples/<language>x22*, for all our supported languages.
pltr_data: Extra parameter to help pass information to pltr0(3plplot), pltr1(3plplot), pltr2(3plplot), or whatever callback routine that is externally supplied.
Chaos theory is a branch of mathematics focusing on the study of chaos: states of dynamical systems whose apparently random states of disorder and irregularities are often governed by deterministic
laws that are highly sensitive to initial conditions. When employing mathematical theorems, one should remain careful about whether their hypotheses are valid within the frame of the questions
considered. Among such hypotheses in the domain of dynamics, a central one is the continuity of time and space (that an infinity of points exists between two points). This hypothesis, for example,
may be invalid in the cognitive neurosciences of perception, where a finite time threshold often needs to be considered.
Birth of the chaos theory
Poincaré and phase space
With the work of Laplace, the past and the future of the solar system could be calculated and the precision of this calculation depended on the capacity to know the initial conditions of the system,
a real challenge for “geometricians,” as alluded to by d’Holbach and Le Verrier. Henri Poincaré developed another point of view, as follows: in order to study the evolution of a physical system over
time, one has to construct a model based on a choice of laws of physics and to list the necessary and sufficient parameters that characterize the system (differential equations are often in the
model). One can define the state of the system at a given moment, and the set of these system states is named phase space.
The phenomenon of sensitivity to initial conditions was discovered by Poincaré in his study of the n-body problem, then by Jacques Hadamard using a mathematical model named geodesic flow, on a surface with a nonpositive curvature, called Hadamard's billiards. A century after Laplace, Poincaré indicated that randomness and determinism become somewhat compatible because of the long-term unpredictability:
A very small cause, which eludes us, determines a considerable effect that we cannot fail to see, and so we say that this effect is due to chance. If we knew exactly the laws of nature and the state of the universe at the initial moment, we could accurately predict the state of the same universe at a subsequent moment. But even if the natural laws no longer held any secrets for us, we could still only know the state approximately. If this enables us to predict the succeeding state to the same approximation, that is all we require, and we say that the phenomenon has been predicted, that it is governed by laws. But this is not always so, and small differences in the initial conditions may generate very large differences in the final phenomena. A small error in the former will lead to an enormous error in the latter. Prediction then becomes impossible, and we have a random phenomenon.
This was the birth of chaos theory.
Lorenz and the butterfly effect
Rebirth of chaos theory
Edward Lorenz, from the Massachusetts Institute of Technology (MIT), is the official discoverer of chaos theory. He first observed the phenomenon as early as 1961 and, as a matter of irony, he discovered by chance what would later be called chaos theory, in 1963, while making calculations with uncontrolled approximations aiming at predicting the weather. The anecdote is of interest:
making the same calculation rounding with 3-digit rather than 6-digit numbers did not provide the same solutions; indeed, in nonlinear systems, multiplications during iterative processes amplify
differences in an exponential manner. By the way, this occurs when using computers, due to the limitation of these machines which truncate numbers, and therefore the accuracy of calculations.
Lorenz considered, as did many mathematicians of his time, that a small variation at the start of a calculation would induce a small difference in the result, of the order of magnitude of the initial variation. This was obviously not the case, and all scientists are now familiar with this fact. In order to explain how important sensitivity to initial conditions was, Philip Merilees, the
meteorologist who organized the 1972 conference session where Lorenz presented his result, chose himself the title of Lorenz’s talk, a title that became famous: “Predictability: does the flap of a
butterfly’s wing in Brazil set off a tornado in Texas?” This title has been cited and modified in many articles, as humorously reviewed by Nicolas Witkowski. Lorenz had rediscovered the chaotic
behavior of a nonlinear system, that of the weather, but the term chaos theory was only later given to the phenomenon by the mathematician James A. Yorke, in 1975. Lorenz also gave a graphic
description of his findings using his computer. The figure that appeared was his second discovery: the attractors.
The golden age of chaos theory
Feigenbaum and the logistic map
Mitchell Jay Feigenbaum proposed the scenario called period doubling to describe the transition between regular dynamics and chaos. His proposal was based on the logistic map introduced by the biologist Robert M. May in 1976. While so far there have been no equations in this text, I will make an exception to the rule of explaining physics without writing equations, and give here a rather simple example. The logistic map is a function of the segment [0,1] within itself defined by:
x_{n+1} = r x_n (1 − x_n),
where n = 0, 1, … describes the discrete time, x_n is the single dynamical variable, and 0≤r≤4 is a parameter. The dynamic of this function presents very different behaviors depending on the value of the
parameter r:
For 0≤r≤3, the system has a fixed point attractor that becomes unstable when r=3.
For 3<r<3.57…, the function has a periodic orbit as attractor, with a period of 2^n where n is an integer that tends towards infinity as r tends towards 3.57…
When r=3.57…, the function has a Feigenbaum fractal attractor.
Beyond the value r=4, the function goes out of the interval [0,1]. (A short numerical illustration of these regimes follows below.)
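A small, illustrative simulation (not from the original article; the helper name and the chosen r values are only for illustration) makes these regimes visible:
def logistic_orbit(r, x0=0.2, n_transient=500, n_keep=8):
    # Iterate x_{n+1} = r * x_n * (1 - x_n), discard the transient,
    # and return a few subsequent values of the orbit.
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit
for r in (2.8, 3.2, 3.5, 3.9):
    print(r, logistic_orbit(r))
# r = 2.8 settles on a fixed point, r = 3.2 on a 2-cycle, r = 3.5 on a 4-cycle,
# and r = 3.9 shows the irregular, chaotic regime.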
COURTESY- Christian Oestreicher, Department of Public Education, State of Geneva, Switzerland;^* E-mail:hc.eg.ude@rehciertseo.naitsirhc
Multivariate bias corrections of climate simulations: which benefits for which losses?
© Author(s) 2020. This work is distributed under the Creative Commons Attribution 4.0 License.
Multivariate bias corrections of climate simulations: which benefits for which losses?
Climate models are the major tools to study the climate system and its evolutions in the future. However, climate simulations often present statistical biases and have to be corrected against
observations before being used in impact assessments. Several bias correction (BC) methods have therefore been developed in the literature over the last 2 decades, in order to adjust simulations
according to historical records and obtain climate projections with appropriate statistical attributes. Most of the existing and popular BC methods are univariate, i.e., correcting one physical
variable and one location at a time and, thus, can fail to reconstruct inter-variable, spatial or temporal dependencies of the observations. These remaining biases in the correction can then affect
the subsequent analyses. This has led to further research on multivariate aspects for statistical postprocessing BC methods. Recently, some multivariate bias correction (MBC) methods have been
proposed, with different approaches to restore multidimensional dependencies. However, these methods are not yet fully apprehended by researchers and practitioners due to differences in their
applicability and assumptions, therefore leading potentially to different results. This study is intended to intercompare four existing MBCs to provide end users with aid in choosing such methods for
their applications. For evaluation and illustration purposes, these methods are applied to correct simulation outputs from one climate model through a cross-validation method, which allows for the
assessment of inter-variable, spatial and temporal criteria. Then, a second cross-validation method is performed for assessing the ability of the MBC methods to account for the multidimensional
evolutions of the climate model. Additionally, two reference datasets are used to assess the influence of their spatial resolution on (M)BC results. Most of the methods reasonably correct
inter-variable and intersite correlations. However, none of them adjust correctly the temporal structure as they generate bias-corrected data with usually weak temporal dependencies compared to
observations. Major differences are found concerning the applicability and stability of the methods in high-dimensional contexts and in their capability to reproduce the multidimensional changes in
the model. Based on these conclusions, perspectives for MBC developments are suggested, such as methods to adjust not only multivariate correlations but also temporal structures and allowing
multidimensional evolutions of the model to be accounted for in the correction.
Received: 02 Mar 2020 – Discussion started: 06 Mar 2020 – Revised: 08 May 2020 – Accepted: 11 May 2020 – Published: 15 Jun 2020
Representing precisely the climate system and the interactions between its components is a major challenge not only for climate modellers but also for scientists working on impact, mitigation and
adaptation issues relating to climate change. Indeed, it is now common that climate change impact studies, e.g., in hydrology, environmental science or economics, use global and regional climate
model (GCM and RCM) simulations as inputs into impact models. However, in spite of continued scientific progress in climate modeling, climate simulations often remain biased compared to observations
(Christensen et al., 2008). This means that their statistical attributes such as mean, variance, extreme or even dependence structures between several variables and/or sites can differ from those
calculated based on historical records. Therefore, using plain simulations can significantly affect the results of impact studies.
To solve this issue, many statistical bias correction (BC) methods have been developed, in order to correct the statistical discrepancies of the simulations before climate change assessment studies.
Most of the BC methods in use are designed to adjust univariate distribution features of climate variables, such as the mean (e.g., Delta method, Xu, 1999), the variance (e.g., simple scaling
adjustment, Berg et al., 2012) or quantiles (e.g., “quantile-mapping”, Haddad and Rosenfeld, 1997). The last technique received notable success, since it permits the adjustment of the mean, the
variance and any quantile of the climate variables. Its theoretical framework has been conducive to the development of multiple versions of quantile-based methods (e.g., Panofsky and Brier, 1958;
Déqué, 2007; Gudmundsson et al., 2012; Vrac et al., 2012). However, all these univariate BC methods are designed to correct variables independently, i.e., are applied separately for each physical
variable at each specific location (e.g., grid cell). Although univariate distribution features are adjusted according to references, it can generate inappropriate multivariate situations where the
dependence structure between variables and sites is not corrected from the model and misrepresented (Maraun, 2013) or even modified. Ignoring the observed inter-variable and intersite dependencies in
the correction procedure can result in obtaining corrected outputs with inappropriate physical laws and, thereby, distorting the results of impact studies (Zscheischler et al., 2019). It is therefore
of paramount importance to adjust the dependence structures of climate simulations, in addition to 1-dimensional characteristics, before using them in subsequent studies.
These methodological issues have led up to the recent development of a few multivariate bias correction (MBC) methods. Not only do these methods adjust univariate distribution features, they also are
aimed at correcting the dependence structure of climate simulations. Recent studies have shown that univariate BC methods can already provide adequate results for certain specific regional impact
studies (Yang et al., 2015; Casanueva et al., 2018) and that using MBC methods does not necessarily present substantial benefits (Räty et al., 2018). However, this does not call into question the
interest of MBC methods as these specific results cannot be generalized to each method and application. In particular, MBC methods could be valuable in larger-scale impact modeling frameworks such as
compound events, where the combination of physical processes across multiple spatial and temporal scales leads to significant impacts (Zscheischler et al., 2018). As mentioned by Vrac (2018) and
completed by Robin et al. (2019), MBC methods may be grouped into three main categories of approaches: the “marginal/dependence” correction approach, the “successive conditional” correction approach
and the “all-in-one” correction approach. The marginal/dependence category is made up of multivariate bias adjustment methods correcting separately the marginal distributions and the dependence
relationships of climate simulations (e.g., Bárdossy and Pegram, 2012; Mehrotra and Sharma, 2016; Vrac, 2018; Nahar et al., 2018; Cannon, 2018a). In the all-in-one category, multivariate BC methods
correct the 1-dimensional marginal properties and dependence structures altogether at the same time (e.g., Robin et al., 2019). Lastly, successive conditional MBC methods perform successive
corrections, conditionally on the variables already corrected (e.g., Bárdossy and Pegram, 2012; Dekens et al., 2017). In particular, this last category has two major limitations. First, the quality
of the correction can change depending on the ordering of the variables to correct (see, e.g., Piani and Haerter, 2012). Second, the number of variables already corrected increases at each iteration
step, which progressively reduces the number of data available for the correction, making it less and less robust. Accordingly, these methodological limits call into question the applicability of
successive conditional BC methods for multivariate bias adjustment of high-dimensional climate simulations.
Additionally to the methodological distinction described above, the few existing multivariate BC methods are based on the use of different statistical techniques. They may also present differences in
terms of assumptions and philosophical features, e.g., deterministic versus stochastic. Consequently, the quality of the correction outputs can vary largely from one method to another, depending on
their characteristics. It is hence crucial, in particular for end users, to carefully evaluate the suitability of these multivariate BC methods and identify their advantages and limits, not only
between the different categories of methodological approaches but also between the different statistical techniques and assumptions. In this study, we present an analysis of four multivariate BC
methods and assess their performances in terms of adjustment of dependence structures for temperature and precipitation time series. We focus in particular our intercomparison on methods belonging to
the marginal/dependence and the all-in-one categories. Due to the previously mentioned limitations of the successive conditional approach, no methods belonging to this category are investigated. The
selected four MBC methods present differences in terms of conceptual features, statistical techniques used and assumptions. In particular, MBCs with different assumptions about nonstationarity are
selected, i.e., differing in how they consider the simulated multidimensional changes between present (i.e., calibration) and future (i.e., projection) periods in the correction procedure. Moreover, in
order to assess the potential benefits of using multivariate BC methods relative to univariate ones, one univariate quantile-mapping-based BC method is included in the study as a benchmark. It
provides a more extensive intercomparison framework in which quality of BC outputs can be assessed and compared by evaluating univariate, inter-variable, spatial and temporal properties, as well as
multidimensional changes.
In addition, each BC method is applied to correct climate model outputs over France and three subregions according to two distinct reference datasets with different spatial resolutions. This permits
one to assess the potential influence of the reference spatial resolution on bias correction results and to delineate guidance on relevant good practices for end users concerning this aspect.
This paper is organized as follows: Sect. 2 describes the model and reference data used, and Sect. 3 presents the BC methods intercompared. Then, Sect. 4 presents the experimental setup used in this
study, while Sect. 5 displays the results of the intercomparison. Finally, our findings are summarized, discussions are given and perspectives for future research are proposed in Sect. 6.
2 Model and reference data
Institut Pierre-Simon Laplace (IPSL) coupled model (Marti et al., 2010; Dufresne et al., 2013) daily data with a 1.25^∘ by 2.5^∘ spatial resolution are used in this study as model data to be
corrected. Simulations of the atmospheric CO2 concentration pathway associated with a radiative forcing of +8.5 W m^−2 (RCP 8.5 scenario, i.e., the scenario with the highest CO2
concentration) are selected. Daily temperature (T2) and precipitation (PR) time series from 1 January 1979 to 31 December 2016 are extracted over the geographical area of France ([42, 51^∘N]×[−5,
10^∘E]), which corresponds to 321 continental grid cells.
As BC methods require a reference dataset to adjust the simulations, daily temperature and precipitation time series with a 0.5^∘ by 0.5^∘ spatial resolution are first used from the “WATCH Forcing
Data methodology applied to ERA‐Interim data” (WFDEI) from the EU WATCH project (Weedon et al., 2014) over the same geographical area of France. Note that, as spatial resolution between WFDEI and
IPSL-CM5 are different, IPSL model data are regridded by a nearest-neighbor technique to associate each IPSL grid cell with its nearest WFDEI grid cell center. Hence, in the following, the IPSL data
will be used at the 0.5^∘ spatial resolution corresponding to that of the WFDEI reference dataset.
To assess the influence of the reference spatial resolution on BC results, we use another reference gridded dataset for France with finer resolution: the “Système d’Analyse Fournissant des
Renseignements Atmosphériques à la Neige” (SAFRAN) reanalysis dataset (Vidal et al., 2010). Daily T2 and PR time series from SAFRAN have an 8km×8km spatial resolution and divide France into 8981
continental grid cells. IPSL data are regridded to the 8km×8km SAFRAN resolution using the nearest-neighbor technique. Once regridded IPSL simulations are obtained, each MBC method can be
applied. However, as some MBC algorithms have difficulties in practice in very high-dimensional contexts (here for 8981 grid cells), we restrict the application of MBCs with SAFRAN reference dataset
over the Brittany region, located in the northwest part of France ([47, 49^∘N]×[−5, 2^∘E]), which corresponds to 345 continental grid cells. Note that we selected this subregion of Brittany for
SAFRAN, i.e., at fine resolution, in order to have a similar number of grid cells as for France selected with the WFDEI reference dataset, i.e., at coarser resolution. MBC methods have also been
applied and evaluated over two other subregions of 345 grid cells located, respectively, around the Paris area and in southeast France. For the sake of clarity, as the same results were obtained for
each of the subregions, we will only present the results for Brittany in the rest of this study.
3 Multivariate bias correction methods
This section presents a brief description of the univariate BC method and the four multivariate BC methods implemented in this study. As a reminder, results from the univariate CDF-t method serve as
a benchmark to measure the benefits of considering multivariate aspects in the correction procedure instead of using univariate BC methods. For the sake of clarity, Table 1 provides a concise summary
of the different attributes that make the BC methods distinct.
3.1 Cumulative Distribution Function – Transform (CDF-t)
The “Cumulative Distribution Function – Transform” (CDF-t) method is a univariate BC method initially proposed by Michelangeli et al. (2009) to correct the univariate distribution of a modeled
climate variable. Since then, CDF-t has been applied for various studies (e.g., Tramblay et al., 2013; Tobin et al., 2015; Defrance et al., 2017; Famien et al., 2018; Guo et al., 2018) and specific
variants have been developed (e.g., Kallache et al., 2011; Vrac et al., 2016). The CDF-t approach applies, independently to each variable, a univariate transfer function T, which permits one to link
the cumulative distribution function (CDF) of a variable of interest in the model simulations to that of the reference dataset. By assuming that T is valid in a climate different from that of the
calibration period, a new CDF for the bias-corrected variable over the projection period is generated. Then, a quantile–quantile approach is performed between the new (reference) CDF and the CDF from
the model simulations during the projection period to derive bias-corrected data. This two-step procedure permits one to take into account potential changes (between calibration and projection
periods) of the univariate distribution in the correction procedure. For the special case of precipitation, the “Singularity Stochastic Removal” version of CDF-t (Vrac et al., 2016) is applied to
correct both precipitation occurrences and intensities. More details about CDF-t can be found in Appendix A. In the following subsections, the four MBC methods are presented.
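Before turning to the four MBC methods, the following minimal sketch (Python with NumPy; all names are illustrative) shows the plain empirical quantile-mapping idea on which CDF-t builds. It is not the CDF-t algorithm itself, which additionally transfers the simulated change in CDF between the calibration and projection periods and, for precipitation, handles dry days via the Singularity Stochastic Removal variant.

```python
import numpy as np

def empirical_quantile_mapping(ref_calib, mod_calib, mod_proj):
    """Map model values onto reference quantiles estimated over the calibration period.

    This is the basic quantile-quantile correction; CDF-t itself goes further by
    accounting for the change in the model CDF between calibration and projection.
    """
    # Non-exceedance probability of each projected value in the model calibration CDF.
    probs = np.searchsorted(np.sort(mod_calib), mod_proj, side="right") / len(mod_calib)
    probs = np.clip(probs, 1.0 / len(ref_calib), 1.0)
    # Read off the corresponding quantiles of the reference distribution.
    return np.quantile(ref_calib, probs)

rng = np.random.default_rng(0)
ref = rng.normal(10.0, 2.0, size=1000)       # "observed" temperatures (toy data)
mod_c = rng.normal(12.5, 3.0, size=1000)     # biased model, calibration period
mod_p = rng.normal(13.0, 3.0, size=1000)     # biased model, projection period
corrected = empirical_quantile_mapping(ref, mod_c, mod_p)
print(round(corrected.mean(), 2), round(corrected.std(), 2))  # close to the reference statistics
```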
3.2 Rank Resampling For Distributions and Dependences (R^2D^2)
The “Rank Resampling For Distributions And Dependences” (R^2D^2) method, developed by Vrac (2018) in the context of marginal/dependence category, is an extension of the “Empirical Copula – Bias
Correction” (EC-BC; Vrac and Friederichs, 2015) method. R^2D^2 is based on a reordering technique named the Schaake Shuffle. Originally described by John C. Schaake in 2002, it was introduced in the
scientific literature by Clark et al. (2004) to postprocess temperature and precipitation forecasts from numerical weather prediction models. This shuffling technique consists of reordering a sample
such that its rank structure corresponds to the rank structure of a reference sample and, thus, allows the reconstruction of multivariate dependence structures. The Schaake Shuffle has already been
applied for various applications in climate science, such as ensemble postprocessing (e.g., Möller et al., 2013; Schefzik et al., 2013), and in numerous studies (e.g., Voisin et al., 2010; Verkade
et al., 2013). According to the marginal/dependence category to which it belongs, the R^2D^2 method performs first a univariate correction to adjust the marginal distribution of each climate
variable. In this study, CDF-t is used for this first step, but it has to be noted that other univariate methods can be employed. Instead of directly applying the Schaake Shuffle and reproducing the
temporal structure of the reference (as in Vrac and Friederichs, 2015), the method introduces some variability to the timing of the events, by allowing for the possibility to select a “reference
dimension” for the Schaake Shuffle, i.e., one physical variable at one given site, for which rank chronology remains unchanged. Reconstruction of inter-variable and spatial correlations of the
reference is then performed using the Schaake Shuffle with the constraint of preserving the rank structure for the reference dimension. Note that the R^2D^2 method can generate as many corrections as
the number of variables to be corrected and all with identical inter-variable and spatial dependencies but with different temporal structures, depending on the selected reference dimension. Hence, R^
2D^2 introduces some stochasticity in the bias correction. For practical reasons, in the following, we will reduce the number of corrected outputs: only R^2D^2 corrections with reference dimensions
located either in Paris or in the center of Brittany (respectively, for France and Brittany regions) will be analyzed in Sect. 5. It must also be noted that by using the Schaake Shuffle technique, R^
2D^2 assumes by construction the inter-variable and spatial dependence structures (i.e., the rank correlations, or copulas) to be stable in time. Some more mathematical details about R^2D^2 are
expressed in Appendix B.
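The core reordering step of the Schaake Shuffle can be sketched in a few lines of Python (NumPy assumed; names are illustrative): each univariately corrected series is reordered so that its rank sequence follows that of the reference, which imposes the reference rank (copula) structure. This toy version omits the R^2D^2 refinement of a preserved reference dimension.

```python
import numpy as np

def schaake_shuffle(corrected, reference):
    """Reorder each column of `corrected` so its rank sequence matches `reference`.

    Both arrays have shape (n_time, n_dim); the multivariate rank structure of
    `reference` is imposed on the corrected values.
    """
    corrected = np.asarray(corrected, dtype=float)
    reference = np.asarray(reference, dtype=float)
    out = np.empty_like(corrected)
    for j in range(corrected.shape[1]):
        ref_ranks = np.argsort(np.argsort(reference[:, j]))  # 0..n-1 ranks of the reference
        out[:, j] = np.sort(corrected[:, j])[ref_ranks]       # place sorted values in that order
    return out

rng = np.random.default_rng(1)
ref = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=500)  # correlated reference
sim = rng.normal(size=(500, 2))                                        # uncorrelated "corrected" data
shuffled = schaake_shuffle(sim, ref)
print(np.round(np.corrcoef(shuffled.T)[0, 1], 2))  # close to the reference correlation
```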
3.3 Dynamical Optimal Transport Correction (dOTC)
The “Dynamical Optimal Transport Correction” (dOTC) method was developed by Robin et al. (2019), in the all-in-one category, i.e., correcting the marginal distributions and dependence structures
altogether at the same time. Based on optimal transport theory, it is a generalization of the univariate quantile mapping techniques to the multivariate case. dOTC is aimed at constructing a
multivariate transfer function, called a transport plan, to perform bias correction by minimizing a cost function associated with the transformation of a multivariate distribution to another.
The multivariate distribution of a biased random variable and that of its correction are linked through this transfer function, which associates, with each value of the variable to be corrected, a
conditional law linking the biased value to its correction. Corrections are then drawn (partially) at random from these conditional laws, introducing some stochasticity into the bias correction.
As for univariate quantile mapping, the way the transfer function is constructed for dOTC plays a decisive role in the obtained bias correction outputs. As explained before, the univariate method
CDF-t is able to learn the change in univariate distributions between the calibration and the projection periods for the climate model and transfers this change to the observational world. Following
this philosophy in a multivariate context, dOTC is designed to learn not only the change in univariate distributions but also the change in multidimensional properties of the model and allows them to
be transferred to the corrections. Contrary to R^2D^2, it assumes nonstationarity of the dependence (copula) structure between the calibration and the projection periods and permits the evolution of the
model (e.g., induced by climate change) to be taken into account in the bias correction procedure. More explanations about dOTC are given in Appendix C.
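As a rough sketch of the optimal-transport idea (the stationary building block rather than dOTC itself, which additionally transports the simulated change between periods and draws corrections from conditional laws): for two samples of equal size and a squared Euclidean cost, the optimal plan reduces to a linear assignment problem. SciPy is assumed to be available; all names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ot_correction(model, reference):
    """Map each model point to a reference point via the optimal (squared-cost) assignment.

    `model` and `reference` have shape (n, d) with the same n; the returned array is the
    bias-corrected sample. This sketches a stationary optimal-transport correction, not dOTC.
    """
    cost = cdist(model, reference, metric="sqeuclidean")
    row, col = linear_sum_assignment(cost)   # optimal one-to-one transport plan
    corrected = np.empty_like(reference)
    corrected[row] = reference[col]
    return corrected

rng = np.random.default_rng(2)
obs = rng.multivariate_normal([10.0, 2.0], [[4.0, 1.5], [1.5, 1.0]], size=300)
mod = rng.multivariate_normal([13.0, 1.0], [[9.0, -1.0], [-1.0, 2.0]], size=300)
corr = ot_correction(mod, obs)
print(np.round(corr.mean(axis=0), 2), np.round(np.corrcoef(corr.T)[0, 1], 2))
```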
3.4 Multivariate Bias Correction with N-dimensional probability density function transform (MBCn)
The “Multivariate Bias Correction with N-dimensional probability density function transform” (MBCn) was developed by Cannon (2018a) in the context of the marginal/dependence category. Based on an
adaptation of an image processing algorithm used to transfer color information, MBCn permits one to transfer statistical characteristics of a reference multivariate distribution to the multivariate
distribution of climate model variables. Being part of the marginal/dependence category, univariate distributions of climate variables are first adjusted using a 1-dimensional BC (1d-BC) method. For
this step, MBCn uses the quantile-delta mapping method (QDM; Cannon et al., 2015) that preserves absolute or relative changes in quantiles, e.g., for, respectively, variables like temperature or
ratio variables like precipitation. Once univariate distributions are corrected, the dependence structure is adjusted by using an iterative process. At each step, data are multiplied by random
orthogonal rotation matrices to partially decorrelate the climate variables to correct. QDM corrections are then applied on (partially) decorrelated data before the recorrelation step with the
inverse random matrices. This step (i.e., including rotation, QDM corrections and back rotation) is repeated iteratively until convergence is reached between the multivariate distributions of
reference and climate simulations during the calibration period. Indeed, those iterations permit correcting the dependence structure of the model. Moreover, by doing so – and similarly to dOTC – MBCn
allows changes in the dependence structure to be in accordance with the model changes. More details about MBCn can be found in Appendix D.
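The iterative rotation principle can be sketched as follows: repeatedly draw a random orthogonal matrix, rotate reference and model data, quantile-map the model marginals onto the reference in the rotated coordinates, and rotate back. This minimal sketch uses a plain quantile mapping in place of QDM and a fixed number of iterations instead of a convergence diagnostic; names are illustrative.

```python
import numpy as np

def random_rotation(d, rng):
    """Random orthogonal matrix from the QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(d, d)))
    return q * np.sign(np.diag(r))  # sign fix so the rotations are uniformly distributed

def quantile_map(ref, mod):
    """Replace each column of `mod` by the reference quantiles at its empirical ranks."""
    out = np.empty_like(mod)
    for j in range(mod.shape[1]):
        ranks = np.argsort(np.argsort(mod[:, j]))
        out[:, j] = np.sort(ref[:, j])[ranks]
    return out

def mbcn_like(ref, mod, n_iter=30, seed=0):
    """Iterative random-rotation correction in the spirit of MBCn (sketch only)."""
    rng = np.random.default_rng(seed)
    x = mod.astype(float).copy()
    for _ in range(n_iter):
        rot = random_rotation(mod.shape[1], rng)
        # Correct the marginals in the rotated coordinates, then rotate back.
        x = quantile_map(ref @ rot.T, x @ rot.T) @ rot
    return quantile_map(ref, x)  # final adjustment of the (unrotated) marginals

rng = np.random.default_rng(3)
obs = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=400)
sim = rng.multivariate_normal([2.0, -1.0], [[2.0, -0.5], [-0.5, 1.0]], size=400)
out = mbcn_like(obs, sim)
print(np.round(np.corrcoef(out.T)[0, 1], 2))  # expected to approach the observed correlation
```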
3.5 Matrix recorrelation (MRec)
Bárdossy and Pegram (2012) presented an MBC, hereafter referred to as “matrix recorrelation” (MRec). The latter lies in the all-in-one category and relies on a matrix recorrelation technique. The
MRec method consists of first transforming separately each variable of both model and references to the univariate normal distribution with a Gaussian quantile–quantile method. This transformation step
is particularly appropriate for variables with mixed distributions (e.g., precipitation composed of wet and dry days), for which computing a Pearson correlation matrix on Gaussianized data instead of
raw data permits their dependence structure to be better described. Then, a combination of “decorrelation” and “recorrelation” steps using decompositions of correlation matrices through singular
value decomposition (SVD, Beltrami, 1873; Jordan, 1874a, b; Stewart, 1993) is applied on the Gaussianized model data, forcing its Pearson correlation matrix to match that of the Gaussianized
observed data during the calibration period. For the projection period, the same “decorrelation–recorrelation” matrix is directly applied on Gaussianized model data, which permits the preservation
of, for the projection period, the potential changes in correlations as simulated by the model. Finally, for both periods, a quantile–quantile back transformation is applied separately for each
variable between recorrelated variables and references to correct marginal distributions. See Appendix E for more details.
Contrary to the R^2D^2, dOTC and MBCn methods presented previously, MRec is designed to correct only a particular feature of the multivariate dependence structure, here Pearson
correlations. Implicitly, it makes the assumption that Pearson correlation values are sufficient to determine the full multivariate dependence structure, which can be called into question for
variables with skewed and heavy tailed distributions (like precipitation) and with potentially complex interactions that Pearson correlation cannot capture as a whole. For this reason, implementing
the MRec algorithm in the present intercomparison study permits the comparison of the performances of an MBC method based on such an assumption relative to methods intended to correct the
non-Gaussian dependence structure of climate simulations.
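The recorrelation step can be sketched on already-Gaussianized, standardized data: estimate the two Pearson correlation matrices and apply the linear map that turns the model correlation into the observed one. The sketch below uses symmetric matrix square roots from eigendecompositions rather than the SVD machinery of the original method, and omits the Gaussianization and quantile back-transformation steps; names are illustrative.

```python
import numpy as np

def matrix_sqrt(c, inverse=False):
    """Symmetric (inverse) square root of a correlation matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(c)
    vals = np.clip(vals, 1e-12, None)
    power = -0.5 if inverse else 0.5
    return (vecs * vals**power) @ vecs.T

def recorrelate(gauss_mod, gauss_ref):
    """Force the Pearson correlation of Gaussianized model data to match the reference."""
    c_mod = np.corrcoef(gauss_mod.T)
    c_ref = np.corrcoef(gauss_ref.T)
    # Decorrelate with the model structure, then recorrelate with the observed one.
    transform = matrix_sqrt(c_ref) @ matrix_sqrt(c_mod, inverse=True)
    centered = gauss_mod - gauss_mod.mean(axis=0)
    return centered @ transform.T

rng = np.random.default_rng(4)
ref = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=1000)
mod = rng.multivariate_normal([0, 0], [[1.0, -0.2], [-0.2, 1.0]], size=1000)
out = recorrelate(mod, ref)
print(np.round(np.corrcoef(out.T)[0, 1], 2))  # close to the observed correlation
```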
4.1 Settings of MBCs
Multivariate BC methods can be implemented in different dimensional configurations, depending on the need of the users to correct inter-variable and/or spatial correlations. However, in most cases,
multivariate BC methods are applied grid cell by grid cell by practitioners to correct inter-variable properties of climate simulations, disregarding spatial structures (e.g., in Meyer et al., 2019;
Guo et al., 2019). We not only tested and assessed this approach for each method but also expanded the study to include high-dimensional configurations of MBC to adjust spatial and full (i.e.,
spatial and inter-variable jointly) dependence structures of climate simulations. Depending on the dimensional configurations, the objectives of corrections for multivariate properties differ.
Including different dimensional versions in the study will permit one to better highlight the potential losses and benefits associated with them. Therefore, in the following each of the four MBC
methods is applied according to the three following configurations (a schematic data-slicing sketch is given after the list):
• a 2-dimensional (hereinafter referred to as “2d-”) version, for which the MBC method is applied independently at each grid cell but jointly corrects both temperature and precipitation time
series. For example, to correct a climate dataset of 321 grid cells, the MBC method is performed 321 times, i.e., for each grid cell across the whole grid. By doing so, 2d- versions are aiming to
correct inter-variable correlations within each grid cell.
• a spatial-dimensional (hereinafter referred to as “Spatial-”) version, where all time series for a particular physical variable are corrected jointly but independently from the other physical
variable. Hence, for this version, the MBC method is performed twice, adjusting, on the one hand, all time series for temperature and, on the other hand, all time series for precipitation. Thus,
Spatial- versions are designed to adjust spatial correlations of climate models for each physical variable separately.
• a full-dimensional (hereinafter referred to as “Full-”) version, where all time series are corrected jointly over the entire grid for both temperature and precipitation. The MBC method is hence
applied only once and is intended to correct together the inter-variable and spatial correlations of the simulations.
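In practice, the three configurations differ mainly in how the data array is sliced before being passed to the correction routine, as sketched below for an array of shape (time, variable, grid cell). The correction function is a toy placeholder and all names are illustrative.

```python
import numpy as np

def correct(ref_block, mod_block):
    """Placeholder for any MBC routine acting on arrays of shape (time, dimension)."""
    return mod_block + (ref_block.mean(axis=0) - mod_block.mean(axis=0))  # toy mean adjustment

# Toy data: (time, variable, grid cell) with 2 variables (T2, PR) and 321 cells.
rng = np.random.default_rng(5)
ref = rng.normal(size=(100, 2, 321))
mod = rng.normal(size=(100, 2, 321))
n_time, n_var, n_cell = mod.shape

# 2d- version: one call per grid cell, correcting the two variables jointly.
out_2d = np.stack([correct(ref[:, :, c], mod[:, :, c]) for c in range(n_cell)], axis=2)

# Spatial- version: one call per physical variable, correcting all grid cells jointly.
out_spatial = np.stack([correct(ref[:, v, :], mod[:, v, :]) for v in range(n_var)], axis=1)

# Full- version: a single call on all variables and grid cells flattened together.
out_full = correct(ref.reshape(n_time, -1), mod.reshape(n_time, -1)).reshape(mod.shape)

print(out_2d.shape, out_spatial.shape, out_full.shape)  # all (100, 2, 321)
```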
Regarding the initial settings for MBCn, preliminary tests have been conducted with different dimensional settings to find the number of iterations ensuring the convergence of the algorithm depending
on the dimensional configuration. With respect to the results of these tests (not shown), the number of iterations has been chosen to be equal to 50 for 2d- configurations and 200 for both Spatial-
and Full- versions.
4.2 Protocols of bias correction
In this study, the BC methods presented above are applied to correct IPSL GCM simulations with either the WFDEI (0.5^∘×0.5^∘) or the SAFRAN (8km×8km) data as
references. Data are available for the period 1979–2016, i.e., 38 years, and are divided into two intervals of 19 years: 1979–1997 and 1998–2016. As a reminder, daily temperature and precipitation
time series are corrected on 321 and 345 grid cells for France and Brittany regions, respectively. For each method, bias correction is performed separately for each of the 12 months in order to preserve seasonal characteristics.
The first protocol in this study takes advantage of the cross-validation technique to generate bias-corrected outputs for the period 1979–2016. Dividing the time period into two parts permits one to
perform a 2-fold cross-validation procedure: the 1979–1997 period is first defined as the calibration period, and the 1998–2016 portion, called the projection period, is used for out-of-sample
validation. Swapping of the two periods is then done, so that each period has been used once for calibration and once for validation. Bias correction for the period 1979–2016 is then achieved by
assembling the adjusted outputs for the projection periods obtained at each step. This 2-fold protocol, largely used in the climate science literature (e.g., in Cannon, 2018a), allows one to reduce
overfitting by using two distinct subperiods and is hence well suited to evaluate our results. However, by adjusting the period 1979–1997 according to the 1998–2016 period, this protocol presents the
drawback of potentially hiding the climate change signal present in the model. Thus, proper assessment of the multidimensional properties evolutions cannot be conducted via this procedure.
Hence, to evaluate the nonstationary behavior of BC methods, a second protocol is defined. Similarly to the first protocol, the 1998–2016 period is corrected by using the 1979–1997 portion as
calibration period. However, here, 1979–1997 simulations are corrected directly with respect to the 1979–1997 references, i.e., without cross-validation. Hence, the potential climate change signal is
not distorted by undesirable effects resulting from the protocol procedure, allowing for the appropriate assessment of change aspects of the BC methods between the two periods.
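The two protocols can be summarized by the following toy driver (Python with NumPy; `bias_correct` is a placeholder for any of the methods above, and all names and the toy data are illustrative).

```python
import numpy as np

def bias_correct(ref_calib, mod_calib, mod_proj):
    """Placeholder for any (M)BC method: here a toy mean adjustment."""
    return mod_proj + (ref_calib.mean(axis=0) - mod_calib.mean(axis=0))

# Toy daily series for 1979-2016 (38 "years" of 10 time steps each, 1 dimension).
rng = np.random.default_rng(6)
years = np.repeat(np.arange(1979, 2017), 10)
ref = rng.normal(10.0, 2.0, size=(years.size, 1))
mod = rng.normal(12.0, 3.0, size=(years.size, 1))
p1, p2 = (years <= 1997), (years >= 1998)

# Protocol 1: 2-fold cross-validation, each period corrected using the other for calibration.
bc_cv = np.empty_like(mod)
bc_cv[p2] = bias_correct(ref[p1], mod[p1], mod[p2])
bc_cv[p1] = bias_correct(ref[p2], mod[p2], mod[p1])

# Protocol 2: 1979-1997 corrected against its own references, 1998-2016 corrected with
# 1979-1997 calibration, so that the simulated change between the periods is not masked.
bc_ns = np.empty_like(mod)
bc_ns[p1] = bias_correct(ref[p1], mod[p1], mod[p1])
bc_ns[p2] = bias_correct(ref[p1], mod[p1], mod[p2])

print(bc_cv.shape, bc_ns.shape)
```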
In accordance with common practice, thresholding of 1mm for precipitation time series is applied before evaluation to replace values lower than 1mm by 0 after correction.
The correction outputs are evaluated according to different characteristics designed to focus on (i) marginal, (ii) inter-variable, (iii) spatial, (iv) temporal and (v) nonstationary properties.
Characteristics (i)–(iv) are evaluated on the 1979–2016 period for the adjusted outputs obtained according to the 2-fold protocol and are compared to those from the reference dataset. However,
regarding nonstationary properties, corrected outputs from the second protocol are used, and results are compared to the simulations to highlight the performances of the MBC methods regarding their
capability to reproduce (or not) the multidimensional changes in the model between the 1979–1997 and 1998–2016 periods.
In the following, evaluation is presented for the winter season (December–January–February) only, as conclusions remain generally the same for the other seasons. However, in order to provide nuances,
additional results for the summer season (June–July–August) are displayed in the Supplement when needed.
5.1 Univariate distribution properties
First, bias-corrected data are evaluated relative to univariate statistics. To do so, for temperature and precipitation, the difference of mean values between the bias correction and the reference at
each grid cell is computed. The same computation is also made for standard deviation. Absolute difference is calculated for temperature mean, while relative difference is more appropriate for
precipitation mean as well as for standard deviation of both physical variables. Results are shown with boxplots for the plain IPSL simulations and for a selection of BC outputs in Fig. 1 for France
during the winter season. The results for Brittany during winter are presented in Fig. S1 of the Supplement. As marginal/dependence MBC methods correct univariate properties independently from the
dependence structure, results for their 1-dimensional characteristics are equivalent between the three different dimensional configurations (2d-, Spatial- and Full-). Therefore, to avoid redundancy,
results for R^2D^2 and MBCn are presented for only one arbitrary dimensional configuration, the other configurations giving the exact same mean and standard deviation results. Clearly, Fig. 1 shows
large differences between the IPSL simulations and the references for both temperature and precipitation and illustrates the necessity to adjust 1-dimensional distributions of the model before using
it in subsequent analyses. Multivariate BC methods implemented in this study display different performances in adjusting the univariate properties. In agreement with the properties of the marginal/
dependence MBC methods, R^2D^2 and MBCn present exactly the same results as the 1d-BC methods they use, i.e., respectively, CDF-t (shown) and QDM (not shown). With regard to the performances of dOTC
and MRec, some instabilities are found relative to the dimensional configuration. For dOTC, increasing the number of dimensions to correct from 2d- to Full- seems to have a slight but non-negligible
cost on the correction of mean and standard deviation (Fig. 1b and c). However, depending on both the climate variable and the statistical feature, the increasing deterioration with respect to the
dimensional setting is not systematically observed, as it can be seen in Fig. 1a and d. Concerning MRec, a slight deterioration of correction is often observed from 2d- to Spatial- versions (Fig. 1b,
c and d). Regarding the Full- version, the MRec algorithm produces results that are clearly unsatisfactory. Instead of improving the simulations, Full-MRec corrections strongly degrade the univariate
statistics. This underperformance of the MRec method over France appears in a context of high-dimensional correction when the number of available data is not large enough compared to the number of
dimensions. In this case, the inverses of high-dimensional sample covariance matrices are highly biased estimators of the inverses of the true covariance matrices, which consequently largely affects the
quality of the Full-MRec corrections. Anyhow, the increasing degradation, whether it is slight or not, of univariate distribution corrections in high-dimensional contexts is one (undesirable) feature
of all-in-one methods, here observed for dOTC and MRec. Indeed, all-in-one methods are designed to adjust both univariate distributions and dependence structure of climate simulations at the same
time, which involves a possible deterioration of 1-dimensional marginal distributions during the combined correction process.
For Brittany, the same conclusions hold for R^2D^2, dOTC and MBCn, indicating no particular influence of spatial resolution on the results of the marginal statistics adjustment for these methods.
Nevertheless, quite interestingly, for the Full-MRec outputs, the underperformance observed for France is not obtained for Brittany (Fig. S1). A possible reason explaining why Full-MRec version is
presenting adequate results on this particular region (and the two other subregions, not shown) concerns the size of its geographical area and will be discussed in more detail in Sect. 6.2.
5.2 Inter-variable correlations
To evaluate inter-variable dependence structure, Spearman correlations between temperature and precipitation are computed at each grid cell to measure the monotonic relationship between the two
physical variables. Using rank correlation presents the particularity of not being value dependent; i.e., it measures the dependence between two variables rid of their univariate distributions. As
the goal when applying MBC is to adjust not only the univariate distributions but also the dependence structure between the variables of interest, Spearman’s correlation is appropriate for this
latter aspect. Moreover, this measure does not require any assumption about the distribution of the variables or their statistical relationships. It is hence appropriate for temperature and
precipitation studies presenting extreme values and/or a lower bound (Vrac and Friederichs, 2015). The maps of the Spearman correlation differences with respect to the reference – for the IPSL model
and the bias-corrected data – are displayed in Fig. 2 for both France and Brittany. Initial maps of Spearman correlations, i.e., without differences with respect to the reference, are also provided
in Fig. S2.
For France, the map for the IPSL simulations (Fig. 2b1) indicates strong differences with respect to the WFDEI map (Fig. 2a1). As the univariate CDF-t method does not modify rank sequence of
temperature and precipitation time series, it globally conserves both the rank correlation intensities and structures of the IPSL model for each region and does not provide any correction of this
aspect (Fig. 2c1). By construction, clear improvements of the inter-variable correlation structure are provided by 2d- versions (Fig. 2d1, g1, j1 and m1). This is also the case for most of the Full-
configurations of MBCs (respectively, Fig. 2f1, i1, l1) despite possible differences in intensities. Note that maps of correlation differences for 2d-R^2D^2 (Fig. 2d1) and Full-R^2D^2 (Fig. 2f1) are
identical. Indeed, for the inter-variable aspect, the 2d- version is nested within the Full- configuration (see Vrac, 2018), due to the use of the reordering technique in R^2D^2. Also, for R^2D^2,
the choice of the reference dimension does not have any impact on results in the inter-variable context, as it only modifies the rank chronology of time series. As expected from previous
explanations, the map for the Full- version of MRec (Fig. 2o1) indicates a strong deterioration of the inter-variable correlation structure. It highlights again the inability of the method to work
properly for France in this dimensional setting. Concerning Spatial- versions of MBCs (Fig. 2e1, h1, k1 and n1), as they adjust the whole simulated field of temperature and precipitation separately,
they disregard inter-variable relationships. This results in BC outputs with strongly weakened inter-variable correlation structures.
Regarding Brittany, the same conclusions can be drawn for R^2D^2 and dOTC, for which spatial resolution does not affect the results of inter-variable properties adjustment. As noted previously,
Full-MRec over Brittany provides more satisfactory results than those obtained over France, which are in line with those obtained for R^2D^2 and dOTC. However, for MBCn outputs, a degrading effect
from 2d- (Fig. 2j2) to Full- (Fig. 2l2) is observed: the correlation structure is corrected, but with underestimated intensities in the high-dimensional context.
5.3 Spatial correlations
To assess the quality of the corrections in terms of spatial correlations, mean correlograms, i.e., mean Spearman correlation as a function of distance, are computed for temperature and precipitation
separately after removing daily areal mean. Indeed, climate variables can present a high day-to-day variability that can affect the evaluation of spatial criteria if not removed (e.g., Vrac, 2018).
Figures 3 and S3 show the results obtained for, respectively, precipitation and temperature for the different climate datasets. Note that the choice of the reference dimension for R^2D^2- versions
modifies results for temporal criteria and, consequently, for some of the spatial criteria. Hence, in the rest of this work, results from R^2D^2- versions are presented with the reference dimension
corresponding to the variable of interest. For the sake of brevity, results for precipitation are mainly discussed in this subsection, and nuances are given when different results are obtained for temperature.
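A mean correlogram of this kind can be sketched as follows (Python with NumPy/SciPy; the toy grid and all names are illustrative): remove the daily areal mean, compute Spearman correlations between all pairs of grid cells, and average them within distance bins.

```python
import numpy as np
from scipy.stats import rankdata
from scipy.spatial.distance import pdist, squareform

def mean_correlogram(field, coords, bins):
    """Mean inter-site Spearman correlation as a function of distance.

    `field` has shape (time, n_cells); `coords` has shape (n_cells, 2).
    The daily areal mean is removed before computing rank correlations.
    """
    anom = field - field.mean(axis=1, keepdims=True)   # remove daily areal mean
    ranks = np.apply_along_axis(rankdata, 0, anom)      # ranks in time, cell by cell
    corr = np.corrcoef(ranks.T)                          # Pearson on ranks = Spearman
    dist = squareform(pdist(coords))
    iu = np.triu_indices_from(dist, k=1)
    which = np.digitize(dist[iu], bins)
    return np.array([corr[iu][which == b].mean() for b in range(1, len(bins))])

rng = np.random.default_rng(7)
coords = np.array([(i, j) for i in range(6) for j in range(6)], dtype=float)  # toy 6x6 grid
field = rng.normal(size=(200, coords.shape[0]))
print(np.round(mean_correlogram(field, coords, bins=np.array([0, 2, 4, 6, 8])), 2))
```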
For France, the IPSL precipitation correlogram is fairly distinct from the WFDEI one. The univariate method CDF-t, by simply adjusting univariate distributions, gets closer to the reference dataset
(Fig. 3a1), which may be here confusing. Indeed, although CDF-t adjusts the univariate distributions, it is supposed to preserve the rank sequence of the simulations, and therefore spatial
correlations are disregarded during the BC procedure. But, as the Singularity Stochastic Removal version of CDF-t (Vrac et al., 2016) is explicitly designed to improve dry days frequency, the method
consequently modifies rank correlations, which results here in an improvement of spatial statistics for precipitation. Also, an additional reason is that the correction of the univariate
distributions provided by CDF-t associated with the removing of daily areal means modifies ranks of the data, resulting in getting a correlogram closer to that from the reference dataset, and so
improves intersite variability.
Correlograms of 2d- versions (dotted) for the four MBC methods (Fig. 3b1, c1, d1 and e1) show results equivalent to CDF-t. Indeed, 2d-configuration MBCs adjust univariate distributions and
inter-variable correlations without modifying spatial correlations. The improvements of correlograms for 2d- versions thereby illustrate again that the correction of univariate distributions improves
spatial statistics for France. Particularly, 2d-R^2D^2 results (Fig. 3b1) are, by construction, exactly the same as those from CDF-t (Vrac, 2018). Indeed, by construction, 2d-R^2D^2 driven by
precipitation preserves Spearman spatial correlations from CDF-t for the precipitation variable. Note that, however, it is definitely not the case for temperature spatial structure (not shown) when
2d-R^2D^2 is driven by precipitation. Indeed, for 2d-R^2D^2 outputs driven by a specific physical variable, spatial structures of the “other” variable are strongly degraded by the reordering step.
Correlograms associated with outputs of Spatial- and Full- versions for R^2D^2 (Fig. 3b1) nicely fit the one from the reference dataset – even at long distances – and provide major improvements in
adjusting the spatial properties of the simulations. However, for similar reasons as those explained for 2d-R^2D^2, undesirable degradation effects on spatial cross-correlation between temperature
and precipitation are obtained for Spatial-R^2D^2 outputs (not shown). Therefore, it indicates that practitioners must favor the use of Full-R^2D^2 for their applications. With regard to Spatial- and
Full-dOTC (Fig. 3c1) and Spatial-MRec (Fig. 3e1), although correlograms are very close to those from the reference dataset, they provide slightly less pronounced improvements compared to the 2d-
versions, suggesting a slight degrading effect on results for these methods by considering more variables in the correction. As expected, the correlogram associated with Full-MRec outputs is far
from the reference data, indicating once again the dysfunction of the MRec method for France. For Spatial- and Full-MBCn (Fig. 3d1), at long distances, similar improvements of spatial correlations are
provided as those from dOTC. However, large deviations between correlograms are found for short distances, suggesting a failure for the MBCn method to adjust local spatial properties in a
high-dimensional context.
For Brittany, same conclusions hold for R^2D^2 (Fig. 3b2), presenting again a stability of results regardless of both the spatial resolution and the geographical area considered. For dOTC (Fig. 3c2),
Spatial- and Full- versions now provide major improvements of spatial correlations compared to their 2d- versions and present results similar to Spatial- and Full-R^2D^2. With regard to MRec (Fig. 3
e2), the dysfunction of the Full- version is no longer observed. It now provides results similar to Spatial-MRec and better than 2d-MRec. However, it is worth mentioning that, for Brittany, different
results are obtained with MRec between precipitation and temperature spatial corrections. While, for temperature, Spatial-MRec outputs (Fig. S3e2) provide reasonable results with a correlogram
relatively close to the one of the reference data, a more moderate improvement of intersite variability is obtained for precipitation (Fig. 3e2). Explanations for these results will be provided in
Sect. 6.2. Regarding MBCn (Fig. 3d2), large deviations between correlograms are found for both short and large distances, underlining some instability of the algorithm in adjusting spatial correlations.
5.4 Temporal structure
The different MBC methods implemented here are not intended to adjust temporal structures. Indeed, these multivariate procedures adjust multivariate distributions without accounting for any temporal
information. However, although the temporal structures are not adjusted according to the reference, MBCs necessarily modify the rank sequences of the simulations (Vrac, 2018). This modification is
not performed in the same way depending on the MBC or the dimensional configuration used and remains therefore to evaluate. To do so, 1d lag Pearson autocorrelations are computed at each grid cell
for temperature and precipitation. The resulting maps of differences with respect to the reference for the different climate datasets are displayed in Fig. 4 (resp. Fig. S4) for temperature (resp. precipitation).
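For reference, the lag-1 Pearson autocorrelation used here can be computed per grid cell as in the short sketch below (NumPy assumed; names and toy data are illustrative).

```python
import numpy as np

def lag1_autocorr(field):
    """Lag-1 Pearson autocorrelation of each column of a (time, n_cells) array."""
    x0, x1 = field[:-1], field[1:]
    x0 = x0 - x0.mean(axis=0)
    x1 = x1 - x1.mean(axis=0)
    return (x0 * x1).sum(axis=0) / np.sqrt((x0**2).sum(axis=0) * (x1**2).sum(axis=0))

rng = np.random.default_rng(8)
series = np.cumsum(rng.normal(size=(365, 4)), axis=0) * 0.1 + rng.normal(size=(365, 4))
print(np.round(lag1_autocorr(series), 2))
```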
For France, IPSL temperature autocorrelations differences (Fig. 4b1) are small, indicating a relative agreement of IPSL with the WFDEI reference dataset (Fig. 4a1), showing equivalent high values. A
similar differences map is provided by CDF-t outputs (Fig. 4c1). It is however not the case for precipitation (Fig. S4c1), for which a decrease of autocorrelation values is observed over France with
respect to the reference and to the model. Although not observed for temperature, it highlights that the univariate correction could have a non-negligible effect on Pearson autocorrelation.
Interestingly, 2d- versions (Fig. 4d1, g1, j1 and m1) do not lead to a strong modification of temporal properties with respect to CDF-t. However, from one method to another, temporal structure
modifications are not equivalent for Spatial- and Full- versions. For dOTC and MBCn (Fig. 4h1, i1, k1 and l1), as the number of dimensions increases, the temperature autocorrelations seem to be more
and more modified, with intensities of values decreasing slightly from Spatial- to Full- versions. This result can also be seen for precipitation in Fig. S4. With regard to MRec, its Spatial- version
(Fig. 4n1) presents similar results as those obtained from Spatial-dOTC and Spatial-MBCn. Also, as expected, Full-MRec outputs (Fig. 4o1) do not provide sensible results due to the inability of the
method to work properly over the whole of France. Concerning R^2D^2, as the reference dimension driving the rank sequence is the same between Spatial- and Full- configurations, same differences of
autocorrelation maps are obtained for these two versions (Fig. 4e1 and f1). Moreover, the autocorrelation value in the grid cell of the reference dimension, i.e., located over Paris for France, is
exactly equal to the corresponding one in the CDF-t outputs, by construction. Remarkably, as mentioned by Vrac (2018), autocorrelations of the CDF-t outputs are partially reproduced around the
specific locations of the reference dimensions for Spatial-R^2D^2 and Full-R^2D^2 versions, as evidenced by the lightly shaded area around Paris. This reflects the existing spatial correlations
between the reference dimension and its local neighborhood, which results in partially reproducing the temporal properties of the model over this area. However, for precipitation (Fig. S4e1 and f1),
this result is not as clear-cut as it is for temperature, probably due to weaker spatial correlations around Paris for this physical variable.
In a general way, the same conclusions can be drawn for Brittany, sometimes even better illustrated due to a narrower color scale. The results for Full-MRec are easier to interpret. They present
results similar to those from 2d- and Spatial-MRec (Fig. 4o2). In particular, this indicates that, contrary to dOTC and MBCn, MRec does not present an increasing modification of temperature
autocorrelations from 2d- to Full- versions.
To better understand the results obtained from Fig. 4, further explanations are required. The relative agreement of Pearson autocorrelation values between the reference and IPSL dataset shown in
Fig. 4 might lead one to believe that temporal properties of the model are quite correct for temperature, which is in reality misleading for two main reasons. First, 1d lag Pearson autocorrelation
permits one to assess only a particular feature of the temporal properties, which is obviously insufficient to draw any general conclusions about the quality of simulations concerning these aspects.
For example, by simply computing Pearson temperature autocorrelations for higher lag values, a discrepancy of results is obtained between the reference and the simulations (not shown). Second,
Pearson autocorrelations depend on two statistical characteristics of time series: their variability and their temporal rank structures. As implemented in Fig. 4, the Pearson autocorrelation metric
is hence not able to dissociate them. The similarity between reference and model autocorrelations can then potentially be the combined result of errors stemming from both biased univariate
distributions and wrong rank structures of the model.
To better assess temporal structure changes brought by MBCs, the calculation of rank correlations between the bias-corrected time series and the raw climate model simulations is performed for each
physical variable and at each grid cell. Results for temperature and precipitation are displayed with boxplots, respectively, in Figs. 5 and S5. The closer the values of the boxplots are to 1, the
closer the rank chronologies of the MBC outputs are to the rank chronologies of the model. For France, as expected, similar temperature rank structures are observed between the model and CDF-t/2d-R^2
D^2 outputs (Fig. 5a). For the other 2d- versions, rank correlation values are quite close to 1 as well, suggesting that dOTC, MBCn and MRec methods in their 2d- configuration modify only slightly
the rank structure of the initial simulations. For Spatial- and Full- configurations, dOTC and MBCn change moderately the rank structures even though they consider more dimensions in the correction.
Concerning MRec, without analyzing the Full- outputs, the increasing modification with dimensionality is also observed between 2d- and Spatial-MRec outputs, although less pronounced. In contrast, for
Spatial- and Full-R^2D^2 outputs, the changes in the rank structures for France are substantially larger than those discussed until now. This result is also obtained for precipitation in Fig. S5a
with an even larger range. The principal reason lies in the fact that, as already explained, R^2D^2 partially preserves rank sequences of the CDF-t outputs – and therefore of the IPSL model – in the
direct neighborhood of the reference dimensions but strongly modifies the rank structures outside this neighborhood, which results in obtaining some low Spearman correlation values in Figs. 5a and S5a.
For Brittany, results show a less pronounced modification of rank structure for both temperature (Fig. 5b) and precipitation (Fig. S5b) than those observed for France. In particular for temperature,
similar rank correlations are obtained for all versions of the methods, even for Spatial- and Full-R^2D^2 outputs, indicating that the number of dimensions has potentially a nonsignificant effect on
this criterion over a smaller area. The differences of results between France and Brittany highlight that the size of the region of interest seems to have a non-negligible influence on the temporal
properties of BC outputs.
5.5 Multidimensional changes analysis
When correcting climate simulations, in practice, while climate simulations for the present period are adjusted with respect to observations, no reference data are available for the correction of
future periods. Assumptions of either stationarity or nonstationarity of copula are then made within the MBCs concerning the change in the multidimensional features between present and future
periods. This has then consequences on how MBCs can account for the changes in the multidimensional properties of the climate simulations. Therefore, using the second protocol defined in Sect. 4.2,
we now focus on how the different MBC methods reproduce the change in inter-variable and intersite structures, as given by the model to be corrected between two different periods.
5.5.1 Analysis of change in inter-variable correlations
Figure 6 shows, for the bias-corrected outputs, the maps of the difference between the Spearman correlation between temperature and precipitation, computed for the calibration (1979–1997) and the
projection (1998–2016) period, respectively. It permits one to visually assess part of the change in the inter-variable dependence structure. Over France, inter-variable change in the IPSL
simulations (Fig. 6b1) seems to be distinct from those of WFDEI (Fig. 6a1). CDF-t outputs (Fig. 6c1) reproduce globally the change in the simulations, as they present similar maps. Concerning results
for the 2d- (Fig. 6d1) and Full- versions (Fig. 6f1) of R^2D^2, they present inter-variable rank correlation values close to 0. This illustrates the stationarity assumption in R^2D^2: the copula
function (i.e., dependence structure) of the observations during the calibration period is reproduced for both calibration and projection, resulting in having no change in inter-variable rank
correlations. For their part, 2d-dOTC, 2d-MBCn and 2d-MRec maps (resp. Fig. 6g1, j1 and m1) present roughly the same spatial structures for the differences of Spearman correlations, which indicates
that the evolution of the simulations is somehow taken into account in the correction procedures. It must be remarked that, contrary to dOTC and MRec, the stochastic generation of random rotation
matrices within the MBCn algorithm leads to a non-negligible variability in the estimation of the evolution (not shown). This highlights a particular aspect of MBCn: contrary to other methods,
MBCn is based on a stochastic procedure, which has a significant impact on its adjustments. Consequently, the quality of MBC data obtained from MBCn can differ from one correction to another for the
same climate simulation, depending on the random rotation matrices generated in the algorithm and on the stopping rule (i.e., number of iterations). Interestingly, concerning the method's Spatial-
versions (Fig. 6e1, h1, k1 and n1), outputs show changes in inter-variable rank structure similar to those from the model. Indeed, as for CDF-t, rank inter-variable correlations are not adjusted with
Spatial- versions. Consequently, the change in inter-variable rank structure of the model is somehow preserved in outputs of Spatial- versions.
For the Full-configuration maps of dOTC and MBCn (Fig. 6i1 and l1), changes simulated by the model are not reproduced at all, which might be due to the failure of these methods to handle the change
in time of this statistical feature in high dimensions. As expected, the Full-MRec map (Fig. 6o1) does not provide adequate results due to its inability to adjust the simulated data for France in
this dimensional setting.
Concerning the results for Brittany, conclusions similar to those obtained for France can be drawn for R^2D^2 outputs. However, conclusions are quite different for CDF-t, 2d-dOTC, 2d-MBCn and
2d-MRec. Indeed, the changes in rank correlations obtained for these outputs (Fig. 6c2, g2, j2 and m2) are not in agreement at all with the simulated ones (Fig. 6b2). In fact, changes from 2d-
outputs are in line with those from CDF-t, illustrating the importance of the correction of 1-dimensional characteristics for inter-variable changes. It is also the case for the Full-MRec map (Fig. 6
o2), providing more sensible results than those obtained for France.
Generally speaking, for 2d- and Spatial- versions of MBCs making the assumption of copula nonstationarity, results similar to those brought by their univariate BC outputs are obtained, suggesting
the importance of the correction of univariate distributions for changes in inter-variable rank correlations. Additional results in agreement with these conclusions are obtained for summer and are
displayed in Fig. S6.
5.5.2 Analysis of change in spatial correlations
In order to assess changes in spatial structures in bias-corrected outputs, p-Wasserstein distance (see, e.g., Villani, 2008, chap. 6) is computed. This metric measures the distance between two
multivariate probability distributions μ and υ and is defined as follows:
$W_p(\mu,\upsilon) := \left( \inf_{\gamma \in \tau(\mu,\upsilon)} \int_{\mathbb{R}^d \times \mathbb{R}^d} \| x - y \|^p \, \mathrm{d}\gamma(x,y) \right)^{1/p}, \qquad (1)$
with τ(μ,υ) denoting the set of probability measures on ℝ^d×ℝ^d with, respectively, μ and υ as first and second margins, and ‖·‖ the Euclidean norm. In the present study, p is taken equal to 2,
as it ensures the uniqueness of the minimization problem (Santambrogio, 2015). The Wasserstein distance can be seen as the minimum “cost” for transforming a multivariate probability distribution μ
into another, here υ. In particular, computing Eq. (1) between a distribution characterizing a sample during the calibration period and another distribution characterizing a sample during the
projection period provides information on the change of that distribution across time, whether it represents a univariate, multivariable or multi-site (or both) distribution. More details on how to compute
this distance in practice are provided in Appendix C. The resulting metric, denoted Wd, is calculated using the R package “transport” (Schuhmacher et al., 2019) over the region of interest according
to three different multivariate distributions:
• on ranks of temperature only over the whole region to assess change in the spatial dependence structure of temperature;
• on ranks for precipitation only over the whole region to assess change in the spatial dependence structure of precipitation;
• on ranks for both temperature and precipitation over the whole region to assess change in the inter-variable and spatial dependence structures of the two variables.
In particular, computing Wd on ranks instead of raw values removes the change in the univariate distributions from that in spatial and inter-variable relationships. However,
comparing Wd values of different climate datasets must be done with caution. Indeed, similar values of Wd for different climate datasets do not necessarily imply that their changes in spatial structure are
similar. Results for the three Wasserstein distances on ranks are displayed in Fig. 7 for both France and Brittany. Additional results for Wd on raw values are displayed in Fig. S7 for information
purposes only.
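For illustration purposes, a minimal Python sketch of this rank-based Wd computation is given below. It is not the implementation used in this study (which relies on the R package “transport”); it uses the POT library of Flamary and Courty (2017) instead, and all function and variable names are purely illustrative.
```python
import numpy as np
import ot  # Python Optimal Transport (POT; Flamary and Courty, 2017)
from scipy.stats import rankdata


def wasserstein2_on_ranks(sample_cal, sample_proj):
    """2-Wasserstein distance between the multivariate rank distributions of
    two samples (rows = time steps, columns = statistical dimensions)."""
    # Normalised ranks remove the univariate marginals and keep only the
    # dependence structure, as done for the rank-based Wd criterion.
    r_cal = np.apply_along_axis(rankdata, 0, sample_cal) / (sample_cal.shape[0] + 1)
    r_proj = np.apply_along_axis(rankdata, 0, sample_proj) / (sample_proj.shape[0] + 1)
    # Uniform weights on the two empirical distributions.
    a = np.full(r_cal.shape[0], 1.0 / r_cal.shape[0])
    b = np.full(r_proj.shape[0], 1.0 / r_proj.shape[0])
    # Squared Euclidean cost matrix, then the optimal transport cost (W2^2).
    cost = ot.dist(r_cal, r_proj, metric="sqeuclidean")
    return np.sqrt(ot.emd2(a, b, cost))


# Toy usage: two bivariate samples whose dependence (rank correlation) differs.
rng = np.random.default_rng(0)
x_cal = rng.multivariate_normal([0, 0], [[1, 0.2], [0.2, 1]], size=500)
x_proj = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=500)
print(wasserstein2_on_ranks(x_cal, x_proj))
```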
For France (Fig. 7a), the three Wd are slightly higher for the reference than for the model data (represented by straight lines). Although the differences are quite small, it cannot be concluded
directly that changes in spatial structure are identical, as there is no particular reason for this. For CDF-t outputs, Wd values similar to those from the model are obtained. However, as the 1d-BC method
does not modify (too much) the rank sequences of the temperature and precipitation time series, it can be deduced that CDF-t outputs globally reproduce/preserve the spatial structure change of the model.
For 2d-R^2D^2 outputs, two results are presented, corresponding to those obtained with either temperature or precipitation used as reference dimension. For the reasons already given (see, e.g.,
Sect. 5.3), results for 2d-R^2D^2 driven by temperature (resp. precipitation) for the change in spatial structure of temperature (resp. precipitation) are by construction identical to those from
CDF-t. Nevertheless, for the spatial structure of temperature and precipitation jointly (triangle), Wd for 2d-R^2D^2 outputs are quite high. Indeed, when the 2d-R^2D^2 version uses either temperature
or precipitation rank sequence to drive the other physical variable at each grid cell, the method is likely to degrade the spatial structures of the other variable in a different way for calibration
and projection periods. Consequently, the Wasserstein distance captures a “change” in the spatial structure of the two variables between these two periods, but it is in fact due to this deterioration.
Concerning Spatial-R^2D^2, low Wd are observed for the change in the spatial structures of temperature and precipitation separately, illustrating the copula stationarity assumption used. However,
for the Wd computed for the whole multivariate distribution (triangle in Fig. 7a), Spatial-R^2D^2 presents a higher value, close to that of the IPSL simulations. Indeed, as already explained in
Sect. 5.5.1, within Spatial-R^2D^2, copula functions of temperature and precipitation are adjusted separately without correcting inter-variable rank correlations, which results in partially
preserving the changes in inter-variable rank structure of the model between calibration and projection period. With regard to Full-R^2D^2, the three Wd are all quite low, in agreement with the
copula stationarity assumption it uses. However, it should be noted that the Wd are not equal to 0, whereas, theoretically, no change in spatial structure should be introduced by Full-R^2D^2. In addition to
the reason already cited concerning the correction of dry-day frequencies, this is also due to the fact that, in the present study, bias corrections have been performed on a monthly basis, while the
evaluation is done at a seasonal scale.
For both dOTC and MBCn outputs, Wd are higher than those from the model. Although the changes in spatial correlations derived by these two methods are too strong, this nevertheless highlights their
ability to capture such a change from the model and to use it in their bias correction procedure. Moreover, as explained in Sect. 5.4, the dOTC and MBCn methods modify only slightly the rank structure of
the initial simulations. It can then be deduced that the changes in spatial correlations measured for the two methods are (partially) in agreement with those from the model. However, for MBCn, the
three Wasserstein distances increase with the number of dimensions considered in the bias correction, from the 2d- to the Full- versions. This can be linked to the deterioration of the quality of
results already observed for spatial features in very high-dimensional bias correction. Regarding MRec, setting aside its Full- version, similar observations can be made for the 2d- and
Spatial- outputs as well. In a general way, the Wd associated with the different configurations of dOTC, MBCn and MRec are always above the Wasserstein distances for R^2D^2, illustrating somehow the
assumptions made by these methods about the stationary or nonstationary copula functions.
For Brittany (Fig. 7b), the Wd values computed for the model are quite low, indicating little simulated change in spatial structures for this region. Consequently, the differences of Wd between
methods assuming stationarity and nonstationarity of copula functions are less pronounced, but the same conclusions as those drawn for France hold. However, for Full-MRec outputs, Wd values are in
relative agreement with those from the model, highlighting the ability of the method to preserve (partially) the simulated changes in spatial structure between the calibration and the projection
periods, for a smaller region.
6 Conclusion, discussion and future work
In this study, we have presented a global picture of the performances of four multivariate bias correction (MBC) methods designed to adjust various multivariate properties of climate simulations.
These MBC methods were carefully selected for their differences in terms of methodologies, statistical techniques used, assumptions and philosophical features. For each method, three different
dimensional configurations have been tested to correct climate simulations from the IPSL model: a 2d- version to adjust temperature and precipitation time series together but separately for each grid
cell, a Spatial- version aiming to correct the simulated fields of temperature and precipitation separately, and a Full- version designed to adjust the two physical variables jointly over the entire
domain. Depending on the versions, the objectives of adjustments for multivariate properties are not the same: whereas 2d- and Spatial- versions are designed to correct, respectively, inter-variable
and intersite dependence structures, it is expected that the Full- versions adjust both the inter-variable and intersite relationships together. In addition, the univariate CDF-t bias correction
method has been implemented and used as a benchmark to assess the benefits of considering multivariate aspects in the correction procedure. A wide range of metrics has been developed to compare bias
correction outputs with observations and model data and analyze the adjustments of univariate distributions, inter-variable correlations, intersite correlations and temporal structure.
Multidimensional change (i.e., nonstationarity) properties have also been assessed, providing a comprehensive framework to compare the performance of the methods. The IPSL simulations have been corrected
with respect to two distinct reference datasets, i.e., WFDEI and SAFRAN, for, respectively, France and Brittany to attempt to measure the potential influence of the reference spatial resolution on
MBC results.
6.2 Discussion and recommendations
General recommendations can be drawn to help practitioners in the choice of BC methods for their applications. For the sake of clarity, Table 2 provides a concise summary of the different
recommendations made below. While the univariate CDF-t method corrects the univariate distributions well, it merely reproduces the dependence properties of the model, i.e., the inter-variable, intersite and
temporal structures, and preserves their multidimensional change across time. Hence, if the multivariate properties of the raw climate simulations are not relevant, using 1d-BC methods is not appropriate
to get adequate dependence properties. Concerning MBC methods, in general, R^2D^2, dOTC, MBCn and MRec algorithms showed a great ability to adjust the statistical properties associated with the
corresponding objectives of the dimensional configurations. Indeed, in addition to correcting univariate distributions, the 2d-, Spatial- and Full- versions of each multivariate method adjust,
respectively, inter-variable, spatial and inter-variable/spatial correlations of climate simulations reasonably well. However, caution has to be taken before applying multivariate methods and
conducting analysis studies. It has been noted that, depending on the dimensional configuration, instability of some methods can possibly affect corrected outputs, and practitioners have to make sure
that no degradation of the desired statistical features is made by the multivariate BC method. In particular, for MBCn and MRec, increasing the number of variables to be corrected jointly in the
dimensional configuration is often accompanied by a potentially strong deterioration of spatial properties (see orange tildes in the row “Capacity to correct spatial prop.” in Table 2). However, for
MBCn, it must be recalled that the number of iterations for the algorithm was fixed to 200 for Full- versions. Although this choice is a good compromise between computation time and fitting the
multivariate distribution in the calibration period, this might be suboptimal for some regions. Indeed, early stopping of the procedure could be necessary to avoid overfitting in high dimension, as
discussed in Cannon (2018a). Therefore, more research is needed to improve the global performances of MBCn, such as early stopping, optimizing the sequence of random rotation matrices to speed up
convergence or, for spatial downscaling problems, adding a conservation step to provide more physical constraints to the bias correction (as proposed in Lange, 2019). Moreover, it has been shown that
the characteristics of the climate data to correct can influence the results of the MBCs. In particular, as noted in Sect. 5.3, a distinction of results between temperature and precipitation has been
identified for the MRec method (e.g., in Figs. 1, S1, 3 and S3). This might be caused by the way the MRec method performs the correction: only the Pearson correlation structure is adjusted, since it
is assumed to be sufficient to correct the full multivariate dependence structure. Although correcting only Pearson spatial correlations seems reasonable for temperature, which typically has a
multivariate Gaussian dependence structure, it appears not to be enough for precipitation, which presents more complex spatial interactions. In that sense, to adjust non-Gaussian climate
variables such as precipitation, MBCs correcting the full multivariate dependence structure (e.g., R^2D^2, dOTC or MBCn) should be preferred by practitioners.
Also, the ability of the MRec method to adjust Brittany in a very high-dimensional context strongly suggests that the size of the geographical area under study is an important feature for
multivariate bias correction. Indeed, a small region like Brittany is likely to present a homogeneous climate, or at least to be spatially second-order stationary, and consequently to exhibit strong statistical
dependencies between locations. Dimensions are then somehow redundant, and spatial correlations for each physical variable are strong, which potentially reduces the number of effective dimensions,
also called “spatial degrees of freedom” (e.g., in der Megreditchian, 1990; Bretherton et al., 1999). For MRec, this consequently reduces the errors in the computation of the inverse
covariance matrices and provides more adequate results. For larger regions presenting a high number of effective dimensions, such as France, MRec is nevertheless able to provide appropriate results if
enough data are provided. For illustration purposes, the MRec method has been additionally applied on a seasonal basis instead of on a monthly one, i.e., correcting 642 dimensions with at least $90\,\mathrm{d}\times 19\,\mathrm{years}=1710$ time steps. By increasing the number of time steps used in the procedure,
high-dimensional sample covariance matrices within MRec are estimated in a more “robust” way, permitting a more suitable correction of the simulations using Full-MRec. Results for some criteria are
presented in the Supplement (Figs. S8, S9, S10, S11 and S12) but are not commented on in the present study. Also, within MRec, more robust estimators of inverse covariance matrices could be used to
obtain more appropriate corrections in a high-dimensional context (e.g., as presented in Levina et al., 2008). More generally, for most MBCs, for a given number of statistical dimensions (e.g.,
number of grid cells), as going from a large (e.g., France) to a smaller (e.g., Brittany) area reduces the effective dimension, it facilitates the multivariate corrections and therefore improves the
results (e.g., compare Figs. 1, S1, 4, S4, 5 and S5). This raises the question of whether applying MBC to climate simulations over large geographical areas is justified, i.e., if it is worth striving
for the correction of correlation structures between distant sites presenting weak statistical relationships, and, by doing so, taking the risk of losing global effectiveness of the BC methods. It
also highlights the importance of choosing parsimoniously the variables to correct, in order to adjust dependence structures that are relevant without potential quality loss induced by additional
(and unneeded) variables.
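As an illustration of the notion of effective dimension mentioned above, the short Python sketch below computes one classical moment-based estimate of the number of spatial degrees of freedom, $N_{\mathrm{eff}}=(\sum_i \lambda_i)^2/\sum_i \lambda_i^2$, with λ_i the eigenvalues of the spatial covariance matrix (one of the estimators discussed by Bretherton et al., 1999). This snippet is only a hypothetical illustration and is not part of the evaluation protocol used in the present study.
```python
import numpy as np


def effective_spatial_dof(field):
    """Moment-based estimate of the effective number of spatial degrees of
    freedom, N_eff = (sum of eigenvalues)^2 / (sum of squared eigenvalues),
    computed from the spatial covariance matrix of a (time x grid cell) field."""
    cov = np.cov(field, rowvar=False)        # grid-cell covariance matrix
    eigvals = np.linalg.eigvalsh(cov)        # eigenvalues of the symmetric matrix
    return eigvals.sum() ** 2 / np.sum(eigvals ** 2)


# Toy usage: strongly correlated grid cells yield an effective dimension far
# below the nominal number of cells (here 50).
rng = np.random.default_rng(1)
common = rng.normal(size=(1000, 1))                        # shared regional signal
field = 0.9 * common + 0.1 * rng.normal(size=(1000, 50))   # 50 correlated cells
print(effective_spatial_dof(field))                        # close to 1, not 50
```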
Regarding the temporal structure, none of the presented multivariate BC methods are designed to adjust this specific statistical aspect (red crosses in Table 2). Moreover, as highlighted by Vrac (
2018), any multivariate BC method will necessarily modify the rank sequence of the simulated variables. Results from the present study allow adding nuances to this statement: modification of rank
chronologies of the simulations depends on both the multivariate BC methods and the dimensional configurations. In particular, for dOTC, MBCn and MRec methods, a similar behavior was observed: the
higher the number of dimensions to correct, the stronger the deterioration of rank chronology of the simulations. However, concerning R^2D^2, depending on the dimensional version, the rank chronology
of the model can be reproduced for the specific area around the location of the reference dimension, which may or may not be desired by practitioners depending on the performance of the simulations.
Finally, we shed light on the nonstationary properties of the multivariate BC methods. While dOTC, MBCn and MRec are designed to transfer some of the multidimensional properties evolution (i.e.,
change in time) from the model to the bias-corrected data, R^2D^2 assumes the inter-variable and intersite rank correlations – or copula functions – to be stable in time. In a general way, copula
nonstationarity for future periods can be reasonably expected, e.g., as documented for rainfall spatial distributions (Wasko et al., 2016), for the dependence between storm surge and rainfall (Wahl
et al., 2015), and for the dependence between seasonal summer temperature and precipitation (Zscheischler and Seneviratne, 2017). On the contrary, however, it can be argued that inter-variable and
spatial dependence structures can be assumed to be stable over time for specific regions, because, to some extent, they can be considered as imposed by physical regional constraints (Vrac, 2018). The
differences of Wasserstein distances between the France and the Brittany regions for the reference in Fig. 7a and b illustrate well that copula stationarity (or nonstationarity) cannot be taken for granted
and depends on the geographical domain. The evolution of the copula (i.e., of the rank dependence structure) is, therefore, still an open question and needs to be answered on a
case-by-case basis. In practice, the performances of the methods concerning the multidimensional changes in the different BC outputs are hard to assess precisely, as the potential instability (as in MBCn
and MRec) or the stochasticity (as in MBCn) of the methods could affect the quality of the results, making the identification of changes difficult. Moreover, the adjustment of univariate
distributions has a non-negligible effect on changes in inter-variable and spatial rank dependences for MBCs assuming copula nonstationarity; in fact, rather than reproducing simulated changes in
the correction procedure, these methods are more likely to provide changes in agreement with those provided by 1d-BC (e.g., as seen for Brittany in Fig. 6b). Then, in the case where the adjustment
of univariate distributions does not modify (too much) the simulated changes in inter-variable and spatial rank dependences, MBCs assuming nonstationary copula would be more likely to present changes
in line with those from the model. This result is further confirmed by the results obtained for summer and displayed in Fig. S6 for inter-variable rank dependence changes. The nonstationary property
also partly explains the possible differences of results obtained during evaluation (i.e., protocol 1; see Sect. 5) for each criterion. Indeed, as noted in Robin et al. (2019), if the multivariate
properties changes provided by the model simulations are incorrect, those of the corrections from methods assuming nonstationarity can be, retrospectively, in disagreement with the changes in the reference.
Therefore, before choosing any multivariate BC method, practitioners have to ask themselves some questions: what are the important statistical properties I want my corrections to provide? Can the
evolution of the copula (i.e., rank dependence) in the simulations between calibration and projection be considered as relevant? And should it be reproduced in the correction? If so, according to the
results obtained in the present study, dOTC and MRec are good candidates among the presented MBCs. Using these methods, the corrections will be likely to present change in rank dependence similar to
the simulations or at least of same sign. It could also be recommended to use these methods if practitioners do not have any idea if the rank dependence changes in the simulations could be considered
relevant or not, letting the model express its own dynamics in the absence of relevant judgements. However, if the change in the simulations, in spite of all efforts
exerted by climate modellers, is not considered relevant, R^2D^2 is a good candidate, as it is better to have stationary multidimensional rank properties in the correction than a
non-relevant or wrong change. Moreover, R^2D^2 is also a good candidate for practitioners who do not expect any rank dependence change. The BC outputs obtained from R^2D^2 will not have any change in
inter-variable or intersite rank dependence structures, because these are assumed to be imposed by physical constraints and hence stable in time. Concerning MBCn, the global instability of the method
in high-dimensional settings, added to the inherent variability due to its stochastic nature, significantly affects the quality of the correction. This therefore makes the appropriate preservation of the
simulated changes difficult in practice, although the method is specifically designed for that purpose.
6.3 Future work
This intercomparison has been designed such that new BC methods can be easily added. As a result, adding new methods relying on different assumptions, correcting different statistical aspects or
using other statistical techniques, is reasonably feasible. Moreover, as mentioned in the Introduction section, bias-adjusted simulations are particularly valuable for impact studies. Despite the
challenge of missing impact data, evaluating how the quality of multivariate bias-corrected data influences the results of complex impact models is an important perspective. Providing such an
analysis will be useful for the scientific community working on climate change impacts, e.g., in hydrology, agronomy or ecology. In an attempt to answer this question, an appropriate future step
could be to apply the presented multivariate BC methods in different dimensional configurations to various GCM simulations – and not only one as in this study – in order to provide an ensemble of
multivariate BC simulations. The obtained datasets would also be useful to carry out scientific studies on other aspects of climate change, such as climate change attribution studies aimed at
identifying which mechanisms are responsible for changes in the Earth's climate (e.g., Stott et al., 2016; Yiou et al., 2017; Ribes et al., 2020). Indeed, most of these studies use raw simulations,
and consequently do not take into account their statistical biases. Conducting attribution studies using both raw and bias-corrected simulations will help to better understand the influence of these
biases on the results, which is essential to provide valuable information to society concerning ongoing climate change.
In the present study, it has been highlighted that none of the presented multivariate BC methods were designed to correct or preserve the temporal properties of the simulations. Nevertheless, a few
studies have attempted to develop BC methods providing adjustments of some temporal properties of climate variables in addition to the correction of intersite or inter-variable properties (Mehrotra
and Sharma, 2015, 2016, 2019). However, considering adjustments for temporal properties will necessarily modify, even slightly, univariate distributions and intersite and/or inter-variable
properties. From a more philosophical perspective, striving for the development of MBCs correcting a wide range of statistical features also raises the question of what has been preserved from the
simulations in the final BC outputs. By improving the agreement of simulations with observations, this may have the effect of lowering (misleadingly) the uncertainty of the simulated statistical
attributes, often without sound physical justifications (Ehret et al., 2012), which puts into question the validity of such methods. Multivariate BC methods developed in the future should, therefore,
take into account these issues, in attempting to find a reasonable balance between, on the one hand, the correction of intersite and inter-variable dependences and, on the other hand, the correction
or modification of temporal properties, while being able to preserve meaningful simulated characteristics for future periods. To do so, developing new MBC methods that include some physical processes to
drive the correction procedure is a promising direction to obtain more realistic bias-corrected simulations. The newly developed MBCs could then be included in this intercomparison
study, to evaluate and compare their performances with the existing multivariate BC methods.
Appendix A: Details on the CDF-t method
BC methods are applied to correct simulated fields of S grid cells, each of them described by V physical variables. The total number of statistical dimensions to correct is hence equal to $D=V×S$,
with each of the dimensions composed of N time steps. Let X[A] be a matrix of dimension N×D and ${X}_{A}^{d}\left(t\right)$ the value of the physical variable corresponding to the dth dimension at
time t from the matrix X[A]. Datasets, i.e., matrices, to correct with BC methods are model outputs during the calibration (denoted ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}$) and the projection
period (denoted ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}$), according to the data from the reference observed during calibration (denoted ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}$). Corrected
outputs for the calibration and the projection period are denoted ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}$ and ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm
{P}}}$, respectively.
CDF-t is a version of the quantile–quantile method that takes into account, by defining a transfer function T, the potential evolution of univariate CDFs from the calibration to the projection period.
For this subsection, let's assume that ${F}_{{\mathrm{M}}_{\mathrm{C}}}^{d}$ and ${F}_{{\mathrm{R}}_{\mathrm{C}}}^{d}$ are respectively the univariate CDFs of the dth dimension ${\mathbit{X}}_{{\
mathrm{M}}_{\mathrm{C}}}^{d}$ and ${\mathbit{X}}_{{\mathrm{R}}_{\mathrm{C}}}^{d}$ located at the same grid cell for the model and the reference in the calibration period. To simplify the notation, we
will denote these CDFs ${F}_{{\mathrm{M}}_{\mathrm{C}}}$ and ${F}_{{\mathrm{R}}_{\mathrm{C}}}$, respectively. The transfer function T is defined such that it links the two CDFs ${F}_{{\mathrm{M}}_{\
mathrm{C}}}$ and ${F}_{{\mathrm{R}}_{\mathrm{C}}}$ as follows:
$\begin{array}{}\text{(A1)}& T\left({F}_{{\mathrm{M}}_{\mathrm{C}}}\left(x\right)\right)={F}_{{\mathrm{R}}_{\mathrm{C}}}\left(x\right).\end{array}$
A simpler formulation of T is then obtained by replacing x by ${F}_{{\mathrm{M}}_{\mathrm{C}}}^{-\mathrm{1}}\left(u\right)$, with u a probability in [0,1]:
$\begin{array}{}\text{(A2)}& T\left(u\right)={F}_{{\mathrm{R}}_{\mathrm{C}}}\left({F}_{{\mathrm{M}}_{\mathrm{C}}}^{-\mathrm{1}}\left(u\right)\right).\end{array}$
By assuming time-stationarity of the transformation T, it can be applied similarly in the projection period to link CDFs between the model and the reference:
$\begin{array}{}\text{(A3)}& T\left({F}_{{\mathrm{M}}_{\mathrm{P}}}\left(x\right)\right)={F}_{{\mathrm{R}}_{\mathrm{P}}}\left(x\right).\end{array}$
By combining Eqs. (A2) and (A3), we then can generate ${F}_{{\mathrm{R}}_{\mathrm{P}}}$, the estimated CDF of the climate variable in the reference during the projection period:
$\begin{array}{}\text{(A4)}& {F}_{{\mathrm{R}}_{\mathrm{P}}}\left(x\right)={F}_{{\mathrm{R}}_{\mathrm{C}}}\left({F}_{{\mathrm{M}}_{\mathrm{C}}}^{-\mathrm{1}}\left({F}_{{\mathrm{M}}_{\mathrm{P}}}\left(x\right)\right)\right).\end{array}$
Once ${F}_{{\mathrm{R}}_{\mathrm{P}}}$ has been estimated, a simple quantile–quantile method is performed between ${F}_{{\mathrm{R}}_{\mathrm{P}}}$ and ${F}_{{\mathrm{M}}_{\mathrm{P}}}$ to derive the
bias-corrected time series ${\stackrel{\mathrm{^}}{X}}_{{\mathrm{M}}_{\mathrm{P}}}^{d}$ for the projection period as follows:
$\begin{array}{}\text{(A5)}& {\stackrel{\mathrm{^}}{X}}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(t\right)={F}_{{\mathrm{R}}_{\mathrm{P}}}^{-\mathrm{1}}\left({F}_{{\mathrm{M}}_{\mathrm{P}}}\left({X}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(t\right)\right)\right).\end{array}$
While a traditional quantile-mapping approach performed to correct a dataset ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}$ of simulations over the projection period will use the formulation ${\stackrel{\mathrm{^}}{X}}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(t\right)={F}_{{\mathrm{R}}_{\mathrm{C}}}^{-\mathrm{1}}\left({F}_{{\mathrm{M}}_{\mathrm{C}}}\left({X}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(t\right)\right)\right)$ (i.e., based on two distributions characterizing the calibration period), the CDF-t method relies on Eq. (A5), where the two involved distributions characterize the projection period. By
proceeding this way, CDF-t takes into account the potential evolution of CDFs of the model between the calibration and projection periods to adjust the projection period. CDF-t is applied
independently for each of the D statistical dimensions and for both calibration and projection period to derive the final bias-corrected outputs ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\
mathrm{C}}}$ and ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$.
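As an illustration, the following Python snippet gives a minimal, purely empirical sketch of Eqs. (A4)–(A5) for a single dimension. It is not the reference implementation of CDF-t; the handling of ties, tails and discrete (e.g., precipitation) values is ignored, and the helper names are illustrative.
```python
import numpy as np


def _ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at the values in `x`."""
    return np.searchsorted(np.sort(sample), x, side="right") / sample.size


def _quantile(sample, p):
    """Empirical quantile function (inverse CDF) of `sample` at probabilities `p`."""
    return np.quantile(sample, np.clip(p, 0.0, 1.0))


def cdf_t_projection(x_ref_cal, x_mod_cal, x_mod_proj):
    """Correct one dimension over the projection period, combining Eqs. (A4)
    and (A5): x_hat = F_RP^{-1}(F_MP(x)), with
    F_RP^{-1}(u) = F_MP^{-1}(F_MC(F_RC^{-1}(u))) written with empirical CDFs."""
    u = _ecdf(x_mod_proj, x_mod_proj)                 # F_MP(x)
    u = _ecdf(x_mod_cal, _quantile(x_ref_cal, u))     # F_MC(F_RC^{-1}(u))
    return _quantile(x_mod_proj, u)                   # F_MP^{-1}(...)


# Toy usage: a model with a +2 bias and a +1 simulated warming.
rng = np.random.default_rng(2)
ref_cal = rng.normal(0.0, 1.0, 5000)
mod_cal = rng.normal(2.0, 1.0, 5000)
mod_proj = rng.normal(3.0, 1.0, 5000)
corrected = cdf_t_projection(ref_cal, mod_cal, mod_proj)
print(corrected.mean())   # close to 1.0: bias removed, simulated warming preserved
```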
Appendix B: Details on the R^2D^2 method
The R^2D^2 method, belonging to the marginal/dependence category, consists of several successive steps that are similar for the adjustment of climate simulations over the calibration and projection
periods. Hence, to avoid redundancy, only the correction procedure for the projection period is explained in this subsection. In this appendix, temporary corrected outputs for the projection period are denoted with tilde accents (${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$).
• First, a univariate BC method is performed for the projection period to obtain the N×D matrix output ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$. As a reminder, ${\stackrel
{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}=\left[{\left({\stackrel{\mathrm{̃}}{X}}_{{\mathrm{M}}_{\mathrm{P}}}^{\mathrm{1}}\left(\mathrm{1}\right),\mathrm{\dots },{\stackrel{\mathrm{̃}}
{X}}_{{\mathrm{M}}_{\mathrm{P}}}^{\mathrm{1}}\left(N\right)\right)}^{\prime },\mathrm{\dots },$ ${\left({\stackrel{\mathrm{̃}}{X}}_{{\mathrm{M}}_{\mathrm{P}}}^{D}\left(\mathrm{1}\right),\mathrm{\
dots },{\stackrel{\mathrm{̃}}{X}}_{{\mathrm{M}}_{\mathrm{P}}}^{D}\left(N\right)\right)}^{\prime }\right]$.
• For each dimension d, R^2D^2 computes the ranks of the time series within the univariate BC outputs ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$. For example, for the
dimension d, the N×1 vector ${\left(\text{rank}\left({\stackrel{\mathrm{̃}}{X}}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(\mathrm{1}\right)\right),\mathrm{\dots },\text{rank}\left({\stackrel{\mathrm{̃}}
{X}}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(N\right)\right)\right)}^{\prime }$, denoted ${\left({\stackrel{\mathrm{̃}}{r}}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(\mathrm{1}\right),\mathrm{\dots },{\
stackrel{\mathrm{̃}}{r}}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(N\right)\right)}^{\prime }$, is computed. It results in getting, for each time step t, a D-dimensional vector ${\stackrel{\mathrm{̃}}{\
mathbf{R}}}_{{\mathrm{M}}_{\mathrm{P}}}\left(t\right)=\left({\stackrel{\mathrm{̃}}{r}}_{{\mathrm{M}}_{\mathrm{P}}}^{\mathrm{1}}\left(t\right),\mathrm{\dots },{\stackrel{\mathrm{̃}}{r}}_{{\mathrm
{M}}_{\mathrm{P}}}^{D}\left(t\right)\right)$, which provides the multivariate rank structure of ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$ at t.
• For each dimension d, R^2D^2 computes the ranks of the time series within the reference dataset during calibration ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}$. For example, for the dimension d,
the N×1 vector ${\left(\text{rank}\left({X}_{{\mathrm{R}}_{\mathrm{C}}}^{d}\left(\mathrm{1}\right)\right),\mathrm{\dots },\text{rank}\left({X}_{{\mathrm{R}}_{\mathrm{C}}}^{d}\left(N\right)\right)
\right)}^{\prime }$, denoted ${\left({r}_{{\mathrm{R}}_{\mathrm{C}}}^{d}\left(\mathrm{1}\right),\mathrm{\dots },{r}_{{\mathrm{R}}_{\mathrm{C}}}^{d}\left(N\right)\right)}^{\prime }$, is computed.
It results in getting, for each time step t, a D-dimensional vector ${\mathbf{R}}_{{\mathrm{R}}_{\mathrm{C}}}\left(t\right)=\left({r}_{{\mathrm{R}}_{\mathrm{C}}}^{\mathrm{1}}\left(t\right),\
mathrm{\dots },{r}_{{\mathrm{R}}_{\mathrm{C}}}^{D}\left(t\right)\right)$, which provides the multivariate rank structure of ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}$ at t.
• A reference dimension d needs to be selected by the users in ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$. The corresponding univariate time series will be kept untouched in
the final R^2D^2 outputs, as the correction of the multivariate dependence structure is articulated around this “pivot” dimension. For each time step t:
□ the algorithm R^2D^2 finds t^∗ such that ${\stackrel{\mathrm{̃}}{r}}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(t\right)={r}_{{\mathrm{R}}_{\mathrm{C}}}^{d}\left({t}^{\ast }\right)$. From t^∗, R^2D^
2 deduces the multivariate rank structure of the reference during the calibration period at this specific time step: ${\mathbf{R}}_{{\mathrm{R}}_{\mathrm{C}}}\left({t}^{\ast }\right)=\left
({r}_{{\mathrm{R}}_{\mathrm{C}}}^{\mathrm{1}}\left({t}^{\ast }\right),\mathrm{\dots },{r}_{{\mathrm{R}}_{\mathrm{C}}}^{D}\left({t}^{\ast }\right)\right)$;
□ R^2D^2 forces the D-dimensional vector of ranks of its final outputs ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$ to be equal to ${\stackrel{\mathrm{^}}{\mathbf{R}}}_{{\
mathrm{M}}_{\mathrm{P}}}\left(t\right)=\left({r}_{{\mathrm{R}}_{\mathrm{C}}}^{\mathrm{1}}\left({t}^{\ast }\right),\mathrm{\dots },{\stackrel{\mathrm{̃}}{r}}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\
left(t\right),\mathrm{\dots },{r}_{{\mathrm{R}}_{\mathrm{C}}}^{D}\left({t}^{\ast }\right)\right)$.
To do so, the algorithm shuffles the values in each of the dimensions k≠d of ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$, such that its rank structure at time t matches ${\stackrel{\mathrm{^}}{\mathbf{R}}}_{{\mathrm{M}}_{\mathrm{P}}}\left(t\right)$. In a more explicit way, for all k≠d, R^2D^2 finds the time step t[k] such that ${r}_{{\mathrm{R}}_{\mathrm{C}}}^{k}\left({t}^{\ast }\right)={\stackrel{\mathrm{̃}}{r}}_{{\mathrm{M}}_{\mathrm{P}}}^{k}\left({t}_{k}\right)$. The value in ${\stackrel{\mathrm{̃}}{X}}_{{\mathrm{M}}_{\mathrm{P}}}^{k}$ associated with the rank ${\stackrel{\mathrm{̃}}{r}}_{{\mathrm{M}}_{\mathrm{P}}}^{k}\left({t}_{k}\right)$ is then derived and copied into the final outputs ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$ at time t.
• By repeating step 4 until each dimension has been used once as the reference for the shuffling, R^2D^2 is able to derive a collection of D MBC outputs with exactly the same multivariate dependence structure but differing temporal properties, describing the possible variability in the different rank structures.
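As an illustration, the following Python snippet sketches the core reordering of steps 2–4 for one pivot dimension, under the simplifying assumptions of no ties and equal sample sizes. It omits the refinements of the actual R^2D^2 algorithm (e.g., the handling of several candidate time steps t^∗), and all names are illustrative.
```python
import numpy as np
from scipy.stats import rankdata


def r2d2_shuffle(x_bc_proj, x_ref_cal, pivot=0):
    """Simplified sketch of the R2D2 reordering: the multivariate rank
    structure of the reference (calibration period) is imposed on the
    univariate-BC outputs, conditionally on the rank sequence of one
    'pivot' dimension, which is left untouched.
    x_bc_proj, x_ref_cal: (N, D) arrays with the same N and no ties."""
    n, d = x_bc_proj.shape
    r_bc = np.apply_along_axis(rankdata, 0, x_bc_proj).astype(int)
    r_ref = np.apply_along_axis(rankdata, 0, x_ref_cal).astype(int)
    # Reference time step t* carrying a given pivot rank (ranks are 1..N).
    ref_time_of_pivot_rank = np.empty(n + 1, dtype=int)
    ref_time_of_pivot_rank[r_ref[:, pivot]] = np.arange(n)
    # Value of a given rank in each BC dimension, via one sort per column.
    sorted_bc = np.sort(x_bc_proj, axis=0)
    out = x_bc_proj.copy()
    for t in range(n):
        t_star = ref_time_of_pivot_rank[r_bc[t, pivot]]
        for k in range(d):
            if k != pivot:
                out[t, k] = sorted_bc[r_ref[t_star, k] - 1, k]
    return out


# Toy usage: the rank correlation of the output reproduces that of the
# reference, while each marginal distribution stays unchanged.
rng = np.random.default_rng(3)
x_bc = rng.normal(size=(200, 3))
x_ref = rng.multivariate_normal([0, 0, 0], 0.5 * np.eye(3) + 0.5, size=200)
out = r2d2_shuffle(x_bc, x_ref, pivot=0)
print(np.corrcoef(np.apply_along_axis(rankdata, 0, out).T))
```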
Appendix C: Details on the dOTC method
The dOTC method, belonging to the all-in-one category, relies on optimal transport theory to adjust climate simulations. A slightly different mathematical notation needs to be used here to explain
dOTC. Let us define ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}\left(t\right)$ as the realization of ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}$ at each time step t across the D dimensions. The
collection of the variables $\left({\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}\left(\mathrm{1}\right),\mathrm{\dots },{\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}\left(N\right)\right)$ forms a D×N matrix
and describes ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}$ in a different way. Similarly, $\left({\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}\left(\mathrm{1}\right),\mathrm{\dots },{\mathbf{X}}_{{\mathrm
{M}}_{\mathrm{C}}}\left(N\right)\right)$ and $\left({\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}\left(\mathrm{1}\right),\mathrm{\dots },{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}\left(N\right)\right)$ are
considered for, respectively, ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}$ and ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}$. In the following, c[i] denotes a collection of multivariate cells that
partition regularly ℝ^D and fully cover $\left({\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}\left(\mathrm{1}\right),\mathrm{\dots },{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}\left(N\right)\right)$ and $\
left({\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}\left(\mathrm{1}\right),\mathrm{\dots },{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}\left(N\right)\right)$. To simplify notations, the center of a grid cell c
[i] is also denoted c[i]. Hereinafter, we first present how dOTC adjusts the calibration period of the climate simulations to derive ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}$.
Then, the algorithm is detailed for the adjustment of the projection period, yielding ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$.
The “OTC” procedure for the calibration period:
• First, the algorithm estimates ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}}$ and ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}}$
the empirical multivariate distributions of ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}$ and ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}$. To do so, dOTC computes a sum of Dirac masses. For example,
for ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}$, we have
${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}}\left(A\right)=\sum _{i=\mathrm{1}}^{I}{p}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}},i}}{\mathit{\delta }}_{{\mathbf{c}}_{i}}\left(A\right),$
where ${p}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}},i}}=\frac{\mathrm{1}}{N}\sum _{t=\mathrm{1}}^{N}\mathbf{1}\left({\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}\left(t\right)\in {\mathbf{c}}_{i}\
right)$, and A⊂ℝ^D.
• Then, the coefficients γ[ij] defining the estimator $\stackrel{\mathrm{̃}}{\mathit{\gamma }}$ of the optimal plan that moves the bin c[i] of ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\
mathrm{M}}_{\mathrm{C}}}}$ to the bin c[j] of ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}}$ are computed. For $A,B\subset {\mathbb{R}}^{D}$, $\stackrel{\mathrm
{̃}}{\mathit{\gamma }}$ is defined as follows:
$\stackrel{\mathrm{̃}}{\mathit{\gamma }}\left(A×B\right)=\sum _{i,j=\mathrm{1}}^{I,J}{\mathit{\gamma }}_{ij}{\mathit{\delta }}_{\left({\mathbf{c}}_{i},{\mathbf{c}}_{j}\right)}\left(A×B\right).$
The coefficients γ[ij] correspond to the joint probability of ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}$ being in c[i] and ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}$ being in c[j]. They have to respect the following constraints:
$\begin{array}{c}\sum _{j=\mathrm{1}}^{J}{\mathit{\gamma }}_{ij}={p}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}},i}},\\ \sum _{i=\mathrm{1}}^{I}{\mathit{\gamma }}_{ij}={p}_{{\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}},j}},\end{array}$
and they have to minimize the following cost function $\stackrel{\mathrm{̃}}{C}$:
$\stackrel{\mathrm{̃}}{C}\left(\stackrel{\mathrm{̃}}{\mathit{\gamma }}\right)=\sum _{i,j=\mathrm{1}}^{I,J}\parallel {\mathbf{c}}_{i}-{\mathbf{c}}_{j}{\parallel }^{\mathrm{2}}{\mathit{\gamma }}_{ij}.$
To find these coefficients, which form the so-called optimal transport plan, the algorithm solves this linear programming problem using the procedure developed by Flamary and Courty (2017).
• Then, for each time step t are the following steps:
□ The algorithm finds the cell c[i] containing ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}\left(t\right)$.
□ Using the plan γ[ij], it constructs the conditional probability vector ${\stackrel{\mathrm{̃}}{\mathit{\gamma }}}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}\left(t\right)}=\left({\mathit{\gamma
}}_{i,\mathrm{1}},\mathrm{\dots },{\mathit{\gamma }}_{i,J}\right)/{p}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}},i}$.
□ According to the conditional probability vector ${\stackrel{\mathrm{̃}}{\mathit{\gamma }}}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}\left(t\right)}$, the algorithm draws ${j}^{\ast }\in \left\{\mathrm{1},\mathrm{\dots },J\right\}$.
□ The correction ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}\left(t\right)$ is then derived with a uniform draw in ${\mathbf{c}}_{{j}^{\ast }}$.
• After iterating over each t, the final outputs for the calibration period ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}$ are obtained.
The “dOTC” procedure for the projection period:
• As explained before, dOTC estimates ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}}$, ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}}
$ and ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}}$ the empirical multivariate distributions of ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}$, ${\mathbf{X}}_{{\
mathrm{M}}_{\mathrm{C}}}$ and ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}$.
• Then, the coefficients γ[ij] defining the estimator $\stackrel{\mathrm{̃}}{\mathit{\gamma }}$ of the optimal plan that moves the bin c[i] of ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\
mathrm{M}}_{\mathrm{C}}}}$ to the bin c[j] of ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}}$ are computed.
• Similarly, the coefficients φ[ik] defining the estimator $\stackrel{\mathrm{̃}}{\mathit{\phi }}$ of the optimal plan that moves the bin c[i] of ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_
{{\mathrm{M}}_{\mathrm{C}}}}$ to the bin c[k] of ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}}$ are computed.
• By default, the diagonal matrix of the standard deviations D is computed: $\mathbf{D}=\text{diag}\left({\mathit{\sigma }}_{{X}_{{\mathrm{M}}_{\mathrm{C}}}}{\mathit{\sigma }}_{{X}_{{\mathrm{R}}_{\
mathrm{C}}}}^{-\mathrm{1}}\right)$. Others alternatives for the computation of D are possible and detailed in Robin et al. (2019).
• Then, for each time step t are the following steps:
□ The algorithm finds the cell c[j] containing ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}\left(t\right)$.
□ Using the plan γ[ij], it finds the cell c[i] of ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}}$ associated with c[j].
□ Using the plan φ[ik], it finds the cell c[k] of ${\stackrel{\mathrm{̃}}{\mathbb{P}}}_{{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}}$ associated with c[i].
□ It computes the evolution vector ${\mathbf{v}}_{ik}:={\mathbf{c}}_{k}-{\mathbf{c}}_{i}$, which is rescaled by D to adjust the correction.
□ A preliminary (and temporary) correction of the model during the projection period, ${\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}\left(t\right)$, is then obtained as ${\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}\left(t\right)={\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}\left(t\right)+\mathbf{D}{\mathbf{v}}_{ik}$.
• Then, the algorithm estimates ${\stackrel{\mathrm{ˇ}}{\mathbb{P}}}_{{\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}}$, the empirical multivariate distribution of ${\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$.
• Finally, the OTC procedure (see above for the calibration period) is applied between $\left({\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}\left(\mathrm{1}\right),\mathrm{\dots },{\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}\left(N\right)\right)$ and $\left({\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}\left(\mathrm{1}\right),\mathrm{\dots },{\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}\left(N\right)\right)$ to produce the final outputs $\left({\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}\left(\mathrm{1}\right),\mathrm{\dots },{\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}\left(N\right)\right)$.
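As an illustration, the following Python snippet sketches the OTC step of the calibration period in a simplified point-cloud form: the partition into cells c[i] is skipped and each data point plays the role of its own cell. It relies on the POT library (Flamary and Courty, 2017) and is only indicative of the logic described above; all names are illustrative.
```python
import numpy as np
import ot  # Python Optimal Transport (POT; Flamary and Courty, 2017)


def otc_point_cloud(x_mod_cal, x_ref_cal, rng=None):
    """Simplified sketch of the OTC step on point clouds. Each simulated point
    is moved to a reference point drawn from the row of the optimal transport
    plan conditional on that simulated point."""
    rng = np.random.default_rng() if rng is None else rng
    n_mod, n_ref = x_mod_cal.shape[0], x_ref_cal.shape[0]
    a = np.full(n_mod, 1.0 / n_mod)          # empirical weights (model)
    b = np.full(n_ref, 1.0 / n_ref)          # empirical weights (reference)
    cost = ot.dist(x_mod_cal, x_ref_cal, metric="sqeuclidean")
    plan = ot.emd(a, b, cost)                # optimal transport plan (gamma)
    corrected = np.empty_like(x_mod_cal)
    for i in range(n_mod):
        cond = plan[i] / plan[i].sum()       # conditional probabilities gamma_i. / p_i
        j_star = rng.choice(n_ref, p=cond)   # stochastic assignment
        corrected[i] = x_ref_cal[j_star]
    return corrected


# Toy usage: a biased bivariate Gaussian mapped onto the reference cloud.
rng = np.random.default_rng(4)
ref = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=300)
mod = rng.multivariate_normal([2, -1], [[1, 0.1], [0.1, 1]], size=300)
print(otc_point_cloud(mod, ref, rng).mean(axis=0))  # close to the reference mean
```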
Appendix D: Details on the MBCn method
The MBCn method can be summarized in three steps in the way it corrects climate simulations. As a reminder, MBCn belongs to the marginal/dependence category, i.e., correcting separately marginal
distributions and full dependence structure of climate simulations. In this appendix, temporary corrected outputs of a matrix X[A] are denoted with tilde accents (${\stackrel{\mathrm{̃}}{\mathbf{X}}}_
{A}$) or inverted hats (${\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{A}$).
• Step 1: first, marginal distributions are corrected with a univariate BC method. To do so, MBCn uses the Quantile Delta Mapping (QDM, from Cannon et al., 2015) algorithm, defined for each dimension d (here for the projection period) as follows:
$\begin{array}{}\text{(D1)}& {\stackrel{\mathrm{̃}}{X}}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(t\right)={F}_{{\mathrm{R}}_{\mathrm{C}}}^{-\mathrm{1}}\left({F}_{{\mathrm{M}}_{\mathrm{P}}}\left({X}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(t\right)\right)\right)+{X}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(t\right)-{F}_{{\mathrm{M}}_{\mathrm{C}}}^{-\mathrm{1}}\left({F}_{{\mathrm{M}}_{\mathrm{P}}}\left({X}_{{\mathrm{M}}_{\mathrm{P}}}^{d}\left(t\right)\right)\right).\end{array}$
This transfer function preserves absolute changes in quantiles and has to be applied for interval variables such as temperature. For ratio variables like precipitation, the addition/subtraction operators in the transfer function have to be replaced by multiplication/division operators to define a function that preserves relative changes in quantiles. For both calibration and projection
period, the D physical variables are independently adjusted by applying the corresponding transfer function. The resulting matrices ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}
$ and ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$ with adjusted marginal distributions are stored by the algorithm in, respectively, ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\
mathrm{M}}_{\mathrm{C}}}^{\mathrm{init}}$ and ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}^{\mathrm{init}}$ before the second step, as it reuses them in the third one.
• Step 2: within the MBCn algorithm, the multivariate dependence structure of the simulations is adjusted through an iterative procedure. At each iteration j, a D×D random orthogonal rotation matrix R^[j] (Mezzadri, 2007) is applied to the datasets ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}$, ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}$ and ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$ obtained from Step 1:
${\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{R}}_{\mathrm{C}}}^{\left[j\right]}={\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}{\mathbf{R}}^{\left[j\right]},\phantom{\rule{1em}{0ex}}{\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}^{\left[j\right]}={\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}^{\left[j\right]}{\mathbf{R}}^{\left[j\right]},\phantom{\rule{1em}{0ex}}{\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}^{\left[j\right]}={\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}^{\left[j\right]}{\mathbf{R}}^{\left[j\right]}.$
It permits one to obtain linear combinations of the original variables. The QDM transfer function defined in Eq. (D1) for interval variables, i.e., with addition/subtraction operators, is then
applied on each of the rotated marginal distributions of ${\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}^{\left[j\right]}$ and ${\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_
{\mathrm{P}}}^{\left[j\right]}$, considering the corresponding rotated marginal distributions in ${\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{R}}_{\mathrm{C}}}^{\left[j\right]}$ as the
reference. Once marginal distributions have been adjusted in ${\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}^{\left[j\right]}$ and ${\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm
{M}}_{\mathrm{P}}}^{\left[j\right]}$, the matrices are rotated back to the space of the physical variables:
${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}^{\left[j+\mathrm{1}\right]}={\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}^{\left[j\right]}{\left({\mathbf{R}}^{\left[j\right]}\right)}^{-\mathrm{1}},\phantom{\rule{1em}{0ex}}{\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}^{\left[j+\mathrm{1}\right]}={\stackrel{\mathrm{ˇ}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}^{\left[j\right]}{\left({\mathbf{R}}^{\left[j\right]}\right)}^{-\mathrm{1}},$
with ${\left({\mathbf{R}}^{\left[j\right]}\right)}^{-\mathrm{1}}$ equal to the transpose of ${\mathbf{R}}^{\left[j\right]}$ since the rotation matrix is orthogonal.
These successive steps are applied iteratively until the multivariate distribution of the corrected simulations ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}^{\left[j+\mathrm{1}
\right]}$ matches the one of the reference ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}$.
• Step 3: once the full dependence structure of the simulated variables has converged to that of the reference after, say, the j^∗th iteration, MBCn replaces the quantiles of each of the variables in $
{\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}^{\left[{j}^{\ast }+\mathrm{1}\right]}$ and ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}^{\left[{j}^{\ast }+\
mathrm{1}\right]}$ obtained at the end of Step 2 with those from ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}^{\mathrm{init}}$ and ${\stackrel{\mathrm{̃}}{\mathbf{X}}}_{{\mathrm
{M}}_{\mathrm{P}}}^{\mathrm{init}}$ obtained during Step 1. This additional step prevents the possible deterioration of the model trend during the correction of the multivariate dependence
structure in Step 2. Simulations with corrected marginal distributions and full dependence structure, ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}$ and ${\stackrel{\
mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$ are then obtained.
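As an illustration, the following Python snippet sketches the Step 2 iteration in a simplified form: plain empirical quantile mapping is used instead of QDM, a fixed number of iterations replaces a convergence-based stopping rule, and Steps 1 and 3 are omitted. It is only indicative of the rotation/adjustment/back-rotation logic and is not the MBC R package implementation; all names are illustrative.
```python
import numpy as np
from scipy.stats import ortho_group


def quantile_map(x, target):
    """Map the values of x onto the empirical quantiles of target (1-d arrays)."""
    ranks = x.argsort().argsort()                        # 0..n-1 rank of each value
    idx = np.round(ranks * (target.size - 1) / (x.size - 1)).astype(int)
    return np.sort(target)[idx]


def mbcn_dependence_step(x_mod, x_ref, n_iter=30, seed=0):
    """Sketch of the iterative dependence correction at the core of MBCn:
    random orthogonal rotation, 1-d quantile mapping of every rotated margin
    onto the rotated reference, then rotation back to the original space."""
    x = x_mod.copy()
    d = x.shape[1]
    for j in range(n_iter):
        rot = ortho_group.rvs(d, random_state=seed + j)  # random rotation R^[j]
        xr, refr = x @ rot, x_ref @ rot                  # rotate both datasets
        for k in range(d):                               # adjust rotated margins
            xr[:, k] = quantile_map(xr[:, k], refr[:, k])
        x = xr @ rot.T                                   # rotate back (R orthogonal)
    return x


# Toy usage: the corrected sample converges towards the reference dependence.
rng = np.random.default_rng(5)
ref = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=1000)
mod = rng.multivariate_normal([1, 1], [[1, -0.3], [-0.3, 1]], size=1000)
out = mbcn_dependence_step(mod, ref)
print(np.corrcoef(out.T)[0, 1])   # approaches the reference correlation (about 0.8)
```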
Appendix E: Details on the MRec method
The MRec method, belonging to the all-in-one category, consists of the following steps.
• First, each of the D dimensions in ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}$ is transformed independently into the Gaussian domain. However, the transformation differs between interval variables,
i.e., temperature, and ratio variables, i.e., precipitation, and is performed as follows:
□ For a dimension d being an interval variable, a distribution ${F}_{{\mathrm{R}}_{\mathrm{C}}}^{d}$ is fitted:
Then, the corresponding vector W^d is computed as follows:
${\mathbit{W}}^{d}\left(t\right)={\mathrm{\Phi }}^{-\mathrm{1}}\left({F}_{{\mathrm{R}}_{\mathrm{C}}}^{d}\left({X}_{{\mathrm{R}}_{\mathrm{C}}}^{d}\left(t\right)\right)\right),$
with Φ the distribution function of the standard normal distribution 𝒩(0,1).
□ For a dimension k being a ratio variable, a distribution ${F}_{{\mathrm{R}}_{\mathrm{C}}}^{k}$ is fitted:
Additionally, the frequency P[k0] of null events in ${X}_{{\mathrm{R}}_{\mathrm{C}}}^{k}$ is computed:
${P}_{k\mathrm{0}}=\frac{\mathrm{1}}{N}\sum _{t=\mathrm{1}}^{N}\mathbf{1}\left({X}_{{\mathrm{R}}_{\mathrm{C}}}^{k}\left(t\right)=\mathrm{0}\right).$
Then, the corresponding vector W^k is computed as follows:
Doing this step for each dimension permits one to derive the matrix W of dimension N×D, composed of the Gaussian transformed vectors W^1, …, W^D.
Following the notation in Bárdossy and Pegram (2012), the same procedure is repeated for ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{C}}}$ and ${\mathbf{X}}_{{\mathrm{M}}_{\mathrm{P}}}$ to derive,
respectively, the Gaussian transformed data Y and Y^′.
• For both Gaussian-transformed datasets W and Y, the D×D Pearson cross-correlation matrices C[W] and C[Y] are computed.
• A singular value decomposition (SVD) is applied on C[W] such that
${\mathbf{C}}_{W}={\mathbf{A}}_{W}{\mathbf{D}}_{W}{\mathbf{B}}_{W}^{\mathrm{T}},$
with A[W] and B[W] having the same dimensions as C[W], and D[W] a diagonal matrix of singular values. From this decomposition, the square root matrix of C[W], denoted S[W], can be obtained as
${\mathbf{S}}_{W}={\mathbf{A}}_{W}{\mathbf{D}}_{W}^{\mathrm{1}/\mathrm{2}}{\mathbf{B}}_{W}^{\mathrm{T}}.$
• Similarly, a singular value decomposition (SVD) is applied on C[Y] such that
${\mathbf{C}}_{Y}={\mathbf{A}}_{Y}{\mathbf{D}}_{Y}{\mathbf{B}}_{Y}^{\mathrm{T}}.$
From this decomposition, its inverse square root matrix T[Y] can be obtained as follows:
${\mathbf{T}}_{Y}={\mathbf{A}}_{Y}{\mathbf{D}}_{Y}^{-\mathrm{1}/\mathrm{2}}{\mathbf{B}}_{Y}^{\mathrm{T}}.$
• Y is decorrelated to Q: Q=YT[Y].
• Q is then recorrelated to V: V=QS[W]. V is hence the recorrelated transformed model data for the calibration period presenting the same correlation structure as W.
• For the projection period, V^′ is computed directly without decorrelation step: ${\mathbf{V}}^{\prime }={\mathbf{Y}}^{\prime }{\mathbf{T}}_{Y}{\mathbf{S}}_{W}$.
• V and V^′ are then transformed back to physical variables using a univariate quantile–quantile method for each dimension d, with ${\mathbf{X}}_{{\mathrm{R}}_{\mathrm{C}}}^{d}$ being the target
for the correction. The desired adjusted matrices ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{C}}}$ and ${\stackrel{\mathrm{^}}{\mathbf{X}}}_{{\mathrm{M}}_{\mathrm{P}}}$ are then
finally obtained.
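As an illustration, the following Python snippet sketches only the decorrelation/recorrelation step on data assumed to be already Gaussian-transformed; the Gauss transform, the treatment of null precipitation events and the final back-transformation to physical variables are omitted, and all names are illustrative.
```python
import numpy as np


def matrix_sqrt(c, inverse=False):
    """(Inverse) square root of a symmetric positive definite matrix via SVD."""
    u, s, vt = np.linalg.svd(c)
    power = -0.5 if inverse else 0.5
    return u @ np.diag(s ** power) @ vt


def recorrelate(y_mod, c_ref):
    """Sketch of the MRec recorrelation step on Gaussian-transformed data:
    decorrelate the model data with the inverse square root of its own
    correlation matrix (T_Y), then impose the reference structure (S_W)."""
    c_mod = np.corrcoef(y_mod, rowvar=False)      # D x D model correlation C_Y
    t_y = matrix_sqrt(c_mod, inverse=True)        # T_Y
    s_w = matrix_sqrt(c_ref)                      # S_W
    return y_mod @ t_y @ s_w                      # V = Q S_W with Q = Y T_Y


# Toy usage on already-Gaussian data (the Gauss transform step is skipped).
rng = np.random.default_rng(6)
c_ref = np.array([[1.0, 0.7], [0.7, 1.0]])
y_mod = rng.multivariate_normal([0, 0], [[1.0, -0.2], [-0.2, 1.0]], size=2000)
v = recorrelate(y_mod, c_ref)
print(np.corrcoef(v, rowvar=False))               # close to c_ref
```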
MV had the initial idea of the study. MV and BF designed the experiments and protocols. BF made all computations and figures. BF and MV made the analyses and interpretations. BF wrote the first
complete draft of the article, iteratively revised by MV. Then AJC, YR and DA all revised each section of the article.
The authors declare that they have no conflict of interest.
Bastien François and Mathieu Vrac acknowledge financial support from the EUPHEME project. Mathieu Vrac also acknowledges support from the CoCliServ project and French “Convention de Service
Climatique”. Both EUPHEME and CoCliServ are part of ERA4CS, an ERA-NET initiated by JPI Climate and cofunded by the European Union. This work was supported by the metaprogram Adaptation of
Agriculture and Forest to Climate Change (AAFCC) of the French National Research Institute for Agriculture, Food & Environment (INRAE).
This research has been supported by the ERA4CS (grant no. 690462).
This paper was edited by Ben Kravitz and reviewed by Jakob Zscheischler and one anonymous referee.
Bárdossy, A. and Pegram, G.: Multiscale spatial recorrelation of RCM precipitation to produce unbiased climate change scenarios over large areas and small, Water Resour. Res., 48, W09502, https://
doi.org/10.1029/2011WR011524, 2012.
Beltrami, E.: Sulle funzioni bilineari, Giornale di Matematiche ad Uso degli Studenti Delle Universita, 11, 98–106, 1873.
Berg, P., Feldmann, H., and Panitz, H.-J.: Bias correction of high resolution regional climate model data, J. Hydrol., 448–449, 80–92, https://doi.org/10.1016/j.jhydrol.2012.04.026, 2012.
Bretherton, C. S., Widmann, M., Dymnikov, V. P., Wallace, J. M., and Bladé, I.: The Effective Number of Spatial Degrees of Freedom of a Time-Varying Field, J. Climate, 12, 1990–2009, https://doi.org/
10.1175/1520-0442(1999)012<1990:TENOSD>2.0.CO;2, 1999.
Cannon, A. J.: Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables, Clim. Dynam., 50, 31–49,
https://doi.org/10.1007/s00382-017-3580-6, 2018a.
Cannon, A. J.: Multivariate Bias Correction of Climate Model Outputs, available at: https://CRAN.R-project.org/package=MBC (last access: 20 May 2019), 2018b.
Cannon, A. J., Sobie, S. R., and Murdock, T. Q.: Bias correction of simulated precipitation by quantile mapping: how well do methods preserve relative changes in quantiles and extremes?, J.
Climate, 28, 6938–6959, https://doi.org/10.1175/JCLI-D-14-00754.1, 2015.
Casanueva, A., Bedia, J., Herrera García, S., Fernández, J., and Gutiérrez, J.: Direct and component-wise bias correction of multi-variate climate indices: the percentile adjustment function
diagnostic tool, Climatic Change, 147, 411–425, https://doi.org/10.1007/s10584-018-2167-5, 2018.
Christensen, J. H., Boberg, F., Christensen, O. B., and Lucas-Picher, P.: On the need for bias correction of regional climate change projections of temperature and precipitation, Geophys. Res. Lett.,
35, L20709, https://doi.org/10.1029/2008GL035694, 2008.
Clark, M., Gangopadhyay, S., Hay, L., Rajagopalan, B., and Wilby, R.: The Schaake Shuffle: A Method for Reconstructing Space–Time Variability in Forecasted Precipitation and Temperature Fields, J.
Hydrometeorol., 5, 243–262, 2004.
Defrance, D., Ramstein, G., Charbit, S., Vrac, M., Famien, A. M., Sultan, B., Swingedouw, D., Dumas, C., Gemenne, F., Alvarez-Solas, J., and Vanderlinden, J.-P.: Consequences of rapid ice sheet
melting on the Sahelian population vulnerability, P. Natl. Acad. Sci. USA, 114, 6533–6538, https://doi.org/10.1073/pnas.1619358114, 2017.
Dekens, L., Parey, S., Grandjacques, M., and Dacunha-Castelle, D.: Multivariate distribution correction of climate model outputs: A generalization of quantile mapping approaches: Multivariate
distribution correction of climate model outputs, Environmetrics, 28, e2454, https://doi.org/10.1002/env.2454, 2017.
Déqué, M.: Frequency of precipitation and temperature extremes over France in an anthropogenic scenario: Model results and statistical correction according to observed values, Global Planet. Change,
57, 16–26, https://doi.org/10.1016/j.gloplacha.2006.11.030, 2007.
der Megreditchian, G.: Meteorological networks optimization from a statistical point of view, Comput. Stat. Data An., 9, 57–75, https://doi.org/10.1016/0167-9473(90)90071-O, 1990.
Dufresne, J.-L., Foujols, M.-A., Denvil, S., Caubel, A., Marti, O., Aumont, O., Balkanski, Y., Bekki, S., Bellenger, H., Benshila, R., Bony, S., Bopp, L., Braconnot, P., Brockmann, P., Cadule, P.,
Cheruy, F., Codron, F., Cozic, A., Cugnet, D., de Noblet, N., Duvel, J.-P., Ethé, C., Fairhead, L., Fichefet, T., Flavoni, S., Friedlingstein, P., Grandpeix, J.-Y., Guez, L., Guilyardi, E.,
Hauglustaine, D., Hourdin, F., Idelkadi, A., Ghattas, J., Joussaume, S., Kageyama, M., Krinner, G., Labetoulle, S., Lahellec, A., Lefebvre, M.-P., Lefevre, F., Levy, C., Li, Z. X., Lloyd, J., Lott,
F., Madec, G., Mancip, M., Marchand, M., Masson, S., Meurdesoif, Y., Mignot, J., Musat, I., Parouty, S., Polcher, J., Rio, C., Schulz, M., Swingedouw, D., Szopa, S., Talandier, C., Terray, P., Viovy,
N., and Vuichard, N.: Climate change projections using the IPSL-CM5 Earth System Model: from CMIP3 to CMIP5, Clim. Dynam., 40, 2123–2165, https://doi.org/10.1007/s00382-012-1636-1, 2013.
Ehret, U., Zehe, E., Wulfmeyer, V., Warrach-Sagi, K., and Liebert, J.: HESS Opinions “Should we apply bias correction to global and regional climate model data?”, Hydrol. Earth Syst. Sci., 16,
3391–3404, https://doi.org/10.5194/hess-16-3391-2012, 2012.
Famien, A. M., Janicot, S., Ochou, A. D., Vrac, M., Defrance, D., Sultan, B., and Noël, T.: A bias-corrected CMIP5 dataset for Africa using the CDF-t method – a contribution to agricultural impact
studies, Earth Syst. Dynam., 9, 313–338, https://doi.org/10.5194/esd-9-313-2018, 2018.
Flamary, R. and Courty, N.: POT Python Optimal Transport library, available at: https://pythonot.github.io/ (last access: 8 June 2019), 2017.
Gudmundsson, L., Bremnes, J. B., Haugen, J. E., and Engen-Skaugen, T.: Technical Note: Downscaling RCM precipitation to the station scale using statistical transformations – a comparison of methods,
Hydrol. Earth Syst. Sci., 16, 3383–3390, https://doi.org/10.5194/hess-16-3383-2012, 2012.
Guo, L.-Y., Gao, Q., Jiang, Z.-H., and Li, L.: Bias correction and projection of surface air temperature in LMDZ multiple simulation over central and eastern China, Adv. Clim. Change Res., 9, 81–92,
https://doi.org/10.1016/j.accre.2018.02.003, 2018.
Guo, Q., Chen, J., Zhang, X., Shen, M., Chen, H., and Guo, S.: A new two-stage multivariate quantile mapping method for bias correcting climate model outputs, Clim. Dynam., 53, 3603–3623, https://
doi.org/10.1007/s00382-019-04729-w, 2019.
Haddad, Z. and Rosenfeld, D.: Optimality of empirical Z-R relations, Q. J. Roy. Meteor. Soc., 123, 1283–1293, https://doi.org/10.1002/qj.49712354107, 1997.
Jordan, C.: Mémoire sur les formes bilinéaires, J. Math. Pures Appl., 19, 35–54, 1874a.
Jordan, C.: Sur la réduction des formes bilinéaires, C. R. Acad. Sci., Paris, France, 614–617, 1874b.
Kallache, M., Vrac, M., Naveau, P., and Michelangeli, P.-A.: Non-stationary probabilistic downscaling of extreme precipitation, J. Geophys. Res.-Atmos., 116, D05113, https://doi.org/10.1029/
2010JD014892, 2011.a
Lange, S.: Trend-preserving bias adjustment and statistical downscaling with ISIMIP3BASD (v1.0), Geosci. Model Dev., 12, 3055–3070, https://doi.org/10.5194/gmd-12-3055-2019, 2019.a
Levina, E., Rothman, A., and Zhu, J.: Sparse estimation of large covariance matrices via a nested Lasso penalty, Ann. Appl. Stat., 2, 245–263, https://doi.org/10.1214/07-aoas139, 2008.a
Maraun, D.: Bias Correction, Quantile Mapping, and Downscaling: Revisiting the Inflation Issue, J. Climate, 26, 2137–2143, https://doi.org/10.1175/JCLI-D-12-00821.1, 2013.a
Marti, O., Braconnot, P., Dufresne, J.-L., Bellier, J., Benshila, R., Bony, S., Brockmann, P., Cadule, P., Caubel, A., Codron, F., de Noblet, N., Denvil, S., Fairhead, L., Fichefet, T., Foujols,
M.-A., Friedlingstein, P., Goosse, H., Grandpeix, J., Guilyardi, E., and Talandier, C.: Key features of the IPSL ocean atmosphere model and its sensitivity to atmospheric resolution, Clim. Dynam.,
34, 1–26, https://doi.org/10.1007/S00382-009-0640-6, 2010.a
Mehrotra, R. and Sharma, A.: Correcting for systematic biases in multiple raw GCM variables across a range of timescales, J. Hydrol., 520, 214–223, https://doi.org/10.1016/j.jhydrol.2014.11.037,
Mehrotra, R. and Sharma, A.: A Multivariate Quantile-Matching Bias Correction Approach with Auto- and Cross-Dependence across Multiple Time Scales: Implications for Downscaling, J. Climate, 29,
3519–3539, https://doi.org/10.1175/JCLI-D-15-0356.1, 2016.a, b
Mehrotra, R. and Sharma, A.: A Resampling Approach for Correcting Systematic Spatiotemporal Biases for Multiple Variables in a Changing Climate, Water Resour. Res., 55, 754–770, https://doi.org/
10.1029/2018WR023270, 2019.a
Meyer, J., Kohn, I., Stahl, K., Hakala, K., Seibert, J., and Cannon, A. J.: Effects of univariate and multivariate bias correction on hydrological impact projections in alpine catchments, Hydrol.
Earth Syst. Sci., 23, 1339–1354, https://doi.org/10.5194/hess-23-1339-2019, 2019.a
Mezzadri, F.: How to generate random matrices from the classical compact groups, Not. Am. Math. Soc., 54, 592–604, 2007.a
Michelangeli, P.-A., Vrac, M., and Loukos, H.: Probabilistic downscaling approaches: Application to wind cumulative distribution functions, Geophys. Res. Lett., 36, L11708, https://doi.org/10.1029/
2009GL038401, 2009.a
Möller, A., Lenkoski, A., and Thorarinsdottir, T. L.: Multivariate probabilistic forecasting using ensemble Bayesian model averaging and copulas, Q. J. Roy. Meteor. Soc., 139, 982–991, https://
doi.org/10.1002/qj.2009, 2013.a
Nahar, J., Johnson, F., and Sharma, A.: Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach, Water Resour. Res., 54, 827–841, https://doi.org/
10.1002/2017WR021293, 2018.a
Panofsky, H. and Brier, G.: Some applications of statistics to meteorology, Earth and Mineral Sciences Continuing Education, College of Earth and Mineral Sciences, The Pennsylvania State University,
University Park, Pennsylvania, USA, 103 pp., 1958.a
Piani, C. and Haerter, J.: Two dimensional bias correction of temperature and precipitation copulas in climate models, Geophys. Res. Lett., 39, L20401, https://doi.org/10.1029/2012GL053839, 2012.a
Räty, O., Räisänen, J., Bosshard, T., and Donnelly, C.: Intercomparison of Univariate and Joint Bias Correction Methods in Changing Climate From a Hydrological Perspective, Climate, 6, 33, https://
doi.org/10.3390/cli6020033, 2018.a
Ribes, A., Thao, S., and Cattiaux, J.: Describing the relationship between a weather event and climate change: a new statistical approach, J. Climate, https://doi.org/10.1175/JCLI-D-19-0217.1, online
first, 2020.a
Robin, Y.: SBCK (Statistical Bias Correction Kit), GitHub, available at: https://github.com/yrobink/SBCK, last access: 20 May 2019.a
Robin, Y., Vrac, M., Naveau, P., and Yiou, P.: Multivariate stochastic bias corrections with optimal transport, Hydrol. Earth Syst. Sci., 23, 773–786, https://doi.org/10.5194/hess-23-773-2019, 2019.
a, b, c, d, e
Santambrogio, F.: Optimal Transport for Applied Mathematicians, Birkhäuser, Basel, Switzerland, vol. 87, 2015.a
Schefzik, R., Thorarinsdottir, T. L., and Gneiting, T.: Uncertainty Quantification in Complex Simulation Models Using Ensemble Copula Coupling, Stat. Sci., 28, 616–640, https://doi.org/10.1214/
13-STS443, 2013.a
Schuhmacher, D., Bähre, B., Gottschlich, C., Hartmann, V., Heinemann, F., and Schmitzer, B.: transport: Computation of Optimal Transport Plans and Wasserstein Distances, r package version 0.11-1,
available at: https://cran.r-project.org/package=transport (last access: 11 March 2020), 2019.a
Stewart, G. W.: On the Early History of the Singular Value Decomposition, SIAM Rev., 35, 551–566, https://doi.org/10.1137/1035134, 1993.a
Stott, P. A., Christidis, N., Otto, F. E. L., Sun, Y., Vanderlinden, J.-P., van Oldenborgh, G. J., Vautard, R., von Storch, H., Walton, P., Yiou, P., and Zwiers, F. W.: Attribution of extreme weather
and climate-related events, WIRES Clim. Change, 7, 23–41, https://doi.org/10.1002/wcc.380, 2016.a
Tobin, I., Vautard, R., Balog, I., Bréon, F.-M., Jerez, S., Ruti, P. M., Thais, F., Vrac, M., and Yiou, P.: Assessing climate change impacts on European wind energy from ENSEMBLES high-resolution
climate projections, Climatic Change, 128, 99–112, https://doi.org/10.1007/s10584-014-1291-0, 2015.a
Tramblay, Y., Ruelland, D., Somot, S., Bouaicha, R., and Servat, E.: High-resolution Med-CORDEX regional climate model simulations for hydrological impact studies: a first evaluation of the
ALADIN-Climate model in Morocco, Hydrol. Earth Syst. Sci., 17, 3721–3739, https://doi.org/10.5194/hess-17-3721-2013, 2013.a
Verkade, J., Brown, J., Reggiani, P., and Weerts, A.: Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales, J.
Hydrol., 501, 73–91, https://doi.org/10.1016/j.jhydrol.2013.07.039, 2013.a
Vidal, J.-P., Martin, E., Franchistéguy, L., Baillon, M., and Soubeyroux, J.-M.: A 50-year high-resolution atmospheric reanalysis over France with the Safran system, Int. J. Climatol., 30, 1627–1644,
https://doi.org/10.1002/joc.2003, 2010.a
Villani, C.: Optimal transport – Old and new, in: Grundlehren der mathematischen Wissenschaften, Springer-Verlag, Berlin, Heidelberg, Germany, 992 pp., 2008.a
Voisin, N., Schaake, J. C., and Lettenmaier, D. P.: Calibration and Downscaling Methods for Quantitative Ensemble Precipitation Forecasts, Weather Forecast., 25, 1603–1627, https://doi.org/10.1175/
2010WAF2222367.1, 2010.a
Vrac, M.: Multivariate bias adjustment of high-dimensional climate simulations: the Rank Resampling for Distributions and Dependences (R2D2) bias correction, Hydrol. Earth Syst. Sci., 22, 3175–3196,
https://doi.org/10.5194/hess-22-3175-2018, 2018.a, b, c, d, e, f, g, h, i, j
Vrac, M. and Friederichs, P.: Multivariate–Intervariable, Spatial, and Temporal–Bias Correction, J. Climate, 28, 218–237, https://doi.org/10.1175/JCLI-D-14-00059.1, 2015. a, b, c
Vrac, M., Drobinski, P., Merlo, A., Herrmann, M., Lavaysse, C., Li, L., and Somot, S.: Dynamical and statistical downscaling of the French Mediterranean climate: uncertainty assessment, Nat. Hazards
Earth Syst. Sci., 12, 2769–2784, https://doi.org/10.5194/nhess-12-2769-2012, 2012.a
Vrac, M., Noël, T., and Vautard, R.: Bias correction of precipitation through Singularity Stochastic Removal: Because Occurrences matter, J. Geophys. Res.-Atmos., 121, 5237–5258, https://doi.org/
10.1002/2015JD024511, 2016.a, b, c
Wahl, T., Jain, S., Bender, J., Meyers, S., and Luther, M.: Increasing risk of compound flooding from storm surge and rainfall for major US cities, Nat. Clim. Chang., 5, 1093–1097, https://doi.org/
10.1038/nclimate2736, 2015.a
Wasko, C., Sharma, A., and Westra, S.: Reduced spatial extent of extreme storms at higher temperatures, Geophys. Res. Lett., 43, 4026–4032, https://doi.org/10.1002/2016GL068509, 2016.a
Weedon, G. P., Balsamo, G., Bellouin, N., Gomes, S., Best, M. J., and Viterbo, P.: The WFDEI meteorological forcing data set: WATCH Forcing Data methodology applied to ERA-Interim reanalysis data,
Water Resour. Res., 50, 7505–7514, https://doi.org/10.1002/2014WR015638, 2014.a
Xu, C.-Y.: From GCMs to river flow: A review of downscaling methods and hydrologic modelling approaches, Prog. Phys. Geog., 23, 229–249, https://doi.org/10.1177/030913339902300204, 1999.a
Yang, W., Gardelin, M., Olsson, J., and Bosshard, T.: Multi-variable bias correction: application of forest fire risk in present and future climate in Sweden, Nat. Hazards Earth Syst. Sci., 15,
2037–2057, https://doi.org/10.5194/nhess-15-2037-2015, 2015.a
Yiou, P., Jézéquel, A., Naveau, P., Otto, F. E. L., Vautard, R., and Vrac, M.: A statistical framework for conditional extreme event attribution, Adv. Stat. Clim. Meteorol. Oceanogr., 3, 17–31,
https://doi.org/10.5194/ascmo-3-17-2017, 2017.a
Zscheischler, J. and Seneviratne, S.: Dependence of drivers affects risks associated with compound events, Sci. Adv, 3, e1700263, https://doi.org/10.1126/sciadv.1700263, 2017.a
Zscheischler, J., Westra, S., Hurk, B., Seneviratne, S., Ward, P., Pitman, A., AghaKouchak, A., Bresch, D., Leonard, M., Wahl, T., and Zhang, X.: Future climate risk from compound events, Nat. Clim.
Chang, 8, 469–477, https://doi.org/10.1038/s41558-018-0156-3, 2018.a
Zscheischler, J., Fischer, E. M., and Lange, S.: The effect of univariate bias adjustment on multivariate hazard estimates, Earth Syst. Dynam., 10, 31–43, https://doi.org/10.5194/esd-10-31-2019, | {"url":"https://esd.copernicus.org/articles/11/537/2020/","timestamp":"2024-11-08T13:56:22Z","content_type":"text/html","content_length":"564573","record_id":"<urn:uuid:6ffa724a-b542-48d4-ba0e-45df58f6d496>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00636.warc.gz"} |
Folding the Julia Fractal
So the Julia sets are a class of sets that have become very popular because of their beauty, and they make an interesting object of study. The definition of a Julia set is really simple: it is the set of complex numbers \(\bold S_c\) such that for \(z \in \bold S_c\), we have \(|f^n_c(z)| \le 2\) for every iterate \(n\), where \(f_c(z) = z^2 + c\) (strictly speaking, this bounded-orbit set is the filled Julia set). Each choice of \(c\) gives rise to a different Julia set with a distinct look. I might write more about this stuff at some point, but given how popular the subject is you can find a lot of its cool properties anywhere on the internet. I suggest checking out Inigo Quilez on the subject of rendering them.
I hope your internet's good. Due to the nature of the subject it does require rather large images.
Given how simple the definitions of these sets are, they are really easy and straightforward to render. It can be done in a few lines and is one of the first things I ever coded.
The source code for the above is here; you can use the mouse to control the value of \(c\). I left the shitty code there as sort of a historical record. Read with caution: it's painful to the eyes.
The procedure to render a Julia fractal is very simple:
1. Start with a complex number \(z\)
2. Successively compute \(z=z^2+c\) for a maximum of \(N\) iterations. A larger \(N\) gives a more detailed fractal.
3. If \(|z|>2\) at any iteration, prematurely stop the computation
For the above GIF, I simply assigned the number of iterations to reach step 3 as the colour: White if step 3 was never reached and different values of grey otherwise.
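If you want to try it yourself, here is a minimal Python sketch of exactly that procedure. This is not the shader code linked above; the resolution, the particular value of \(c\), the colour mapping and the use of Pillow for output are all just placeholder choices of mine:

```python
from PIL import Image  # pip install Pillow

WIDTH, HEIGHT = 600, 600            # arbitrary resolution
MAX_ITER = 100                      # N: more iterations = more detail
C = complex(-0.74543, 0.11301)      # an arbitrary choice of c

img = Image.new("L", (WIDTH, HEIGHT))
for py in range(HEIGHT):
    for px in range(WIDTH):
        # map the pixel to a point of the complex plane, roughly [-1.6, 1.6]^2
        z = complex(3.2 * px / WIDTH - 1.6, 3.2 * py / HEIGHT - 1.6)
        n = 0
        while n < MAX_ITER and abs(z) <= 2:   # step 3: bail out once |z| > 2
            z = z * z + C                     # step 2: z = z^2 + c
            n += 1
        # colour by escape iteration: white if it never escaped, grey otherwise
        img.putpixel((px, py), 255 if n == MAX_ITER else int(255 * n / MAX_ITER))
img.save("julia.png")
```

Swapping in different values of \(c\) reproduces the variety of shapes in the GIF.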
The creative part comes with how to shade it. I came across a method that involves procedural orbit traps here and attempted to replicate it.
The source code and demo are here. Instead of generating the cloud-like texture procedurally, an image texture is used. It runs in real time, hopefully on mobile browsers too, so do give the link a
click. The fractal above uses a value of \(c\) around the neighbourhood of \(M_{23,2}\), a Misiurewicz Point that gives the fractal a lacy appearance.
Speaking of lacy, another cool way to render is by estimating its distance function. I did this a long time ago and probably deleted the source code along the way.
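Since that code is gone, here is a rough sketch of the standard escape-time distance estimator for \(z^2+c\). To be clear, this is a generic reconstruction rather than the original script, and the bailout radius and constant factors are the usual hand-wavy choices:

```python
import math

def julia_distance(z0, c, max_iter=200):
    """Rough distance estimate from z0 to the Julia set of z^2 + c.

    Track the running derivative dz/dz0 alongside z; once the orbit escapes,
    d ~ |z| * ln|z| / |dz| (up to a constant factor that varies between write-ups).
    """
    z, dz = z0, 1.0 + 0j
    for _ in range(max_iter):
        if abs(z) > 1e10:        # escaped far enough for the estimate to be stable
            break
        dz = 2.0 * z * dz        # derivative of z^2 + c with respect to z0
        z = z * z + c
    r = abs(z)
    if r <= 2 or dz == 0:        # never escaped: treat as inside / on the set
        return 0.0
    return r * math.log(r) / abs(dz)
```

Shading each pixel by something like \(1 - e^{-kd}\) is what produces those thin, lacy outlines.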
Given the seeming complexity of these fractals, I wondered if there was a way to visualize their formation. Since they form via very simple rules, there should be a nice way to depict them. My first attempt was to naively interpolate between each iteration, allowing me to create a smooth animation despite the otherwise discrete procedure. After trying many, many ways to interpolate, the best I got was this:
It looks cool but imo doesn't show how the very crucial \(z^2+c\) plays a role in its formation. That took me 2 hours by the way.
Next I had the idea of mapping. The operation \(f_c(z) = z^2+c\) maps a point \(z\) to a point \(z^2+c\). The mapping that forms the Julia fractal \(f^n_c\) is simply that mapping applied several
times. Here's my attempt at visualizing \(f_c\):
The first half of the GIF describes the \(+c\) operation, which is simply a shift in the direction of \(c\). The second half describes the \(z^2\) operation, which is sorta a 'squishing' of the
space rotationally, resulting in 2 copies of the image. At the end of the first iteration, we have two displaced copies of the image:
Now, every iteration would generate \(\times 2\) more copies, and you can see how the fractal gets more and more complicated at each iteration.
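Here is roughly how frames like these can be generated. This is a reconstruction rather than the original script: the grid of sample points, the frame count and the choice of \(c\) are all arbitrary, and I animate the squaring first and the \(+c\) shift second so the end state is exactly \(z^2+c\) (the GIF above shows the two halves in the opposite order):

```python
import cmath

C = complex(0.285, 0.01)   # arbitrary choice of c
FRAMES = 30

def lerp_square(z, t):
    """Interpolate z -> z^2 in polar form, so the plane visibly wraps
    around (angles doubling) instead of jumping straight to z^2."""
    r, theta = abs(z), cmath.phase(z)
    return cmath.rect(r ** (1 + t), theta * (1 + t))

def lerp_add(z, t):
    """Interpolate z -> z + C as a straight slide in the direction of c."""
    return z + t * C

# a coarse grid of sample points; in practice you would transform every pixel
points = [complex(x / 10, y / 10) for x in range(-15, 16) for y in range(-15, 16)]

for frame in range(FRAMES):
    t = frame / (FRAMES - 1)
    if t < 0.5:                                    # first half: the squaring
        moved = [lerp_square(z, 2 * t) for z in points]
    else:                                          # second half: the +c shift
        moved = [lerp_add(z * z, 2 * t - 1) for z in points]
    # ... draw `moved` with matplotlib or PIL to build the GIF frames
```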
The GIFs above took forever to render by the way, given how slow python really is. Maybe 40fps is a little overkill for a gif but I digress. | {"url":"https://makerforce.io/unfolding-the-julia-fractal/","timestamp":"2024-11-08T14:07:44Z","content_type":"text/html","content_length":"23827","record_id":"<urn:uuid:7192a037-1125-4d3d-8660-0753cd03cd58>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00829.warc.gz"}
DAE solver using the implicit Theta method
Options Database Keys
• -ts_theta_theta - Location of stage (0<Theta<=1)
• -ts_theta_endpoint - Use the endpoint (like Crank-Nicolson) instead of midpoint form of the Theta method
• -ts_theta_initial_guess_extrapolate - Extrapolate stage initial guess from previous solution (sometimes unstable)
-ts_type theta -ts_theta_theta 1.0 corresponds to backward Euler (TSBEULER)
-ts_type theta -ts_theta_theta 0.5 corresponds to the implicit midpoint rule
-ts_type theta -ts_theta_theta 0.5 -ts_theta_endpoint corresponds to Crank-Nicolson (TSCN)
The endpoint variant of the Theta method and backward Euler can be applied to DAE. The midpoint variant is not suitable for DAEs because it is not stiffly accurate.
The midpoint variant is cast as a 1-stage implicit Runge-Kutta method.
Theta | Theta
| 1
For the default Theta=0.5, this is also known as the implicit midpoint rule.
When the endpoint variant is chosen, the method becomes a 2-stage method with first stage explicit
0 | 0 0
1 | 1-Theta Theta
| 1-Theta Theta
For the default Theta=0.5, this is the trapezoid rule (also known as Crank-Nicolson, see TSCN).
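As a minimal illustration of the two variants described above (and not of PETSc's implementation), here is what a single step looks like on the scalar linear test problem y' = lambda*y, where the implicit solve has a closed form. The function names and the final backward-Euler check are purely illustrative:

```python
def theta_step_endpoint(y, h, lam, theta=0.5):
    """One step of the endpoint (trapezoid-like) variant on y' = lam*y:
    y_{n+1} = y_n + h*[(1-theta)*f(y_n) + theta*f(y_{n+1})],
    solved in closed form because the test problem is linear."""
    return y * (1 + h * (1 - theta) * lam) / (1 - h * theta * lam)

def theta_step_midpoint(y, h, lam, theta=0.5):
    """One step of the midpoint variant (1-stage implicit RK, a = theta, b = 1):
    stage Y = y_n + h*theta*f(Y), then y_{n+1} = y_n + h*f(Y)."""
    Y = y / (1 - h * theta * lam)
    return y + h * lam * Y

# quick check: theta = 1 reduces the endpoint variant to backward Euler
y, lam, h = 1.0, -2.0, 0.1
for _ in range(10):
    y = theta_step_endpoint(y, h, lam, theta=1.0)
print(y)   # ~ (1 / 1.2)**10, the backward-Euler solution
```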
To apply a diagonally implicit RK method to DAE, the stage formula
Y_i = X + h sum_j a_ij Y'_j
is interpreted as a formula for Y’_i in terms of Y_i and known values (Y’_j, j<i) | {"url":"https://petsc.org/release/manualpages/TS/TSTHETA/","timestamp":"2024-11-10T01:57:48Z","content_type":"text/html","content_length":"25184","record_id":"<urn:uuid:2c19819b-63ae-4b3e-8c30-7f0b5a02675b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00425.warc.gz"} |
Isoperimetric Polyominoes
Patrick Hamlyn has made twenty-six 7x21 rectangles with a central hole with the set. The construction even allows a three-colouring of the pieces. Also by Patrick is a packing of the full set into a 6 x 633 rectangle with two holes.
Patrick has managed to pack the set into a 626x62 rectangle by forming 48 squares, each with a single hole.
The set without the pieces with holes will fit into nineteen squares of sides 4 to 22.
A number of multiple equal sized squares are possible with a variety of holes in each square.
Various constructions can also be made with the sets of perimeter 16 and fixed area. | {"url":"http://www.recmath.com/PolyPages/PolyPages/Isopolyo16s.html","timestamp":"2024-11-08T18:53:21Z","content_type":"text/html","content_length":"3428","record_id":"<urn:uuid:8b1729ec-cff9-4ff4-8ba1-9d0b20cf79c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00615.warc.gz"} |
Average Force Calculator - Savvy Calculator
Average Force Calculator
About Average Force Calculator (Formula)
The Average Force Calculator is a physics tool used to calculate the average force exerted on an object over a given time period. It aids in mechanics, engineering, and understanding the effects of
forces on objects. The formula for calculating average force involves using the change in momentum of an object and the time over which the force is applied.
Formula for calculating Average Force:
Average Force = (Change in Momentum) / (Time)
In this formula:
• “Change in Momentum” refers to the difference between the final momentum and initial momentum of the object, typically measured in kilogram meters per second (kg·m/s).
• “Time” is the duration over which the force is applied, usually measured in seconds (s).
For example, if an object has an initial momentum of 10 kg·m/s and a final momentum of 40 kg·m/s over a time period of 5 seconds, the average force exerted on the object would be calculated as
Average Force = (40 kg·m/s – 10 kg·m/s) / 5 s = 6 kg·m/s²
This means that the average force exerted on the object is 6 kilogram meters per second squared, which is the same as 6 newtons.
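For anyone who wants to script the same calculation, here is a tiny Python version of the formula; the function name and the input check are just illustrative choices, not part of the calculator itself:

```python
def average_force(initial_momentum, final_momentum, time):
    """Average force in newtons (kg*m/s^2), given momenta in kg*m/s and time in s."""
    if time <= 0:
        raise ValueError("time must be positive")
    return (final_momentum - initial_momentum) / time

print(average_force(10, 40, 5))  # 6.0, matching the worked example above
```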
The Average Force Calculator simplifies the process of determining average force, aiding students, engineers, and professionals in performing accurate force-related calculations. By inputting the
change in momentum and time, the calculator quickly provides the average force, helping users gain insights into the effects of forces on objects.
| {"url":"https://savvycalculator.com/average-force-calculator","timestamp":"2024-11-08T08:42:59Z","content_type":"text/html","content_length":"142040","record_id":"<urn:uuid:32e9c6f7-499e-4f1e-b9fc-8f0817a771aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00371.warc.gz"}
Markov Chain
Stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.
Markov chains are fundamental to the theory of stochastic processes, used extensively in various fields such as statistics, economics, and engineering. The model is characterized by its lack of
memory, meaning the next state depends only on the current state and not on the sequence of events that preceded it. This simplicity allows Markov chains to model random processes where future states
are independent of past states, given the present state. They are widely utilized in areas ranging from predicting stock market trends to natural language processing, where they help in understanding
and predicting sequences of words or phrases.
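As a small illustration of the memoryless property, here is a toy simulation of a two-state weather chain; the states and transition probabilities are invented purely for the example:

```python
import random

# invented transition probabilities: P(next state | current state)
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state):
    """Sample the next state using only the current state, never the history."""
    r = random.random()
    cumulative = 0.0
    for nxt, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding of the probabilities

state = "sunny"
sequence = [state]
for _ in range(10):
    state = step(state)
    sequence.append(state)
print(sequence)
```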
The concept of the Markov chain was introduced by the Russian mathematician Andrey Markov in 1906. Markov initially developed these chains to show that classical limit results, such as the law of large numbers, can still hold for sequences of dependent trials, and they gained prominence through the 20th century as they proved valuable in a wide array of scientific disciplines.
Andrey Markov is the primary contributor to the development of Markov chains. His early work laid the foundation for what would become a broad field of study within probability theory. Over the
years, the theory has been expanded and applied by numerous mathematicians and scientists across various domains, further refining the concept and exploring its applications in complex systems. | {"url":"https://www.envisioning.io/vocab/markov-chain","timestamp":"2024-11-12T00:53:45Z","content_type":"text/html","content_length":"439533","record_id":"<urn:uuid:2142e156-f8f0-4885-92c4-809a6092e0f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00533.warc.gz"} |
Proto-Danksharding FAQ - HackMD
# Proto-Danksharding FAQ [TOC] ## What is Danksharding? Danksharding is the new sharding design proposed for Ethereum, which introduces some significant simplifications compared to previous designs.
The main difference between all recent Ethereum sharding proposals since ~2020 (both Danksharding and pre-Danksharding) and most non-Ethereum sharding proposals is Ethereum's **[rollup-centric
roadmap](https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698)** (see also: [[1]](https://polynya.medium.com/understanding-ethereums-rollup-centric-roadmap-1c60d30c060f) [[2]]
(https://vitalik.ca/general/2019/12/26/mvb.html) [[3]](https://vitalik.ca/general/2021/01/05/rollup.html)): instead of providing more space for _transactions_, Ethereum sharding provides more space
for _blobs of data_, which the Ethereum protocol itself does not attempt to interpret. Verifying a blob simply requires checking that the blob is _[available](https://github.com/ethereum/research/
wiki/A-note-on-data-availability-and-erasure-coding)_ - that it can be downloaded from the network. The data space in these blobs is expected to be used by [layer-2 rollup protocols](https://
vitalik.ca/general/2021/01/05/rollup.html) that support high-throughput transactions. The main innovation introduced by **[Danksharding](https://notes.ethereum.org/@dankrad/new_sharding)** (see also: [[1]](https://polynya.medium.com/danksharding-36dc0c8067fe) [[2]]
/www.youtube.com/watch?v=e9oudTr5BE4) [[3]](https://github.com/ethereum/consensus-specs/pull/2792)) is the **merged fee market**: instead of there being a fixed number of shards that each have
distinct blocks and distinct block proposers, in Danksharding there is only one proposer that chooses all transactions and all data that go into that slot. To avoid this design forcing high system
requirements on validators, we introduce **proposer/builder separation (PBS)** (see also: [[1]](https://notes.ethereum.org/@vbuterin/pbs_censorship_resistance) [[2]](https://ethresear.ch/t/
two-slot-proposer-builder-separation/10980)): a specialized class of actors called **block builders** bid on the right to choose the contents of the slot, and the proposer need only select the valid
header with the highest bid. Only the block builder needs to process the entire block (and even there, it's possible to use third-party decentralized oracle protocols to implement a distributed block
builder); all other validators and users can _verify_ the blocks very efficiently through **[data availability sampling](https://hackmd.io/@vbuterin/sharding_proposal)** (remember: the "big" part of
the block is just data). ## What is proto-danksharding (aka. EIP-4844)? Proto-danksharding (aka. [EIP-4844](https://eips.ethereum.org/EIPS/eip-4844)) is a proposal to implement most of the logic and
"scaffolding" (eg. transaction formats, verification rules) that make up a full Danksharding spec, but not yet actually implementing any sharding. In a proto-danksharding implementation, all
validators and users still have to directly validate the availability of the full data. The main feature introduced by proto-danksharding is new transaction type, which we call a **blob-carrying
transaction**. A blob-carrying transaction is like a regular transaction, except it also carries an extra piece of data called a **blob**. Blobs are extremely large (~125 kB), and can be much cheaper
than similar amounts of calldata. However, blob data is not accessible to EVM execution; the EVM can only view a commitment to the blob. Because validators and clients still have to download full
blob contents, data bandwidth in proto-danksharding is targeted to 1 MB per slot instead of the full 16 MB. However, there are nevertheless large scalability gains because this data is not competing
with the gas usage of existing Ethereum transactions. ## Why is it OK to add 1 MB data to blocks that everyone has to download, but not to just make calldata 10x cheaper? This has to do with the
difference between **average load** and **worst-case load**. Today, we already have a situation where the average block size [is about 90 kB](https://etherscan.io/chart/blocksize) but the theoretical
maximum possible block size (if _all_ 30M gas in a block went to calldata) is ~1.8 MB. The Ethereum network has handled blocks approaching the maximum in the past. However, if we simply reduced the
calldata gas cost by 10x, then although the _average_ block size would increase to still-acceptable levels, the _worst case_ would become 18 MB, which is far too much for the Ethereum network to
handle. The current gas pricing scheme makes it impossible to separate these two factors: the ratio between average load and worst-case load is determined by users' choices of how much gas they spend
on calldata vs other resources, which means that gas prices have to be set based on worst-case possibilities, leading to an average load needlessly lower than what the system can handle. But **if we
change gas pricing to more explicitly create a [multidimensional fee market](https://ethresear.ch/t/multidimensional-eip-1559/11651), we can avoid the average case / worst case load mismatch**, and
include in each block close to the maximum amount of data that we can safely handle. Proto-danksharding and [EIP-4488](https://eips.ethereum.org/EIPS/eip-4488) are two proposals that do exactly that.
| | Average case block size | Worst case block size |
| - | - | - |
| **Status quo** | [85 kB](https://etherscan.io/chart/blocksize) | 1.8 MB |
| **EIP-4488** | Unknown; 350 kB if 5x growth in calldata use | 1.4 MB |
| **Proto-danksharding** | 1 MB (tunable if desired) | 2 MB |

## How does proto-danksharding (EIP-4844) compare to EIP-4488? **[EIP-4488](https://eips.ethereum.org/EIPS/
eip-4488)** is an earlier and simpler attempt to solve the same average case / worst case load mismatch problem. EIP-4488 did this with two simple rules: * Calldata gas cost reduced from 16 gas per
byte to 3 gas per byte * A limit of 1 MB per block plus an extra 300 bytes per transaction (theoretical max: ~1.4 MB) The hard limit is the simplest possible way to ensure that the larger increase in
average-case load would not also lead to an increase in worst-case load. The reduction in gas cost would greatly increase rollup use, likely increasing average block size to hundreds of kilobytes,
but the worst-case possibility of single blocks containing 10 MB would be directly prevented by the hard limit. In fact, the worst-case block size would be _lower_ than it is today (1.4 MB vs 1.8
MB). **Proto-danksharding** instead creates a separate transaction type that can hold cheaper data in large fixed-size blobs, with a limit on how many blobs can be included per block. These blobs are
not accessible from the EVM (only commitments to the blobs are), and the blobs are stored by the consensus layer (beacon chain) instead of the execution layer. **The main practical difference between
EIP-4488 and proto-danksharding is that EIP-4488 attempts to minimize the changes needed today, whereas proto-danksharding makes a larger number of changes today so that few changes are required in
the future to upgrade to full sharding**. Although implementing full sharding (with data availability sampling, etc) is a complex task and remains a complex task after proto-danksharding, this
complexity is contained to the consensus layer. Once proto-danksharding is rolled out, execution layer client teams, rollup developers and users need to do no further work to finish the transition to
full sharding. Proto-danksharding also separates blob data from calldata, making it easier for clients to store blob data for a shorter period of time. Note that the choice between the two is _not_
an either-or: we could implement EIP-4488 soon and then follow it up with proto-danksharding half a year later. ## What parts of full danksharding does proto-danksharding implement, and what remains
to be implemented? Quoting EIP-4844: > The work that is already done in this EIP includes: > > * A new transaction type, of the exact same format that will need to exist in "full sharding" > * _All_
of the execution-layer logic required for full sharding > * _All_ of the execution / consensus cross-verification logic required for full sharding > * Layer separation between `BeaconBlock`
verification and data availability sampling blobs > * Most of the `BeaconBlock` logic required for full sharding > * A self-adjusting independent gasprice for blobs. > > The work that remains to be
done to get to full sharding includes: > > * A low-degree extension of the `blob_kzgs` in the consensus layer to allow 2D sampling > * An actual implementation of data availability sampling > * PBS
(proposer/builder separation), to avoid requiring individual validators to process 32 MB of data in one slot > * Proof of custody or similar in-protocol requirement for each validator to verify a
particular part of the sharded data in each block Notice that all of the remaining work is consensus-layer changes, and does not require any additional work from execution client teams, users or
rollup developers. ## What about disk space requirements blowing up from all these really big blocks? Both EIP-4488 and proto-danksharding lead to a long-run maximum usage of ~1 MB per slot (12s).
This works out to about 2.5 TB per year, a far higher growth rate than Ethereum requires today. **In the case of EIP-4488, solving this requires history expiry ([EIP-4444]((https://eips.ethereum.org/
EIPS/eip-4444)))**, where clients are no longer required to store history older than some duration of time (durations from 1 month to 1 year have been proposed). **In the case of proto-danksharding,
the consensus layer can implement separate logic to auto-delete the blob data after some time (eg. 30 days)**, regardless of whether or not EIP-4444 is implemented. However, implementing EIP-4444 as
soon as possible is highly recommended regardless of what short-term data scaling solution is adopted. Both strategies limit the extra disk load of a consensus client to at most a few hundred
gigabytes. **In the long run, adopting some history expiry mechanism is essentially mandatory**: full sharding would add about 40 TB of historical blob data per year, so users could only
realistically store a small portion of it for some time. Hence, it's worth setting the expectations about this sooner. ## If data is deleted after 30 days, how would users access older blobs? **The
purpose of the Ethereum consensus protocol is not to guarantee storage of all historical data forever. Rather, the purpose is to provide a highly secure real-time bulletin board, and leave room for
other decentralized protocols to do longer-term storage**. The bulletin board is there to make sure that the data being published on the board is available long enough that any user who wants that
data, or any longer-term protocol backing up the data, has plenty of time to grab the data and import it into their other application or protocol. In general, long-term historical storage is easy.
While 2.5 TB per year is too much to demand of regular nodes, it's very manageable for dedicated users: you can buy very big hard drives for [about $20 per terabyte](https://www.amazon.com/
Seagate-IronWolf-16TB-Internal-Drive/dp/B07SGGWYC1), well within reach of a hobbyist. Unlike consensus, which has a N/2-of-N [trust model](https://vitalik.ca/general/2020/08/20/trust.html),
historical storage has a 1-of-N trust model: you only need one of the storers of the data to be honest. Hence, each piece of historical data only needs to be stored hundreds of times, and not the
full set of many thousands of nodes that are doing real-time consensus verification. Some practical ways in which the full history will be stored and made easily accessible include: *
**Application-specific protocols (eg. rollups)** can require _their_ nodes to store the portion of history that is relevant to their application. Historical data being lost is not a risk to the
protocol, only to individual applications, so it makes sense for applications to take on the burden of storing data relevant to themselves. * Storing historical data in **BitTorrent**, eg.
auto-generating and distributing a 7 GB file containing the blob data from the blocks in each day. * **The Ethereum [Portal Network](https://www.ethportal.net/)** (currently under development) can
easily be extended to store history. * **Block explorers, API providers and other data services** will likely store the full history. * **Individual hobbyists, and academics doing data analysis**,
will likely store the full history. In the latter case, storing history locally provides them significant value as it makes it much easier to directly do calculations on it. * **Third-party indexing
protocols like [TheGraph](https://thegraph.com/en/)** will likely store the full history. At much higher levels of history storage (eg. 500 TB per year), the risk that some data will be forgotten
becomes higher (and additionally, the data availability verification system becomes more strained). This is likely the true limit of sharded blockchain scalability. However, all current proposed
parameters are very far from reaching this point. ## What format is blob data in and how is it committed to? A blob is a vector of 4096 **field elements**, numbers within the range: <small> `0 <= x <
52435875175126190479447740508185965837690552500527637822603658699938581184513` </small> The blob is mathematically treated as representing a degree < 4096 polynomial over the finite field with the
above modulus, where the field element at position $i$ in the blob is the evaluation of that polynomial at $\omega^i$. $\omega$ is a constant that satisfies $\omega^{4096} = 1$. A commitment to a
blob is a hash of the [KZG commitment](https://dankradfeist.de/ethereum/2020/06/16/kate-polynomial-commitments.html) to the polynomial. From the point of view of implementation, however, it is not
important to be concerned with the mathematical details of the polynomial. Instead, there will simply be a vector of elliptic curve points (the **Lagrange-basis trusted setup**), and the KZG
commitment to a blob will simply be a linear combination. Quoting code from EIP-4844:

```python
def blob_to_kzg(blob: Vector[BLSFieldElement, 4096]) -> KZGCommitment:
    computed_kzg = bls.Z1
    for value, point_kzg in zip(blob, KZG_SETUP_LAGRANGE):
        assert value < BLS_MODULUS
        computed_kzg = bls.add(
            computed_kzg,
            bls.multiply(point_kzg, value)
        )
    return computed_kzg
```

`BLS_MODULUS` is the above
modulus, and `KZG_SETUP_LAGRANGE` is the vector of elliptic curve points that is the Lagrange-basis trusted setup. For implementers, it's reasonable to simply think of this for now as a black-box
special-purpose hash function. ## Why use the hash of the KZG instead of the KZG directly? Instead of using the KZG to represent the blob directly, EIP-4844 uses the **versioned hash**: a single 0x01
byte (representing the version) followed by the last 31 bytes of the SHA256 hash of the KZG. This is done for EVM-compatibility and future-compatibility: KZG commitments are 48 bytes whereas the EVM
works more naturally with 32 byte values, and if we ever switch from KZG to something else (eg. for quantum-resistance reasons), the commitments can continue to be 32 bytes. ## What are the two
precompiles introduced in proto-danksharding? Proto-danksharding introduces two precompiles: the **blob verification precompile** and the **point evaluation precompile**. The **blob verification
precompile** is self-explanatory: it takes as input a versioned hash and a blob, and verifies that the provided versioned hash actually is a valid versioned hash for the blob. This precompile is
intended to be used by optimistic rollups. Quoting EIP-4844: > **Optimistic rollups** only need to actually provide the underlying data when fraud proofs are being submitted. The fraud proof
submission function would require the full contents of the fraudulent blob to be submitted as part of calldata. It would use the blob verification function to verify the data against the versioned
hash that was submitted before, and then perform the fraud proof verification on that data as is done today. The **point evaluation precompile** takes as input a versioned hash, an `x` coordinate, a
`y` coordinate and a proof (the KZG commitment of the blob and a KZG proof-of-evaluation). It verifies the proof to check that `P(x) = y`, where `P` is the polynomial represented by the blob that has
the given versioned hash. This precompile is intended to be used by ZK rollups. Quoting EIP-4844: > **ZK rollups** would provide two commitments to their transaction or state delta data: the kzg in
the blob and some commitment using whatever proof system the ZK rollup uses internally. They would use a commitment [proof of equivalence protocol](https://ethresear.ch/t/
easy-proof-of-equivalence-between-multiple-polynomial-commitment-schemes-to-the-same-data/8188), using the point evaluation precompile, to prove that the kzg (which the protocol ensures points to
available data) and the ZK rollup’s own commitment refer to the same data. Note that most major optimistic rollup designs use a multi-round fraud proof scheme, where the final round takes only a
small amount of data. Hence, **optimistic rollups could conceivably also use the point evaluation precompile** instead of the blob verification precompile, and it would be cheaper for them to do so.
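For intuition about what "evaluating the blob polynomial at a point" involves, here is a small, self-contained Python sketch of the barycentric evaluation formula that appears later in this FAQ. It deliberately uses a toy prime field (p = 17, N = 4, ω = 4) rather than the real BLS12-381 scalar field and 4096-element blobs, and the names are mine; it is not code from the EIP.

```python
# Toy parameters: NOT the real BLS12-381 field, just small numbers where
# a 4th root of unity exists (4**4 = 256 = 1 mod 17).
P, N, OMEGA = 17, 4, 4

def inv(a):
    return pow(a, -1, P)          # modular inverse (Python 3.8+)

def barycentric_eval(values, x):
    """Evaluate the degree < N polynomial with evaluations `values` at the
    points OMEGA**i, at an arbitrary x outside that domain:
    P(x) = (x^N - 1)/N * sum_i v_i * w^i / (x - w^i)."""
    total = 0
    for i, v in enumerate(values):
        w_i = pow(OMEGA, i, P)
        total = (total + v * w_i * inv((x - w_i) % P)) % P
    return (pow(x, N, P) - 1) * inv(N) % P * total % P

def lagrange_eval(values, x):
    """Naive Lagrange interpolation at the same point, used as a cross-check."""
    total = 0
    for i, v in enumerate(values):
        term = v
        for j in range(N):
            if j != i:
                num = (x - pow(OMEGA, j, P)) % P
                den = (pow(OMEGA, i, P) - pow(OMEGA, j, P)) % P
                term = term * num % P * inv(den) % P
        total = (total + term) % P
    return total

vals = [5, 11, 2, 7]              # an arbitrary toy "blob" of evaluations
assert barycentric_eval(vals, 9) == lagrange_eval(vals, 9)
print(barycentric_eval(vals, 9))
```

With the real parameters, this is how one gets from a blob's evaluation form to the `y` value that the point evaluation precompile is asked to verify.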
## How exactly do ZK rollups work with the KZG commitment efficiently? The "naive" way to check a blob in a ZK rollup is to pass the blob data as a private input into the KZG, and do an elliptic curve
linear combination (or a pairing) inside the SNARK to verify it. This is wrong and needlessly inefficient. Instead, there is a much easier approach in the case where the ZK rollup is BLS12-381 based,
and a moderately easier approach for arbitrary ZK-SNARKs. ### Easy approach (requires rollup to use the BLS12-381 modulus) Suppose $K$ is the KZG commitment, and $B$ is the blob that it is committing
to. All ZK-SNARK protocols have some way to import large amounts of data into a proof, and contain some kind of commitment to that data. For example, in [PLONK](https://vitalik.ca/general/2019/09/22/
plonk.html), this is the $Q_C$ commitment. All we have to do is prove that $K$ and $Q_C$ are committing to the same data. This can be done with a [proof of equivalence](https://ethresear.ch/t/
easy-proof-of-equivalence-between-multiple-polynomial-commitment-schemes-to-the-same-data/8188), which is very simple. Copying from the post: > Suppose you have multiple polynomial commitments $C_1$
... $C_k$, under $k$ different commitment schemes (eg. Kate, FRI, something bulletproof-based, DARK...), and you want to prove that they all commit to the same polynomial $P$. We can prove this
easily: > > Let $z = hash(C_1 .... C_k)$, where we interpret $z$ as an evaluation point at which $P$ can be evaluated. > > Publish openings $O_1 ... O_k$, where $O_i$ is a proof that $C_i(z) = a$
under the i'th commitment scheme. Verify that $a$ is the same number in all cases. A ZK rollup transaction would simply have to have a regular SNARK, as well as a proof of equivalence of this kind to
prove that its public data equals the versioned hash. **Note that they should NOT implement the KZG check directly; instead, they should just use the point evaluation precompile** to verify the
opening. This ensures future-proofness: if later KZG is replaced with something else, the ZK rollup would be able to continue working with no further issues. ### Moderate approach: works with any
ZK-SNARK If the destination ZK-SNARK uses some other modulus, or even is not polynomial-based at all (eg. it uses R1CS), there is a slightly more complicated approach that can prove equivalence. The
proof works as follows: 1. Let $P(x)$ be the polynomial encoded by the blob. Make a commitment $Q$ in the ZK-SNARK scheme that encodes the values $v_1 .. v_n$, where $v_i = P(\omega^i)$. 2. Choose
$x$ by hashing the commitment of $P$ and $Q$. 3. Prove $P(x)$ with the point evaluation precompile. 4. Use the [barycentric equation](https://hackmd.io/@vbuterin/barycentric_evaluation) $P(x) = \frac
{x^N - 1}{N} * \sum_i \frac{v_i * \omega^i}{x - \omega^i}$ to perform the same evaluation inside the ZKP. Verify that the answer is the same as the value proven in (3). (4) will need to be done with
mismatched-field arithmetic, but [PLOOKUP](https://eprint.iacr.org/2020/315.pdf)-style techniques can do this with fewer constraints than even an arithmetically-friendly hash function. Note that $\
frac{x^N - 1}{N}$ and $\omega^i$ can be precomputed and saved to make the calculation simpler. For a longer description of this protocol, see: https://notes.ethereum.org/@dankrad/
kzg_commitments_in_proofs ## What does the KZG trusted setup look like? See: * https://vitalik.ca/general/2022/03/14/trustedsetup.html for a general description of how powers-of-tau trusted setups
work * https://github.com/ethereum/research/blob/master/trusted_setup/trusted_setup.py for an example implementation of all of the important trusted-setup-related computations In our case in
particular, the current plan is to run in parallel four ceremonies (with different secrets) with sizes $(n_1 = 4096, n_2 = 16)$, $(n_1 = 8192, n_2 = 16)$, $(n_1 = 16384, n_2 = 16)$ and $(n_1 = 32768,
n_2 = 16)$. Theoretically, only the first is needed, but running more with larger sizes improves future-proofness by allowing us to increase blob size. We can't _just_ have a larger setup, because we
want to be able to have a hard limit on the degree of polynomials that can be validly committed to, and this limit is equal to the blob size. The likely practical way to do this would be to start
with the [Filecoin setup](https://filecoin.io/blog/posts/trusted-setup-complete/), and then run a ceremony to extend it. Multiple implementations, including a browser implementation, would allow many
people to participate. ## Couldn't we use some other commitment scheme without a trusted setup? Unfortunately, using anything other than KZG (eg. [IPA](https://vitalik.ca/general/2021/11/05/
halo.html) or SHA256) would make the sharding roadmap much more difficult. This is for a few reasons: * Non-arithmetic commitments (eg. hash functions) are not compatible with data availability
sampling, so if we use such a scheme we would have to change to KZG anyway when we move to full sharding. * IPAs [may be compatible](https://ethresear.ch/t/
what-would-it-take-to-do-das-with-inner-product-arguments-ipas/12088) with data availability sampling, but it leads to a much more complex scheme with much weaker properties (eg. self-healing and
distributed block building become much harder) * Neither hashes nor IPAs are compatible with a cheap implementation of the point evaluation precompile. Hence, a hash or IPA-based implementation would
not be able to effectively benefit ZK rollups or support cheap fraud proofs in multi-round optimistic rollups. * One way to keep data availability sampling and point evaluation but introduce another
commitment is to store multiple commitments (eg. KZG and SHA256) per blob. But this has the problem that either (i) we need to add a complicated ZKP proof of equivalence, or (ii) all consensus nodes
would need to verify the second commitment, which would require them to download the full data of all blobs (tens of megabytes per slot). Hence, the functionality losses and complexity increases of
using anything but KZG are unfortunately much greater than the risks of KZG itself. Additionally, any KZG-related risks are contained: a KZG failure would only affect rollups and other applications
depending on blob data, and leave the rest of the system untouched. ## How "complicated" and "new" is KZG? KZG commitments were introduced [in a paper in 2010](https://link.springer.com/chapter/
10.1007/978-3-642-17373-8_11), and have been used extensively since ~2019 in [PLONK](https://vitalik.ca/general/2019/09/22/plonk.html)-style ZK-SNARK protocols. However, the underlying math of KZG
commitments is a [relatively simple](https://vitalik.ca/general/2022/03/14/trustedsetup.html#what-does-a-powers-of-tau-setup-look-like) piece of arithmetic on top of the underlying math of elliptic
curve operations and pairings. The specific curve used is [BLS12-381](https://hackmd.io/@benjaminion/bls12-381), which was generated from the [Barreto-Lynn-Scott family](https://eprint.iacr.org/2002/
088.pdf) invented in 2002. Elliptic curve pairings, necessary for verifying KZG commitments, are [very complex math](https://vitalik.ca/general/2017/01/14/exploring_ecp.html), but they were invented
in the 1940s and applied to cryptography since the 1990s. By 2001, there were [many proposed cryptographic algorithms](https://crypto.stanford.edu/~dabo/papers/bfibe.pdf) that used pairings. From an
implementation complexity point of view, KZG is not significantly harder to implement than IPA: the function for computing the commitment (see [above](#
What-format-is-blob-data-in-and-how-is-it-committed-to)) is exactly the same as in the IPA case just with a different set of elliptic curve point constants. The point verification precompile is more
complex, as it involves a pairing evaluation, but the math is identical to a part of what is already done in implementations of [EIP-2537 (BLS12-381 precompiles)](https://eips.ethereum.org/EIPS/
eip-2537), and very similar to the [bn128 pairing precompile](https://eips.ethereum.org/EIPS/eip-197) (see also: [optimized python implementation](https://github.com/ethereum/py_ecc/blob/master/
py_ecc/optimized_bls12_381/optimized_pairing.py)). Hence, there is no complicated "new work" that is required to implement KZG verification. ## What are the different software parts of a
proto-danksharding implementation? There are four major components: * **The execution-layer consensus changes** (see [the EIP](https://eips.ethereum.org/EIPS/eip-4844) for details): * New transaction
type that contains blobs * Opcode that outputs the i'th blob versioned hash in the current transaction * Blob verification precompile * Point evaluation precompile * **The consensus-layer consensus
changes (see [this folder](https://github.com/ethereum/consensus-specs/tree/dev/specs/eip4844) in the repo)**: * List of blob KZGs [in the `BeaconBlockBody`](https://github.com/ethereum/
consensus-specs/blob/dev/specs/eip4844/beacon-chain.md#beaconblockbody) * The ["sidecar" mechanism](https://github.com/ethereum/consensus-specs/blob/dev/specs/eip4844/validator.md#is_data_available),
where full blob contents are passed along with a separate object from the `BeaconBlock` * [Cross-checking](https://github.com/ethereum/consensus-specs/blob/dev/specs/eip4844/beacon-chain.md#misc)
between blob versioned hashes in the execution layer and blob KZGs in the consensus layer * **The mempool** * `BlobTransactionNetworkWrapper` (see Networking section of [the EIP](https://
eips.ethereum.org/EIPS/eip-4844)) * More robust anti-DoS protections to compensate for large blob sizes * **Block building logic** * Accept transaction wrappers from the mempool, put transactions
into the `ExecutionPayload`, KZGs into the beacon block and bodies in the sidecar * Deal with the two-dimensional fee market Note that for a minimal implementation, we do not need the mempool at all
(we can rely on second-layer transaction bundling marketplaces instead), and we only need one client to implement the block building logic. Extensive consensus testing is only required for the
execution-layer and consensus-layer consensus changes, which are relatively lightweight. Anything in between such a minimal implementation and a "full" rollout where all clients support block
production and the mempool is possible. ## What does the proto-danksharding multidimensional fee market look like? Proto-danksharding introduces a [**multi-dimensional EIP-1559 fee market**](https://
ethresear.ch/t/multidimensional-eip-1559/11651), where there are **two resources, gas and blobs, with separate floating gas prices and separate limits**. That is, there are two variables and four
constants:

| | Target per block | Max per block | Basefee |
| - | - | - | - |
| **Gas** | 15 million | 30 million | Variable |
| **Blob** | 8 | 16 | Variable |

The blob fee is charged in gas, but it
is a variable amount of gas, which adjusts so that in the long run the average number of blobs per block actually equals the target. The two-dimensional nature means that block builders are going to
face a harder problem: instead of simply accepting transactions with the highest priority fee until they either run out of transactions or hit the block gas limit, they would have to simultaneously
avoid hitting _two_ different limits. **Here's an example**. Suppose that the gas limit is 70 and the blob limit is 40. The mempool has many transactions, enough to fill the block, of two types (tx
gas includes the per-blob gas): * Priority fee 5 per gas, 4 blobs, 4 total gas * Priority fee 3 per gas, 1 blob, 2 total gas A miner that follows the naive "walk down the priority fee" algorithm
would fill the entire block with 10 transactions (40 gas) of the first type, and get a revenue of 5 * 40 = 200. Because these 10 transactions fill up the blob limit completely, they would not be able
to include any more transactions. But the optimal strategy is to take 3 transactions of the first type and 28 of the second type. This gives you a block with 40 blobs and 68 gas, and 5 * 12 + 3 * 56
= 228 revenue. Are execution clients going to have to implement complex multidimensional knapsack
problem algorithms to optimize their block production now? No, for a few reasons: * **EIP-1559 ensures that most blocks will not hit either limit, so only a few blocks are actually faced with the
multidimensional optimization problem**. In the usual case where the mempool does not have enough (sufficient-fee-paying) transactions to hit either limit, any miner could just get the optimal
revenue by including _every_ transaction that they see. * **Fairly simple heuristics can come close to optimal in practice**. See [Ansgar's EIP-4488 analysis](https://hackmd.io/@adietrichs/
4488-mining-analysis) for some data around this in a similar context. * **Multidimensional pricing is not even the largest source of revenue gains from specialization - MEV is**. Specialized MEV
revenue extractable through specialized algorithms from on-chain DEX arbitrage, liquidations, front-running NFT sales, etc is a significant fraction of total "naively extractable revenue" (ie.
priority fees): specialized MEV revenue seems to average [around 0.025 ETH per block](https://explore.flashbots.net/), and total priority fees are usually [around 0.1 ETH per block](https://
watchtheburn.com/). * **[Proposer/builder separation](https://ethresear.ch/t/two-slot-proposer-builder-separation/10980) is designed around block production being highly specialized anyway**. PBS
turns the block building process into an auction, where specialized actors can bid for the privilege of creating a block. Regular validators merely need to accept the highest bid. This was intended
to prevent MEV-driven economies of scale from creeping into validator centralization, but it deals with _all_ issues that might make optimal block building harder. For these reasons, the more
complicated fee market dynamics do not greatly increase centralization or risks; indeed, the principle [applied more broadly](https://ethresear.ch/t/multidimensional-eip-1559/11651) could actually
reduce denial-of-service risk! ## How does the exponential EIP-1559 blob fee adjustment mechanism work? Today's EIP-1559 adjusts the basefee $b$ to achieve a particular target gas use level $t$ as
follows: $b_{n+1} = b_n * (1 + \frac{u - t}{8t})$ Where $b_n$ is the current block's basefee, $b_{n+1}$ is the next block's basefee, $t$ is the target and $u$ is the gas used. The goal is that when
$u > t$ (so, usage is above the target), the base fee increases, and when $u < t$ the base fee decreases. **The fee adjustment mechanism in proto-danksharding accomplishes the exact same goal of
targeting a long-run average usage of $t$, and it works in a very similar way, but it fixes a subtle bug in the EIP 1559 approach**. Suppose, in EIP 1559, we get two blocks, the first with $u = 0$
and the next with $u = 2t$. We get:

| Block number | Gas in block | Basefee |
| - | - | - |
| $k$ | - | $x$ |
| $k+1$ | $0$ | $\frac{7}{8} * x$ |
| $k+2$ | $2t$ | $\frac{7}{8} * \frac{9}{8} * x = \frac{63}{64} * x$ |

Despite average use being equal to $t$, the basefee drops by a factor of $\frac{63}{64}$. So when block space usage varies block by block, the basefee only stabilizes when usage
is a little higher than $t$; in practice [apparently about 3% higher](https://app.mipasa.com/featured/Ethereum-s-London-Hard-Fork-Easy-Gains-), though the exact number depends on the variance.
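To make the drift concrete, here is a tiny, purely illustrative simulation of the rule above on that two-block pattern; the target value and the starting basefee are made up, and this is not client code:

```python
T = 15_000_000      # illustrative target gas per block

def eip1559_update(basefee, used):
    """Current EIP-1559 rule: b_{n+1} = b_n * (1 + (u - t) / (8t))."""
    return basefee * (1 + (used - T) / (8 * T))

b = 100.0                        # starting basefee, arbitrary units
for used in [0, 2 * T]:          # an empty block, then a 2x-full block
    b = eip1559_update(b, used)

print(b)                         # 98.4375 == 100 * 63/64, despite average usage == T
```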
Proto-danksharding instead uses a formula based on exponential adjustment: $b_{n+1} = b_n * exp(\frac{u - t}{8t})$ $exp(x)$ is the exponential function $e^x$ where $e \approx 2.71828$. At small $x$
values, $exp(x) \approx 1 + x$. In fact, the graphs for proto-danksharding adjustment and EIP-1559 adjustment look _almost exactly the same_:  However, the exponential formula has the convenient property that it is transaction-displacement-independent: the same total usage has the same effect on
the basefee, regardless of how it is distributed between different blocks. Copying over the example above, we can see that the exponential formula does in fact fix the issue.

| Block number | Gas in block | Basefee using EIP 1559 rule | Basefee using proto-danksharding rule |
| - | - | - | - |
| $k$ | - | $x$ | $x$ |
| $k+1$ | $0$ | $\frac{7}{8} * x$ | $\approx 0.88249 * x$ |
| $k+2$ | $2t$ | $\frac{7}{8} * \frac{9}{8} * x = \frac{63}{64} * x$ | $\approx 0.88249 * 1.13314 * x = x$ |

We can see why this is true in the general case as follows. The new basefee $b_n$ after a multi-step
adjustment can be computed as follows: $b_n\ *\ exp(\frac{u_1 - t}{8t})\ *\ ...\ *\ exp(\frac{u_n - t}{8t})$ But we can re-express this formula a different way: $b_n\ *\ exp(\frac{u_1 - t}{8t})\ *\
...\ *\ exp(\frac{u_n - t}{8t})$ $= b_n\ *\ exp(\frac{u_1 - t}{8t}\ +\ ...\ +\ \frac{u_n - t}{8t})$ $= b_n\ *\ exp(\frac{u_1\ +\ ...\ +\ u_n - nt}{8t})$ And from here we can see that $b_n$ depends
only on the total usage ($u_1 + ... + u_n$), and not how that usage is distributed. **The term $(u_1\ +\ ...\ +\ u_n - nt)$ can be viewed as the _excess_: the difference between the total gas
actually used and the total gas _intended_ to be used**. The fact that the current basefee equals $b_0 * exp(\frac{excess}{8t})$ makes it really clear that the excess can't go out of a very narrow
range: if the excess goes above $8t * 60$, then the basefee becomes $e^{60}$, which is so absurdly high that no one can pay it, and if it goes below 0 then the resource is basically free and the
chain will get spammed until the excess goes back above zero. The adjustment mechanism in proto-danksharding works in exactly these terms: it tracks `actual_total` ($u_1\ +\ ...\ +\ u_n$) and
computes `targeted_total` ($nt$), and computes the price as an exponential of the difference. To make the computation simpler, instead of using $e^x$ we use $2^x$; in fact, we use an _approximation_
of $2^x$: the `fake_exponential` function in [the EIP](https://eips.ethereum.org/EIPS/eip-4844). The fake exponential is almost always within 0.3% of the actual value. To prevent long periods of
underuse from leading to long periods of 2x full blocks, we add an extra feature: we don't let $excess$ go below zero. If `actual_total` ever goes below `targeted_total`, we just set `actual_total`
to equal `targeted_total` instead. This does break transaction order invariance in extreme cases (where the blob gas goes all the way down to zero), but this is an acceptable tradeoff for the added
safety. Note also one interesting consequence of this multidimensional market: **when proto-danksharding is first introduced, it's likely to have few users initially, and so for some period of time
Note also one interesting consequence of this multidimensional market: **when proto-danksharding is first introduced, it's likely to have few users initially, and so for some period of time the cost of a blob will almost certainly be extremely cheap, even if "regular" Ethereum blockchain activity remains expensive**. It is the author's opinion that this fee adjustment mechanism is _better_ than the current approach, and so eventually _all_ parts of the EIP-1559 fee market should switch to using it. For a longer and more detailed explanation, see [Dankrad's post](https://
dankradfeist.de/ethereum/2022/03/16/exponential-eip1559.html).

## How does `fake_exponential` work?

Here's the code of `fake_exponential` for convenience:

```python
def fake_exponential(numerator: int, denominator: int) -> int:
    cofactor = 2 ** (numerator // denominator)
    fractional = numerator % denominator
    return cofactor + (
        fractional * cofactor * 2 + (fractional ** 2 * cofactor) // denominator
    ) // (denominator * 3)
```

Here is the core mechanism re-expressed in math, with rounding removed: $FakeExp(x) = 2^{\lfloor x \rfloor} * Q(x - \lfloor x \rfloor)$, where $Q(x) = 1 + \frac{2}{3} * x + \frac{1}{3} * x^2$. The goal is to splice together many instances of $Q(x)$, one shifted and scaled up appropriately for each $[2^k, 2^{k+1}]$ range. $Q(x)$ itself is an approximation of $2^x$
for $0 \le x \le 1$, chosen for the following properties:

* Simplicity (it's a quadratic equation)
* Correctness on the left edge ($Q(0) = 2^0 = 1$)
* Correctness on the right edge ($Q(1) = 2^1 = 2$)
* Smooth slope (we ensure $Q'(1) = 2 * Q'(0)$, so that each shifted+scaled copy of $Q$ has the same slope on its right edge as the next copy has on its left edge)

The last three requirements give three linear equations in three unknown coefficients, and the above given $Q(x)$ gives the only solution. The approximation works surprisingly well; for all but the smallest inputs `fake_exponential`
gives answers within 0.3% of the actual value of $2^x$:

*Figure: `fake_exponential` (blue) vs actual value of $2^x$ for $0 \le x \le 5$, using step size of 20.*

*Figure: `fake_exponential` divided by actual value of $2^x$ for $5 \le x \le 50$, using step size of 20.*
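As a quick sanity check (not part of the EIP; the input range and step size are arbitrary choices), the snippet below compares the `fake_exponential` definition quoted above against the exact value of $2^x$:

```python
def fake_exponential(numerator: int, denominator: int) -> int:
    # same integer approximation of 2^(numerator / denominator) as quoted above
    cofactor = 2 ** (numerator // denominator)
    fractional = numerator % denominator
    return cofactor + (
        fractional * cofactor * 2 + (fractional ** 2 * cofactor) // denominator
    ) // (denominator * 3)

DENOM = 1000   # fixed-point resolution of the input (arbitrary choice)
worst = 0.0
for num in range(10 * DENOM, 50 * DENOM + 1, DENOM // 20):   # x from 10 to 50 in steps of 0.05
    exact = 2.0 ** (num / DENOM)
    worst = max(worst, abs(fake_exponential(num, DENOM) - exact) / exact)
print(f"worst relative error: {worst:.3%}")   # close to the ~0.3% figure quoted above
```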
## What are some questions in proto-danksharding that are still being debated?

_Note: this section can very easily become out-of-date. Do not trust it to give the latest thought on any particular issue._

* All major optimistic rollups use multi-round proofs, and so they can work with the (much cheaper) point evaluation precompile instead of the blob verification precompile. Anyone who _really_ needs blob verification could implement it themselves: take as input the blob $D$ and the versioned hash $h$, choose $x = hash(D, h)$, use [barycentric evaluation](https://hackmd.io/@vbuterin/barycentric_evaluation) to compute $y = D(x)$ and use the point evaluation precompile to verify $h(x) = y$. Hence, **do we _really_ need the blob verification precompile or could we just remove it and only use point evaluation?**
* How well can the chain handle persistent long-term 1 MB+ blocks? If it's too risky, **should the target blob count be reduced at the beginning?**
* **Should blobs be priced in gas or in ETH** (that gets burned)? Are there other adjustments to the fee market that should be made?
* Should the new transaction type be treated as **a blob or an SSZ object**, in the latter case changing `ExecutionPayload` to a union type? (This is a "more work now" vs "more work later" tradeoff)
* Exact details of the **trusted setup implementation** (technically outside the
scope of the EIP itself as for implementers the setup is "just a constant", but it still needs to be done). | {"url":"https://notes.ethereum.org/@vbuterin/proto_danksharding_faq?ref=blog.availproject.org","timestamp":"2024-11-14T21:15:48Z","content_type":"text/html","content_length":"57590","record_id":"<urn:uuid:36b36028-f98c-4d57-9267-75e2042cccf8>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00070.warc.gz"} |
January 09 2020
version 1.0.1
New features
• coef() function to extract the regression coefficients under the MLE approach
• vcov() function to compute the variance-covariance matrix associated with the regression coefficients under the MLE approach
• confint() function to compute the 100(1-alpha)% confidence intervals under the MLE approach
Bug fixes
• yppe() function now works properly if both the number of intervals and the time grid (n_int and rho) are passed as arguments; the default time grid is now computed by using all distinct observed
failure times as the endpoints of the intervals | {"url":"https://cloud.r-project.org/web/packages/YPPE/news/news.html","timestamp":"2024-11-06T23:59:18Z","content_type":"application/xhtml+xml","content_length":"2135","record_id":"<urn:uuid:bcd2aa9a-f67b-43dc-b2a2-aa5c65aacab7>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00434.warc.gz"} |
Lamb's Problem in Seismology | Insights, Solutions & Applications
Lamb’s problem in seismology
Explore Lamb’s Problem in seismology: Insights into seismic wave behavior, applications in earthquake engineering, and advancements in urban planning.
Understanding Lamb’s Problem in Seismology
Lamb’s Problem, a fundamental concept in seismology, refers to the study of surface waves generated in a semi-infinite elastic solid by a point source. This problem, first formulated by Sir Horace
Lamb in the early 20th century, offers crucial insights into how seismic waves propagate through the Earth’s crust following events like earthquakes or artificial explosions.
Insights from Lamb’s Problem
One of the key insights from Lamb’s Problem is the understanding of wave types generated by seismic events. It reveals that three primary types of waves are produced: P-waves (Primary waves), S-waves
(Secondary waves), and Rayleigh waves. P-waves, being compressional waves, travel fastest and are the first to be detected by seismographs. S-waves, shear waves, follow P-waves and move through the
Earth’s interior only. Lastly, Rayleigh waves, a type of surface wave, travel along the Earth’s surface and are responsible for most of the shaking felt during an earthquake.
Solutions to Lamb’s Problem
The mathematical solutions to Lamb’s Problem involve complex calculus and are essential for predicting the behavior of seismic waves. These solutions help in determining the velocity, amplitude, and
frequency of waves as they travel through different layers of the Earth. Understanding these parameters is crucial for earthquake engineering, as it aids in designing structures that can withstand
seismic forces.
Applications in Seismology
Lamb’s Problem has wide-ranging applications in both theoretical and practical seismology. Theoretically, it helps in enhancing our understanding of the Earth’s interior structure. Practically, it is
instrumental in earthquake prediction and mitigation strategies. For instance, the insights from Lamb’s Problem are used in designing buildings and infrastructure that are more resilient to seismic forces.
Moreover, Lamb’s Problem has been pivotal in the development of seismic inversion techniques. These techniques allow scientists to create detailed images of the Earth’s subsurface by analyzing the
way seismic waves travel through it. This is critical not only for understanding geological formations but also for resource exploration, such as in the oil and gas industries.
Advanced Developments in Understanding Lamb’s Problem
In recent years, advancements in computational power and seismic technology have furthered our understanding of Lamb’s Problem. High-performance computing allows for the simulation of seismic wave
propagation in more complex geological structures, providing more accurate predictions of how seismic waves behave in different environments. Additionally, the integration of machine learning
algorithms with seismic data is revolutionizing the way we interpret and predict seismic events.
Seismology in Urban Planning
An important application of Lamb’s Problem is in urban planning and development. Cities located in seismic zones are increasingly using insights from seismology for urban planning and zoning
regulations. This includes identifying safe building zones, engineering buildings to withstand seismic forces, and developing evacuation plans. Understanding the propagation of seismic waves helps in
identifying potential areas of high risk, thereby guiding urban development in a safer and more sustainable manner.
Challenges and Future Directions
Despite its significant contributions, there are still challenges in fully understanding and applying Lamb’s Problem to real-world scenarios. One challenge is the variability of the Earth’s crust,
which can lead to unpredictable wave behaviors. Another challenge is in the area of early warning systems for earthquakes where seconds of advance notice can be crucial. Future research in
seismology, driven by Lamb’s Problem, aims to overcome these challenges by developing more sophisticated models and technologies.
Lamb’s Problem remains a cornerstone in the field of seismology, offering invaluable insights into the behavior of seismic waves. Its applications range from earthquake engineering to urban planning,
and its solutions are integral to our understanding of the Earth’s interior structure. The advancements in computational seismology and the integration of new technologies promise to enhance our
capabilities in predicting and managing seismic events. As our understanding of Lamb’s Problem continues to evolve, it will undoubtedly play a critical role in shaping a safer and more resilient
future against the inevitable forces of nature. | {"url":"https://modern-physics.org/lambs-problem-in-seismology/","timestamp":"2024-11-10T18:50:09Z","content_type":"text/html","content_length":"160450","record_id":"<urn:uuid:4ced7dc0-7901-41b1-b70d-ff3c1aa9f607>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00274.warc.gz"} |
Levenshtein Distance
Levenshtein distance calculates the number of operations needed to change one word to another by applying single-character edits (insertions, deletions or substitutions).
The reference explains this concept very well. For consistency, I extracted a paragraph from it which explains the operations in the Levenshtein algorithm. The source of the following paragraph is the
first reference of this article.
• Cell (0:1) contains red number 1. It means that we need 1 operation to transform M to an empty string. And it is by deleting M. This is why this number is red.
• Cell (0:2) contains red number 2. It means that we need 2 operations to transform ME to an empty string. And it is by deleting E and M.
• Cell (1:0) contains green number 1. It means that we need 1 operation to transform an empty string to M. And it is by inserting M. This is why this number is green.
• Cell (2:0) contains green number 2. It means that we need 2 operations to transform an empty string to MY. And it is by inserting Y and M.
• Cell (1:1) contains number 0. It means that it costs nothing to transform M into M.
• Cell (1:2) contains red number 1. It means that we need 1 operation to transform ME to M. And it is by deleting E.
• And so on…
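For concreteness, here is a straightforward dynamic-programming implementation in Python (my own sketch, not code from the referenced article) that fills exactly the kind of table described above:

```python
def levenshtein(a: str, b: str) -> int:
    # dp[i][j] = number of edits needed to turn a[:i] into b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i                      # delete all characters of a[:i]
    for j in range(len(b) + 1):
        dp[0][j] = j                      # insert all characters of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            substitution = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + substitution,
            )
    return dp[len(a)][len(b)]

print(levenshtein("MY", "ME"))  # 1: substitute Y with E
```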
L Ma (2019). 'Levenshtein Distance', Datumorphism, 05 April. Available at: https://datumorphism.leima.is/cards/math/levenshtein-distance/. | {"url":"https://datumorphism.leima.is/cards/math/levenshtein-distance/","timestamp":"2024-11-14T08:55:26Z","content_type":"text/html","content_length":"114178","record_id":"<urn:uuid:e530acad-f3e9-48a4-bd6a-b576fe4d5f26>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00001.warc.gz"} |
Efficiency means using the best way to achieve a goal. Mathematically, selecting the maximizer of an objective function. The goal may be anything. For example, the objective function may be a
weighted average of performance across various situations.
Robustness means performing well in a wide variety of circumstances. Mathematically, performing well may mean maximizing the weighted average performance across situations, where the weights are the
probabilities of the situations. Performing well may also mean maximizing the probability of meeting a minimum standard – this probability sums the probabilities of situations in which the
(situation-specific) minimum standard is reached. In any case, some objective function is being maximized for robustness. The best way to achieve a goal is being found. The goal is either a weighted
average performance, the probability of exceeding a minimum standard or some similar objective. Thus robustness is efficiency for a particular objective.
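Stated symbolically (my notation, not the author's): if p(s) is the probability of situation s, u(a,s) the performance of action a in it, and m(s) a situation-specific minimum standard, the two robustness criteria above are themselves objective functions to be maximized:

```latex
\max_a \sum_s p(s)\, u(a,s)
\qquad \text{or} \qquad
\max_a \sum_s p(s)\, \mathbf{1}\{u(a,s) \ge m(s)\}
```

Either way, robustness amounts to efficiency with respect to one particular objective.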
The robustness-efficiency tradeoff is just a tradeoff between different objective functions. One objective function in this case is a weighted average that puts positive weight on the other objective function.
Whatever the goal, working towards it efficiently is by definition the best thing to do. The goal usually changes over time, but most of this change is a slow drift. Reevaluating the probabilities of
situations usually changes the goal, in particular if the goal is a weighted average or a sum of probabilities that includes some of these situations. A rare event occurring causes a reevaluation of
the probability of this event, thus necessarily the probability of at least one other event. If the probabilities of rare events are revised up, then the goal tends to shift away from
single-situation efficiency, or performance in a small number of situations, towards robustness (efficiency for a combination of a large number of situations).
To be better prepared for emergencies and crises, the society should prepare efficiently. The most efficient method may be difficult to determine in the short term. If the expected time until the
next crisis is long, then the best way includes gathering resources and storing these in a large number of distributed depots. These resources include human capital – the skills of solving
emergencies. Such skills are produced using training, stored in people’s brains, kept fresh with training. Both the physical and mental resources are part of the economic production in the country.
Economic growth is helpful for creating emergency supplies, raising the medical capacity, freeing up time in which to train preparedness. Unfortunately, economic growth is often wasted on frivolous
consumption of goods and services, often to impress others. Resources wasted in this way may reduce preparedness by causing people to go soft physically and mentally.
Solving a crisis requires cooperation. Consumption of social media may polarize a society, reducing collaboration and thus preparedness.
On the optimal burden of proof
All claims should be considered false until proven otherwise, because lies can be invented much faster than refuted. In other words, the maker of a claim has the burden of providing high-quality
scientific proof, for example by referencing previous research on the subject. Strangely enough, some people seem to believe marketing, political spin and conspiracy theories even after such claims
have been proven false. It remains to wish that everyone received the consequences of their choices (so that karma works).
Considering all claims false until proven otherwise runs into a logical problem: a claim and its opposite claim cannot be simultaneously false. The priority for falsity should be given to actively
made claims, e.g. someone saying that a product or a policy works, or that there is a conspiracy behind an accident. Especially suspect are claims that benefit their maker if people believe them. A
higher probability of falsity should also be attached to positive claims, e.g. that something has an effect in whatever direction (as opposed to no effect) or that an event is due to non-obvious
causes, not chance. The lack of an effect should be the null hypothesis. Similarly, ignorance and carelessness, not malice, should be the default explanation for bad events.
Sometimes two opposing claims are actively made and belief in them benefits their makers, e.g. in politics or when competing products are marketed. This is the hardest case to find the truth in, but
a partial and probabilistic solution is possible. Until rigorous proof is found, one should keep an open mind. Keeping an open mind creates a vulnerability to manipulation: after some claim is proven
false, its proponents often try to defend it by asking its opponents to keep an open mind, i.e. ignore evidence. In such cases, the mind should be closed to the claim until its proponents provide
enough counter-evidence for a neutral view to be reasonable again.
To find which opposing claim is true, the first test is logic. If a claim is logically inconsistent with itself, then it is false by syntactic reasoning alone. A broader test is whether the claim is
consistent with other claims of the same person. For example, Vladimir Putin said that there were no Russian soldiers in Crimea, but a month later gave medals to some Russian soldiers, citing their
successful operation in Crimea. At least one of the claims must be false, because either there were Russian soldiers in Crimea or not. The way people try to weasel out of such self-contradictions is
to say that the two claims referred to different time periods, definitions or circumstances. In other words, change the interpretation of words. A difficulty for the truth-seeker is that sometimes
such a change in interpretation is a legitimate clarification. Tongues do slip. Nonetheless, a contradiction is probabilistic evidence for lying.
The second test for falsity is objective evidence. If there is a streetfight and the two sides accuse each other of starting it, then sometimes a security camera video can refute one of the
contradicting claims. What evidence is objective is, sadly, subject to interpretation. Videos can be photoshopped, though it is difficult and time-consuming. The objectivity of the evidence is
strongly positively correlated with the scientific rigour of its collection process. „Hard” evidence is a signal of the truth, but a probabilistic signal. In this world, most signals are probabilistic.
The third test of falsity is the testimony of neutral observers, preferably several of them, because people misperceive and misremember even under the best intentions. The neutrality of observers is
again up for debate and interpretation. In some cases, an observer is a statistics-gathering organisation. Just like objective evidence, testimony and statistics are probabilistic signals.
The fourth test of falsity is the testimony of interested parties, to which the above caveats apply even more strongly.
Integrating conflicting evidence should use Bayes’ rule, because it keeps probabilities consistent. Consistency helps glean information about one aspect of the question from data on other aspects.
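As a purely illustrative sketch (the probabilities are invented for the example, and the two pieces of evidence are assumed independent given the truth of the claim), Bayesian updating over conflicting evidence looks like this:

```python
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    # posterior probability that the claim is true after seeing one piece of evidence
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

p = 0.5                          # start neutral between the two opposing claims
p = bayes_update(p, 0.8, 0.3)    # objective evidence (e.g. camera footage) favouring the claim
p = bayes_update(p, 0.4, 0.6)    # an interested party's testimony against it (a weaker signal)
print(round(p, 2))               # 0.64: the stronger signal dominates but does not settle the matter
```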
Background knowledge should be combined with the evidence, for example by ruling out physical impossibilities. If a camera shows a car disappearing behind a corner and immediately reappearing, moving
in the opposite direction, then physics says that the original car couldn’t have changed direction so fast. The appearing car must be a different one. Knowledge of human interactions and psychology
is part of the background information, e.g. if smaller, weaker and outnumbered people rarely attack the stronger and more numerous, then this provides probabilistic info about who started a fight.
Legal theory incorporates background knowledge of human nature to get information about the crime – human nature suggests motives. Asking: „Who benefits?” has a long history in law. | {"url":"https://sanderheinsalu.com/ajaveeb/?tag=logic","timestamp":"2024-11-03T02:45:37Z","content_type":"text/html","content_length":"28294","record_id":"<urn:uuid:47e329f9-816f-4299-8e80-673b79740d63>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00005.warc.gz"} |
Points and triangles in the plane and halving planes in space
We prove that for any set S of n points in the plane and n^(3-α) triangles spanned by the points of S there exists a point (not necessarily of S) contained in at least n^(3-3α)/(512 log^5 n) of the
triangles. This implies that any set of n points in three-dimensional space defines at most 6.4 n^(8/3) log^(5/3) n halving planes.
Original language English (US)
Title of host publication Proc Sixth Annu Symp Comput Geom
Publisher Publ by ACM
Pages 112-115
Number of pages 4
ISBN (Print) 0897913620, 9780897913621
State Published - 1990
Externally published Yes
Event Proceedings of the Sixth Annual Symposium on Computational Geometry - Berkeley, CA, USA
Duration: Jun 6 1990 → Jun 8 1990
Publication series
Name Proc Sixth Annu Symp Comput Geom
Conference Proceedings of the Sixth Annual Symposium on Computational Geometry
City Berkeley, CA, USA
Period 6/6/90 → 6/8/90
All Science Journal Classification (ASJC) codes
Dive into the research topics of 'Points and triangles in the plane and halving planes in space'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/points-and-triangles-in-the-plane-and-halving-planes-in-space","timestamp":"2024-11-11T08:28:49Z","content_type":"text/html","content_length":"43952","record_id":"<urn:uuid:c813a7ca-5c23-4ed9-b05e-8f2ac3290f8e>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00510.warc.gz"} |
Trade Tokens | Uniswap
In Uniswap, there is a separate exchange contract for each ERC20 token. These exchanges hold reserves of both ETH and their associated ERC20. Instead of waiting to be matched in an order-book, users
can make trades against the reserves at any time. Reserves are pooled between a decentralized network of liquidity providers who collect fees on every trade.
Pricing is automatic, based on the x * y = k market making formula which automatically adjusts prices based off the relative sizes of the two reserves and the size of the incoming trade. Since all
tokens share ETH as a common pair, it is used as an intermediary asset for direct trading between any ERC20 ⇄ ERC20 pair.
ETH ⇄ ERC20 Calculations
The variables needed to determine price when trading between ETH and ERC20 tokens is:
• ETH reserve size of the ERC20 exchange
• ERC20 reserve size of the ERC20 exchange
• Amount sold (input) or amount bought (output)
Amount Bought (sell order)
For sell orders (exact input), the amount bought (output) is calculated:
// Sell ETH for ERC20
const inputAmount = userInputEthValue
const inputReserve = web3.eth.getBalance(exchangeAddress)
const outputReserve = tokenContract.methods.balanceOf(exchangeAddress).call()
// Sell ERC20 for ETH
const inputAmount = userInputTokenValue
const inputReserve = tokenContract.methods.balanceOf(exchangeAddress).call()
const outputReserve = web3.eth.getBalance(exchangeAddress)
// Output amount bought
const numerator = inputAmount * outputReserve * 997
const denominator = inputReserve * 1000 + inputAmount * 997
const outputAmount = numerator / denominator
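To make the arithmetic concrete, here is a small Python sketch of the same output-amount formula (the reserve sizes are hypothetical, purely for illustration):

```python
def get_output_amount(input_amount: int, input_reserve: int, output_reserve: int) -> int:
    # same formula as above: a 0.3% fee on the input, then constant-product (x * y = k) pricing
    numerator = input_amount * output_reserve * 997
    denominator = input_reserve * 1000 + input_amount * 997
    return numerator // denominator

eth_reserve = 10 * 10**18          # hypothetical: 10 ETH in the exchange (in wei)
token_reserve = 5_000 * 10**18     # hypothetical: 5,000 tokens (in base units)

bought = get_output_amount(1 * 10**18, eth_reserve, token_reserve)
print(bought / 10**18)  # ≈ 453.3 tokens for 1 ETH, below the 500 spot rate (fee + price impact)
```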
Amount Sold (buy order)
For buy orders (exact output), the cost (input) is calculated:
// Buy ERC20 with ETH
const outputAmount = userInputTokenValue
const inputReserve = web3.eth.getBalance(exchangeAddress)
const outputReserve = tokenContract.methods.balanceOf(exchangeAddress).call()
// Buy ETH with ERC20
const outputAmount = userInputEthValue
const inputReserve = tokenContract.methods.balanceOf(exchangeAddress).call()
const outputReserve = web3.eth.getBalance(exchangeAddress)
// Cost
const numerator = outputAmount * inputReserve * 1000
const denominator = (outputReserve - outputAmount) * 997
const inputAmount = numerator / denominator + 1
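The exact-output direction can be sketched the same way in Python (again with hypothetical reserves); note how the `+ 1` rounds the cost up in the buyer's disfavour:

```python
def get_input_amount(output_amount: int, input_reserve: int, output_reserve: int) -> int:
    # cost of an exact-output trade, mirroring the buy-order formula above
    numerator = output_amount * input_reserve * 1000
    denominator = (output_reserve - output_amount) * 997
    return numerator // denominator + 1

eth_reserve = 10 * 10**18          # hypothetical reserves: 10 ETH and 5,000 tokens
token_reserve = 5_000 * 10**18

cost = get_input_amount(500 * 10**18, eth_reserve, token_reserve)   # buy exactly 500 tokens
print(cost / 10**18)  # ≈ 1.114 ETH, above the 1 ETH spot cost because of slippage and the fee
```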
Liquidity Provider Fee
There is a 0.3% liquidity provider fee built into the price formula. This can be calculated:
fee = inputAmount * 0.003
Exchange Rate
The exchange rate is simply the output amount divided by the input amount.
const rate = outputAmount / inputAmount
ERC20 ⇄ ERC20 Calculations
The variables needed to determine price when trading between two ERC20 tokens is:
• ETH reserve size of the input ERC20 exchange
• ERC20 reserve size of the input ERC20 exchange
• ETH reserve size of the output ERC20 exchange
• ERC20 reserve size of the output ERC20 exchange
• Amount sold (input) or amount bought (output)
Amount Bought (sell order)
For sell orders (exact input), the amount bought (output) is calculated:
// TokenA (ERC20) to ETH conversion
const inputAmountA = userInputTokenAValue
const inputReserveA = tokenContractA.methods.balanceOf(exchangeAddressA).call()
const outputReserveA = web3.eth.getBalance(exchangeAddressA)
const numeratorA = inputAmountA * outputReserveA * 997
const denominatorA = inputReserveA * 1000 + inputAmountA * 997
const outputAmountA = numeratorA / denominatorA
// ETH to TokenB conversion
const inputAmountB = outputAmountA
const inputReserveB = web3.eth.getBalance(exchangeAddressB)
const outputReserveB = tokenContractB.methods.balanceOf(exchangeAddressB).call()
const numeratorB = inputAmountB * outputReserveB * 997
const denominatorB = inputReserveB * 1000 + inputAmountB * 997
const outputAmountB = numeratorB / denominatorB
Amount Sold (buy order)
For buy orders (exact output), the cost (input) is calculated:
// Buy TokenB with ETH
const outputAmountB = userInputTokenBValue
const inputReserveB = web3.eth.getBalance(exchangeAddressB)
const outputReserveB = tokenContractB.methods.balanceOf(exchangeAddressB).call()
// Cost
const numeratorB = outputAmountB * inputReserveB * 1000
const denominatorB = (outputReserveB - outputAmountB) * 997
const inputAmountB = numeratorB / denominatorB + 1
// Buy ETH with TokenA
const outputAmountA = userInputEthValue
const inputReserveA = tokenContractA.methods.balanceOf(exchangeAddressA).call()
const outputReserveA = web3.eth.getBalance(exchangeAddressA)
// Cost
const numeratorA = outputAmountA * inputReserveA * 1000
const denominatorA = (outputReserveA - outputAmountA) * 997
const inputAmountA = numeratorA / denominatorA + 1
Liquidity Provider Fee
There is a 0.30% liquidity provider fee to swap from TokenA to ETH on the input exchange. There is another 0.3% liquidity provider fee to swap the remaining ETH to TokenB.
const exchangeAFee = inputAmountA * 0.003
const exchangeBFee = inputAmountB * 0.003
Since users only input TokenA, it can be represented to them as:
const combinedFee = inputAmountA * 0.005991
Exchange Rate
The exchange rate is simply the output amount divided by the input amount.
const rate = outputAmountB / inputAmountA
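Chaining the two hops in a Python sketch makes the ERC20 → ERC20 path explicit (reserve sizes are hypothetical; both legs pay the 0.3% fee):

```python
def get_output_amount(input_amount: int, input_reserve: int, output_reserve: int) -> int:
    numerator = input_amount * output_reserve * 997
    denominator = input_reserve * 1000 + input_amount * 997
    return numerator // denominator

# hypothetical reserves: exchange A holds TokenA/ETH, exchange B holds ETH/TokenB
a_token_reserve, a_eth_reserve = 5_000 * 10**18, 10 * 10**18      # 500 TokenA per ETH
b_eth_reserve, b_token_reserve = 20 * 10**18, 40_000 * 10**18     # 2,000 TokenB per ETH

token_a_in = 100 * 10**18
eth_out = get_output_amount(token_a_in, a_token_reserve, a_eth_reserve)    # TokenA -> ETH on A
token_b_out = get_output_amount(eth_out, b_eth_reserve, b_token_reserve)   # ETH -> TokenB on B
print(token_b_out / token_a_in)  # ≈ 3.86 TokenB per TokenA vs the 4.0 spot rate
```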
Many Uniswap functions include a transaction deadline that sets a time after which a transaction can no longer be executed. This limits miners holding signed transactions for extended durations and
executing them based off market movements. It also reduces uncertainty around transactions that take a long time to execute due to issues with gas price.
Deadlines are calculated by adding the desired amount of time (in seconds) to the latest Ethereum block timestamp.
web3.eth.getBlock('latest', (error, block) => {
  deadline = block.timestamp + 300 // transaction expires in 300 seconds (5 minutes)
})
Uniswap allows traders to swap tokens and transfer the output to a new recipient address. This allows for a type of payment where the payer sends one token and the payee receives another.
ETH ⇄ ERC20 Trades
Coming soon...
ERC20 ⇄ ERC20 Trades
Coming soon...
Custom Pools
Coming soon... | {"url":"https://docs.uniswap.org/contracts/v1/guides/trade-tokens","timestamp":"2024-11-12T02:31:46Z","content_type":"text/html","content_length":"83800","record_id":"<urn:uuid:56911069-4eec-4b54-bfe9-4811b8deccfe>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00390.warc.gz"} |
Al-Tūsī Originates the Concept of Mathematical Function
About 1150 Persian mathematician and astronomer of the Islamic Golden Age Sharaf al-Dīn al-Muẓaffar ibn Muḥammad ibn al-Muẓaffar al-Ṭūsī, who taught in Aleppo and Mosul, originated the concept of
mathematical function.
"In his analysis of the equation x3 + d = bx2 for example, he begins by changing the equation's form to x2(b − x) = d. He then states that the question of whether the equation has a solution depends
on whether or not the 'function' on the left side reaches the value d. To determine this, he finds a maximum value for the function. Sharaf al-Din then states that if this value is less than d, there
are no positive solutions; if it is equal to d, then there is one solution; and if it is greater than d, then there are two solutions" (Wikipedia article on Function (mathematics), accessed | {"url":"https://historyofinformation.com/detail.php?entryid=2322","timestamp":"2024-11-06T01:35:02Z","content_type":"text/html","content_length":"13815","record_id":"<urn:uuid:349fc9df-0add-424a-a6e4-cb133e77d511>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00246.warc.gz"} |
How Many Hours of Video Can 128 GB Hold?
The following table shows the Number of Hours of Video that can be stored on a 128 Gigabyte drive for different resolutions at the highest bitrate setting.
Resolution Number of hours
8K 56 minutes
4K 3 hours, 20 minutes
2K 9 hours, 28 minutes
1080p 18 hours, 57 minutes
720p 29 hours, 56 minutes
480p 71 hours, 6 minutes
360p 189 hours, 37 minutes
As the resolution decreases the number of hours increases.
Use the calculator below to find the number of hours of video that can be stored on a 128 Gigabyte drive.
• Video resolution
• Bit Rate (Megabits per second or Mbps)
Video resolution is the number of pixels that can be displayed. Google recommends a range of bitrates for the following resolutions:
• 8K
• 4K
• 2K
• 1080p
• 720p
• 480p
• 360p
As an example for High Frame Rate 2K (1440p) video the recommended bit rate is:
• 24 Mbps for Standard Dynamic Range
• 30 Mbps for High Dynamic Range
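The rows in the table above follow from a simple capacity-over-bitrate calculation. Here is a small Python sketch; treating 1 GB as 10^9 bytes and pairing resolutions with the bitrates above is an assumption about how the table was built:

```python
def hours_of_video(capacity_gb: float, bitrate_mbps: float) -> float:
    capacity_bits = capacity_gb * 1e9 * 8          # gigabytes -> bits
    seconds = capacity_bits / (bitrate_mbps * 1e6)
    return seconds / 3600

print(hours_of_video(128, 30))   # ≈ 9.5 hours, close to the 2K row (9 hours, 28 minutes)
print(hours_of_video(128, 15))   # ≈ 19 hours, close to the 1080p row (18 hours, 57 minutes)
```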
Related Calculators
Unit Converters
Terabyte to Megabit | {"url":"https://3roam.com/how-many-hours-of-video-can-128-gb-hold/","timestamp":"2024-11-05T03:35:55Z","content_type":"text/html","content_length":"200465","record_id":"<urn:uuid:89f52bcc-ca50-4527-8a48-c10456fc1be6>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00806.warc.gz"} |