On the Korovkin approximation theorem and Volkov-type theorems (Journal of Inequalities and Applications)
Nihan Uygun
In this short paper, we give a generalization of the classical Korovkin approximation theorem (Korovkin in Linear Operators and Approximation Theory, 1960), of Volkov-type theorems (Volkov in Dokl. Akad. Nauk SSSR 115:17-19, 1957), and of a recent result of Taşdelen and Erençin (J. Math. Anal. Appl. 331(1):727-735, 2007).
In this paper, the classical Korovkin theorem (see [1]) and one of the key results (Theorem 1) of [2] will be generalized to arbitrary compact Hausdorff spaces. For a topological space X, the space of real-valued continuous functions on X will, as usual, be denoted by $C(X)$. We note that if X is a compact Hausdorff space, then $C(X)$ is a Banach space under pointwise algebraic operations and under the norm
$$\|f\| = \sup_{x \in X} |f(x)|.$$
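The sup norm above is easy to approximate numerically. A quick sketch (not from the paper; grid sampling only bounds the supremum from below, so this is an approximation, not an exact value):

```python
from math import sin, pi

def sup_norm(f, grid=1000):
    """Approximate ||f|| = sup |f(x)| over X = [0, 1] on a uniform grid."""
    xs = [i / grid for i in range(grid + 1)]
    return max(abs(f(x)) for x in xs)

print(sup_norm(lambda x: sin(pi * x)))  # ≈ 1.0, attained at x = 0.5
```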
Let X be a compact Hausdorff space and E a subspace of $C(X)$. A linear map $A : E \to C(X)$ is called positive if $A(f) \ge 0$ in $C(X)$ whenever $f \ge 0$ in E. Here $f \ge 0$ means that $f(x) \ge 0$ in $\mathbb{R}$ for all $x \in X$. For more details on abstract Korovkin approximation theory, we refer to [3] and [4].
The constant-one function on a topological space X will be denoted by $f_0$; that is, $f_0(x) = 1$ for all $x \in X$. If $A = (a,b)$ and $B = (c,d)$ are points of $\mathbb{R}^2$, then the Euclidean distance between A and B, given by
$$|(a,b) - (c,d)| = \sqrt{(a-c)^2 + (b-d)^2},$$
will be denoted by $|A - B|$.
Definition 1.1 Let X and Y be compact Hausdorff spaces, let Z be the product space of X and Y, and let $h \in C(Z \times Z)$ and $f \in C(Z)$ be given. The modulus of continuity of f with respect to h is a function $w_h(f) : [0, \infty) \to \mathbb{R}$ defined by $w_h(f)(0) = 0$ and
$$w_h(f)(\delta) = \sup\bigl\{ |f(u,v) - f(x,y)| : (u,v), (x,y) \in Z \text{ and } |h((u,v),(x,y))| < \delta \bigr\}$$
for $\delta > 0$, with the following additional properties:
- $w_h(f)$ is increasing;
- $\lim_{\delta \to 0} w_h(f)(\delta) = 0$.
We note that the above definition is motivated by [2], p. 729, and generalizes the definition given there.
Definition 1.2 Let X, Y, and Z be as in Definition 1.1, and let $h \in C(Z \times Z)$ be given. We define $H_{w,h}$ as the set of all continuous functions $f \in C(X \times Y)$ such that, for all $(u,v), (x,y) \in X \times Y$,
$$|f(u,v) - f(x,y)| \le w_h(f)\bigl( |h((u,v),(x,y))| \bigr).$$
Whenever $H_{w,h}$ is mentioned, we always suppose that h satisfies the property that $H_{w,h}$ is a vector subspace of $C(X \times Y)$. The space $H_{w,h}$ has been considered in [2] by taking $X = [0, A]$ and $Y = [0, B]$ with $A, B > 0$,
$$h((u,v),(x,y)) = \bigl\| (f_1(u,v), f_2(u,v)) - (f_1(x,y), f_2(x,y)) \bigr\|,$$
where
$$f_1(u,v) = \frac{u}{1-u} \quad \text{and} \quad f_2(u,v) = \frac{v}{1-v}.$$
The main result of this paper will be obtained via the following lemma.
Lemma 2.1 Let X and Y be compact Hausdorff spaces and let Z be the product space of X and Y. Let $f_1, f_2 \in C(Z)$ and $h \in C(Z \times Z)$ with
$$h((u,v),(x,y)) = |(f_1(u,v), f_2(u,v)) - (f_1(x,y), f_2(x,y))|$$
be given, and suppose that $H_{w,h}$ is a subspace of $C(X \times Y)$ with $f_1, f_2 \in H_{w,h}$. Let $A : H_{w,h} \to C(Z)$ be a positive linear map. Let $(u,v) \in Z$ be given, and define $\varphi_{u,v}, \Phi_{u,v} \in C(Z)$ by
$$\varphi_{u,v} = (f_1(u,v) f_0 - f_1)^2 \quad \text{and} \quad \Phi_{u,v} = (f_2(u,v) f_0 - f_2)^2.$$
Then, for each $(u,v) \in Z$,
$$0 \le A(\varphi_{u,v} + \Phi_{u,v})(u,v) \le C_1 [A(f_0) - f_0](u,v) - C_2 [A(f_1 + f_2) - (f_1 + f_2)](u,v) + [A(f_1^2 + f_2^2) - (f_1^2 + f_2^2)](u,v),$$
where
$$C_1 = f_1(u,v)^2 + f_2(u,v)^2 \quad \text{and} \quad C_2 = -2\bigl(f_1(u,v) + f_2(u,v)\bigr).$$

Proof Note that
$$0 \le \varphi_{u,v} = f_1(u,v)^2 f_0 - 2 f_1(u,v) f_1 + f_1^2.$$
Applying the linearity and positivity of A, we have
$$0 \le A(\varphi_{u,v}) = f_1(u,v)^2 A(f_0) - 2 f_1(u,v) A(f_1) + A(f_1^2).$$
Evaluating at $(u,v)$ and adding and subtracting the corresponding constant terms (which cancel), one obtains
$$0 \le A(\varphi_{u,v})(u,v) = f_1^2(u,v)[A(f_0) - f_0](u,v) - 2 f_1(u,v)[A(f_1) - f_1](u,v) + [A(f_1^2) - f_1^2](u,v).$$
Similarly,
$$A(\Phi_{u,v})(u,v) = f_2^2(u,v)[A(f_0) - f_0](u,v) - 2 f_2(u,v)[A(f_2) - f_2](u,v) + [A(f_2^2) - f_2^2](u,v).$$
Adding these two identities and using the linearity of A applied to $\varphi_{u,v} + \Phi_{u,v}$ completes the proof. □
Lemma 2.2 Let X and Y be compact Hausdorff spaces and let $f_1$, $f_2$, and h be defined as in Lemma 2.1. Let $f \in H_{w,h}$ be given. For each $\epsilon > 0$ there exists $\delta > 0$ such that
$$|f(u,v) - f(x,y)| < \epsilon + \frac{2\|f\|}{\delta^2}\, h^2\bigl((u,v),(x,y)\bigr)$$
for all $(u,v), (x,y) \in Z$.

Proof Let $\epsilon > 0$ be given. Since $w_h(f) : [0, \infty) \to \mathbb{R}$ is continuous, there exists $\delta > 0$ such that $w_h(f)(\delta') < \epsilon$ whenever $0 \le \delta' < \delta$. Since
$$|f(u,v) - f(x,y)| \le w_h(f)\bigl(|h((u,v),(x,y))|\bigr) \quad \text{for all } (u,v), (x,y) \in Z,$$
this implies that
$$[\varphi_{u,v} + \Phi_{u,v}]^{1/2}(x,y) = |h((u,v),(x,y))| < \delta \quad \text{implies} \quad |f(u,v) - f(x,y)| < \epsilon,$$
where $\varphi_{u,v}$ and $\Phi_{u,v}$ are defined as in Lemma 2.1. If $[\varphi_{u,v} + \Phi_{u,v}]^{1/2}(x,y) \ge \delta$, then
$$|f(u,v) - f(x,y)| \le 2\|f\| \le 2\|f\| \frac{[\varphi_{u,v} + \Phi_{u,v}](x,y)}{\delta^2}.$$
Combining the two cases, for each $(u,v) \in Z$ we obtain
$$|f(u,v) f_0 - f| \le \epsilon + 2\|f\| \frac{\varphi_{u,v} + \Phi_{u,v}}{\delta^2}. \qquad \square$$
Lemma 2.3 Suppose that the hypotheses of Lemma 2.2 are satisfied. Let $f \in H_{w,h}$ and $\epsilon > 0$ be given. Then there exists $C > 0$ such that
$$\|A(f) - f\| < \epsilon + C\bigl( \|A(f_0) - f_0\| + \|A(f_1 + f_2) - (f_1 + f_2)\| + \|A(f_1^2 + f_2^2) - (f_1^2 + f_2^2)\| \bigr).$$

Proof Set $K := \frac{2\|f\|}{\delta^2}$, where $\delta > 0$ is chosen as in Lemma 2.2. For each $(u,v) \in Z$,
$$|f(u,v) f_0 - f| \le \epsilon + K [\varphi_{u,v} + \Phi_{u,v}] \le \epsilon + K \bigl[f_1^2(u,v) f_0 + f_2^2(u,v) f_0 - 2 f_1(u,v) f_1 - 2 f_2(u,v) f_2 + (f_1^2 + f_2^2)\bigr],$$
and applying the positive linear operator A and evaluating at $(u,v)$ yields
$$\bigl|[A(f) - f(u,v) A(f_0)](u,v)\bigr| \le \epsilon A(f_0)(u,v) + K \bigl(A(\varphi_{u,v}) + A(\Phi_{u,v})\bigr)(u,v) = \epsilon + \epsilon [A(f_0) - f_0](u,v) + K A(\varphi_{u,v} + \Phi_{u,v})(u,v).$$
Hence
$$|A(f) - f|(u,v) \le \bigl|[A(f) - f(u,v) A(f_0)](u,v)\bigr| + |f(u,v)|\,\bigl|(A(f_0) - f_0)(u,v)\bigr| \le \epsilon + K A(\varphi_{u,v} + \Phi_{u,v})(u,v) + (\|f\| + \epsilon)\|A(f_0) - f_0\|.$$
Now, applying Lemma 2.1 and taking
$$C = 2K + \|f\|,$$
we have what is to be shown. □
We note that in the above lemma C depends only on $\|f\|$ and ε, and is independent of the positive linear operator A.
Theorem 2.4 Let X and Y be compact Hausdorff spaces and let Z be the product space of X and Y. Let $f_1, f_2 \in C(Z)$ and $h \in C(Z \times Z)$ with
$$h((u,v),(x,y)) = \bigl\|(f_1(u,v), f_2(u,v)) - (f_1(x,y), f_2(x,y))\bigr\|$$
be given, and suppose that $H_{w,h}$ is a subspace of $C(X \times Y)$ with $f_1, f_2 \in H_{w,h}$. Let $(A_n)_{n \in \mathbb{N}}$ be a sequence of positive operators from $H_{w,h}$ into $C(X \times Y)$ such that
$$\|A_n(f_0) - f_0\| \to 0, \qquad \|A_n(f_1) - f_1\| \to 0, \qquad \|A_n(f_2) - f_2\| \to 0, \qquad \|A_n(f_1^2 + f_2^2) - (f_1^2 + f_2^2)\| \to 0.$$
Then, for each $f \in H_{w,h}$,
$$\|A_n(f) - f\| \to 0.$$

Proof Let $f \in H_{w,h}$ and $\epsilon > 0$ be given. By Lemma 2.3, there exists $C > 0$ (depending only on $\|f\|$ and $\epsilon > 0$) such that for each n,
$$\|A_n(f) - f\| \le \epsilon + C\bigl(\|A_n(f_0) - f_0\| + \|A_n(f_1 + f_2) - (f_1 + f_2)\| + \|A_n(f_1^2 + f_2^2) - (f_1^2 + f_2^2)\|\bigr).$$
Since $\epsilon > 0$ is arbitrary and the last three terms of the inequality converge to zero by the assumption, we have $A_n(f) \to f$. □
Note also that in Theorem 1 of [2] it is not necessary to take a double sequence of positive operators: as the above result reveals, one can take a single sequence $(A_n)$ instead of a double sequence $(A_{n,m})$. If one takes $X = [0,1]$, $Y = \{y\}$ a singleton, and $f_1, f_2 \in C(X \times Y)$ defined by
$$f_1(u,v) = u \quad \text{and} \quad f_2 = 0,$$
then Theorem 2.4 becomes the classical Korovkin theorem.
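The classical one-dimensional case can be illustrated numerically. The Bernstein operators $B_n$ are positive linear operators on $C([0,1])$, and Korovkin convergence on the test functions $1, x, x^2$ forces $\|B_n(f) - f\| \to 0$ for every continuous f. A minimal sketch (not from the paper; function names are mine):

```python
from math import comb, exp

def bernstein(f, n, x):
    """Evaluate the n-th Bernstein operator B_n(f) at x in [0, 1]."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

def sup_error(f, n, grid=200):
    """Approximate ||B_n(f) - f|| on a uniform grid over [0, 1]."""
    xs = [i / grid for i in range(grid + 1)]
    return max(abs(bernstein(f, n, x) - f(x)) for x in xs)

# The error shrinks as n grows, as Korovkin's theorem predicts:
for n in (5, 20, 80):
    print(n, sup_error(exp, n))
```

Note that positivity of $B_n$ is immediate: all the weights $\binom{n}{k} x^k (1-x)^{n-k}$ are nonnegative on $[0,1]$.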
If one takes $X = [0,A]$ and $Y = [0,B]$ with $0 < A, B < 1$, and $f_1$, $f_2$ defined by
$$f_1(u,v) = \frac{u}{1-u} \quad \text{and} \quad f_2(u,v) = \frac{v}{1-v},$$
then the above theorem becomes Theorem 1 of [2].
For linear positive operators of two variables, Theorem 2.4 generalizes the result of Volkov in [5].
We believe that the above theorem can be generalized to n-fold products by taking $Z = X_1 \times X_2 \times \cdots \times X_n$ instead of $Z = X \times Y$, where $X_1, X_2, \dots, X_n$ are compact Hausdorff spaces.
The above theorem also holds if one replaces $C(X)$ by $C_b(X)$, the space of bounded continuous functions, in the case of an arbitrary topological space X.
[1] Korovkin, P.P.: Linear Operators and Approximation Theory. Hindustan Publishing Co., Delhi (1960)
[2] Taşdelen, F., Erençin, A.: The generalization of bivariate MKZ operators by multiple generating functions. J. Math. Anal. Appl. 331(1), 727-735 (2007). 10.1016/j.jmaa.2006.09.024
[3] Altomare, F., Campiti, M.: Korovkin-Type Approximation Theory and Its Applications. de Gruyter, Berlin (1994)
[4] Lorentz, G.G.: Approximation of Functions, 2nd edn. Chelsea, New York (1986)
[5] Volkov, V.I.: On the convergence of sequences of linear positive operators in the space of continuous functions of two variables. Dokl. Akad. Nauk SSSR 115, 17-19 (1957) (in Russian)
Department of Mathematics, Abant İzzet Baysal University, Gölköy Kampüsü, Bolu, 14280, Turkey
Correspondence to Nihan Uygun.
Uygun, N. On the Korovkin approximation theorem and Volkov-type theorems. J Inequal Appl 2014, 89 (2014). https://doi.org/10.1186/1029-242X-2014-89
Keywords: Volkov-type theorem; modulus of continuity
Internal combustion engine with throttle and rotational inertia and time lag - MATLAB - MathWorks Deutschland
$$w_{\pm} = \frac{1}{2}\left(-p_2 \pm \sqrt{p_2^2 + 4 p_1 p_3}\right)$$
$$\Pi = \max(\Pi_i, \Pi_c)$$
$$\frac{d\Pi_c}{dt} = \frac{0.5\left(1 - \tanh\left(4 \cdot \frac{\omega - \omega_r}{\omega_t}\right)\right) - \Pi_c}{\tau}$$
$$\mathrm{BMEP} = T \cdot \frac{2\pi n_c}{V_d}$$
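The BMEP relation above can be checked with a few lines of Python. All numeric values here are assumed for illustration, not taken from the MATLAB page; $n_c$ is the number of crankshaft revolutions per power stroke (2 for a four-stroke engine) and $V_d$ is the displacement:

```python
from math import pi

T = 200.0    # brake torque, N*m (assumed)
n_c = 2      # crankshaft revolutions per power stroke, four-stroke engine
V_d = 0.002  # displacement, m^3 (a hypothetical 2.0 L engine)

# BMEP = T * (2*pi*n_c / V_d), in pascals
bmep = T * (2 * pi * n_c / V_d)
print(bmep / 1e5)  # ≈ 12.57 bar
```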
Prewellordering - Wikipedia
In set theory, a prewellordering on a set X is a preorder $\leq$ on X (a transitive and strongly connected relation on X) that is wellfounded in the sense that the relation $x \leq y \land y \nleq x$ is wellfounded. If $\leq$ is a prewellordering on X, then the relation $\sim$ defined by $x \sim y \iff x \leq y \land y \leq x$ is an equivalence relation on X, and $\leq$ induces a wellordering on the quotient $X/{\sim}$. The order-type of this induced wellordering is an ordinal, referred to as the length of the prewellordering.
A norm on a set X is a map from X into the ordinals. Every norm induces a prewellordering: if $\phi : X \to \mathrm{Ord}$ is a norm, the associated prewellordering is given by $x \leq y \iff \phi(x) \leq \phi(y)$. Conversely, every prewellordering is induced by a unique regular norm (a norm $\phi : X \to \mathrm{Ord}$ is regular if, for any $x \in X$ and any $\alpha < \phi(x)$, there is a $y \in X$ with $\phi(y) = \alpha$).
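As a toy illustration (not from the article), a norm into the finite ordinals induces a prewellordering whose quotient is wellordered by the norm values; the length is the number of equivalence classes. In Python:

```python
# Hypothetical norm phi on a four-element set X; values play the role of
# (finite) ordinals. The induced relation is x <= y iff phi(x) <= phi(y).
phi = {"a": 0, "b": 2, "c": 0, "d": 1}
X = list(phi)

def leq(x, y):
    return phi[x] <= phi[y]

# Strongly connected: any two elements are comparable.
assert all(leq(x, y) or leq(y, x) for x in X for y in X)
# Transitive.
assert all(leq(x, z) for x in X for y in X for z in X
           if leq(x, y) and leq(y, z))

# Equivalence classes x ~ y iff x <= y and y <= x collapse equal phi-values;
# the quotient is wellordered by phi, and its order type is the length.
classes = sorted({phi[x] for x in X})
print("length:", len(classes))  # 3 classes: {a, c}, {d}, {b}
```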
Prewellordering property
If $\boldsymbol{\Gamma}$ is a pointclass of subsets of some collection $\mathcal{F}$ of Polish spaces, with $\mathcal{F}$ closed under Cartesian product, and if $\leq$ is a prewellordering of some subset P of some element X of $\mathcal{F}$, then $\leq$ is said to be a $\boldsymbol{\Gamma}$-prewellordering of P if the relations $<^*$ and $\leq^*$ are elements of $\boldsymbol{\Gamma}$, where for $x, y \in X$:
$$x <^* y \iff x \in P \land [y \notin P \lor \{x \leq y \land y \nleq x\}]$$
$$x \leq^* y \iff x \in P \land [y \notin P \lor x \leq y]$$
$\boldsymbol{\Gamma}$ is said to have the prewellordering property if every set in $\boldsymbol{\Gamma}$ admits a $\boldsymbol{\Gamma}$-prewellordering.
The prewellordering property is related to the stronger scale property; in practice, many pointclasses having the prewellordering property also have the scale property, which allows drawing stronger conclusions.
$\boldsymbol{\Pi}^1_1$ and $\boldsymbol{\Sigma}^1_2$ both have the prewellordering property; this is provable in ZFC alone. Assuming sufficient large cardinals, for every $n \in \omega$, $\boldsymbol{\Pi}^1_{2n+1}$ and $\boldsymbol{\Sigma}^1_{2n+2}$ have the prewellordering property.
If $\boldsymbol{\Gamma}$ is an adequate pointclass with the prewellordering property, then it also has the reduction property: for any space $X \in \mathcal{F}$ and any sets $A, B \subseteq X$, with A and B both in $\boldsymbol{\Gamma}$, the union $A \cup B$ may be partitioned into sets $A^*, B^*$, both in $\boldsymbol{\Gamma}$, such that $A^* \subseteq A$ and $B^* \subseteq B$.
If $\boldsymbol{\Gamma}$ is an adequate pointclass whose dual pointclass has the prewellordering property, then $\boldsymbol{\Gamma}$ has the separation property: for any space $X \in \mathcal{F}$ and any disjoint sets $A, B \subseteq X$ both in $\boldsymbol{\Gamma}$, there is a set $C \subseteq X$ such that both C and its complement $X \setminus C$ are in $\boldsymbol{\Gamma}$, with $A \subseteq C$ and $B \cap C = \emptyset$.
For example, $\boldsymbol{\Pi}^1_1$ has the prewellordering property, so $\boldsymbol{\Sigma}^1_1$ has the separation property. This means that if A and B are disjoint analytic subsets of some Polish space X, then there is a Borel subset C of X such that C includes A and is disjoint from B.
Graded poset – a graded poset is analogous to a prewellordering with a norm, replacing a map to the ordinals with a map to the integers
Moschovakis, Yiannis N. (1980). Descriptive Set Theory. North Holland. ISBN 0-444-70199-0.
Venturi effect - Wikipedia
The Venturi effect is the reduction in fluid pressure that results when a fluid flows through a constricted section (or choke) of a pipe. The Venturi effect is named after its discoverer, the 18th-century Italian physicist Giovanni Battista Venturi.
The static pressure in the first measuring tube (1) is higher than at the second (2), and the fluid speed at "1" is lower than at "2", because the cross-sectional area at "1" is greater than at "2".
A flow of air through a Pitot tube Venturi meter, showing the columns connected in a manometer and partially filled with water. The meter is "read" as a differential pressure head in cm or inches of water.
Idealized flow in a Venturi tube
In inviscid fluid dynamics, an incompressible fluid's velocity must increase as it passes through a constriction in accord with the principle of mass continuity, while its static pressure must decrease in accord with the principle of conservation of mechanical energy (Bernoulli's principle). Thus, any gain in kinetic energy a fluid may attain by its increased velocity through a constriction is balanced by a drop in pressure.
By measuring pressure, the flow rate can be determined, as in various flow measurement devices such as Venturi meters, Venturi nozzles and orifice plates.
Referring to the adjacent diagram, using Bernoulli's equation in the special case of steady, incompressible, inviscid flows (such as the flow of water or other liquid, or low-speed flow of gas) along a streamline, the theoretical pressure drop at the constriction is given by
$$p_1 - p_2 = \frac{\rho}{2}\left(v_2^2 - v_1^2\right),$$
where $\rho$ is the density of the fluid, $v_1$ is the (slower) fluid velocity where the pipe is wider, and $v_2$ is the (faster) fluid velocity where the pipe is narrower (as seen in the figure).
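A hedged numeric sketch of the pressure-drop formula above (SI units; the velocities and density are assumed example values, not from the article):

```python
rho = 1000.0        # density of water, kg/m^3
v1, v2 = 1.0, 4.0   # slower (wide section) / faster (throat) velocities, m/s

# p1 - p2 = (rho / 2) * (v2^2 - v1^2), in pascals
dp = (rho / 2.0) * (v2**2 - v1**2)
print(dp)  # 7500.0 Pa
```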
Choked flow
The limiting case of the Venturi effect is when a fluid reaches the state of choked flow, where the fluid velocity approaches the local speed of sound. When a fluid system is in a state of choked flow, a further decrease in the downstream pressure environment will not lead to an increase in velocity, unless the fluid is compressed.
The mass flow rate for a compressible fluid will increase with increased upstream pressure, which will increase the density of the fluid through the constriction (though the velocity will remain constant). This is the principle of operation of a de Laval nozzle. Increasing source temperature will also increase the local sonic velocity, thus allowing for increased mass flow rate but only if the nozzle area is also increased to compensate for the resulting decrease in density.
Expansion of the section
The Bernoulli equation is invertible, and pressure should rise when a fluid slows down. Nevertheless, if there is an expansion of the tube section, turbulence will appear and the theorem will not hold. In all experimental Venturi tubes, the pressure in the entrance is compared to the pressure in the middle section; the output section is never compared with them.
Experimental apparatus
Venturi tube demonstration apparatus built out of PVC pipe and operated with a vacuum pump
A pair of Venturi tubes on a light aircraft, used to provide airflow for air-driven gyroscopic instruments
Venturi tubes
The simplest apparatus is a tubular setup known as a Venturi tube or simply a Venturi (plural: "Venturis" or occasionally "Venturies"). Fluid flows through a length of pipe of varying diameter. To avoid undue aerodynamic drag, a Venturi tube typically has an entry cone of 30 degrees and an exit cone of 5 degrees.[1]
Venturi tubes are often used in processes where permanent pressure loss is not tolerable and where maximum accuracy is needed in case of highly viscous liquids.[citation needed]
Venturi tubes are more expensive to construct than simple orifice plates, and both function on the same basic principle. However, for any given differential pressure, orifice plates cause significantly more permanent energy loss.[2]
Instrumentation and measurement
Both Venturi tubes and orifice plates are used in industrial applications and in scientific laboratories for measuring the flow rate of liquids.
Flow rate
A Venturi can be used to measure the volumetric flow rate Q using Bernoulli's principle. Since
$$Q = v_1 A_1 = v_2 A_2 \qquad \text{and} \qquad p_1 - p_2 = \frac{\rho}{2}\left(v_2^2 - v_1^2\right),$$
it follows that
$$Q = A_1 \sqrt{\frac{2}{\rho} \cdot \frac{p_1 - p_2}{\left(\frac{A_1}{A_2}\right)^2 - 1}} = A_2 \sqrt{\frac{2}{\rho} \cdot \frac{p_1 - p_2}{1 - \left(\frac{A_2}{A_1}\right)^2}}.$$
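A sketch of the flow-rate formula in Python (pipe dimensions and the measured pressure drop are assumed example values; both algebraic forms should agree):

```python
from math import sqrt, pi

rho = 1000.0         # kg/m^3, water
d1, d2 = 0.10, 0.05  # pipe and throat diameters, m (assumed)
A1 = pi * d1**2 / 4  # pipe cross-sectional area, m^2
A2 = pi * d2**2 / 4  # throat cross-sectional area, m^2
dp = 7500.0          # measured p1 - p2, Pa

# The two equivalent forms of the Venturi flow-rate equation:
Q = A2 * sqrt((2 / rho) * dp / (1 - (A2 / A1)**2))
Q_alt = A1 * sqrt((2 / rho) * dp / ((A1 / A2)**2 - 1))
print(Q, Q_alt)  # volumetric flow rate in m^3/s, identical values
```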
A Venturi can also be used to mix a liquid with a gas. If a pump forces the liquid through a tube connected to a system consisting of a Venturi to increase the liquid speed (the diameter decreases), a short piece of tube with a small hole in it, and last a Venturi that decreases speed (so the pipe gets wider again), the gas will be sucked in through the small hole because of changes in pressure. At the end of the system, a mixture of liquid and gas will appear. See aspirator and pressure head for discussion of this type of siphon.
Differential pressure
As fluid flows through a Venturi, the expansion and compression of the fluids cause the pressure inside the Venturi to change. This principle can be used in metrology for gauges calibrated for differential pressures. This type of pressure measurement may be more convenient, for example, to measure fuel or combustion pressures in jet or rocket engines.
The first large-scale Venturi meters to measure liquid flows were developed by Clemens Herschel who used them to measure small and large flows of water and wastewater beginning at the end of the 19th century.[3] While working for the Holyoke Water Power Company, Herschel would develop the means for measuring these flows to determine the water power consumption of different mills on the Holyoke Canal System, first beginning development of the device in 1886, two years later he would describe his invention of the Venturi meter to William Unwin in a letter dated June 5, 1888.[4]
Compensation for temperature, pressure, and mass
Fundamentally, pressure-based meters measure kinetic energy density. Bernoulli's equation (used above) relates this to mass density and volumetric flow:
$$\Delta P = \frac{1}{2}\rho\left(v_2^2 - v_1^2\right) = \frac{1}{2}\rho\left(\left(\frac{A_1}{A_2}\right)^2 - 1\right) v_1^2 = \frac{1}{2}\rho\left(\frac{1}{A_2^2} - \frac{1}{A_1^2}\right) Q^2 = k\,\rho\,Q^2,$$
where constant terms are absorbed into k. Using the definitions of density ($m = \rho V$), molar concentration ($n = CV$), and molar mass ($m = Mn$), one can also derive mass flow or molar flow (i.e. standard volume flow):
$$\Delta P = k\,\rho\,Q^2 = k\,\frac{1}{\rho}\,\dot{m}^2 = k\,\frac{\rho}{C^2}\,\dot{n}^2 = k\,\frac{M}{C}\,\dot{n}^2.$$
However, measurements outside the design point must compensate for the effects of temperature, pressure, and molar mass on density and concentration. The ideal gas law is used to relate actual values to design values:
$$C = \frac{P}{RT} = \frac{\left(\frac{P}{P^\ominus}\right)}{\left(\frac{T}{T^\ominus}\right)} C^\ominus, \qquad \rho = \frac{MP}{RT} = \frac{\left(\frac{M}{M^\ominus}\frac{P}{P^\ominus}\right)}{\left(\frac{T}{T^\ominus}\right)} \rho^\ominus.$$
Substituting these two relations into the pressure-flow equations above yields the fully compensated flows:
$$\begin{aligned}\Delta P &= k\,\frac{\left(\frac{M}{M^\ominus}\frac{P}{P^\ominus}\right)}{\left(\frac{T}{T^\ominus}\right)}\,\rho^\ominus Q^2 &&= \Delta P_{\max}\,\frac{\left(\frac{M}{M^\ominus}\frac{P}{P^\ominus}\right)}{\left(\frac{T}{T^\ominus}\right)}\left(\frac{Q}{Q_{\max}}\right)^2\\ &= k\,\frac{\left(\frac{T}{T^\ominus}\right)}{\left(\frac{M}{M^\ominus}\frac{P}{P^\ominus}\right)\rho^\ominus}\,\dot{m}^2 &&= \Delta P_{\max}\,\frac{\left(\frac{T}{T^\ominus}\right)}{\left(\frac{M}{M^\ominus}\frac{P}{P^\ominus}\right)}\left(\frac{\dot{m}}{\dot{m}_{\max}}\right)^2\\ &= k\,\frac{M\left(\frac{T}{T^\ominus}\right)}{\left(\frac{P}{P^\ominus}\right)C^\ominus}\,\dot{n}^2 &&= \Delta P_{\max}\,\frac{\left(\frac{M}{M^\ominus}\frac{T}{T^\ominus}\right)}{\left(\frac{P}{P^\ominus}\right)}\left(\frac{\dot{n}}{\dot{n}_{\max}}\right)^2.\end{aligned}$$
Q, $\dot{m}$, or $\dot{n}$ are easily isolated by dividing and taking the square root. Note that pressure, temperature, and mass compensation is required for every flow, regardless of the end units or dimensions. We also see the relations
$$\frac{k}{\Delta P_{\max}} = \frac{1}{\rho^\ominus Q_{\max}^2} = \frac{\rho^\ominus}{\dot{m}_{\max}^2} = \frac{{C^\ominus}^2}{\rho^\ominus \dot{n}_{\max}^2} = \frac{C^\ominus}{M^\ominus \dot{n}_{\max}^2}.$$
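Isolating Q from the first compensated equation gives a practical correction recipe. A hedged sketch (design-point values are assumed, not from the article):

```python
from math import sqrt

# Hypothetical design ("reference") conditions for a DP flow meter:
T0, P0, M0 = 293.15, 101325.0, 0.028  # K, Pa, kg/mol
dp_max, Q_max = 5000.0, 0.02          # full-scale dP (Pa), flow (m^3/s)

def compensated_Q(dp, T, P, M):
    """Invert dP = dP_max * ((M/M0)(P/P0)/(T/T0)) * (Q/Q_max)^2 for Q."""
    return Q_max * sqrt((dp / dp_max) * (T / T0) / ((M / M0) * (P / P0)))

# At design conditions and full-scale dP we recover Q_max exactly:
print(compensated_Q(5000.0, T0, P0, M0))  # 0.02
```

At a higher actual temperature the gas is less dense, so the same differential pressure corresponds to a larger volumetric flow, as the formula reflects.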
The Venturi effect may be observed or used in the following:
Cargo eductors on oil product and chemical ship tankers
Inspirators mix air and flammable gas in grills, gas stoves, Bunsen burners and airbrushes
Water aspirators produce a partial vacuum using the kinetic energy from the faucet water pressure
Steam siphons use the kinetic energy from the steam pressure to create a partial vacuum
Atomizers disperse perfume or spray paint (i.e. from a spray gun)
Carburetors use the effect to suck gasoline into an engine's intake air stream
Cylinder heads in piston engines have multiple Venturi areas, such as the valve seat and the port entrance
Wine aerators infuse air into wine as it is poured into a glass
Protein skimmers filter saltwater aquaria
Automated pool cleaners use pressure-side water flow to collect sediment and debris
Clarinets use a reverse taper to speed the air down the tube, enabling better tone, response and intonation[5]
The leadpipe of a trombone, affecting the timbre
Industrial vacuum cleaners use compressed air
Venturi scrubbers are used to clean flue gas emissions
Injectors (also called ejectors) are used to add chlorine gas to water treatment chlorination systems
Steam injectors use the Venturi effect and the latent heat of evaporation to deliver feed water to a steam locomotive boiler.
Sandblasting nozzles accelerate an air and media mixture
Bilge water can be emptied from a moving boat through a small waste gate in the hull; the air pressure inside the moving boat is greater than the pressure of the water sliding by beneath the hull
A scuba diving regulator uses the Venturi effect to assist maintaining the flow of gas once it starts flowing
In recoilless rifles to decrease the recoil of firing
The diffuser on an automobile
Race cars utilising ground effect to increase downforce and thus become capable of higher cornering speeds
Foam proportioners used to induct fire fighting foam concentrate into fire protection systems
Trompe air compressors entrain air into a falling column of water
The bolts in some brands of paintball markers
Low-speed wind tunnels can be considered very large Venturi because they take advantage of the Venturi effect to increase velocity and decrease pressure to simulate expected flight conditions.[6]
Hawa Mahal of Jaipur, also utilizes the Venturi effect, by allowing cool air to pass through, thus making the whole area more pleasant during the high temperatures in summer.
Large cities where wind is forced between buildings; the gap between the Twin Towers of the original World Trade Center was an extreme example of the phenomenon, which made the ground-level plaza notoriously windswept.[7] In fact, some gusts were so high that pedestrian travel had to be aided by ropes.[8]
In windy mountain passes, resulting in erroneous pressure altimeter readings[9]
The Mistral wind in southern France increases in speed through the Rhone valley.
[1] Nasr, G. G.; Connor, N. E. (2014). "5.3 Gas Flow Measurement". Natural Gas Engineering and Safety Challenges: Downstream Process, Analysis, Utilization and Safety. Springer. p. 183. ISBN 9783319089485.
[2] "The Venturi effect". Wolfram Demonstrations Project. Retrieved 2009-11-03.
[3] Herschel, Clemens (1898). Measuring Water. Providence, RI: Builders Iron Foundry.
[4] Blasco, Daniel Cortés. "Venturi or air circulation? That's the question". face2fire (in Spanish). Retrieved 2019-07-14.
[5] Anderson, John (2017). Fundamentals of Aerodynamics (6th ed.). New York, NY: McGraw-Hill Education. p. 218. ISBN 978-1-259-12991-9.
[6] Dunlap, David W (December 7, 2006). "At New Trade Center, Seeking Lively (but Secure) Streets". The New York Times.
[7] Dunlap, David W (March 25, 2004). "Girding Against Return of the Windy City in Manhattan". The New York Times.
[8] Dusk to Dawn (educational film). Federal Aviation Administration. 1971. 17 minutes in. AVA20333VNB1.
Topical Issue on Giant, Pygmy, Pairing Resonances and Related Topics
N. Alamanos, R. A. Broglia, E. Vigezzi
Publisher’s Note The EPJ Publishers remain neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Sanjay Prabhakar, Roderick Melnik
We investigate electric field control of spin manipulation through Berry phase in III-V semiconductor quantum dots. By utilizing degenerate and non-degenerate perturbation theories, we diagonalize the total Hamiltonian of a semiconductor quantum dot and express the solution of time dependent Schrödinger equation in terms of complete and incomplete elliptic integrals of the second kind, respectively...
Excitation and \gamma -decay coincidence measurements at the GRAF beamline for studies of pygmy and giant dipole resonances
N. Kobayashi, K. Miki, T. Hashimoto, C. Iwamoto, more
. Physical studies of electric dipole excitations in atomic nuclei, e.g. the structure of pygmy dipole resonances and isovector giant dipole resonances, are attracting much attention recently. In this article, we describe a technical development in the coincidence measurement of the excitation processes with the Grand Raiden high-resolution magnetic spectrometer and the \gamma -decay processes...
Effect of water flow characteristics on gypsum dissolution
Ehsan Behnamtalab, Ahmad Delbari, Hamed Sarkardeh
The European Physical Journal Plus > 2019 > 134 > 12 > 1-9
. Gypsum is one of the karstic rocks with many positive and negative characteristics. The most important defect of these rocks is solubility against water flow. The dissolution phenomenon in gypsum is accompanied by the release of the sulfate ion in water increasing its concentration, gradually. In this research, the effect of water flow temperature (T), Reynolds number (Re) and water head (H) on...
Pier Francesco Bortignon as a scientist
R. A. Broglia
. This article provides a glimpse of Pier Francesco Bortignon’s specific contributions to nuclear physics and of Pier Francesco as a scientist.
K. Langanke, G. Martinez-Pinedo
. The microscopic study of nuclear giant resonances has been a passion of Pier Francesco Bortignon. These resonances play important roles in various astrophysical scenarios. This article summarizes how the improved description of giant resonances has helped to deepen our understanding of the dynamics of core-collapse and electron capture supernovae as well as of the nucleosynthesis associated with...
Magnetic phase diagram of a spin S = 1/2 antiferromagnetic two-leg ladder with modulated along legs Dzyaloshinskii-Moriya interaction
Niko Avalishvili, Bachana Beradze, George I. Japaridze
We study the ground-state magnetic phase diagram of a spin S = 1/2 antiferromagnetic two-leg ladder with period two lattice units modulated Dzyaloshinskii-Moriya (DM) interaction along the legs. We consider the case of collinear DM vectors and strong rung exchange and magnetic field. In this limit we map the initial ladder model onto an effective spin σ = 1/2 XXZ chain and study the latter using the...
Collective excitations involving spin and isospin degrees of freedom
Hiroyuki Sagawa, Gianluca Colò, Xavier Roca-Maza, Yifei Niu
. In this paper, we discuss some new important developments in the study of isobaric analog states (IAS) and Gamow-Teller resonances (GTR). In the case of the IAS, we have shown the importance of taking into account charge symmetry breaking (CSB) and charge independence breaking (CIB) forces, in order to reconcile the reproduction of the IAS energy in 208Pb with the reproduction of some very basic...
Multimodel ensemble approach for hourly global solar irradiation forecasting
Nahed Zemouri, Hassen Bouzgou, Christian A. Gueymard
. This contribution proposes a novel solar time series forecasting approach based on multimodel statistical ensembles to predict global horizontal irradiance (GHI) in short-term horizons (up to 1 hour ahead). The goal of the proposed methodology is to exploit the diversity of a set of dissimilar predictors with the purpose of increasing the accuracy of the forecasting process. The performance of a...
Li Haitao, Weiyang Qin
. To improve the transform efficiency of vibration energy, we proposed a novel energy harvester composed of a piezoelectric cantilever beam and a pendulum. Under horizontal excitations, the pendulum oscillation will lead to a fluctuation in the tension force of the rope and to a change in the compressive force acting on the beam, which could be employed to make the beam reach dynamic buckling. This...
Mohamed Hsini, Sobhi Hcini, Sadok Zemni
. The magnetocaloric effect in Pr0.5Sr0.5MnO3 (PSMO) has been successfully modeled in this work. PSMO undergoes a first-order antiferromagnetic charge ordering (AFM/CO) to a ferromagnetic (FM) transition at T_{CO}=T_{N}\sim 165 K, followed by a second-order ferromagnetic (FM) to paramagnetic (PM) transition at the Curie temperature, T_{C}\sim 255 K ...
Investigating the effect of piston bowl geometry on the partially premixed dual fuel combustion engine at low load condition
Hassan Khatamnejad, Bahram Jafari, D. D. Ganji
. One of the most important emerged technologies to improve the emission characteristics of internal combustion engines is the dual-fuel combustion engines being fueled with an abundant clean environmentally friendly fuel such as natural gas as the main fuel while having conventional compression ignition engine design. In this research, a three-dimensional CFD model of fluid flow coupled with the...
Shape memory alloys phenomena: classification of the shape memory alloys production techniques and application fields
İskender Özkul, Mehmet Ali Kurgun, Ece Kalay, Canan Aksu Canbay, more
. The shape memory alloy, referred to as the material of the future, is the first to come to mind in the class of smart materials. Shape memory alloys are already present in many important areas. In the medical field, glasses frame material, the material of intravenous stents, jet engines in the aviation area, and bridges in the construction area can be mentioned. Although the shape memory effect...
Possibility of \rho meson condensation in neutron stars: Unified approach of chiral SU(3) model and QCD sum rules
. In the present work the conjunction of the chiral SU(3) model with QCD sum rules is employed to explore the possibility of \rho meson condensation in neutron stars. The quark and gluon condensates, in terms of which the in-medium masses of \rho mesons can be expressed, are calculated using the chiral SU(3) model in the charge-neutral matter relevant for neutron stars. We observe...
Hui Yang, Qingbo Wang, Ning Su, Linghua Wen
. We study the ground-state configurations and spin textures of rotating two-component Bose-Einstein condensates (BECs) with Rashba-Dresselhaus spin-orbit coupling (RD-SOC), which are confined in a two-dimensional (2D) optical lattice plus a 2D harmonic trap. In the absence of rotation, a relatively small isotropic 2D RD-SOC leads to the generation of ghost vortices for initially miscible BECs, while...
No-core shell model calculations of the photonuclear cross section of 10B
M. K. G. Kruse, W. E. Ormand, C. W. Johnson
. Results of ab initio no-core, shell model calculations for the photonuclear cross section of 10B are presented using realistic two-nucleon (NN) chiral forces up to next-to-next-to-next-order (N3LO) softened by the similarity renormalization group method (SRG) with \lambda = 2.02 fm^{-1}. The electric-dipole response function is calculated using the Lanczos method, with the effects...
Elena Litvinova, Herlik Wibowo
. A thermal extension of the relativistic nuclear field theory is formulated for the nuclear response. The Bethe-Salpeter equation (BSE) with the time-dependent kernel for the particle-hole response is treated within the Matsubara Green’s function formalism. We show that, with the help of a temperature-dependent projection operator on the subspace of the imaginary time (time blocking), it is possible...
Theoretical predictions for photoacoustic signal: Fractionary thermal diffusion with modulated light absorption source
Aloisi Somer, Andressa Novatski, Ervin Kaminski Lenzi
. We develop a theoretical framework, in the context of the anomalous thermal diffusion, for the photoacoustic signal. We obtain analytical predictions for the open photoacoustic cell technique by considering the thermal diffusion and thermoelastic bending effects. In these contexts, we consider different conditions for the thermal diffusivity, coefficient of optical absorption, the sample thickness...
Probability of radiation of twisted photons by axially symmetric bunches of particles
O. V. Bogdanov, P. O. Kazinski
. In most cases, the twisted photons generated directly by charged particles in undulators and laser waves are produced by bunches of particles and not by one charged particle. However, up to now, the theoretical studies of such a radiation were mainly based on description of radiation produced by one charged particle. In the present paper, we investigate the effect of a finite width of a particle...
Characterization of the new hybrid low-energy accelerator facility in Mexico
G. Reza, E. Andrade, L. Acosta, B. Góngora, more
. In 2013, a new accelerator mass spectrometry (AMS) facility was inaugurated in Mexico. Since then a substantial number of precise measurements of low concentrations of radioactive isotopes (14C, 10Be, 26Al and Pu) have been made. This paper describes the extension to the isotope separator installed at the end of 2017. It takes advantage of the 1MV High Voltage Engineering Europa (HVEE) tandem accelerator...
|
Battering Ram - Ring of Brodgar
Skill(s) Required Wheelwrighting,Siegecraft
Object(s) Required Board x20, Block of Wood x30, Bone Glue x5, Hardened Leather x4, Rope x4, Brimstone x4
Repaired With Bone Glue x2 (6 hours to be usable again, 2RL)
Build > Siege Equipment > Battering Ram
A battering ram can break down walls, homes, or just scare the hell out of your neighbours with your military might.
After building a battering ram, it takes 24 real life hours for the glue to dry and make it usable on a palisade and 32 hours to make it usable on a brick wall. After repairs, it takes 2 hours instead of four to be used again.
To use the battering ram, right-click it and select "Move". Your cursor will change to a wrecking chain. You can now push the ram to where you click, so you can take it to the wall or building you wish to demolish; simply moving it close to the target and facing it is enough. Next, release your grip on the ram, right-click it again, and you should now also have the "Use" option available.
A battering ram can only be pushed for a small distance before breaking down (about 7.5 tiles). If your strength is too low and you don't have people to help you, a Bear Cape can make for a nice boost.
The ram has a maximum health pool of 1250. Once this hits 500 the ram cannot be moved or used in any way and needs to be repaired.
Health is lowered by moving around.
Battering Rams take damage when moving, and will need to be repaired after each move. A Battering ram can move 7.5 tiles per move. Repairs take an hour.
Place rams near the intended target to save as much health as possible.
Health increases only with Bone Glue
One piece of bone glue is needed to fully repair the ram from 500 hp, but due to a rounding error you can spend a second piece of bone glue repairing it for ~1 hp.
The repair cycle is started as soon as bone glue is used to repair.
Each cycle takes one hour and the ram is unusable for the duration.
The damage you deal while destroying is the square root of your strength. A ram adds a static 20 damage and allows up to 4 people to combine their damage.
Pickaxe or Sledgehammer does not help while using a battering ram.
{\displaystyle {Damage}=({\sqrt {strength}})+({\sqrt {strength}})+({\sqrt {strength}})+({\sqrt {strength}})+20-{Soak}}
Say we have 4 people attacking a palisade with soak 25; one has 64 strength, the others have 124 strength. The damage dealt to the wall per blow is calculated as follows:
{\displaystyle 36.4\approx ({\sqrt {64}})+({\sqrt {124}})+({\sqrt {124}})+({\sqrt {124}})+20-25}
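The formula above can be sketched in code (a hypothetical helper, not part of the game; the strengths and soak are the worked example's values):

```python
import math

def ram_damage(strengths, soak):
    """Damage per blow: sum of sqrt(strength) for up to 4 people,
    plus the ram's static 20 damage, minus the wall's soak."""
    assert len(strengths) <= 4, "a ram lets at most 4 people combine damage"
    return sum(math.sqrt(s) for s in strengths) + 20 - soak

# Worked example: one person at 64 strength, three at 124, palisade soak 25.
dmg = ram_damage([64, 124, 124, 124], 25)  # roughly 36.4
```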
Splash damage: When taking down a wall section, either through ramming or bashing, the number of wall sections that might fall down is random: from only the targeted wall section, to multiple adjacent sections, to the entire wall. Only walls of the same soak are affected.
Bull Ram (2022-03-20) >"Added variable materials to "Battering Ram", giving them visual influence from the types of wood used to construct them."
Raft Notified (2022-01-23) >"Battering Rams should now generally be better at hitting things in front of them, irrespective of rotation."
World 12 (2020-03-06) >"A Battering Ram requires 24h * the Claim's Power Level of drying time before it can destroy Palisades on the claim, and 32h * the Claim's Power Level of drying time before it can destroy Brick Walls on the claim."
Stalking Garden (2019-07-10) >"Siege Engines (Catapult and Battering Rams) no longer require Brimstone to repair."
Bog Turtle Boogaloo (2019-04-18) >"Made it so that unclaimed walls no longer require siege engines to demolish. They still have soak. Lowered soak of palisades."
Siege Chess (2019-04-03) >"Battering Rams can attack Palisades after 24 hours, and Brick Walls after 32 hours."
Siege Chess (2019-04-03) >"After 48 hours, Catapults and Battering Rams will begin decaying over time."
|
y\left(x\right)={x}^{3/2},x≥0
\mathrm{κ}
=\frac{|y″|}{{\left(1+{\left(y\prime \right)}^{2}\right)}^{3/2}}
=\frac{\frac{3}{4\sqrt{x}}}{{\left(1+{\left(\frac{3}{2}\sqrt{x}\right)}^{2}\right)}^{3/2}}
=\frac{3}{4\sqrt{x} {\left(1+\frac{9}{4}x\right)}^{3/2}}
=\frac{3}{\frac{4\sqrt{x} {\left(4+9 x\right)}^{3/2}}{{4}^{3/2}}}
=\frac{6}{\sqrt{x} {\left(4+9 x\right)}^{3/2}}
To obtain this result from first principles, begin by obtaining the arc-length function
s\left(x\right)={∫}_{0}^{x}\sqrt{1+{\left(y\prime \left(t\right)\right)}^{2}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathit{ⅆ}t
={∫}_{0}^{x}\sqrt{1+\frac{9}{4}t}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathit{ⅆ}t
=\left({\left(4+9 x\right)}^{3/2}-8\right)/27
and its inverse,
x=x\left(s\right)=\left({\left(8+27 s\right)}^{2/3}-4\right)/9
. The angle made by the tangent line and the
x
-axis is
\mathrm{θ}=\mathrm{arctan}\left(y\prime \left(x\right)\right)
. The curvature is the rate at which this angle varies as
s
changes. Hence, the derivative of
\mathrm{θ}
must be taken with respect to
s
. Either the chain rule or the substitution
x=x\left(s\right)
must be used. Making the substitution leads to
\mathrm{θ}=\mathrm{arctan}\left(\frac{3}{2}\sqrt{\left({\left(8+27 s\right)}^{2/3}-4\right)/9}\right)
, so the derivative with respect to
s
is
\mathrm{θ}\prime \left(s\right)=\frac{6}{\sqrt{\frac{1}{9}{\left(27s+8\right)}^{2/3}-\frac{4}{9}}\left(27s+8\right)}
Replacing s with s\left(x\right) gives the curvature as a function of x:
\mathrm{κ}=\mathrm{θ}\prime \left(s\right){|}_{s=s\left(x\right)}
=
\frac{6}{\sqrt{\frac{1}{9}{\left({\left(4+9x\right)}^{3/2}\right)}^{2/3}-\frac{4}{9}}{\left(4+9x\right)}^{3/2}}
\frac{6}{\sqrt{x} {\left(4+9 x\right)}^{3/2}}
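Both the curvature and the arc-length results above can be spot-checked symbolically (a sketch using SymPy rather than Maple, which the worksheet itself uses):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
y = x ** sp.Rational(3, 2)

# kappa = |y''| / (1 + (y')^2)^(3/2)
kappa = sp.Abs(sp.diff(y, x, 2)) / (1 + sp.diff(y, x) ** 2) ** sp.Rational(3, 2)
kappa_closed = 6 / (sp.sqrt(x) * (4 + 9 * x) ** sp.Rational(3, 2))

# Arc-length function s(x) = integral of sqrt(1 + (y'(t))^2) over [0, x]
yt = t ** sp.Rational(3, 2)
arclen = sp.integrate(sp.sqrt(1 + sp.diff(yt, t) ** 2), (t, 0, x))
arclen_closed = ((4 + 9 * x) ** sp.Rational(3, 2) - 8) / 27

# Spot-check both closed forms numerically at a few points
for xv in (0.5, 1.0, 3.0):
    assert abs(float(kappa.subs(x, xv)) - float(kappa_closed.subs(x, xv))) < 1e-10
    assert abs(float(arclen.subs(x, xv)) - float(arclen_closed.subs(x, xv))) < 1e-10
```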
Calculate the curvature as per Table 2.4.1.
y\left(x\right)
y\left(x\right)=\dots
y\left(x\right)={x}^{3/2}
\stackrel{\text{assign as function}}{\to }
\textcolor[rgb]{0,0,1}{y}
Compute the curvature for
x>0
Write the expression for the curvature.
\frac{|y″\left(x\right)|}{{\left(1+{\left(y\prime \left(x\right)\right)}^{2}\right)}^{3/2}}
\frac{\textcolor[rgb]{0,0,1}{3}}{\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{}\sqrt{\textcolor[rgb]{0,0,1}{x}}\textcolor[rgb]{0,0,1}{}{\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{9}}{\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\right)}^{\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{/}\textcolor[rgb]{0,0,1}{2}}}
\stackrel{\text{assuming positive}}{\to }
\frac{\textcolor[rgb]{0,0,1}{6}}{\sqrt{\textcolor[rgb]{0,0,1}{x}}\textcolor[rgb]{0,0,1}{}{\left(\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{9}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\right)}^{\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{/}\textcolor[rgb]{0,0,1}{2}}}
Calculate the curvature from first principles.
s\left(x\right)
, the arc-length function on the interval
\left[0,x\right]
Write the appropriate integral.
S
{∫}_{0}^{x}\sqrt{1+{\left(y\prime \left(t\right)\right)}^{2}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathit{ⅆ}t
\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{8}}{\textcolor[rgb]{0,0,1}{27}}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{27}}\textcolor[rgb]{0,0,1}{}{\left(\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{9}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\right)}^{\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{/}\textcolor[rgb]{0,0,1}{2}}
\stackrel{\text{assign to a name}}{\to }
\textcolor[rgb]{0,0,1}{S}
x\left(s\right)
, the inverse of the arc-length function
Apply the solve command and select the first (of three) solutions, the last two of which are complex. Assign this solution to the name
X
X≔\mathrm{solve}\left(S=s,x\right)\left[1\right]
\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{9}}\textcolor[rgb]{0,0,1}{}{\left(\textcolor[rgb]{0,0,1}{27}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{8}\right)}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{/}\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{4}}{\textcolor[rgb]{0,0,1}{9}}
\mathrm{θ}\left(s\right)=\mathrm{arctan}\left(y\prime \left(s\right)\right)
, where the derivative is taken with respect to
x
\mathrm{θ}
, the arctangent of
y\prime \left(x\right)
, which is then evaluated at
x=x\left(s\right)=X
\mathrm{θ}≔\mathrm{arctan}\left(\mathrm{D}\left(y\right)\left(X\right)\right)
\textcolor[rgb]{0,0,1}{\mathrm{arctan}}\textcolor[rgb]{0,0,1}{}\left(\frac{\textcolor[rgb]{0,0,1}{3}}{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{}\sqrt{\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{9}}\textcolor[rgb]{0,0,1}{}{\left(\textcolor[rgb]{0,0,1}{27}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{8}\right)}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{/}\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{4}}{\textcolor[rgb]{0,0,1}{9}}}\right)
\mathrm{θ}\prime \left(s\right)
s
s\left(x\right)
Calculus palette: Differentiation template
\mathrm{θ}
s
(This is the rate of change of
\mathrm{θ}
taken with respect to the arc length.)
Context Panel: Evaluate at a Point
s
s\left(x\right)=S
\frac{ⅆ}{ⅆ s} \mathrm{θ}
\frac{\textcolor[rgb]{0,0,1}{6}}{\sqrt{\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{9}}\textcolor[rgb]{0,0,1}{}{\left(\textcolor[rgb]{0,0,1}{27}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{8}\right)}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{/}\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{4}}{\textcolor[rgb]{0,0,1}{9}}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{27}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{8}\right)}
\stackrel{\text{evaluate at point}}{\to }
\frac{\textcolor[rgb]{0,0,1}{6}}{\sqrt{\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{9}}\textcolor[rgb]{0,0,1}{}{\left({\left(\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{9}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\right)}^{\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{/}\textcolor[rgb]{0,0,1}{2}}\right)}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{/}\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{4}}{\textcolor[rgb]{0,0,1}{9}}}\textcolor[rgb]{0,0,1}{}{\left(\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{9}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\right)}^{\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{/}\textcolor[rgb]{0,0,1}{2}}}
\stackrel{\text{assuming positive}}{\to }
\frac{\textcolor[rgb]{0,0,1}{6}}{\sqrt{\textcolor[rgb]{0,0,1}{x}}\textcolor[rgb]{0,0,1}{}{\left(\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{9}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\right)}^{\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{/}\textcolor[rgb]{0,0,1}{2}}}
|
For each triangle below, write an equation relating the reference angle (the given acute angle) with the two side lengths of the right triangle. Then solve your equation for
x
Refer to the Math Notes box in Lesson 5.1.2 if you need help writing an equation relating to a reference angle.
It will be helpful for all three parts of this problem.
\sin\theta=\frac{\text{opp}}{\text{hyp}}
\sin(22^{\circ}) = \frac{x}{17}
17\sin(22^\circ) = x
x \approx 6.37
\tan\theta = \frac{\text{opp}}{\text{adj}}
\tan(49^\circ) = \frac{7}{x}
x\tan(49^\circ) = 7
\frac{x\tan(49^\circ)}{\tan(49^\circ)}=\frac{7}{\tan(49^\circ)}
x = \frac{7}{\tan(49^\circ)}
x \approx 6.09
Follow the same steps as in parts (a) and (b).
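The steps in parts (a) and (b) each amount to one line of arithmetic; a quick numerical sketch (hypothetical helper names):

```python
import math

def opp_from_angle_hyp(angle_deg, hyp):
    # sin(theta) = opp / hyp  =>  opp = hyp * sin(theta)
    return hyp * math.sin(math.radians(angle_deg))

def adj_from_angle_opp(angle_deg, opp):
    # tan(theta) = opp / adj  =>  adj = opp / tan(theta)
    return opp / math.tan(math.radians(angle_deg))

x_a = opp_from_angle_hyp(22, 17)   # part (a): about 6.37
x_b = adj_from_angle_opp(49, 7)    # part (b): about 6.09
```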
|
Are There Different Kinds of Rogue Waves? | J. Offshore Mech. Arct. Eng. | ASME Digital Collection
, Ann Arbor, MI, 48105
Liebenberg & Stander International (Pty) Ltd.
, Cape Town, 8000 South Africa
Liu, P. C., and MacHutchon, K. R. (June 6, 2008). "Are There Different Kinds of Rogue Waves?." ASME. J. Offshore Mech. Arct. Eng. May 2008; 130(2): 021007. https://doi.org/10.1115/1.2917431
There is clearly no immediate answer to the question posed by the title of this paper. Inasmuch as not much is definitively known about rogue waves, and there is still no universally accepted definition for rogue waves in the ocean, we think there might be more than one kind of rogue wave to contend with. While the conventional approach has generally designated waves with Hmax/Hs greater than 2.2 as possible rogue waves, based on Rayleigh distribution considerations, there is conspicuously no provision as to how high the ratio Hmax/Hs can be, and thus it is not known how high a rogue wave can be. In our analysis of wave measurements made from a gas-drilling platform in the South Indian Ocean, offshore from Mossel Bay, South Africa, we found a number of cases indicating that Hmax/Hs could lie in the range between 4 and 10. If this were the case, then these records could be considered "uncommon" rogue waves, whereas a record with Hmax/Hs in the range between 2 and 4 could be considered to comprise "typical" rogue waves. On the other hand, the spikes in the Hmax data could have been caused by equipment malfunction or some other phenomenon. Clearly, the question of whether or not there are different kinds of rogue waves cannot be readily answered by theoretical considerations alone, and there is a crucial need for long-term wave time-series measurements for studying rogue waves.
natural gas technology, ocean waves, offshore installations, oil drilling, time series
Waves, Time series
Freak Ocean Waves
Extreme Events in Field Data and in a Second Order Wave Model
A Possible Freak Wave Event Measured at the Draupner Jacket January, 1, 1995
Wave Crest Sensor Intercomparison Study: An Overview of WACSIS
Freak Waves—More Frequent Than Rare!
Scientist Behaving Badly
MacHutchon
Exploring Rogue Waves From Observations in South Indian Ocean
Virtual Wave Crest Heights in Deep Water Breaking Waves
Laboratory Measurements of Limiting Freak Waves on Currents
J. Geophys. Res., [Oceans]
Abnormal Waves During Hurricane Camille
The Wave Energy Concentration at the Agulhas Current of South Africa
|
Finery Forge - Ring of Brodgar
Object(s) Required Brick x30, Bar of Cast Iron x2, Bar of Bronze, Iron or Steel x3
Required By Bloom, Dross
Build > Buildings & Construction > Furnaces & Fireplaces > Finery Forge
Finery forges are used to refine cast iron into bloom, which can then be worked by a smith to produce wrought iron. Place up to nine units of cast iron bars into the finery forge along with 2 charcoal and set it alight; after roughly 9 minutes (real time) the metal will turn into either valuable bloom or dross.
Finery forges can also be used to convert stacks of a hundred coins into a bar of metal. Doing this takes 2 units of charcoal, and will not affect the quality of the metal.
A finery forge in combination with the fuel used can softcap bloom during the process of making wrought iron.
Finery Forge =
{\displaystyle {\frac {(_{q}Brick+_{q}Metal)}{2}}}
Bloom quality only softcapped by
{\displaystyle {\frac {(_{q}Forge+_{q}Coal)}{2}}}
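The two quality averages above can be sketched as follows (hypothetical helper names and example qualities; the exact softcap curve is game-internal, so a hard clamp stands in for it here):

```python
def forge_quality(q_brick, q_metal):
    """Quality of the finished forge: average of brick and metal quality."""
    return (q_brick + q_metal) / 2

def bloom_cap(q_forge, q_coal):
    """Cap applied to bloom quality: average of forge and fuel quality."""
    return (q_forge + q_coal) / 2

def softcap(value, cap):
    # Assumption: modeled as a hard clamp for illustration.
    return min(value, cap)

forge = forge_quality(40, 60)   # hypothetical brick/metal qualities
cap = bloom_cap(forge, 30)      # hypothetical coal quality of 30
bloom_q = softcap(55, cap)      # a raw bloom of quality 55 gets capped
```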
Fishing for Finery (2021-04-25) >"Added variable materials to "Finery Forge"s. They now take color and texture from the materials they were made from. Note that this implied a slight change to the recipe, which might have interesting quality implications."
Roe, Roe, Roe yer Boat (2021-01-31) >"You can now light fire items (branches, torches, &c) from most ovens and other objects with open flames."
|
While shopping at his local home improvement store, Chen noticed that the directions for an extension ladder state, "This ladder is most stable when used at a
75^{\circ}
angle with the ground." He wants to buy a ladder to paint a two-story house that is 26 feet high. How long does his ladder need to be? Draw a diagram and set up an equation for this situation. Show all work.
Draw a diagram comparable to the one below.
Refer to the Math Notes box in Lesson 5.1.2 if you need help deciding which trigonometric ratio to use when solving for
x
\text{length of ladder}=x
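Once the diagram is drawn, the setup gives sin(75°) = 26/x, assuming the 26-foot height is the side opposite the 75° angle. A quick numerical sketch of solving for x:

```python
import math

# sin(75 deg) = 26 / x  =>  x = 26 / sin(75 deg)
angle = math.radians(75)
height = 26.0
ladder_length = height / math.sin(angle)   # about 26.9 feet
```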
|
Solution and Gradient Plots with pdeplot and pdeplot3D - MATLAB & Simulink - MathWorks Nordic
2-D Solution and Gradient Plots
3-D Surface and Gradient Plots
To visualize a 2-D scalar PDE solution, you can use the pdeplot function. This function lets you plot the solution without explicitly interpolating the solution. For example, solve the scalar elliptic problem
-\Delta u=1
on the L-shaped membrane with zero Dirichlet boundary conditions and plot the solution.
Create the PDE model, 2-D geometry, and mesh. Specify boundary conditions and coefficients. Solve the PDE problem.
Use pdeplot to plot the solution.
pdeplot(model,'XYData',u,'ZData',u,'Mesh','on')
To get a smoother solution surface, specify the maximum size of the mesh triangles by using the Hmax argument. Then solve the PDE problem using this new mesh, and plot the solution again.
pdeplot(model,'FlowData',[ux,uy])
Obtain a surface plot of a solution with 3-D geometry and N > 1.
First, import a tetrahedral geometry to a model with N = 2 equations and view its faces.
Create a problem with zero Dirichlet boundary conditions on face 4.
applyBoundaryCondition(model,'dirichlet','Face',4,'u',[0,0]);
Create coefficients for the problem, where f = [1;10] and c is a symmetric matrix in 6N form.
f = [1;10];
c = [2;0;4;1;3;8;1;0;2;1;2;4];
Create a mesh for the solution.
generateMesh(model,'Hmax',20);
Plot the two components of the solution.
title('u(1)')
Compute the flux of the solution and plot the results for both components.
pdeplot3D(model,'FlowData',[cgradx(:,1) cgrady(:,1) cgradz(:,1)])
|
Boundary Value Problems/Lesson 4.1 - Wikiversity
Sturm Liouville and Orthogonal Functions
The solutions in this BVP course will ALL be expressed as series built on orthogonal functions. Understanding that the simple problem
{\displaystyle X''+{\lambda }^{2}X=0}
{\displaystyle \alpha _{1}X(a)+\alpha _{2}X'(a)=0}
{\displaystyle \beta _{1}X(b)+\beta _{2}X'(b)=0}
leads to solutions
{\displaystyle X(x)}
that are orthogonal functions is crucial. Once this concept is grasped the majority of the work in this course is repetitive.
In the following notes, think of the function
{\displaystyle \Phi (x)}
as a substitution for
{\displaystyle X(x)}
Fourier Series
From the above work, solving the problem:
{\displaystyle X''+{\lambda }^{2}X=0}
{\displaystyle X(0)=0}
{\displaystyle X(L)=0}
leads to an infinite number of solutions
{\displaystyle X_{n}(x)=\Phi _{n}(x)=\sin \left({\frac {n\pi }{L}}x\right)}
. These are eigenfunctions with eigenvalues
{\displaystyle \lambda _{n}={\frac {n\pi }{L}}}
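The orthogonality of these eigenfunctions on [0, L] can be checked symbolically (a sketch using SymPy, which is not part of the course materials):

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)

def phi(n):
    # Eigenfunctions of X'' + lambda^2 X = 0 with X(0) = X(L) = 0
    return sp.sin(n * sp.pi * x / L)

# Distinct eigenfunctions are orthogonal on [0, L] ...
inner_12 = sp.simplify(sp.integrate(phi(1) * phi(2), (x, 0, L)))
# ... and each eigenfunction has squared norm L/2.
norm_1 = sp.simplify(sp.integrate(phi(1) ** 2, (x, 0, L)))
```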
Homework Assignment from Powell's sixth edition Boundary Value Problems page 71.
Project 1.2
This is a Fourier series application problem.
You are given the piecewise defined function
{\displaystyle f(t)}
The positive unit pulse is 150 μs in duration and is followed by a 100 μs interval where f(t) = 0. Then f(t) is a negative unit pulse for 150 μs, once again returning to zero. This pattern repeats every 2860 μs. We will attempt to represent f(t) as a Fourier series.
Determine the value of the period: Ans. Period is 2860 μs. The time for a complete repetition of the waveform.
Find the Fourier Series representation:
{\displaystyle f(t)=a_{0}+\sum _{n=1}^{\infty }\left[a_{n}\cos(n\pi t/a)+b_{n}\sin(n\pi t/a)\right]}
The video provides an explanation of how to determine the coefficients
{\displaystyle a_{0},a_{n},b_{n}}
. The results are:
{\displaystyle a_{0}=0}
{\displaystyle a_{n}={\frac {\sin \left({\frac {15}{143}}\,n\,\pi \right)+\sin \left({\frac {25}{143}}\,n\,\pi \right)-\sin \left({\frac {40}{143}}\,n\,\pi \right)}{n\pi }}}
{\displaystyle b_{n}={\frac {-\left(-1+\cos \left({\frac {15}{143}}\,n\,\pi \right)+\cos \left({\frac {25}{143}}\,n\,\pi \right)-\cos \left({\frac {40}{143}}\,n\,\pi \right)\right)}{n\pi }}}
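As a sanity check, the closed-form coefficients above can be compared against direct numerical integration of the pulse train (a sketch; the +1 pulse is placed starting at t = 0, which is the placement consistent with the given formulas):

```python
import numpy as np

T = 2860.0      # period in microseconds
a = T / 2.0     # half-period appearing in cos(n*pi*t/a) and sin(n*pi*t/a)

def f(t):
    """+1 pulse for 150 us, 0 for 100 us, -1 pulse for 150 us, then 0."""
    t = np.mod(t, T)
    return np.where(t < 150, 1.0,
           np.where(t < 250, 0.0,
           np.where(t < 400, -1.0, 0.0)))

def a_n_closed(n):
    return (np.sin(15*n*np.pi/143) + np.sin(25*n*np.pi/143)
            - np.sin(40*n*np.pi/143)) / (n*np.pi)

def b_n_closed(n):
    return -(-1 + np.cos(15*n*np.pi/143) + np.cos(25*n*np.pi/143)
             - np.cos(40*n*np.pi/143)) / (n*np.pi)

def coeff_numeric(n, kind, samples=200_000):
    # Midpoint-rule approximation of (1/a) * integral over one period
    dt = T / samples
    t = (np.arange(samples) + 0.5) * dt
    basis = np.cos if kind == 'a' else np.sin
    return np.sum(f(t) * basis(n*np.pi*t/a)) * dt / a

for n in (1, 2, 5):
    assert abs(a_n_closed(n) - coeff_numeric(n, 'a')) < 1e-4
    assert abs(b_n_closed(n) - coeff_numeric(n, 'b')) < 1e-4
```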
Using 100 terms, an approximation can be plotted. Shift
{\displaystyle f(t)}
right or left by an amount
{\displaystyle b}
such that the resulting periodic function is an odd function. Here is a plot of shifting it to the left, halfway between the +1 and -1 pulses. This is a shift of b = 200 μs; the new function is
{\displaystyle f(t+200)}
. It could also be shifted to the right by 1230 μs, so that
{\displaystyle f(t-1230)}
is the new function.
|
The Black Hole Effect of Uniswap on Algorithmic Stablecoins | Ian Macalinao
The Black Hole Effect of Uniswap on Algorithmic Stablecoins
by Ian Macalinao on January 17, 2021
Algorithmic, undercollateralized stablecoins have popped up everywhere in the past few months. They've been touted to be everything from the cure to inflation to solving world poverty. However, the current pricing mechanism used by these stablecoins is flawed: while expansions are usually able to bring the price sufficiently down, contractionary mechanisms are not able to bring the price back to peg.
While some may say that the underlying notion that an undercollateralized stablecoin could possibly work is the problem, the current problem lies in the Uniswap constant product model.
A brief review of Uniswap
Uniswap is built on one equation: xy = k, where:
x
is the total amount of token 0 staked in the Uniswap pool,
y
is the total amount of token 1 staked in the Uniswap pool, and
k
is a constant supplied at the time of creating the pool.
Due to this, the reserves are always linked: from
xy = k
, we have
y = k/x
and
x = k/y
. Measured at the pool's own spot price, the two sides of the pool always hold equal value: a pool with $1M of DAI and $1M of ETH keeps that 1:1 value split no matter how expensive the price of ETH becomes.1
However, the price of one token in terms of the other can move freely: it is simply the ratio of the reserves,
y/x
. If the pool holds 1M DAI and 1000 ETH, the price of ETH is 1,000,000 / 1,000 = 1000 DAI; that is, there are 1000 DAI in the pool for each ETH. This price can change while the two sides of the pool stay equal in value.
While this model is very simple (and thus cheap in terms of transaction fees), it has several problems which we will uncover later in this article.
Example: buying tokens
Let's say the pool has the price of 1 ETH = $1000 DAI, and the pool has reserves of $2M USD. Due to the constant product, this means there is $1M of ETH in the pool and $1M of DAI, so there are 1000 ETH and 1M DAI in the pool.
xy = k
is the Uniswap invariant. Let the initial ETH reserves be
x_0
and the initial DAI reserves be
y_0
x_0 * y_0 = k
Now let's say I sell 500 ETH into the pool. Thus the new ETH reserves are
x_1 = 1000 + 500 = 1500
. Since
k
is constant, we can compute that the new DAI reserves should be
y_1 = k/x_1 = x_0 * y_0 / x_1
= 666,666.67 DAI.
Thus we will sell 500 ETH to buy 333,333.33 DAI.
Note that the new reserves are now 1500 ETH and 666,666.67 DAI, so the new price of ETH is
666666.67/1500 \approx 444.44
DAI. At this price, there is now only about $666,667 of ETH and $666,667 of DAI in the pool.
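The arithmetic above can be reproduced with a tiny constant-product simulator. This is an illustrative Python sketch (the `swap_in` helper is our own fee-free assumption, not Uniswap's contract code):

```python
def swap_in(x_reserve, y_reserve, dx):
    """Sell dx of token X into a fee-free constant-product pool.

    Returns (new_x, new_y, dy_out), where dy_out is the amount of
    token Y the trader receives.
    """
    k = x_reserve * y_reserve   # invariant
    new_x = x_reserve + dx      # X reserves grow by the amount sold
    new_y = k / new_x           # Y reserves shrink to keep x*y = k
    return new_x, new_y, y_reserve - new_y

# Pool from the example: 1000 ETH and 1,000,000 DAI (1 ETH = 1000 DAI)
eth, dai, dai_out = swap_in(1000.0, 1_000_000.0, 500.0)
print(eth, round(dai, 2), round(dai_out, 2))
# Selling 500 ETH yields ~333,333 DAI and moves the price to ~444 DAI/ETH
```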
Acceleration of price drift
The above mechanics seem innocuous at first, but they hide a force in the market: with the same amount of liquidity and trade volume, the further the price drifts from its initial value, the faster it continues to drift.
Recall the constant product formula:
xy = k
x
be the number of TOKX in the pool and
y
be the number of TOKY in the pool. The current price of TOKY in terms of TOKX is
p = x/y
, the change in price with respect to the change in reserves
x
of TOKX is given by the derivative:
\frac{dp}{dx} = \frac{d}{dx}\left(\frac{x}{y}\right) = \frac{d}{dx}\left(\frac{x}{k/x}\right) = \frac{d}{dx}\left(\frac{x^2}{k}\right) = \frac{2x}{k}
This means that as TOKX tokens are sold into the pool for TOKY, the price of TOKY (measured in TOKX) rises at the rate
2x/k
. Since
x
continues to grow as more TOKX are sold for TOKY, the slippage in price for the same number of TOKY purchased is magnified the lower the price goes.
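As a sanity check on the derivative, a central finite difference on p(x) = x²/k matches 2x/k (Python sketch):

```python
k = 1_000.0 * 1_000_000.0   # invariant of the 1000 ETH / 1M DAI pool above

def price(x):
    # p = x / y with y = k / x, so p = x**2 / k
    return x ** 2 / k

x = 1500.0
h = 1e-3
numeric = (price(x + h) - price(x - h)) / (2 * h)   # central difference
analytic = 2 * x / k
print(numeric, analytic)   # the two agree (central differences are exact for quadratics)
```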
At some point, the supply of
x
relative to
y
is so great that overcoming the massive amount of liquidity is economically unviable. Even if people unstake their tokens from the pool at low prices, the imbalance may be so great that the pool never reverts to $1.
Consider an extreme where 1 DAI is equivalent to 1 BAC: i.e. BAC is $0.01. Let's say the total amount of liquidity is merely $50k, a fraction of the $60M at the time of writing.
Since the pool has a 100:1 token ratio but a 1:1 value ratio, the reserves can be computed as follows: let
x
be the total BAC reserves and
y
be the total DAI reserves. So
x = 100y
due to the price ratio, and
y = 50000/2 = 25000
because the pool holds half of its dollar value in
y
and half of its dollar value in
x
. Solving this system of equations, we get that there are 2.5M BAC in the pool and 25,000 DAI in the pool.
Now, to get the price of BAC to double in this pool, the new ratio of BAC:DAI needs to be 100:2. So
x' = 50y'
. We can solve the following system of equations to compute the change in reserves needed:
\begin{aligned} x'y' &= 2500000 \times 25000 \\ x' &= 50y' \end{aligned}
This gives us new reserves of roughly 1.77M BAC and 35,355 DAI. The new liquidity is twice the DAI side,
\simeq \$70k
. So about $10k of DAI had to be spent in order to raise the price 1 cent. Not terrible, you might be thinking -- unfortunately, the effects amplify as the ratio equalizes.
To get to 10 cents per BAC, the pool requires 79K DAI reserves-- 160K of liquidity.
To get to 1 dollar per BAC, the pool requires a whopping 250K DAI of reserves-- 500K of liquidity.
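The reserve figures quoted above follow directly from the invariant: at price p (DAI per BAC), x = y/p and xy = K give y = √(Kp). A Python sketch (the helper names `reserves_at` and `dai_needed` are ours):

```python
import math

X0, Y0 = 2_500_000.0, 25_000.0   # initial BAC and DAI reserves
K = X0 * Y0                      # constant product invariant

def reserves_at(price):
    """Reserves (bac, dai) once BAC trades at `price` DAI, keeping x*y = K.

    At that point x = y / price, so y**2 = K * price.
    """
    dai = math.sqrt(K * price)
    return dai / price, dai

def dai_needed(price):
    # Extra DAI that must be pushed into the pool to reach `price`
    return reserves_at(price)[1] - Y0

for p in (0.02, 0.10, 1.00):
    print(p, round(dai_needed(p)))
```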
Amount of DAI required to bring BAC from $0.01 to $1 with $50K liquidity
So you see, despite the pool only having $50K of liquidity in it, the pool requires
\$250K - \$25K = \$225K
in fresh DAI in order to recover back to the peg, or 9x the current DAI in the pool. Such a recovery is unlikely for a dead project, and the problem only worsens as the falling price triggers more selling of BAC; hence the term: black hole effect.
Using a flatter curve
A flatter curve would prevent the price from slipping until faith in the token has completely dropped off. Curve.fi is one such example of a curve that is optimized for stablecoins:
A Guide to Curve Finance — The Cryptonomist
Basis.cash is already planning on implementing this.
However, this is not a panacea -- since there is more liquidity in the middle of the curve, if the stablecoin stays below peg for long enough, the price will fall rapidly at the extremes of the curve. This curve simply delays the inevitable black hole.
Reweighting the pair back to 1:1
By modifying the
k
in the Uniswap constant product, one can avoid this black hole effect.
One solution is described by the Fei Protocol as follows2:
Peg Reweights — In the event of extended periods below the peg, the Fei Protocol can reweight the Uniswap price back to the peg. It achieves this by executing the following atomic trade: 1. Withdraw all protocol owned liquidity, 2. Buy FEI with the withdrawn ETH to bring price up to peg. 3. Resupply remaining PCV as liquidity. 4. Burn the excess FEI.
Since Basis.cash and its forks rely on a "BAC/DAI LP" pool, the pool could theoretically be upgraded in a later version of the protocol to implement this algorithm.3
However, this may not always work since not all liquidity is staked into the BAC/DAI pool-- one may run into the effects mentioned above where there isn't enough money in the pool. A naive solution to this could be to have the protocol run the exchange used to determine the peg; however BAC that exists outside of the pool could in theory be used to drive the prices back down despite a peg reweighting occurring. This is okay though since the intent of BAC is to be worth $1.
There is another downside to using such a reweighting system in BAC, where stakers in the BAC/DAI pool may be hesitant to stake as their BAC continues to be chipped off every time there is a dip in the market. This may be okay though since it is desirable to not have liquidity in the pools when prices are low again due to the black hole effect.
One may think that undercollateralized seigniorage share tokens have been proven to be unreliable due to recent events. However, the current depression of three-token stablecoins is only due to the constant product model and the contraction mechanism used. The algorithmic stablecoin space is developing rapidly and I am really looking forward to what projects propose as solutions to this problem.
The pool ratio can change over time due to pool fees collected: each transaction takes a 0.3% cut of the total input tokens transacted. The fees are kept in the pool's reserves and LP token holders accrue these fees in their pool tokens.↩
Source: https://medium.com/fei-protocol/introducing-fei-protocol-2db79bd7a82b↩
Most algorithmic stablecoins have some version of this pool.↩
Thanks for reading! Have any questions, comments, or suggestions? Feel free to use the comment section below or email me at [email protected] and I'll do my best to respond.
Alternatively, you can view the source of the post here and send a pull request.
|
Arithmetic mean - Wikipedia
Sum of a collection of numbers divided by the count of numbers in the collection
In mathematics and statistics, the arithmetic mean ( /ˌærɪθˈmɛtɪk ˈmiːn/ air-ith-MET-ik) or arithmetic average, or simply the mean or the average (when the context is clear), is the sum of a collection of numbers divided by the count of numbers in the collection.[1] The collection is often a set of results from an experiment, an observational study, or a survey. The term "arithmetic mean" is preferred in some contexts in mathematics and statistics because it helps distinguish it from other means, such as the geometric mean and the harmonic mean.
In addition to mathematics and statistics, the arithmetic mean is used frequently in many diverse fields such as economics, anthropology, and history, and it is used in almost every academic field to some extent. For example, per capita income is the arithmetic average income of a nation's population.
While the arithmetic mean is often used to report central tendencies, it is not a robust statistic, meaning that it is greatly influenced by outliers (values that are very much larger or smaller than most of the values). For skewed distributions, such as the distribution of income for which a few people's incomes are substantially greater than most people's, the arithmetic mean may not coincide with one's notion of "middle", and robust statistics, such as the median, may provide a better description of central tendency.
Given a data set
{\displaystyle X=\{x_{1},\ldots ,x_{n}\}}
, the arithmetic mean (or mean or average), denoted
{\displaystyle {\bar {x}}}
(read
{\displaystyle x}
bar), is the mean of the
{\displaystyle n}
values
{\displaystyle x_{1},x_{2},\ldots ,x_{n}}
.
The arithmetic mean is the most commonly used and readily understood measure of central tendency in a data set. In statistics, the term average refers to any of the measures of central tendency. The arithmetic mean of a set of observed data is equal to the sum of the numerical values of each observation divided by the total number of observations. Symbolically, if we have a data set consisting of the values
{\displaystyle a_{1},a_{2},\ldots ,a_{n}}
, then the arithmetic mean
{\displaystyle A}
is defined by the formula:
{\displaystyle A={\frac {1}{n}}\sum _{i=1}^{n}a_{i}={\frac {a_{1}+a_{2}+\cdots +a_{n}}{n}}}
(For an explanation of the summation operator, see summation.)
For example, consider the monthly salaries of 10 employees of a firm: 2500, 2700, 2400, 2300, 2550, 2650, 2750, 2450, 2600, 2400. The arithmetic mean is
{\displaystyle {\frac {2500+2700+2400+2300+2550+2650+2750+2450+2600+2400}{10}}=2530.}
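The calculation can be verified in a couple of lines of Python:

```python
salaries = [2500, 2700, 2400, 2300, 2550, 2650, 2750, 2450, 2600, 2400]

# arithmetic mean: sum of the values divided by their count
mean = sum(salaries) / len(salaries)
print(mean)  # 2530.0
```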
If the data set is a statistical population (i.e., consists of every possible observation and not just a subset of them), then the mean of that population is called the population mean and is denoted by the Greek letter
{\displaystyle \mu }
. If the data set is a statistical sample (a subset of the population), then the statistic resulting from this calculation is called a sample mean, which for a data set
{\displaystyle X}
is denoted
{\displaystyle {\overline {X}}}
.
The arithmetic mean can be similarly defined for vectors in multiple dimensions, not only scalar values; this is often referred to as a centroid. More generally, because the arithmetic mean is a convex combination (coefficients sum to 1), it can be defined on a convex space, not only a vector space.
Motivating properties
The arithmetic mean has several properties that make it useful, especially as a measure of central tendency. These include:
If numbers
{\displaystyle x_{1},\dotsc ,x_{n}}
have mean
{\displaystyle {\bar {x}}}
, then
{\displaystyle (x_{1}-{\bar {x}})+\dotsb +(x_{n}-{\bar {x}})=0}
. Since
{\displaystyle x_{i}-{\bar {x}}}
is the distance from a given number to the mean, one way to interpret this property is that the numbers to the left of the mean are balanced by the numbers to the right of the mean. The mean is the only single number for which the residuals (deviations from the estimate) sum to zero.
If it is required to use a single number as a "typical" value for a set of known numbers
{\displaystyle x_{1},\dotsc ,x_{n}}
, then the arithmetic mean of the numbers does this best, in the sense of minimizing the sum of squared deviations from the typical value: the sum of
{\displaystyle (x_{i}-{\bar {x}})^{2}}
. (It follows that the sample mean is also the best single predictor in the sense of having the lowest root mean squared error.)[2] If the arithmetic mean of a population of numbers is desired, then the estimate of it that is unbiased is the arithmetic mean of a sample drawn from the population.
The arithmetic mean is homogeneous:
{\displaystyle \operatorname {Avg} (c\cdot a_{1},c\cdot a_{2},\ldots ,c\cdot a_{n})}
equals
{\displaystyle c\cdot \operatorname {Avg} (a_{1},a_{2},\ldots ,a_{n})}
.
The arithmetic mean of any number of equal-sized groups of numbers taken together is the arithmetic mean of the arithmetic means of each group.
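The three properties above (zero-sum residuals, homogeneity, and the mean of group means) are easy to verify numerically. A Python sketch with illustrative data:

```python
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

def avg(xs):
    return sum(xs) / len(xs)

m = avg(data)

# 1. Residuals (deviations from the mean) sum to zero
assert abs(sum(x - m for x in data)) < 1e-9

# 2. Homogeneity: Avg(c*x_1, ..., c*x_n) = c * Avg(x_1, ..., x_n)
c = 3.5
assert abs(avg([c * x for x in data]) - c * m) < 1e-9

# 3. The mean of equal-sized groups equals the mean of the group means
g1, g2 = data[:4], data[4:]
assert abs(avg([avg(g1), avg(g2)]) - m) < 1e-9

print("all three properties hold")
```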
Contrast with median
The arithmetic mean may be contrasted with the median. The median is defined such that no more than half the values are larger than, and no more than half are smaller than, the median. If elements in the data increase arithmetically when placed in some order, then the median and arithmetic average are equal. For example, consider the data sample
{\displaystyle {1,2,3,4}}
. The average is
{\displaystyle 2.5}
, as is the median. However, when we consider a sample that cannot be arranged so as to increase arithmetically, such as
{\displaystyle {1,2,4,8,16}}
, the median and arithmetic average can differ significantly. In this case, the arithmetic average is 6.2, while the median is 4. In general, the average value can vary significantly from most values in the sample and can be larger or smaller than most of them.
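A quick check of the two samples discussed, using Python's standard `statistics` module:

```python
from statistics import mean, median

arithmetic_progression = [1, 2, 3, 4]
print(mean(arithmetic_progression), median(arithmetic_progression))  # both 2.5

skewed = [1, 2, 4, 8, 16]
print(mean(skewed), median(skewed))  # mean 6.2, median 4
```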
There are applications of this phenomenon in many fields. For example, since the 1980s, the median income in the United States has increased more slowly than the arithmetic average of income.[4]
Weighted average
Main article: Weighted average
A weighted average, or weighted mean, is an average in which some data points count more heavily than others, in that they are given more weight in the calculation.[5] For example, the arithmetic mean of
{\displaystyle 3}
and
{\displaystyle 5}
is
{\displaystyle {\frac {(3+5)}{2}}=4}
, or equivalently
{\displaystyle \left({\frac {1}{2}}\cdot 3\right)+\left({\frac {1}{2}}\cdot 5\right)=4}
. In contrast, a weighted mean in which the first number receives, for example, twice as much weight as the second (perhaps because it is assumed to appear twice as often in the general population from which these numbers were sampled) would be calculated as
{\displaystyle \left({\frac {2}{3}}\cdot 3\right)+\left({\frac {1}{3}}\cdot 5\right)={\frac {11}{3}}}
. Here the weights, which necessarily sum to one, are
{\displaystyle (2/3)}
and
{\displaystyle (1/3)}
, the former being twice the latter. The arithmetic mean (sometimes called the "unweighted average" or "equally weighted average") can be interpreted as a special case of a weighted average in which all the weights are equal to each other (equal to
{\displaystyle {\frac {1}{2}}}
in the above example, and equal to
{\displaystyle {\frac {1}{n}}}
in a situation with
{\displaystyle n}
numbers being averaged).
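The weighted-average arithmetic above can be sketched in Python (the `weighted_mean` helper is illustrative):

```python
def weighted_mean(values, weights):
    # weights are normalized so that they sum to one
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

# equal weights reproduce the ordinary arithmetic mean: (3 + 5) / 2 = 4
print(weighted_mean([3, 5], [1, 1]))   # 4.0

# weighting the first value twice as heavily gives (2/3)*3 + (1/3)*5 = 11/3
print(weighted_mean([3, 5], [2, 1]))
```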
Continuous probability distributions
Comparison of two log-normal distributions with equal median but different skewness, resulting in different means and modes
If a numerical property, and any sample of data from it, can take on any value from a continuous range, instead of, for example, just integers, then the probability of a number falling into some range of possible values can be described by integrating a continuous probability distribution across this range, even when the naive probability for a sample number taking one certain value from infinitely many is zero. The analog of a weighted average in this context, in which there is an infinite number of possibilities for the precise value of the variable in each range, is called the mean of the probability distribution. The most widely encountered probability distribution is called the normal distribution; it has the property that all measures of its central tendency, including not just the mean but also the aforementioned median and the mode (the three M's[6]), are equal to each other. This equality does not hold for other probability distributions, as illustrated for the log-normal distribution here.
Particular care must be taken when using cyclic data, such as phases or angles. Naively taking the arithmetic mean of 1° and 359° yields a result of 180°. This is incorrect for two reasons:
Firstly, angle measurements are only defined up to an additive constant of 360° (or 2π, if measuring in radians). Thus one could as easily call these 1° and −1°, or 361° and 719°, each of which gives a different average.
Secondly, in this situation, 0° (equivalently, 360°) is geometrically a better average value: there is lower dispersion about it (the points are both 1° from it, and 179° from 180°, the putative average).
In general application, such an oversight will lead to the average value artificially moving towards the middle of the numerical range. A solution to this problem is to use the optimization formulation (viz., define the mean as the central point: the point about which one has the lowest dispersion) and to redefine the difference as a modular distance (i.e., the distance on the circle: the modular distance between 1° and 359° is 2°, not 358°).
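The fix described above, treating each angle as a unit vector on the circle and averaging the vectors, can be sketched in Python:

```python
import math

def circular_mean_deg(angles):
    """Mean of angles (in degrees) via unit vectors; result in (-180, 180]."""
    s = sum(math.sin(math.radians(a)) for a in angles)
    c = sum(math.cos(math.radians(a)) for a in angles)
    return math.degrees(math.atan2(s, c))

print((1 + 359) / 2)                 # 180.0, the misleading naive average
print(circular_mean_deg([1, 359]))   # approximately 0, the geometric answer
```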
Proof without words of the inequality of arithmetic and geometric means:
PR is a diameter of a circle centred on O; its radius AO is the arithmetic mean of a and b. Using the geometric mean theorem, triangle PGR's altitude GQ is the geometric mean. For any ratio a:b, AO ≥ GQ.
Symbols and encoding
The arithmetic mean is often denoted by a bar (a.k.a. vinculum or macron), for example as in
{\displaystyle {\bar {x}}}
{\displaystyle x}
bar).[2]
Some software (text processors, web browsers) may not display the x̄ symbol properly. For example, the x̄ symbol in HTML is actually a combination of two codes: the base letter x plus a code for the line above (̄ or ¯).[7]
In some texts, such as PDFs, the x̄ symbol may be replaced by a cent (¢) symbol (Unicode ¢) when copied to a text processor such as Microsoft Word.
Standard error of the mean
^ Weisstein, Eric W. "Arithmetic Mean". mathworld.wolfram.com. Retrieved 21 August 2020.
^ Krugman, Paul (4 June 2014) [Fall 1992]. "The Rich, the Right, and the Facts: Deconstructing the Income Distribution Debate". The American Prospect.
^ "Mean". Encyclopedia Britannica. Retrieved 21 August 2020.
^ Thinkmap Visual Thesaurus (30 June 2010). "The Three M's of Statistics: Mode, Median, Mean". www.visualthesaurus.com. Retrieved 3 December 2018.
^ "Notes on Unicode for Stat Symbols". www.personal.psu.edu. Retrieved 14 October 2018.
^ If AC = a and BC = b, then OC = AM of a and b, and radius r = QO = OG.
Using Pythagoras' theorem, QC² = QO² + OC² ∴ QC = √(QO² + OC²) = QM.
Using Pythagoras' theorem, OC² = OG² + GC² ∴ GC = √(OC² − OG²) = GM.
Using similar triangles,
Huff, Darrell (1993). How to Lie with Statistics. W. W. Norton. ISBN 978-0-393-31072-6.
Calculations and comparisons between arithmetic mean and geometric mean of two numbers
Calculate the arithmetic mean of a series of numbers on fxSolver
|
Lori has written the conjectures below. For each one, decide whether it is true or not. If you believe it is not true, find a counterexample (an example that proves that the statement is false).
60^\circ
Can a triangle have one
60^\circ
angle and the other angles not be
60^{\circ}
To find the area of a shape, you always multiply the length of the base by the height.
How do you find the area of a trapezoid? Think of its formula.
360^\circ
What does it mean to have
360^\circ
rotation?
|
Geomechanical modeling of hydraulic fractures interacting with natural fractures — Validation with microseismic and tracer data from the Marcellus and Eagle Ford | Interpretation | GeoScienceWorld
Yamina E. Aimene, Corvallis, Oregon. E-mail: aimene.yamina@gmail.com.
Ahmed Ouenes, The Woodlands, Texas. E-mail: aouenes@fracgeo.com.
Yamina E. Aimene, Ahmed Ouenes; Geomechanical modeling of hydraulic fractures interacting with natural fractures — Validation with microseismic and tracer data from the Marcellus and Eagle Ford. Interpretation 2015;; 3 (3): SU71–SU88. doi: https://doi.org/10.1190/INT-2014-0274.1
We have developed a new geomechanical workflow to study the mechanics of hydraulic fracturing in naturally fractured unconventional reservoirs. This workflow used the material point method (MPM) for computational mechanics and an equivalent fracture model derived from continuous fracture modeling to represent natural fractures (NFs). We first used the workflow to test the effect of different stress anisotropies on the propagation path of a single NF intersected by a hydraulic fracture. In these elementary studies, increasing the stress anisotropy was found to decrease the curving of a propagating NF, and this could be used to explain the observed trends in the microseismic data. The workflow was applied to Marcellus and Eagle Ford wells, where multiple geomechanical results were validated with microseismic data and tracer tests. Application of the workflow to a Marcellus well provides a strain field that correlates well with microseismicity, and a maximum energy release rate, or
J
integral at each completion stage, which appeared to correlate to the production log and could be used to quantify the impact of skipping the completion stages. On the first of two Eagle Ford wells considered, the MPM workflow provided a horizontal differential stress map that showed significant variability imparted by NFs perturbing the regional stress field. Additionally, a map of the strain distribution after stimulating the well showed the same features as the interpreted microseismic data: three distinct regions of microseismic character, supported by tracer tests and explained by the MPM differential stress map. Finally, the workflow was able to estimate, in the second well with no microseismic data, its main performance characteristics as validated by tracer tests. The field-validated MPM geomechanical workflow is a powerful tool for completion optimization in the presence of NFs, which affect in multiple ways the final outcome of hydraulic fracturing.
|
Treaps | Brilliant Math & Science Wiki
Contributed by Agnishom Chattopadhyay, Debarghya Adhikari, Geoff Pilling, and others.
Treaps are a randomized data structure in the form of a Cartesian tree, used to maintain a balanced binary search tree. As opposed to other balanced binary search trees such as the scapegoat tree, the red-black tree, and the AVL tree, the treap does not guarantee balance per se, but it ensures balance with very high probability.
Cartesian Trees and Structure of the Treap
Implicit Treaps
Treaps are Cartesian trees, which means that they are trees with an ordered pair and that the nodes follow the binary search tree property with respect to one of them and the heap property with respect to the other.
This means that the treap is composed of
nodes, each of which is a pair
(H_i, B_i)
, called the priority and the key, respectively, where for every node
i
H_{\text{left}(i)} \leq H_i \geq H_{\text{right}(i)}
B_{\text{left}(i)} \leq B_i \leq B_{\text{right}(i)}.
A Cartesian tree with numbers as priorities and characters as keys
Given a set of
(H_i, B_i)
pairs with distinct priorities, there exists a unique Cartesian tree made from them.
If the list of pairs is empty, then we construct the empty tree and we're done.
Otherwise, we pick the root node as the pair with the highest priority and recursively build the left subtree with all the pairs whose keys are less than (or equal to) that of the root and the right subtree with all the pairs whose keys are greater than that of the root.
_\square
Treaps are Cartesian trees where the priorities are chosen at random from a large range of integers.
We do not prove this here, but with randomly chosen priorities the expected height of the treap is in
O(\log n).
Thus, each node of the treap consists of a value, a priority, and pointers towards its two children (which may be null).
We'll build the tree by inserting an element one by one. Let us look at our first strategy to do this.
Insert blindly, using only the binary search tree property. Then, using a random value as the priority, rotate the node up to the appropriate location.
We do not define rotations formally here but illustrate with a graphic:
Notice that the BST invariant remains intact under this operation.
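The insertion strategy above (BST insert, then rotate the node up while the heap property is violated) can be sketched as follows. This is an illustrative Python implementation, not Brilliant's reference code; a recursive insert performs the rotations on the way back up.

```python
import random

class Node:
    def __init__(self, key):
        self.key = key                   # BST order
        self.priority = random.random()  # heap order (max-heap)
        self.left = None
        self.right = None

def rotate_right(t):
    # lift the left child above t, preserving the BST order
    l = t.left
    t.left, l.right = l.right, t
    return l

def rotate_left(t):
    # lift the right child above t, preserving the BST order
    r = t.right
    t.right, r.left = r.left, t
    return r

def insert(t, key):
    if t is None:
        return Node(key)
    if key < t.key:
        t.left = insert(t.left, key)
        if t.left.priority > t.priority:   # heap violated: rotate child up
            t = rotate_right(t)
    else:
        t.right = insert(t.right, key)
        if t.right.priority > t.priority:
            t = rotate_left(t)
    return t

def inorder(t):
    return inorder(t.left) + [t.key] + inorder(t.right) if t else []

root = None
for k in [5, 2, 8, 1, 9, 4]:
    root = insert(root, k)
print(inorder(root))   # keys come back sorted: [1, 2, 4, 5, 8, 9]
```

Regardless of the random priorities chosen, an in-order traversal always returns the keys in sorted order, which is exactly the BST invariant the rotations preserve.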
Cite as: Treaps. Brilliant.org. Retrieved from https://brilliant.org/wiki/treaps/
|
Drag coefficient - Wikipedia
(Redirected from Coefficient of drag)
Dimensionless parameter to quantify fluid resistance
Drag coefficients in fluids with Reynolds number approximately 10^4[1][2]
In fluid dynamics, the drag coefficient (commonly denoted as:
{\displaystyle c_{\mathrm {d} }}
{\displaystyle c_{x}}
{\displaystyle c_{\rm {w}}}
) is a dimensionless quantity that is used to quantify the drag or resistance of an object in a fluid environment, such as air or water. It is used in the drag equation in which a lower drag coefficient indicates the object will have less aerodynamic or hydrodynamic drag. The drag coefficient is always associated with a particular surface area.[3]
The drag coefficient of any object comprises the effects of the two basic contributors to fluid dynamic drag: skin friction and form drag. The drag coefficient of a lifting airfoil or hydrofoil also includes the effects of lift-induced drag.[4][5] The drag coefficient of a complete structure such as an aircraft also includes the effects of interference drag.[6][7]
Table of drag coefficients in increasing order, of assorted prisms (right column) and rounded shapes (left column) at Reynolds numbers between 10^4 and 10^6, with flow from the left[8]
The drag coefficient
{\displaystyle c_{\mathrm {d} }}
is defined as
{\displaystyle c_{\mathrm {d} }={\dfrac {2F_{\mathrm {d} }}{\rho u^{2}A}}}
where:
{\displaystyle F_{\mathrm {d} }}
is the drag force, which is by definition the force component in the direction of the flow velocity;[9]
{\displaystyle \rho }
is the mass density of the fluid;[10]
{\displaystyle u}
is the flow speed of the object relative to the fluid;
{\displaystyle A}
is the reference area.
The reference area depends on what type of drag coefficient is being measured. For automobiles and many other objects, the reference area is the projected frontal area of the vehicle. This may not necessarily be the cross-sectional area of the vehicle, depending on where the cross-section is taken. For example, for a sphere
{\displaystyle A=\pi r^{2}}
(note this is not the surface area =
{\displaystyle 4\pi r^{2}}
).
For airfoils, the reference area is the nominal wing area. Since this tends to be large compared to the frontal area, the resulting drag coefficients tend to be low, much lower than for a car with the same drag, frontal area, and speed.
Airships and some bodies of revolution use the volumetric drag coefficient, in which the reference area is the square of the cube root of the airship volume (volume to the two-thirds power). Submerged streamlined bodies use the wetted surface area.
Flow around a plate, showing stagnation. The force in the upper configuration is equal to
{\displaystyle F=\rho u^{2}A}
and in the lower configuration
{\displaystyle F_{d}={\tfrac {1}{2}}\rho u^{2}c_{d}A}
The drag equation
{\displaystyle F_{\rm {d}}={\tfrac {1}{2}}\rho u^{2}c_{\rm {d}}A}
is essentially a statement that the drag force on any object is proportional to the density of the fluid and proportional to the square of the relative flow speed between the object and the fluid.
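The drag equation is easy to apply numerically. The sketch below (Python; the numbers are illustrative, not taken from the article) computes the drag force for roughly car-like values:

```python
def drag_force(rho, u, cd, area):
    """Drag force F_d = 0.5 * rho * u**2 * c_d * A."""
    return 0.5 * rho * u ** 2 * cd * area

# Illustrative values: air density 1.225 kg/m^3, speed 30 m/s (~108 km/h),
# c_d = 0.30 and frontal area 2.2 m^2, roughly typical of a passenger car
f = drag_force(1.225, 30.0, 0.30, 2.2)
print(round(f, 1))   # force in newtons
```

Inverting the same relation, c_d = 2F_d / (rho * u^2 * A), recovers the drag coefficient from a measured force.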
{\displaystyle c_{\mathrm {d} }}
is not a constant but varies as a function of flow speed, flow direction, object position, object size, fluid density and fluid viscosity. Speed, kinematic viscosity and a characteristic length scale of the object are incorporated into a dimensionless quantity called the Reynolds number
{\displaystyle \scriptstyle Re}
{\displaystyle \scriptstyle C_{\mathrm {d} }}
is thus a function of
{\displaystyle \scriptstyle Re}
. In a compressible flow, the speed of sound is relevant, and
{\displaystyle c_{\mathrm {d} }}
is also a function of Mach number
{\displaystyle \mathrm {Ma} }
For certain body shapes, the drag coefficient
{\displaystyle c_{\mathrm {d} }}
only depends on the Reynolds number
{\displaystyle \mathrm {Re} }
, Mach number
{\displaystyle \mathrm {Ma} }
and the direction of the flow. For low Mach number
{\displaystyle \mathrm {Ma} }
, the drag coefficient is independent of Mach number. Also, the variation with Reynolds number
{\displaystyle \mathrm {Re} }
within a practical range of interest is usually small, while for cars at highway speed and aircraft at cruising speed, the incoming flow direction is also more-or-less the same. Therefore, the drag coefficient
{\displaystyle c_{\mathrm {d} }}
can often be treated as a constant.[11]
For a streamlined body to achieve a low drag coefficient, the boundary layer around the body must remain attached to the surface of the body for as long as possible, causing the wake to be narrow. A high form drag results in a broad wake. The boundary layer will transition from laminar to turbulent if Reynolds number of the flow around the body is sufficiently great. Larger velocities, larger objects, and lower viscosities contribute to larger Reynolds numbers.[12]
Drag coefficient Cd for a sphere as a function of Reynolds number Re, as obtained from laboratory experiments. The dark line is for a sphere with a smooth surface, while the lighter line is for the case of a rough surface. The numbers along the line indicate several flow regimes and associated changes in the drag coefficient:
For other objects, such as small particles, one can no longer consider that the drag coefficient
{\displaystyle c_{\mathrm {d} }}
is constant; it is certainly a function of Reynolds number.[13][14][15] At a low Reynolds number, the flow around the object does not transition to turbulent but remains laminar, even up to the point at which it separates from the surface of the object. At very low Reynolds numbers, without flow separation, the drag force
{\displaystyle F_{\mathrm {d} }}
is proportional to
{\displaystyle v}
instead of
{\displaystyle v^{2}}
; for a sphere this is known as Stokes' law. The Reynolds number will be low for small objects, low velocities, and high viscosity fluids.[12]
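A sketch of this low-Reynolds-number regime: for a sphere at Re ≪ 1, the classical results are c_d = 24/Re and Stokes' law F_d = 6πμRv, linear in v. The droplet parameters below are assumed for illustration:

```python
import math

def stokes_drag(mu, radius, v):
    """Stokes' law for a sphere: F_d = 6*pi*mu*R*v, valid for Re << 1."""
    return 6.0 * math.pi * mu * radius * v

def sphere_cd_stokes(re):
    """Sphere drag coefficient in the Stokes regime: c_d = 24/Re."""
    return 24.0 / re

# Assumed values: a 10-micron droplet settling slowly in air.
mu = 1.8e-5   # dynamic viscosity of air, Pa*s
rho = 1.225   # air density, kg/m^3
r = 5e-6      # sphere radius, m
v = 0.003     # relative speed, m/s

re = rho * v * (2 * r) / mu   # Reynolds number based on diameter
f_stokes = stokes_drag(mu, r, v)
# The same force via the general drag equation with c_d = 24/Re:
f_general = 0.5 * rho * v**2 * sphere_cd_stokes(re) * math.pi * r**2
print(re)                   # much less than 1, so Stokes' law applies
print(f_stokes, f_general)  # the two expressions agree
```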
A
{\displaystyle c_{\mathrm {d} }}
equal to 1 would be obtained in a case where all of the fluid approaching the object is brought to rest, building up stagnation pressure over the whole front surface. The top figure shows a flat plate with the fluid coming from the right and stopping at the plate. The graph to the left of it shows equal pressure across the surface. In a real flat plate, the fluid must turn around the sides, and full stagnation pressure is found only at the center, dropping off toward the edges as in the lower figure and graph. Only considering the front side, the
{\displaystyle c_{\mathrm {d} }}
of a real flat plate would be less than 1; except that there will be suction on the backside: a negative pressure (relative to ambient). The overall
{\displaystyle c_{\mathrm {d} }}
of a real square flat plate perpendicular to the flow is often given as 1.17.[citation needed] Flow patterns and therefore
{\displaystyle \scriptstyle C_{\mathrm {d} }}
for some shapes can change with the Reynolds number and the roughness of the surfaces.
Drag coefficient examples[edit]
{\displaystyle c_{\mathrm {d} }}
is not an absolute constant for a given body shape. It varies with the speed of airflow (or more generally with Reynolds number
{\displaystyle \mathrm {Re} }
). A smooth sphere, for example, has a
{\displaystyle c_{\mathrm {d} }}
that varies from high values for laminar flow to 0.47 for turbulent flow. Although the drag coefficient decreases with increasing
{\displaystyle \mathrm {Re} }
, the drag force increases.
0.001 Laminar flat plate parallel to the flow (
{\displaystyle \mathrm {Re} <10^{6}}
0.005 Turbulent flat plate parallel to the flow (
{\displaystyle \mathrm {Re} >10^{6}}
0.1 Smooth sphere (
{\displaystyle \mathrm {Re} =10^{6}}
0.47 Smooth sphere (
{\displaystyle \mathrm {Re} =10^{5}}
0.81 Triangular trapeze (45°)
0.9-1.7 Trapeze with triangular basis (45°)
0.295 Bullet (not ogive, at subsonic velocity)
0.48 Rough sphere (
{\displaystyle \mathrm {Re} =10^{6}}
1.0–1.1 Skier
1.0–1.3 Wires and cables
1.0–1.3 Adult human (upright position)
1.1-1.3 Ski jumper[17]
1.28 Flat plate perpendicular to flow (3D)[18]
1.3–1.5 Empire State Building
1.8–2.0 Eiffel Tower
1.98–2.05 Long flat plate perpendicular to flow (2D)
As noted above, aircraft use their wing area as the reference area when computing
{\displaystyle c_{\mathrm {d} }}
, while automobiles (and many other objects) use projected frontal area; thus, coefficients are not directly comparable between these classes of vehicles. In the aerospace industry, the drag coefficient is sometimes expressed in drag counts where 1 drag count = 0.0001 of a
{\displaystyle c_{\mathrm {d} }}
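The drag-count conversion is simple enough to express directly; the sample values come from the aircraft figures in this article:

```python
def cd_to_drag_counts(cd):
    """Aerospace convention: 1 drag count = 0.0001 of c_d."""
    return cd / 0.0001

print(round(cd_to_drag_counts(0.024)))   # Boeing 787: 240 counts
print(round(cd_to_drag_counts(0.0265)))  # Airbus A380: 265 counts
```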
Aircraft type[20]
0.021 210 F-4 Phantom II (subsonic)
0.022 220 Learjet 24
0.024 240 Boeing 787[21]
0.0265 265 Airbus A380[22]
0.027 270 Cessna 172/182
0.027 270 Cessna 310
0.031 310 Boeing 747
0.044 440 F-4 Phantom II (supersonic)
0.048 480 F-104 Starfighter
Main article: Automobile drag coefficient
Blunt and streamlined body flows[edit]
The force between a fluid and a body, when there is relative motion, can only be transmitted by normal pressure and tangential friction stresses. So, for the whole body, the drag part of the force, which is in-line with the approaching fluid motion, is composed of frictional drag (viscous drag) and pressure drag (form drag). The total drag and component drag forces can be related as follows:
{\displaystyle {\begin{aligned}c_{\mathrm {d} }&={\dfrac {2F_{\mathrm {d} }}{\rho v^{2}A}}\\&=c_{\mathrm {p} }+c_{\mathrm {f} }\\&=\underbrace {{\dfrac {2}{\rho v^{2}A}}\displaystyle \int \limits _{S}\mathrm {d} S(p-p_{o})\left({\hat {\mathbf {n} }}\cdot {\hat {\mathbf {i} }}\right)} _{c_{\mathrm {p} }}+\underbrace {{\dfrac {2}{\rho v^{2}A}}\displaystyle \int \limits _{S}\mathrm {d} S\left({\hat {\mathbf {t} }}\cdot {\hat {\mathbf {i} }}\right)T_{\rm {w}}} _{c_{\mathrm {f} }}\end{aligned}}}
A is the planform area of the body,
S is the wetted surface of the body,
{\displaystyle c_{\mathrm {p} }}
is the pressure drag coefficient,
{\displaystyle c_{\mathrm {f} }}
is the friction drag coefficient,
{\displaystyle {\hat {\mathbf {t} }}}
is the direction of the shear stress acting on the body surface dS,
{\displaystyle {\hat {\mathbf {n} }}}
is the direction perpendicular to the body surface dS, pointing from the fluid to the solid,
{\displaystyle T_{\mathrm {w} }}
is the magnitude of the shear stress acting on the body surface dS,
{\displaystyle p_{\mathrm {o} }}
is the pressure far away from the body (note that this constant does not affect the final result),
{\displaystyle p}
is pressure at surface dS,
{\displaystyle {\hat {\mathbf {i} }}}
is the unit vector in the direction of the free-stream flow
Therefore, when the drag is dominated by a frictional component, the body is called a streamlined body; whereas in the case of dominant pressure drag, the body is called a blunt or bluff body. Thus, the shape of the body and the angle of attack determine the type of drag. For example, an airfoil is considered as a body with a small angle of attack by the fluid flowing across it. This means that it has attached boundary layers, which produce much less pressure drag.
Trade-off relationship between zero-lift drag and lift induced drag
The wake produced is very small and drag is dominated by the friction component. Therefore, such a body (here an airfoil) is described as streamlined, whereas for bodies with fluid flow at high angles of attack, boundary layer separation takes place. This mainly occurs due to adverse pressure gradients at the top and rear parts of an airfoil.
Due to this, wake formation takes place, which consequently leads to eddy formation and pressure loss due to pressure drag. In such situations, the airfoil is stalled and has higher pressure drag than friction drag. In this case, the body is described as a blunt body.
A streamlined body looks like a fish (Tuna), Oropesa, etc. or an airfoil with small angle of attack, whereas a blunt body looks like a brick, a cylinder or an airfoil with high angle of attack. For a given frontal area and velocity, a streamlined body will have lower resistance than a blunt body. Cylinders and spheres are taken as blunt bodies because the drag is dominated by the pressure component in the wake region at high Reynolds number.
To reduce this drag, either the flow separation could be reduced or the surface area in contact with the fluid could be reduced (to reduce friction drag). This reduction is necessary in devices like cars, bicycles, etc., to avoid vibration and noise production.
Practical example[edit]
The aerodynamic design of cars has evolved from the 1920s to the end of the 20th century. This change in design from a blunt body to a more streamlined body reduced the drag coefficient from about 0.95 to 0.30.
Time history of cars' aerodynamic drag in comparison to change in geometry of streamlined bodies (blunt to streamline).
^ Baker, W.E. (1983). Explosion Hazards and Evaluation, Volume 5. Elsevier Science. ISBN 9780444599889.
^ AARØNÆS, ANTON STADE (2014). Dynamic response of pipe rack steel structures to explosion loads (PDF). CHALMERS UNIVERSITY OF TECHNOLOGY.
^ McCormick, Barnes W. (1979). Aerodynamics, Aeronautics, and Flight Mechanics. New York: John Wiley & Sons, Inc. p. 24. ISBN 0471030325.
^ Clancy, L. J. (1975). "5.18". Aerodynamics. ISBN 9780470158371.
^ Abbott, Ira H., and Von Doenhoff, Albert E.: Theory of Wing Sections. Sections 1.2 and 1.3
^ "NASA's Modern Drag Equation". Wright.nasa.gov. 2010-03-25. Archived from the original on 2011-03-02. Retrieved 2010-12-07.
^ Hoerner, Sighard F. (1965). Fluid-Dynamic Drag : Practical Information on Aerodynamic Drag and Hydrodynamic Resistance (2 ed.). p. 3–17.
^ See lift force and vortex induced vibration for a possible force components transverse to the flow direction
^ Note that for the Earth's atmosphere, the air density can be found using the barometric formula. Air is 1.293 kg/m3 at 0 °C (32 °F) and 1 atmosphere.
^ Clancy, L. J.: Aerodynamics. Sections 4.15 and 5.4
^ a b Clancy, L. J.: Aerodynamics. Section 4.17
^ Clift R., Grace J. R., Weber M. E.: Bubbles, drops, and particles. Academic Press NY (1978).
^ Briens C. L.: Powder Technology. 67, 1991, 87-91.
^ Haider A., Levenspiel O.: Powder Technology. 58, 1989, 63-70.
^ Shapes
^ "Drag Coefficient". Engineeringtoolbox.com. Archived from the original on 2010-12-04. Retrieved 2010-12-07.
^ "Shape Effects on Drag". NASA. Archived from the original on 2013-02-16. Retrieved 2013-03-11.
^ Basha, W. A. and Ghaly, W. S., "Drag Prediction in Transitional Flow over Airfoils," Journal of Aircraft, Vol. 44, 2007, p. 824–32.
^ "Ask Us - Drag Coefficient & Lifting Line Theory". Aerospaceweb.org. 2004-07-11. Retrieved 2010-12-07.
^ "Boeing 787 Dreamliner : Analysis". Lissys.demon.co.uk. 2006-06-21. Archived from the original on 2010-08-13. Retrieved 2010-12-07.
^ "Airbus A380" (PDF). 2005-05-02. Archived (PDF) from the original on 2015-09-23. Retrieved 2014-10-06.
L. J. Clancy (1975): Aerodynamics. Pitman Publishing Limited, London, ISBN 0-273-01120-0
Abbott, Ira H., and Von Doenhoff, Albert E. (1959): Theory of Wing Sections. Dover Publications Inc., New York, Standard Book Number 486-60586-8
Hoerner, Dr. Sighard F., Fluid-Dynamic Drag, Hoerner Fluid Dynamics, Bricktown New Jersey, 1965.
Bluff Body: http://user.engineering.uiowa.edu/~me_160/lecture_notes/Bluff%20Body2.pdf
Drag of Blunt Bodies and Streamlined Bodies: http://www.princeton.edu/~asmits/Bicycle_web/blunt.html
Hucho, W.H., Janssen, L.J., Emmelmann, H.J. 6(1975): The optimization of body details-A method for reducing the aerodynamics drag. SAE 760185.
|
Work with Negative Interest Rates Using Functions - MATLAB & Simulink - MathWorks Korea
Call\left(K,T\right)=Black_{call}\left(F,K,r,T,{\sigma}_{Black}\left(\alpha,\beta,\rho,\nu,F,K,T\right)\right)
As shown, the Black and Normal volatility approximations allow you to use the SABR model with the Black and Normal model option pricing formulas. However, although the Normal model itself allows negative rates and the SABR model has an implied Normal volatility approximation, the underlying dynamics of the SABR model do not allow negative rates, unless β = 0. In the Shifted SABR model, the Shifted Black volatility approximation can be used to allow negative rates with a fixed negative lower bound defined by the amount of shift.
You can compute the implied Normal volatility in terms of the SABR model parameters, for either β = 0 (Normal SABR), or any other value of β allowed by the SABR model (0 ≤ β ≤ 1) using normalvolbysabr.
normalvolbysabr computes the implied Normal volatility σN in terms of the SABR model parameters. After using normalvolbysabr to compute σN, you can use it with other functions for Normal model pricing (for example, capbynormal, floorbynormal, and swaptionbyblk).
|
Stochastic Processes | Brilliant Math & Science Wiki
wd soul, Eli Ross, and Jimin Khim contributed
A stochastic process describes values changing randomly over time. In its simplest form, it involves a variable changing at a random rate through time. There are various types of stochastic processes. Some well-known types are random walks, Markov chains, and Bernoulli processes. They are used in mathematics, engineering, computer science, and various other fields. They can be classified into two distinct types: discrete-time and continuous-time stochastic processes.
One of the simplest stochastic processes is the Bernoulli process; repeatedly flipping a coin is the canonical example.
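A minimal simulation of a Bernoulli process - the repeated coin flip - using Python's standard random module (the fair-coin probability 0.5 is just the usual example):

```python
import random

def bernoulli_process(p, n, seed=None):
    """Generate n independent Bernoulli(p) trials: 1 (heads) with
    probability p, 0 (tails) otherwise."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

flips = bernoulli_process(0.5, 10, seed=42)
print(flips)            # one sample path of the process
print(sum(flips) / 10)  # empirical frequency of heads
```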
Cite as: Stochastic Processes. Brilliant.org. Retrieved from https://brilliant.org/wiki/stochastic-processes/
|
Experimental Study and Genetic-Algorithm-Based Correlation on Pressure Drop and Heat Transfer Performances of a Cross-Corrugated Primary Surface Heat Exchanger | J. Heat Transfer | ASME Digital Collection
Dong-Jie Zhang,
Wang, Q., Zhang, D., and Xie, G. (March 31, 2009). "Experimental Study and Genetic-Algorithm-Based Correlation on Pressure Drop and Heat Transfer Performances of a Cross-Corrugated Primary Surface Heat Exchanger." ASME. J. Heat Transfer. June 2009; 131(6): 061802. https://doi.org/10.1115/1.3090716
Heat transfer and pressure drop characteristics of a cross-corrugated (CC) primary surface heat exchanger with different CC passages (
P/H=2
θ=60
and 120 deg, called CC2-60 and CC2-120, respectively) in two air sides have been experimentally investigated in this study. It is shown that the corrugation angle
(θ)
and the ratio of the wavelength
P
to height
H
(P/H)
are the two key parameters of CC passages to influence the heat transfer and flow friction performances. The heat transfer and friction factor correlations for these two configurations are also obtained with Reynolds numbers ranging from
Re=450–5500(CC2-60)
Re=570–6700(CC2-120)
. At a certain
P/H
, the Nusselt number, Nu, and the friction factor,
f
, are affected by the corrugation angle,
θ
. The heat transfer performance of CC2-120 is much better than that of CC2-60, while the pressure drop of the former is higher than that of the latter, especially in the high Reynolds number region. The critical Reynolds numbers at which the flow transitions from laminar to turbulent in the two different passages are also estimated. Furthermore, in this study a genetic algorithm (GA) has been used to determine the coefficients of the heat transfer correlations by separation of the total heat transfer coefficient without any information from measured wall temperatures. It is concluded that the GA-based separated heat transfer Nusselt number provides good agreement with the experimental data; the averaged relative deviation by GA (1.95%) is lower than that by regression analysis (2.84%). The inversely yielded wall temperatures agree well with the measured data, in turn supporting the reliability of the experimental system and measurements. It is recommended that GA techniques be used to handle more complicated problems and to obtain both-side heat transfer correlations simultaneously, where the conventional Wilson-plot method cannot be applied.
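The paper determines correlation coefficients with a genetic algorithm; as a much simpler stand-in (not the authors' method), the sketch below fits a power-law correlation Nu = C·Re^m by ordinary least squares in log-log space. The data and the coefficients 0.1 and 0.7 are synthetic, invented purely for the demo:

```python
import math

def fit_power_law(re_values, nu_values):
    """Least-squares fit of Nu = C * Re**m, done as linear regression on
    log(Nu) = log(C) + m*log(Re). Returns (C, m)."""
    xs = [math.log(re) for re in re_values]
    ys = [math.log(nu) for nu in nu_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    c = math.exp(my - m * mx)
    return c, m

# Synthetic data drawn from Nu = 0.1 * Re**0.7 over the paper's Re range:
re = [450, 1000, 2000, 4000, 5500]
nu = [0.1 * r ** 0.7 for r in re]
print(fit_power_law(re, nu))  # recovers approximately (0.1, 0.7)
```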
genetic algorithms, heat exchangers, heat transfer, laminar to turbulent transitions, heat transfer and friction factor, cross-corrugated primary surface, genetic algorithm, separated correlation, wall temperature
Heat exchangers, Heat transfer, Pressure drop, Genetic algorithms, Wall temperature, Flow (Dynamics), Heat transfer coefficients
Low-Cost Compact Primary Surface Recuperator Concept for Microturbines
Evaluation of the Cross Corrugated and Some Other Candidate Heat Transfer Surfaces for Microturbine Recuperators
Recuperators and Regenerators in Gas Turbine Systems
Comparison of Heat Transfer Surfaces for Microturbine Recuperators
Configuration and Thermal Design of a New Type Exchanger Used in Fresh Air Ventilator
Proceedings of the Second Proseminar of Heat Transfer Technology
A New Type Primary Surface Heat Exchanger Used in Fresh Air Ventilator
,” Chinese Patent No. ZL200510042697.X.
The Effect of the Corrugation Inclination Angle on the Thermohydraulic Performance of Plate Heat Exchangers
Investigation of Flow and Heat Transfer in Corrugated Passages—I. Experimental Results
Investigation of Flow and Heat Transfer in Corrugated Passages—II. Numerical Simulations
Enhanced Heat Transfer Characteristics of Single-Phase Flows in a Plate Heat Transfer With Mixed Chevron Plates
Heat Transfer Enhancement in Three-Dimensional Corrugated Channel Flow
Blomerius
Höisken
Hydrodynamic and Thermal Characteristics of Corrugated Channels: Experimental Approach
Hydrodynamics and Thermal Characteristics of Corrugated Channels: Computational Approach
Comparison of Heat and Mass Transfer in Different Heat Exchanger Geometries With Corrugated Walls
Numerical Analysis of Forced Convection in Plate and Frame Heat Exchangers
Enhanced Heat Transfer Due to Curvature-Induced Lateral Vortices in Laminar Flows in Sinusoidal Corrugated-Plate Channels
Numerical Study of Periodically Fully Developed Flow and Heat Transfer in Cross-Corrugated Triangular Channels in Transitional Flow Regime
Convective Mass Transport in Cross-Corrugated Membrane Exchangers
Simultaneous Determination of In-and-Over-Tube Heat Transfer Correlations in Heat Exchangers by Global Regression
Heat Transfer Analysis for Shell-and-Tube Heat Exchangers With Experimental Data by Artificial Neural Networks Approach
Application of a Genetic Algorithm for Thermal Design of Fin-and-Tube Heat Exchangers
Genetic Algorithm Based Design and Optimization of Outer-Fins and Inner-Fins Tube Heat Exchangers
Chemical Laser Modeling With Genetic Algorithms
School of Engineering, The University of Alabama
Test-Case Generator for Constrained Parameter Optimization Techniques
|
Disclaimer: The following is intended to be an opinion piece based on current events and personal events regarding COVID-19, and is not medical advice. This article is for the purpose of expressing opinion only.
The reason for writing this article is simply because I've had a few discussions with friends recently and there are various misconceptions about COVID and how vaccines will affect the spread of the virus. I therefore want to clarify expectations for both myself and them, based on what we know so far and what can be expected to be seen in the future.
With that all out of the way, let's get into this.
As you may or may not be aware, this little-known thing called COVID-19 was detected in and spread from Wuhan, China, way back in December 2019. We know the problematic strain of COVID came from Wuhan because that's where the number of cases first exploded; despite claims from the WHO that it could have come from anywhere, cases of COVID exploded in no other population prior to that outbreak. It may be an inconvenient truth to China's CCP, but the global pandemic was spread on their watch.
The CCP may or may not have been responsible for the creation and/or initial infection of COVID-19, but they most definitely carry significant blame for its subsequent spread after becoming aware of it. Don't let the record forget that they sent everybody in Wuhan travelling on a Chinese public holiday, told the WHO that it was not transmissible (despite there being hundreds of cases at this point) and called any country that locked down its borders xenophobic.
I personally narrowly missed travelling through China on a transit journey on my way back from the UK to NZ, missing out on a ticket due to a web page timeout whilst I was rummaging for my credit card to make the payment. I ended up with a more expensive ticket, and not flying through China. When I got to NZ, my sister, who worked at the time in a medical profession, messaged me panicked asking if I eventually did go through China. It was at this time, in late January 2020 that I first became aware of COVID. At this time, there were 150 or so known active cases in Wuhan, but the curve was exponential. I distinctly remember seeing this exponential curve and telling people in the room "this will be bad, that's a high
{R}_{0}
". Little did I know how true those words would end up being.
Side note: What is
{R}_{0}
? This is a number that describes a spread rate, in particular the average number of people any one infected person will spread to. An
{R}_{0}<1
means that each person will spread to less than one person, and it will die out on its own accord. An
{R}_{0}=1
means each person on average spreads to another person. An
{R}_{0}>1
means that each person spreads to more than one person. Note that these can be fractions, as it is an average.
The equation is:1
{R}_{0}=\beta cD
where
\beta
is the transmission probability,
c
is the number of contacts, and
D
is the average infection time. You can read more information here.
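The relation above is a one-liner in code; the parameter values here are purely illustrative assumptions, not estimates for COVID-19:

```python
def basic_reproduction_number(beta, c, d):
    """R0 = beta * c * D: transmission probability per contact, contacts
    per unit time, and average infectious period."""
    return beta * c * d

# Assumed: 5% transmission chance per contact, 10 contacts/day, 6 days.
r0 = basic_reproduction_number(0.05, 10, 6)
print(r0)  # 3.0: each case infects 3 others on average, so the epidemic grows
```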
The WHO (World Health Organisation), which is mostly funded by the US and was set up and funded to prevent exactly such a global pandemic, has fallen short at almost every turn. They failed to perform a root cause analysis in Wuhan and, to this day, have not been allowed unrestricted access to Wuhan, more than one year after COVID-19 initially spread 2.
Infection model based on China's data
Early on in the pandemic I wrote a prediction model to figure out the spread of COVID based on the numbers coming out of China. Based on their initial reporting, we were not going to see greater than 80k cases worldwide. Of course, this made one simple assumption: The CCP's reporting of COVID cases is accurate. This turned out to be an incorrect assumption. To this day, the numbers officially reported by the CCP are impossibly low based on the known
{R}_{0}
By about the 19th of February 2020, it was clear that containment in China had completely failed. About this time the outbreak was well underway in places like Italy, which had unfortunately not locked down its borders based on information from the WHO. After hitting Europe, it pretty soon spread everywhere else too.
It was around March 2020 that China began to declare victory over COVID-19 due to the 'decisive action' of authoritarian lockdowns, where they quite literally welded people into their homes. After having been in lockdown for quite a while, they began to ease restrictions and mysteriously saw no rise in cases - despite no vaccine existing at this time. This is counter to the experience of every other country.
After seeing the 'success' of China's lockdowns and the news media working with governments to scare the public, authoritarian lockdowns were rolled out globally as literally the only action governments could enforce. At the time, medical facilities were told to inflate numbers in any way they possibly could - with essentially any death of a person who happened to also have COVID being attributed to COVID. These inflated numbers were then used as propaganda and to drive policy decisions in governments around the world.
The problem is, because these numbers were fake, governments really had no idea how bad the problem was and still do not know to this day. At the time of writing, the Telegraph reports leaked NHS data showing that more than half of all 'COVID hospitalisation' counts in the UK are from people who tested positive after being hospitalised. That is to say, they were hospitalised for an independent condition prior to contracting COVID, and then contracted it in the hospital after the fact.
So how effective have lockdowns been? The real answer is that nobody has a clue - there is not a single data source out in the wild that is reliable. Numbers have been inflated at every level. And bear in mind, we have not even begun to discuss the negative impact of lockdowns either, in terms of the mortality and economic issues they have rained down upon people around the world.
To clarify my position - COVID is real. People are dying from this virus. But people die all the time from many things; in fact, evidence suggests that you will most likely die from something. When figuring out how to respond to anything that causes increased deaths, we must perform some kind of risk analysis. These kinds of morbid number-crunching exercises happen every day. For example, when allocating funding to the NHS, one million pounds less means one less facility, and some number of people will die as a result. It is not possible to feasibly save everybody; we simply do the best with the resources we have.
Understandably, after more than a year of lockdowns, people are pretty annoyed with them. Governments have been able to keep people locked up with the promise that 'when a vaccine becomes available, people will be allowed out'. This understanding between governments and the frustrated public was stretched further with the idea that some critical number of people must be vaccinated to reach herd immunity.
This critical point is where the UK currently finds itself, with most people now having both doses of vaccination.
At the time of writing, at least 46 million people in the UK have been vaccinated with at least one dose. Phase 1 was to see 'priority' groups be vaccinated first, i.e. persons most likely to die from contracting COVID. The UK is now comfortably in the phase 2 stage, with people between the ages of 18-49 now receiving the vaccine.
After phase 2, apparently the UK government will start offering vaccinations to children under the age of 18, a group that is exceptionally unlikely to die from COVID. There is still a question around whether it is even beneficial to vaccinate children in any case.
Vaccine roll-out has been successful in the UK. It won't get much better than the current numbers; we will never reach 100% vaccination - some people will simply never be vaccinated, whether due to risk or by choice.
I have the current screenshots from the Worldometer website, which appears to be responsible for collecting all of the data from around the world on active COVID cases. I believe it has been trustworthy so far, but I'm not entirely sure where they came from and who funds them. They appear to be particularly closed about who they actually are, but apparently they are based in the United States. CNN also finds this particularly suspicious. I specifically remember that at the start of the pandemic the Worldometer website listed itself as being based in Shanghai, China, something that has since changed on their website 3.
UK daily active cases
As you can see, there are several 'infection waves' in the daily active cases graph. The most important consideration is the delta cases, i.e. the rising edge on the curve, indicating a growing number of cases. Each time this is an exponential increase, and lockdown measures are applied to reduce the rate of infection.
UK daily deaths
In the daily deaths graph, we see daily deaths begin to increase at approximately the point where we see a rising edge on the daily active cases graph. Some points to note about this:
In the first wave of the death graph, I believe this is based on elevated statistics that essentially counted anybody who died and happened to also have COVID. This was done on purpose to help justify the legal framework for the lockdowns themselves.
As the first wave is likely elevated, the second wave is most likely more reflective of the actual death rate that can be expected from COVID. Bear in mind though that this was not for the delta (India) variant, which appears to have a higher mortality rate.
Once a death spike rises, it can take many months to come back down again, as some people hold out on life-support machines, etc. Some people die the first day they get COVID; some people take months before they give up the fight against the infection.
What do we see? So far, the death rate is not even remotely close to the daily active cases. This indicates that the UK's vaccination program has been effective at reducing deaths, but not the spread of COVID. This is likely the result of freedom day encouraging people to meet one another and celebrate after such a long time, going from isolation to abnormally high amount of social contact. I would expect the daily cases to now fall off within the next few weeks, with zero government intervention.
Given the high vaccination rate in the UK, I suspect other governments cannot expect a better result than this. The reason for writing this article is exactly because governments have done a very poor job of managing expectations for what a successful vaccine roll-out will look like. Many people are under the false impression that we should expect daily cases to drop to zero, which is an illusion.
The fact of the matter is, a vaccine essentially just prepares your immune system to tackle COVID when it eventually meets it. People will still contract the virus, but their infection time is reduced and therefore the
{R}_{0}
spread factor should also be greatly reduced. The expected behaviour here is that people will still contract COVID, just less of them will die as a result.
This is the best result we can reasonably hope for. Aiming for zero deaths is impossible and unattainable. Again, I believe the UK government and governments around the world have managed vaccination expectations very poorly, especially after scaring people for the last year or so to keep them in a strict lockdown.
Governments must immediately begin an education campaign to manage vaccine expectations. If they fail to do this, COVID hysteria will increase and they will struggle to get their workforce back into action for what will already be a long and painful economic recovery process.
Note that this equation is simplified, as it does not consider death rate or total population size, as well as other contributing factors.↩
As of current day, the CCP have still not officially apologised for the spread of COVID-19.↩
I heavily suspect there is a political element to the information displayed on Worldometer. For example, China's total active cases on the website is recorded to be 92k, despite COVID starting there and lockdowns having been lifted in China. I suspect this website is run by CCP state actors.↩
|
10.12: Odds and Odds Ratios - Statistics LibreTexts
\text{odds of A} = \frac{P(A)}{P(\neg A)}
\text{odds of cancer} = \frac{P(\text{cancer})}{P(\neg \text{cancer})} =\frac{0.14}{1 - 0.14} = 0.16
\text{odds of 6} = \frac{1}{5} = 0.2
As an aside, this is a reason why many medical researchers have become increasingly wary of the use of widespread screening tests for relatively uncommon conditions; most positive results will turn out to be false positives.
\text{prior odds} = \frac{P(\text{cancer})}{P(\neg \text{cancer})} =\frac{0.058}{1 - 0.058} = 0.061
\text{odds ratio} = \frac{\text{posterior odds}}{\text{prior odds}} = \frac{0.16}{0.061} = 2.62
This tells us that the odds of having cancer are increased by 2.62 times given the positive test result. An odds ratio is an example of what we will later call an effect size, which is a way of quantifying how relatively large any particular statistical effect is.
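The same quantities computed directly (note the 2.62 in the text comes from the rounded intermediate odds 0.16 and 0.061; unrounded values give about 2.64):

```python
def odds(p):
    """Odds of an event: P(A) / P(not A)."""
    return p / (1 - p)

posterior = odds(0.14)    # odds of cancer given the positive test
prior = odds(0.058)       # prior odds of cancer
print(round(posterior, 2))           # 0.16
print(round(prior, 3))               # 0.062
print(round(posterior / prior, 2))   # 2.64
```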
10.12: Odds and Odds Ratios is shared under a not declared license and was authored, remixed, and/or curated by Russell A. Poldrack via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
|
Cauldron - Ring of Brodgar
Required By Bar of Soap, Barley Wort, Beeswax, Blubber Feast, Boiled Egg, Boiled Gooseneck Barnacle, Boiled Lobster, Boiled Odds, Boiled Pepper Drupe, Boiled Razor Clam, Boiled River Pearl Mussel, Boiled Round Clam, Bone Glue, Box of Matches, Butter-steamed Cavebulb, Buttered Leeks, Candleberry Wax, Cattail Stew, Chum Bait, Clambake, Cone Gruel, Creamy Cock, Cucumber Salsa, Curd'n'Chives, Deep Fried Bird, Divination in Tin, Felt, Fish in Tears, Fishballs, Fishsticks, Gelatin, Glazed Honeyons, Goldbeater's Skin, Gray Grease, Haggis, Hardened Leather, Honey Gruel, Hop Jellies, Jelly Heart, Kelp Cream, Lye, Mead Must, Meat-in-Jelly, Moules Frites, Mushrooms in Jelly, Onion Rings, Onioned Escargot, Opium, Parboiled Morels, Pumpkin Stew... further results
The Cauldron is an important tool for Cooking and a variety of miscellaneous tasks. May refer to Metal Cauldron or Clay Cauldron, both of which can hold 30L of water. Clay cauldrons are easier to make but must be allowed to sit for about five minutes after lighting to allow the water to boil, whereas metal cauldrons begin boiling immediately after being lit. A cauldron must be boiling to use for crafting, and small amounts of the water in the cauldron are consumed every time something is crafted and every few minutes the boiling cauldron remains lit. It is therefore recommended to only keep the cauldron lit when it is being used.
The quality of objects crafted with the Clay Cauldron is given by:
{\displaystyle q_{Products}={\frac {q_{Ingredients}\cdot 6+q_{Cauldron}+{\frac {q_{Water}}{2}}}{8}}}
The quality of objects crafted with the Metal Cauldron is given by:
{\displaystyle q_{Products}={\frac {q_{Ingredients}\cdot 6+q_{Cauldron}+q_{Water}}{8}}}
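Both formulas are easy to sanity-check in code. This Python helper is a sketch based only on the two equations above (the function name is mine):

```python
def cauldron_quality(q_ingredients, q_cauldron, q_water, metal=True):
    # Ingredients are weighted 6x; a clay cauldron (metal=False)
    # only counts half of the water quality, per the formulas above.
    q_w = q_water if metal else q_water / 2.0
    return (q_ingredients * 6 + q_cauldron + q_w) / 8.0

print(cauldron_quality(10, 10, 10, metal=True))   # 10.0
print(cauldron_quality(10, 10, 10, metal=False))  # 9.375
```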
The quality of the fuel used to light either type of cauldron doesn't matter.
The lengths of time it takes to get the various items from their precursor items by boiling in a Cauldron are as follows:
Egg 0:02:13 0:07:00 Boiled Egg
Gooseneck Barnacle 0:09:07 0:30:00 Boiled Gooseneck Barnacle
Lobster 0:27:21 1:30:00 Boiled Lobster
Morels 0:18:14 1:00:00 Parboiled Morels
Poppy Grist 1:49:25 6:00:00 Opium Using full stack highly advised.
Razor Clam 0:09:07 0:30:00 Boiled Razor Clam
Red Deer Antlers 1:49:25 6:00:00 Hartshorn
Snow 0:01:00 ? Water 1 Liter per snow. Water quality is not affected by the cauldron
Clay Cauldrons take 5 minutes to heat up. Metal Cauldrons boil instantly.
Clay Cauldrons halve the quality of the water used.
The wood blocks used do not affect the quality of the cauldron.
No longer works (World 11 (2019-02-01)):
Putting bark in a cauldron will turn the water to tanning fluid. It is easier to get high quality bark than high quality water, so this is a good way to get a high quality liquid to boil things in.
Fishing for Finery (2021-04-25) >"Reduced passive water consumption of lit cauldrons. One full fuel meter should now burn through half (15l) of a cauldron's water meter."
Rekt!-ing Ball (2016-04-19) >"Added variable materials to Wheelbarrow, Metal Cauldron, and Charter Stone."
Burning Harmonica (2016-03-30) >"You may now put out cauldrons."
Retrieved from "https://ringofbrodgar.com/w/index.php?title=Cauldron&oldid=93011"
|
Write an equation to represent this problem and find the unknown side lengths. Use the 5-D Process to help you organize your thinking and to define your variables, if you need to do so. Remember to define your variable.
A trapezoid has a perimeter of
117
cm. The two shortest sides have the same length. The third side is
12
cm longer than one short side. The final side is
9
cm less than three times one short side. How long is each side of the trapezoid?
Perimeter is the sum of the lengths of all the sides.
Create an expression comparing each side with the shortest side, represented by
x
Two shortest sides: x and x
Third side: x + 12
Final side: 3x - 9
Equation: x + x + (x + 12) + (3x - 9) = 117, which simplifies to 6x + 3 = 117
Shortest sides: x = 19
What are the other side lengths?
Be sure to know how to get this length and the other lengths.
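One way to check the algebra is to solve the perimeter equation directly. This short Python sketch (not part of the lesson) confirms the side lengths:

```python
# Perimeter equation: x + x + (x + 12) + (3x - 9) = 117  =>  6x + 3 = 117
x = (117 - 3) / 6
sides = [x, x, x + 12, 3 * x - 9]
print(x, sides, sum(sides))  # 19.0 [19.0, 19.0, 31.0, 48.0] 117.0
```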
|
Unipolar resistive switching of ZnO-single-wire memristors | Nanoscale Research Letters | Full Text
\left(\mathrm{Schottky}\right)\quad \ln I\propto \frac{e\sqrt{eV/\left(4\pi {\epsilon }_{r}{\epsilon }_{0}d\right)}}{kT}
\left(\mathrm{PF}\right)\quad \ln \left(I/V\right)\propto \frac{e\sqrt{eV/\left(\pi {\epsilon }_{r}{\epsilon }_{0}d\right)}}{kT}
\left(\mathrm{SCLC}\right)\quad I\propto {\epsilon }_{r}{\epsilon }_{0}\mu {V}^{2}
|
The Hopfield-like neural network with governed ground state | BMC Neuroscience | Full Text
The Hopfield-like neural network with governed ground state
Leonid B Litinskii1 &
Magomed Yu Malsagov1
Using a vector
u=\left({u}_{1},...,{u}_{p}\right)
let us construct a matrix
{M}_{ij}=\left(1-{\delta }_{ij}\right){u}_{i}{u}_{j}
i,j=1,..,p
{\delta }_{ij}
is the Kronecker delta,
{u}_{i}\in {R}^{1}
{∥u∥}^{2}=p
. We define a Hopfield-like neural network with a connection matrix
{J}_{ij}=\left(1-2x\right){M}_{ij}
M=\left({M}_{ij}\right)
{T}_{i}=q\left(1-x\right){u}_{i}
proportional to coordinates
{u}_{i}
. Real quantities
x
q
are our free parameters. The dynamics of the network is defined by the equation
{s}_{i}\left(\tau +1\right)=\mathsf{\text{sgn}}\phantom{\rule{1em}{0ex}}\left({\sum }_{j=1}^{p}{J}_{ij}{s}_{j}\left(\tau \right)+{T}_{i}\right)
{s}_{i}\left(\tau \right)=±1
are binary coordinates of the configuration vector
s\left(\tau \right)=\left({s}_{1}\left(\tau \right),...,{s}_{p}\left(\tau \right)\right)
describing the state of the network at the given time
\tau
. Fixed points of the network are local minima of the energy. The configurations providing the global minimum are called the ground state. It is the ground state that is usually associated with the memory of the network. It turns out that, to a considerable extent, the ground state of our network can be governed by the parameters
x
q
u
The point is that the energy
E\left(s\right)
is fully determined by the scalar product of the vectors
u
E\left(s\right)~-\left(1-2x\right){\left(u,s\right)}^{2}-2\left(1-x\right)q\left(u,s\right)
. Then the number of different values of the energy is equal to the number of different values of the cosine
\mathsf{\text{cos}}w=\left(s,u\right)/p
s
ranges over all
{2}^{p}
configurations. Let us arrange the different values of the cosine in decreasing order, numbering them from 0:
\mathsf{\text{cos}}{w}_{0}
\mathsf{\text{cos}}{w}_{1}
> ...>
\mathsf{\text{cos}}{w}_{t}
. The set of all the configurations for which the cosine is equal to
\mathsf{\text{cos}}{w}_{k}
we define as the class
{\text{Σ}}_{k}
{\text{Σ}}_{k}=\left\{s:\phantom{\rule{0.3em}{0ex}}\left(s,u\right)=p\cdot \mathsf{\text{cos}}{w}_{k}\right\}
. It is easy to see that for each
k
the equalities
{\text{Σ}}_{t-k}=-{\text{Σ}}_{k}
\mathsf{\text{cos}}{w}_{t-k}=-\mathsf{\text{cos}}{w}_{k}
are fulfilled. The following statement is true:
Theorem. As
x
increases from the initial value 0, the ground state of the network coincides in consecutive order with the classes
{\text{Σ}}_{k}
{\text{Σ}}_{0}\to {\text{Σ}}_{1}\to {\text{Σ}}_{2}\to ...\to {\text{Σ}}_{{k}_{\mathsf{\text{max}}}}
. The transition from
{\text{Σ}}_{k-1}\to {\text{Σ}}_{k}
takes place in the critical point
{x}_{k}=\frac{q/p+\left(\mathsf{\text{cos}}{w}_{k-1}+\mathsf{\text{cos}}{w}_{k}\right)/2}{q/p+\mathsf{\text{cos}}{w}_{k-1}+\mathsf{\text{cos}}{w}_{k}},\phantom{\rule{2.77695pt}{0ex}}k=1,2,..{k}_{\mathsf{\text{max}}}
x\in \left({x}_{k},{x}_{k+1}\right)
the ground state of the network is the class
{\text{Σ}}_{k}
The transition
{\text{Σ}}_{k-1}\to {\text{Σ}}_{k}
ceases when the denominator of the expression for
{x}_{k}
becomes negative. If
q/p>2
{k}_{\mathsf{\text{max}}}=t
In large part this theorem allows one to regulate the ground state of the network. Let us examine
p
-dimensional hypercube whose side length is 2 and whose center is at the origin of coordinates. The configurations
coincide with vertices of the hypercube. Possible symmetric directions of the hypercube have to be chosen as vectors
u
. For each choice of
u
the configurations
are distributed symmetrically around this vector. Each such symmetrical set of configurations is one of the classes
{\text{Σ}}_{k}
, and using the theorem one can make it the ground state of the network. In particular, we can construct a ground state with a very large number (~
{C}_{p}^{k}
) of configurations. If nonzero components of the vector
u
are equal in modulus, then for each
x
the only fixed points of the network are the configurations of its ground state. The classification of all possible applications of this theorem is not yet finished.
Computer simulations show that basins of attraction of such fixed points are very small. It is not surprising, since the number of the fixed points is very large, and the volume of each basin of attraction is of the order of the volume of the unit hypersphere divided by the number of fixed points.
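The dynamics above are straightforward to simulate. The following numpy sketch (the parameter values are mine, chosen arbitrarily; u has equal-modulus components so that ||u||² = p, as in the equal-moduli case) iterates the update rule to a fixed point:

```python
import numpy as np

p, x, q = 8, 0.1, 1.0
u = np.ones(p)                       # ||u||^2 = p
M = np.outer(u, u) - np.diag(u * u)  # M_ij = (1 - delta_ij) u_i u_j
J = (1 - 2 * x) * M                  # J_ij = (1 - 2x) M_ij
T = q * (1 - x) * u                  # T_i = q (1 - x) u_i

def step(s):
    # s_i(tau+1) = sgn(sum_j J_ij s_j(tau) + T_i), with sgn(0) taken as +1
    return np.where(J @ s + T >= 0, 1, -1)

rng = np.random.default_rng(0)
s = np.where(rng.random(p) < 0.5, -1, 1)  # random initial configuration
for _ in range(20):
    s_new = step(s)
    if np.array_equal(s_new, s):
        break
    s = s_new
print(s)  # a fixed point: all components aligned with u or with -u
```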
The work was supported by the Russian Foundation for Basic Research (grant 12-07-00259).
Scientific Research Institute for System Analysis Russian Academy of Sciences, Moscow, 119333, Russia
Leonid B Litinskii & Magomed Yu Malsagov
Leonid B Litinskii
Magomed Yu Malsagov
Correspondence to Leonid B Litinskii.
Litinskii, L.B., Malsagov, M.Y. The Hopfield-like neural network with governed ground state. BMC Neurosci 14, P257 (2013). https://doi.org/10.1186/1471-2202-14-S1-P257
|
Rosenzweig – Monochromatic Portraits with GLSL
Monochromatic Portraits with GLSL
In my Computer Graphics Art class, we were assigned a monochromatic portrait project. Given a photograph of a subject, we were to split the image into a small number of discrete sections of varying brightnesses, all of the same colour. Typically, this process would be completed by hand in a tool like Krita or Photoshop.
I chose GLSL.
Rather than manually producing the portrait, I realised that the project can be distilled into a number of per-pixel filters, a perfect fit for fragment shaders. We pass in the source photograph as a texture, transform it in our shader, and the filtered image will be written out to the framebuffer.
This post assumes a basic familiarity with the OpenGL Shading Language (GLSL). The interface between the fragment shader and the rest of the world is (relatively) trivial and will not be covered in-depth here. For my early experiments, I modified shaders from glmark’s effect2d scene, which allowed rapid prototyping. Later, I moved the demo into the web browser via three.js. Source code is available under the MIT license.
First, we include the sampler corresponding to our photograph, and a varying defined in the vertex shader corresponding to the texture coordinate.
uniform sampler2D frame;
varying vec2 v_coord;
Next, let’s start with a simple pass-through shader, reading the specified texel and outputting that to the screen.
vec3 rgb = texture2D(frame, v_coord).rgb;
The colour photograph shines through as-is – everything sanity checked. However, when we make monotone portraits, we don’t care about the colour, only the brightness. So, we need to convert the pixel to greyscale. There are various ways to do this, but the easiest is to multiply the RGB values with some “magic” coefficients. That is,
\mathrm{grey} = c_r \cdot \mathrm{red} + c_g \cdot \mathrm{green} + c_b \cdot \mathrm{blue}
What coefficients do we choose? An obvious choice is
\frac{1}{3}
for each, taking equal parts red, green, and blue. However, human colour perception is not fair; a psych teacher told me that bias is literally in our DNA. Oops, wait, I’m not supposed to talk politics in here. Anyway!
Point is, we want coefficients corresponding to human perception. One choice is BT.709 coefficients, which are used when computing the luminance (Y) component of the YUV colour space. These coefficients correspond to a particular vector:
\vec{c} = \begin{pmatrix}0.2126\\0.7152\\0.0722\end{pmatrix}
We just take the dot product of those coefficients with our RGB value, et voila, we have a greyscale image instead:
vec3 coefficients = vec3(0.2126, 0.7152, 0.0722);
float grey = dot(coefficients, rgb);
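For illustration, the same luma dot product on the CPU (a numpy sketch, with an arbitrary test pixel):

```python
import numpy as np

coefficients = np.array([0.2126, 0.7152, 0.0722])  # BT.709 luma weights
rgb = np.array([0.25, 0.50, 0.75])                 # arbitrary test pixel
grey = float(coefficients @ rgb)
print(grey)  # ≈ 0.4649
```

Note that the coefficients sum to 1, so a uniform grey pixel maps to itself.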
At this point, we might adjust the image to taste. For instance, to make the greyscale image 20% brighter, we just multiply in the corresponding coefficient, clamping (saturating) between 0.0 and 1.0 to avoid out-of-bounds behaviour:
grey = clamp(grey * brightness, 0.0, 1.0);
Now, here comes the magic. Beyond the artistic description, monotone portraits, the technical name for this effect is “posterization”. Posterization, at its core, transforms an image with many colours with smooth transitions into an image with few colours and sharp transitions. There are many ways to approach this, but one is particularly simple: rounding!
All of our colour (and greyscale) values are within the range
[0, 1]
0
is black and
1
is white. So, if we simply round the value, the darks will become black and the lights will become white: posterization with two levels (colours)!
What if we want more than two levels? Well, think about what happens if we multiply the colour by an integer
n
greater than one, and then round: the rounded value will map linearly to
n + 1
discrete values, from
0
to n
. (Psst, where did the plus one come from? If we multiply by
1
– not changing anything from the black/white case – there are two possibilities, not one. It’s a fencepost problem).
However, after we scale the grey value from
[0, 1]
[0, n]
, we probably want to scale back to
[0, 1]
. That’s achieved easily enough – divide the rounded value by
n
All in all, we can posterize to six levels, for instance, quite simply:
float levels = 6.0 - 1.0;
float posterized = round(grey * levels) / levels;
Et voila, we have a greyscale posterized image. For some OpenGL versions lacking a round function, just replace round with floor with
0.5
added to the argument.
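The rounding trick is easy to verify off-GPU. This numpy sketch (mine, not from the shader) posterizes a greyscale ramp to six levels:

```python
import numpy as np

def posterize(grey, n_levels=6):
    # multiply by (n_levels - 1), round, divide back: n_levels distinct values
    levels = n_levels - 1.0
    return np.round(grey * levels) / levels

ramp = np.linspace(0.0, 1.0, 101)
out = posterize(ramp)
print(sorted(set(out.tolist())))  # [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```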
Posterized greyscale, but feeling clustered
What’s next? Well, the posterized values feel a little “clustered”, for lack of a better word. They are faithful to the actual brightness in the image, but we’re not going for photorealistic here – we want our colours to pop. So, increase the contrast by some factor; I chose 30%. How do we adjust contrast? Well, first we need to define contrast: contrast is how far everything is from grey. By grey, I mean
0.5
, half-way between black and white. So, we can subtract
0.5
from our posterized colour value, multiply it by some contrast factor (think percentages), and add
0.5
again to bring us back. Again, we saturate (clamp to
[0, 1]
) at the end to keep everything in-range.
float contrasted = clamp(contrast * (posterized - 0.5) + 0.5, 0.0, 1.0);
If you’re a geometric thinker, or if you have a little background in linear algebra, we are effectively scaling (dilating) pixel values with the “origin” set to grey (
0.5
), rather than black (
0
). You can express it nicely in terms of some simple composited affine transformations, but I digress.
Posterized with contrast adjusted
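In scalar form, the contrast line is just a dilation about mid-grey. A small Python sketch of the same arithmetic (mine, for checking):

```python
def adjust_contrast(v, contrast=1.3):
    # dilate about mid-grey 0.5, then clamp back into [0, 1]
    return min(max(contrast * (v - 0.5) + 0.5, 0.0), 1.0)

print(adjust_contrast(0.5))  # mid-grey is a fixed point: 0.5
print(adjust_contrast(0.8))  # pushed away from grey, toward white
print(adjust_contrast(1.0))  # already white; clamped at 1.0
```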
Anyway, with the above, we have a nice, posterized, grey image. Grey?! No fun. Let’s add a splash of colour.
Unfortunately, within RGB, adding colour can be tricky. Simply multiplying our base colour with the greyscale value will perform a tint, but it’s a different effect than we want. For these monotone portraits, given grey values, we want
0
to correspond to black,
0.5
to a colour of our choosing, and
1
to white. Values in between should interpolate nicely.
This problem is nigh intractable in RGB… but we can take another trick out of linear algebra’s book, and perform a change of basis! Or colour space, in this case.
In particular, the HSL (hue/saturation/lightness) colour space, modeled after artistic perception of colour rather than the properties of light, has exactly the property we want. Within HSL, zero lightness is black, half-lightness is a particular colour, and full-lightness is white. Hue and saturation decide the colour shade, and the lightness is decided by, well, the lightness.
So, we can pick a particular hue and saturation value, set the lightness to the greyscale lightness we calculated, and bingo! All that’s left is to convert back from HSL to RGB, since our hardware does not feature native support for HSL. For instance, choosing a hue of
0.8
and a saturation of
0.6
– values corresponding to pastel blues – we compute:
vec3 rgb = hsl2rgb(vec3(0.8, 0.6, contrasted));
Finally, we just set the default alpha value and write that out!
“But wait,” you ask. “Where did hsl2rgb come from? I didn’t see it in the GLSL specification?”
A fair question; indeed, we have to define this routine ourselves. A straightforward implementation based on the definition of HSL does not take full advantage of the GPU’s vectorization and parallelism capabilities. A discussion of the issue is found on the Lol engine blog, which includes a well-optimized GLSL routine for HSV to RGB conversions. The code is easily adapted to HSL to RGB (as HSV and HSL are closely related), so presented without proof is the following implementation of hsl2rgb. Verifying correctness is left as an exercise to the reader (please do!):
vec3 hsl2rgb(vec3 c) {
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    float t = c.y * ((c.z < 0.5) ? c.z : (1.0 - c.z));
    return (c.z + t) * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), 2.0 * t / (c.z + t));
}
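As a sanity check, here is my own CPU port of the same HSL-to-RGB math in Python, compared against the standard library’s colorsys (which uses HLS argument order); a guard for lightness 0 is added, and this sketch is not from the original post:

```python
import colorsys

def hsl2rgb(h, s, l):
    # Convert HSL to HSV (value v, saturation sv), then apply the
    # K/p hue-ramp construction per channel, as in the GLSL routine.
    t = s * (l if l < 0.5 else 1.0 - l)
    v = l + t
    sv = 0.0 if v == 0.0 else 2.0 * t / v
    def channel(offset):
        p = abs(((h + offset) % 1.0) * 6.0 - 3.0)
        return v * ((1.0 - sv) + sv * max(0.0, min(p - 1.0, 1.0)))
    return channel(1.0), channel(2.0 / 3.0), channel(1.0 / 3.0)

for h, s, l in [(0.8, 0.6, 0.5), (0.0, 1.0, 0.5), (0.3, 0.4, 0.7)]:
    assert all(abs(a - b) < 1e-7
               for a, b in zip(hsl2rgb(h, s, l), colorsys.hls_to_rgb(h, l, s)))
print(hsl2rgb(0.8, 0.6, 0.5))  # the pastel blue used above
```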
Everything we have so far works, but it’s noisy as seen above. The “jumps” from level to level are not smooth like we would like; they are awkward and jaggedy. Why? Stepping through, we see the major artefacts are introduced during the posterization routine. The input image is noisy, and then when we posterize (quantize) the image, small perturbations of noise around the edges correspond to large noisy jumps in the output.
What’s the solution? One easy fix is to smooth the input image, so there’s no perturbations to worry about in the first place. In an offline implementation of something like a cartoon cutout filter, like that included in G’MIC, a complex noise reduction algorithm would be used. G’MIC’s cutout filter uses a median filter; even better results can be achieved with a bilateral filter. Each of these filters attempts to reduce noise without reducing the edges. But they’re slow.
What can we do instead of a sophisticated noise reduction filter? An unsophisticated one! Experimenting with filters in Krita, I found that any blur of suitable size does the trick, not just an edge-preserving filter. Even something simple like a Gaussian blur or even a box blur does the trick. So, instead of reading a single texel rgb, we read a group of texels and average them to compute rgb. The demo code uses a single-pass blur, which suffers from performance issues; this initial blur is by far the slowest part of the pipeline. That said, for a production application, it would be trivial to optimize this section to use a two-pass blur, weighted as desired, and to take better advantage of native bilinear interpolation. Implementing fast GPU-accelerated blurs is out-of-the-scope of this article, however.
Regardless, with a blur added, results are much cleaner!
All in all, the algorithm fits in a short, efficient, single-pass fragment shader… which means even on low-end mobile devices, as long as there’s GPU-acceleration, we can run it on input from the webcam in real-time.
For best results, ensure good lighting. Known working on Firefox (Linux) and Chromium (Linux, Windows, Android). Known issues on iOS (?).
|
Home : Support : Online Help : Connectivity : Database Package : PreparedStatement : Execute
preparedstat:-Execute( opts )
anything; parameter to insert into the prepared statement
Execute inserts its arguments into the prepared statement represented by preparedstat. The statement is then executed.
Execute accepts at most as many arguments as there are parameters ("?") in the SQL string passed to CreatePreparedStatement. If fewer are given, then SQL NULL is used for the unspecified parameters.
Maple attempts to convert the Maple type passed into Execute to the correct SQL type. If the default Maple conversion is incorrect, you can specify the conversion using a type cast. For details, see Database/conversions.
Execute returns the return value of the first statement. The return values of subsequent statements are accessible via NextResult.
If a statement is an update, then the return value is an integer representing the update count. If a statement is a query, then the return value is a Result module representing the table of values.
Not all Java Database Connectivity [JDBC] Drivers can handle multiple SQL statements in a single string. In this case, the behavior of Execute with multiple statements is undefined.
If a large number of similar statements are to be executed (for example populating a table with INSERTs), using a PreparedStatement may be more efficient than Statement.
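For readers more familiar with other environments, the same prepare-once, bind-parameters pattern looks like this in Python’s sqlite3 module (an illustrative sketch for comparison only, unrelated to the Maple API):

```python
import sqlite3

# In-memory database with a small table mirroring the Maple example below.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE animals (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO animals VALUES (?, ?)",
                 [(1, "fish"), (2, "dog"), (3, "cat")])
# "?" placeholders are bound at execution time, as with Execute above.
row = conn.execute("SELECT name FROM animals WHERE id = ?", (2,)).fetchone()
print(row[0])  # dog
```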
\mathrm{driver}≔\mathrm{Database}[\mathrm{LoadDriver}]\left(\right):
\mathrm{conn}≔\mathrm{driver}:-\mathrm{OpenConnection}\left(\mathrm{url},\mathrm{name},\mathrm{pass}\right):
\mathrm{pstat}≔\mathrm{conn}:-\mathrm{CreatePreparedStatement}\left("SELECT name FROM animals WHERE id = ?; SELECT name FROM animals WHERE id = ?; SELECT name FROM animals WHERE id = ?;"\right):
\mathrm{res}≔\mathrm{pstat}:-\mathrm{Execute}\left(1,2,3\right):
\mathrm{res}:-\mathrm{Next}\left(\right);
\mathrm{res}:-\mathrm{GetData}\left("name"\right)
\textcolor[rgb]{0,0,1}{"fish"}
\mathrm{res}≔\mathrm{pstat}:-\mathrm{NextResult}\left(\right):
\mathrm{res}:-\mathrm{Next}\left(\right):
\mathrm{res}:-\mathrm{GetData}\left("name"\right)
\textcolor[rgb]{0,0,1}{"dog"}
\mathrm{res}≔\mathrm{pstat}:-\mathrm{NextResult}\left(\right):
\mathrm{res}:-\mathrm{Next}\left(\right):
\mathrm{res}:-\mathrm{GetData}\left("name"\right)
\textcolor[rgb]{0,0,1}{"cat"}
Database[PreparedStatement][NextResult]
|
Rule of 72 - Wikipedia
Not to be confused with 72-year rule.
In finance, the rule of 72, the rule of 70[1] and the rule of 69.3 are methods for estimating an investment's doubling time. The rule number (e.g., 72) is divided by the interest percentage per period (usually years) to obtain the approximate number of periods required for doubling. Although scientific calculators and spreadsheet programs have functions to find the accurate doubling time, the rules are useful for mental calculations and when only a basic calculator is available.[2]
These rules apply to exponential growth and are therefore used for compound interest as opposed to simple interest calculations. They can also be used for decay to obtain a halving time. The choice of number is mostly a matter of preference: 69 is more accurate for continuous compounding, while 72 works well in common interest situations and is more easily divisible. There are a number of variations to the rules that improve accuracy. For periodic compounding, the exact doubling time for an interest rate of r percent per period is
{\displaystyle t={\frac {\ln(2)}{\ln(1+r/100)}}\approx {\frac {72}{r}}}
where t is the number of periods required. The formula above can be used for more than calculating the doubling time. If one wants to know the tripling time, for example, replace the constant 2 in the numerator with 3. As another example, if one wants to know the number of periods it takes for the initial value to rise by 50%, replace the constant 2 with 1.5.
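A quick comparison of the rule against the exact formula (a Python sketch; the function names are mine):

```python
import math

def exact_periods(rate_percent, factor=2.0):
    # t = ln(factor) / ln(1 + r/100); factor=3 gives the tripling time, etc.
    return math.log(factor) / math.log(1.0 + rate_percent / 100.0)

def rule_of_72(rate_percent):
    return 72.0 / rate_percent

print(rule_of_72(9), round(exact_periods(9), 4))  # 8.0 8.0432
print(round(exact_periods(9, factor=1.5), 4))     # periods to grow by 50%
```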
Using the rule to estimate compounding periods
To estimate the number of periods required to double an original investment, divide the most convenient "rule-quantity" by the expected growth rate, expressed as a percentage.
For instance, if you were to invest $100 with compounding interest at a rate of 9% per annum, the rule of 72 gives 72/9 = 8 years required for the investment to be worth $200; an exact calculation gives ln(2)/ln(1+0.09) = 8.0432 years.
To determine the time for money's buying power to halve, financiers divide the rule-quantity by the inflation rate. Thus at 3.5% inflation using the rule of 70, it should take approximately 70/3.5 = 20 years for the value of a unit of currency to halve.[1]
To estimate the impact of additional fees on financial policies (e.g., mutual fund fees and expenses, loading and expense charges on variable universal life insurance investment portfolios), divide 72 by the fee. For example, if the Universal Life policy charges an annual 3% fee over and above the cost of the underlying investment fund, then the total account value will be cut to 50% in 72 / 3 = 24 years, and then to 25% of the value in 48 years, compared to holding exactly the same investment outside the policy.
Choice of rule
The value 72 is a convenient choice of numerator, since it has many small divisors: 1, 2, 3, 4, 6, 8, 9, and 12. It provides a good approximation for annual compounding, and for compounding at typical rates (from 6% to 10%); the approximations are less accurate at higher interest rates.
For continuous compounding, 69 gives accurate results for any rate, since ln(2) is about 69.3%; see derivation below. Since daily compounding is close enough to continuous compounding, for most purposes 69, 69.3 or 70 are better than 72 for daily compounding. For lower annual rates than those above, 69.3 would also be more accurate than 72.[3] For higher annual rates, 78 is more accurate.
Rate    Actual Years   Rate × Actual Years   Rule of 72   Rule of 70   Rule of 69.3   Adjusted rule   E-M rule
0.25%   277.605        69.401                288.000      280.000      277.200        277.667         277.547
0.5%    138.976        69.488                144.000      140.000      138.600        139.000         138.947
1%      69.661         69.661                72.000       70.000       69.300         69.667          69.648
7%      10.245         71.713                10.286       10.000       9.900          10.238          10.259
8%      9.006          72.052                9.000        8.750        8.663          9.000           9.023
10%     7.273          72.725                7.200        7.000        6.930          7.267           7.295
Note: The most accurate value on each row is in italics, and the most accurate of the simpler rules in bold.
An early reference to the rule is in the Summa de arithmetica (Venice, 1494. Fol. 181, n. 44) of Luca Pacioli (1445–1514). He presents the rule in a discussion regarding the estimation of the doubling time of an investment, but does not derive or explain the rule, and it is thus assumed that the rule predates Pacioli by some time.
In wanting to know of any capital, at a given yearly percentage, in how many years it will double adding the interest to the capital, keep as a rule [the number] 72 in mind, which you will always divide by the interest, and in as many years as results, it will be doubled. Example: when the interest is 6 percent per year, I say that one divides 72 by 6; 12 results, and in 12 years the capital will be doubled. (emphasis added).
Adjustments for higher accuracy
For higher rates, a larger numerator would be better (e.g., for 20%, using 76 to get 3.8 years would be only about 0.002 off, where using 72 to get 3.6 would be about 0.2 off). This is because, as above, the rule of 72 is only an approximation that is accurate for interest rates from 6% to 10%.
For every three percentage points away from 8%, the value of 72 could be adjusted by 1:
{\displaystyle t\approx {\frac {72+(r-8)/3}{r}}}
or, for the same result:
{\displaystyle t\approx {\frac {70+(r-2)/3}{r}}}
Both of these equations simplify to:
{\displaystyle t\approx {\frac {208}{3r}}+{\frac {1}{3}}}
{\displaystyle {\frac {208}{3}}}
is quite close to 69.3.
E-M rule
The Eckart–McHale second-order rule (the E-M rule) provides a multiplicative correction for the rule of 69.3 that is very accurate for rates from 0% to 20%, whereas the rule is normally only accurate at the lowest end of interest rates, from 0% to about 5%.
To compute the E-M approximation, multiply the rule of 69.3 result by 200/(200−r) as follows:
{\displaystyle t\approx {\frac {69.3}{r}}\times {\frac {200}{200-r}}}
For example, if the interest rate is 18%, the rule of 69.3 gives t = 3.85 years, which the E-M rule multiplies by
{\displaystyle {\frac {200}{182}}}
(i.e. 200/ (200−18)) to give a doubling time of 4.23 years. As the actual doubling time at this rate is 4.19 years, the E-M rule thus gives a closer approximation than the rule of 72.
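The 18% example can be checked numerically (a short sketch):

```python
import math

r = 18.0
rule_693 = 69.3 / r                                # 3.85
em = rule_693 * 200.0 / (200.0 - r)                # ≈ 4.23
exact = math.log(2.0) / math.log(1.0 + r / 100.0)  # ≈ 4.19
print(round(rule_693, 2), round(em, 2), round(exact, 2))  # 3.85 4.23 4.19
```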
To obtain a similar correction for the rule of 70 or 72, one of the numerators can be set and the other adjusted to keep their product approximately the same. The E-M rule could thus be written also as
{\displaystyle t\approx {\frac {70}{r}}\times {\frac {198}{200-r}}}
{\displaystyle t\approx {\frac {72}{r}}\times {\frac {192}{200-r}}}
In these variants, the multiplicative correction becomes 1 respectively for r=2 and r=8, the values for which the rules of 70 and 72 are most accurate.
Padé approximant
The third-order Padé approximant gives a more accurate answer over an even larger range of r, but it has a slightly more complicated formula:
{\displaystyle t\approx {\frac {69.3}{r}}\times {\frac {600+4r}{600+r}}}
Periodic compounding
For periodic compounding, future value is given by:
{\displaystyle FV=PV\cdot (1+r)^{t}}
{\displaystyle PV}
is the present value,
{\displaystyle t}
is the number of time periods, and
{\displaystyle r}
stands for the interest rate per time period.
The future value is double the present value when the following condition is met:
{\displaystyle (1+r)^{t}=2\,}
This equation is easily solved for
{\displaystyle t}
{\displaystyle {\begin{array}{ccc}\ln((1+r)^{t})&=&\ln 2\\t\ln(1+r)&=&\ln 2\\t&=&{\frac {\ln 2}{\ln(1+r)}}\end{array}}}
A simple rearrangement shows:
{\displaystyle {\frac {\ln {2}}{\ln {(1+r)}}}={\bigg (}{\frac {\ln 2}{r}}{\bigg )}{\bigg (}{\frac {r}{\ln(1+r)}}{\bigg )}}
If r is small, then ln(1 + r) approximately equals r (this is the first term in the Taylor series). That is, the latter factor grows slowly when
{\displaystyle r}
is small. Call this latter factor
{\displaystyle f(r)={\frac {r}{\ln(1+r)}}}
{\displaystyle f(r)}
is shown to be accurate in the approximation of
{\displaystyle t}
for a small, positive interest rate when
{\displaystyle r=.08}
(see derivation below).
{\displaystyle f(.08)\approx 1.03949}
, and we therefore approximate time
{\displaystyle t}
{\displaystyle t={\bigg (}{\frac {\ln 2}{r}}{\bigg )}f(.08)\approx {\frac {.72}{r}}}
Written as a percentage:
{\displaystyle {\frac {.72}{r}}={\frac {72}{100r}}}
This approximation increases in accuracy as the compounding of interest becomes continuous (see derivation below).
where
{\displaystyle 100r}
is
{\displaystyle r}
written as a percentage.
In order to derive the more precise adjustments presented above, it is noted that
{\displaystyle \ln(1+r)\,}
is more closely approximated by
{\displaystyle r-{\frac {r^{2}}{2}}}
(using the second term in the Taylor series).
The expression
{\displaystyle {\frac {0.693}{r-r^{2}/2}}}
can then be further simplified by Taylor approximations:
{\displaystyle {\begin{array}{ccc}{\frac {0.693}{r-r^{2}/2}}&=&{\frac {69.3}{R-R^{2}/200}}\\&&\\&=&{\frac {69.3}{R}}{\frac {1}{1-R/200}}\\&&\\&\approx &{\frac {69.3(1+R/200)}{R}}\\&&\\&=&{\frac {69.3}{R}}+{\frac {69.3}{200}}\\&&\\&=&{\frac {69.3}{R}}+0.3465\end{array}}}
Replacing the "R" in R/200 on the third line with 7.79 gives 72 on the numerator. This shows that the rule of 72 is most accurate for periodically compounded interests around 8%. Similarly, replacing the "R" in R/200 on the third line with 2.02 gives 70 on the numerator, showing the rule of 70 is most accurate for periodically compounded interests around 2%.
Alternatively, the E-M rule is obtained if the second-order Taylor approximation is used directly.
Continuous compounding
For continuous compounding, the derivation is simpler and yields a more accurate rule:
{\displaystyle {\begin{array}{ccc}(e^{r})^{p}&=&2\\e^{rp}&=&2\\\ln e^{rp}&=&\ln 2\\rp&=&\ln 2\\p&=&{\frac {\ln 2}{r}}\\&&\\p&\approx &{\frac {0.693147}{r}}\end{array}}}
^ a b Donella Meadows, Thinking in Systems: A Primer, Chelsea Green Publishing, 2008, page 33 (box "Hint on reinforcing feedback loops and doubling time").
^ Slavin, Steve (1989). All the Math You'll Ever Need. John Wiley & Sons. pp. 153–154. ISBN 0-471-50636-2.
^ Kalid Azad Demystifying the Natural Logarithm (ln) from BetterExplained
The Scales Of 70 – extends the rule of 72 beyond fixed-rate growth to variable rate compound growth including positive and negative rates.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Rule_of_72&oldid=1085731051"
|
Extension of the Friction Mastercurve to Limiting Shear Stress Models | J. Tribol. | ASME Digital Collection
B. Jacod
University of Twente, Faculty of Mechanical Engineering, P.O. Box 217, NL 7500 AE Enschede, The Netherlands
SKF Engineering and Research Center B.V., P.O. Box 2350, NL 3430 DT Nieuwegein, The Netherlands
Contributed by the Tribology Division for publication in the ASME JOURNAL OF TRIBOLOGY. Manuscript received by the Tribology Division January 3, 2002; revised manuscript received December 30, 2002. Associate Editor: M. D. Bryant.
Jacod , B., Venner, C. H., and Lugt, P. M. (September 25, 2003). "Extension of the Friction Mastercurve to Limiting Shear Stress Models ." ASME. J. Tribol. October 2003; 125(4): 739–746. https://doi.org/10.1115/1.1572513
A previous study of the behavior of friction in EHL contacts for the case of Eyring lubricant behavior resulted in a friction mastercurve. In this paper the same approach is applied to the case of limiting shear stress behavior. By means of numerical simulations the friction coefficient has been computed for a wide range of operating conditions and contact geometries. It is shown that the same two parameters that were found in the Eyring study, a characteristic shear stress and a reduced coefficient of friction, also govern the behavior of the friction for the case of limiting shear stress models. When the calculated traction data are plotted as a function of these two parameters, all results for the different cases lie close to a single curve. Experimentally measured traction data are used to validate the observed behavior. Finally, the equations of the mastercurves for both types of rheological model are compared, resulting in a relation between the Eyring stress τ0 and the limiting shear stress τL.
friction, rheology, lubrication, stress analysis
Friction, Lubricants, Rheology, Shear stress, Traction, Viscosity
differentiation and partial differentiation in the Simple Units environment
In the Simple Units environment, the diff function differentiates an expression a with respect to a name x that can have a unit. The result is the derivative of a with respect to x; its unit is the unit of a divided by the unit of x.
with(Units[Simple]):
-3.532*x^2*Unit('J')
        -3.532 x^2 ⟦J⟧
diff(%, x*Unit('s'))
        -7.064 x ⟦W⟧
32*x^2*Unit('ft') + 7*x*Unit('inch') + 45*Unit('m')
        (6096/625 x^2 + 889/5000 x + 45) ⟦m⟧
diff(%, x*Unit('s'), x*Unit('s'))
        12192/625 ⟦m/s^2⟧
4*x^4 - 3*x + 2
        4 x^4 - 3 x + 2
diff(%, x*Unit('s'))
        (16 x^3 - 3) ⟦1/s⟧
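The unit bookkeeping that Simple Units performs can be sketched outside Maple. The following Python fragment (an illustration of the documented rule, not Maple's implementation) tracks SI base-dimension exponents to confirm that the derivative's unit is the unit of a divided by the unit of x:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Unit:
    """SI base-dimension exponents (only the three needed here)."""
    m: int = 0    # metre
    s: int = 0    # second
    kg: int = 0   # kilogram

    def __truediv__(self, other):
        return Unit(self.m - other.m, self.s - other.s, self.kg - other.kg)

J = Unit(m=2, s=-2, kg=1)   # joule  = kg*m^2/s^2
S = Unit(s=1)               # second
W = Unit(m=2, s=-3, kg=1)   # watt   = J/s

# diff(-3.532*x^2 [J], x [s]) has coefficient -7.064 and unit J/s = W,
# matching the Maple output above.
coeff = -3.532 * 2
assert J / S == W
assert coeff == -7.064
```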
join a list of digits into a number
Join(digits)
Join(digits, sbits)
a list of nonnegative digits in base 2^sbits
positive integer number of bits to join on
The Join command joins the input list of integers, least significant first, into a number base 2^sbits (sbits defaults to 1), using only the first sbits bits of each digit.
The most common usage of this command would be for conversion of a list of bits to a number.
The reverse operation, converting the number to digits, is accomplished using the Split command.
with(Bits):
Join([1, 1, 1, 1, 1, 1, 1, 1])
        255
Higher bits are ignored
Join([3, 3, 3, 3, 3, 3, 3, 3])
        255
Convert from base 4 (2 bits per digit)
Join([3, 3, 3, 3, 3, 3, 3, 3], 2)
        65535
To number, then back to bits
Join([0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1])
        395718860534
Split(395718860534)
        [0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1]
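The Join/Split semantics can be mimicked in a few lines of Python (a sketch of the documented behavior, not Maple's implementation):

```python
def join(digits, sbits=1):
    """Join digits (least significant first) into a number base 2^sbits,
    keeping only the low sbits bits of each digit, as Bits:-Join does."""
    mask = (1 << sbits) - 1
    n = 0
    for i, d in enumerate(digits):
        n |= (d & mask) << (i * sbits)
    return n

def split(n, sbits=1):
    """Inverse of join: break n into base-2^sbits digits, low order first."""
    digits = []
    while n:
        digits.append(n & ((1 << sbits) - 1))
        n >>= sbits
    return digits

assert join([1] * 8) == 255          # eight 1-bits
assert join([3] * 8) == 255          # higher bits of each digit are ignored
assert join([3] * 8, 2) == 65535     # base-4 digits, 2 bits each
assert split(join([0, 1, 1, 0, 1])) == [0, 1, 1, 0, 1]
```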
Ordinary To Reflected Gray - Maple Help
OrdinaryToReflectedGray(b,m)
OrdinaryToReflectedGray converts an ordinary mixed-radix tuple to the mixed-radix reflected Gray code tuple of the same rank.
The b parameter is the ordinary mixed-radix tuple. It is a list or one-dimensional rtable of nonnegative integers. The first element is the low-order element.
with(Iterator:-MixedRadix):
Compare, by rank, the ordinary mixed-radix tuples with the mixed-radix Gray codes.
radices := [4, 3, 2]:
M := Iterator:-MixedRadixTuples(radices):
G := Iterator:-MixedRadixGrayCode(radices):
for b in M do
    g := OrdinaryToReflectedGray(b, radices);
    printf("%2d : %d : %d : %2d\n", Rank(M), b, g, Rank(G, g));
end do:
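A compact Python rendition of the same conversion (based on the standard reflected-Gray construction; the function name is ours, not Maple's) makes the rule explicit: a digit is used as-is when the value formed by the more significant digits is even, and reflected otherwise.

```python
def ordinary_to_reflected_gray(b, radices):
    """Convert an ordinary mixed-radix tuple (low-order digit first)
    to the reflected-Gray tuple of the same rank."""
    g = [0] * len(b)
    v = 0  # value formed by the more significant digits processed so far
    for j in reversed(range(len(b))):
        g[j] = b[j] if v % 2 == 0 else radices[j] - 1 - b[j]
        v = v * radices[j] + b[j]
    return g

# Walk the radices [4, 3, 2] tuples in rank order: successive Gray tuples
# differ in exactly one digit, by exactly 1.
radices = [4, 3, 2]
grays = []
for n in range(4 * 3 * 2):
    t, b = n, []
    for m in radices:
        b.append(t % m)
        t //= m
    grays.append(ordinary_to_reflected_gray(b, radices))
for g1, g2 in zip(grays, grays[1:]):
    changed = [(x, y) for x, y in zip(g1, g2) if x != y]
    assert len(changed) == 1 and abs(changed[0][0] - changed[0][1]) == 1
```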
The Iterator[MixedRadix][OrdinaryToReflectedGray] command was introduced in Maple 2016.
The Housekeeper and the Professor - Wikipedia
Novel by Yōko Ogawa
The Housekeeper and the Professor (博士の愛した数式, hakase no ai shita suushiki) (literally "The Professor's Beloved Equation") is a novel by Yōko Ogawa set in modern-day Japan. It was published in Japan in August 2003, by Shinchosha. In 2009, the English translation by Stephen Snyder was published.[1]
The story centers on a mathematician, "the Professor," who suffered brain damage in a traffic accident in 1975 and since then can retain only 80 minutes' worth of new memories, and on his interactions with a housekeeper (the narrator) and her son, "Root," as the Professor shares the beauty of equations with them. The novel's bibliography lists The Man Who Loved Only Numbers, a biography of the mathematician Paul Erdős, and it has been said that Erdős served as a model for the Professor.
The novel received the Hon'ya Taisho award, was adapted into a film version in January 2006, and after being published in paperback in December 2005, sold one million copies in two months, faster than any other Shinchosha paperback.[2][3][4]
The narrator's housekeeping agency dispatches her to the house of the Professor, a former mathematician who can retain new memories for only 80 minutes. She is more than a little frustrated to find that he loves only mathematics and shows no interest whatsoever in anything or anyone else. One day, upon learning that she has a 10-year-old son waiting at home alone until late at night every day, the Professor flies into a rage and tells the narrator to have her son come to his home directly from school from that day on. The next day, her son comes and the Professor nicknames him "Root". From then on, their days begin to be filled with warmth.
64 years old. A former university professor who specializes in number theory. He loves mathematics, children, and the Hanshin Tigers (especially Yutaka Enatsu, who was playing for the Tigers at the time of the Professor's accident and whose uniform number was 28, the second smallest perfect number). After an auto accident at the age of 47, he can retain new memories for only 80 minutes. He keeps important information on notes that are attached all over his suits, and keeps baseball cards and other important mementos in a cookie tin. He has trouble interacting with other people and a habit of talking about numbers when he does not know what else to say. He has a talent for reading things backwards and finding the first star in the sky. His 80-minute memory eventually begins to fail as well, and he is moved to a nursing home, where he spends his remaining days; the Housekeeper, her son Root, and his sister-in-law continue to visit him. While the Housekeeper is working for him, he teaches her and Root many of the mathematical ideas he knows and loves.
The Narrator/Housekeeper
The Professor's housekeeper and a single mother. She was hired by the Professor's sister-in-law through the housekeeping agency and is the tenth housekeeper the Professor has gone through. She initially feels frustration at the Professor, who shows interest only in mathematics, but through observing his kindness and his passion for mathematics she comes to feel respect and affection for him. She first manages to connect with the Professor when he discovers that her birthday is February 20 (220), which forms an amicable pair with 284, the number engraved on the underside of the watch he received as the University President's Award for a university thesis on transcendental number theory. She cannot pronounce the title of the Journal of Mathematics (to which the Professor submits contest entries) very well, so she refers to it as "Jaanaru obu." Towards the end of the novel, at a pivotal point in the story, she and Root give the Professor a rare baseball card of Yutaka Enatsu as a congratulatory present.
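The pair the watch scene turns on is easy to verify: 220 and 284 are amicable because each equals the sum of the other's proper divisors.

```python
def proper_divisor_sum(n):
    """Sum of the proper divisors of n (divisors less than n)."""
    return sum(d for d in range(1, n) if n % d == 0)

# The Housekeeper's birthday (February 20 -> 220) and the number on the
# Professor's watch (284) form the smallest amicable pair.
assert proper_divisor_sum(220) == 284
assert proper_divisor_sum(284) == 220
```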
Ten years old. The Housekeeper's son. The Professor refers to him as "Root" on account of the top of his head being flat like a square root (√) symbol. He is the only character given something close to a name. He is an avid fan of baseball as well as the Hanshin Tigers, just like the Professor, and gets the Professor to repair his old radio so that they can listen to baseball broadcasts together. His relationship with the Professor is close to that of father and son, for the Professor is the first fatherly figure in his life. He eventually grows up to become a junior high school mathematics teacher.
The Widow/Sister-in-Law
Sister-in-law of the Professor (wife of the Professor's brother). Initially, she fired the Housekeeper for disregarding the employment contract rules (bringing her child into a client's home, staying past her assigned hours) and dismissed the Housekeeper's affection as an attempt to extort money from the Professor. However, after the Professor writes down Euler's formula during this confrontation, the Widow immediately comes to accept the Housekeeper and Root. She cannot walk well, which the Housekeeper later discovers was a result of her being in the same auto accident as the Professor. While browsing through the Professor's baseball card collection in the cookie tin, the Housekeeper discovers an old photograph of a younger Professor and his sister-in-law. It is hinted that perhaps long ago the Professor and his sister-in-law once had romantic feelings for each other.
Mathematical terminology that occurs in the story
Ruth-Aaron pair
Napier's constant
Main article: The Professor's Beloved Equation (film)
A film based on the novel was released on January 21, 2006. It was directed by Takashi Koizumi.[5][6]
In contrast to the original work, which is told from the perspective of the narrator, the film is shown from the perspective of 29-year-old "Root" as he recounts his memories of the Professor to a group of new pupils. Though there are a few differences between the film and the original work (for example, the movie touches on the relationship between the Professor and the widow, while the book does not give much detail), the film is generally faithful to the original.
^ The Housekeeper and the Professor, translated by Stephen Snyder, New York : Picador, 2008. ISBN 0-312-42780-8
^ (in Japanese) http://yayoi.senri.ed.jp/departments/SISMath/0702movie.htm
^ (in Japanese) http://www.hontai.or.jp/history/index.html
^ (in English) J'Lit | Awards : Booksellers Award | Books from Japan
^ "The Professor and His Beloved Equation". mubi.com. Retrieved 21 June 2021.
^ "The Professor and His Beloved Equation (2006)". FilmAffinity. Retrieved 21 June 2021.
Experimental Investigation of Miniature Three-Dimensional Flat-Plate Oscillating Heat Pipe | J. Heat Transfer | ASME Digital Collection
S. M. Thompson, Graduate Research Assistant
H. B. Ma, Associate Professor
R. A. Winholtz, Associate Professor
C. Wilson, Graduate Research Assistant
Thompson, S. M., Ma, H. B., Winholtz, R. A., and Wilson, C. (February 24, 2009). "Experimental Investigation of Miniature Three-Dimensional Flat-Plate Oscillating Heat Pipe." ASME. J. Heat Transfer. April 2009; 131(4): 043210. https://doi.org/10.1115/1.3072953
An experimental investigation of the effects of condenser temperatures, heating modes, and heat inputs on a miniature three-dimensional (3D) flat-plate oscillating heat pipe (FP-OHP) was conducted visually and thermally. The 3D FP-OHP was charged with acetone at a filling ratio of 0.80, had dimensions of 101.60 × 63.50 × 2.54 mm³, possessed 30 total turns, and had square channels on both sides of the device with a hydraulic diameter of 0.762 mm. Unlike traditional flat-plate designs, this new three-dimensional compact design allows for multiple heating arrangements and higher heat fluxes. Transient and steady-state temperature measurements were collected at various heat inputs, and the activation/start-up of the OHP upon receiving its excitation power was clearly observed for both bottom and side heating modes. Neutron imaging was simultaneously employed to observe the internal working-fluid flow directly through the copper wall for all tests. The activation was accompanied by a pronounced temperature-field relaxation and the onset of chaotic thermal oscillations occurring with the same general oscillatory pattern at locations all around the 3D FP-OHP. Qualitative and quantitative analyses of these thermal oscillations, along with the average temperature difference and thermal resistance, are provided for all experimental conditions. The novelty of the three-dimensional OHP design is its ability to still produce the oscillating motions of liquid plugs and vapor bubbles and, more importantly, to remove higher heat fluxes.
bubbles, channel flow, chaos, condensation, heat pipes, heat transfer, oscillations, vaporisation
Condensers (steam plant), Fluids, Heat, Heat pipes, Heating, Oscillations, Temperature, Thermal resistance, Flat plates, Excitation, Steady state, Design, Transients (Dynamics)
Source Localization Using Generalized Cross Correlation - MATLAB & Simulink
Triangulation Formula
Source and Sensor Geometry
Define Waveform
Radiate, Propagate, and Collect Signals
GCC Estimation and Triangulation
This example shows how to determine the position of the source of a wideband signal using generalized cross-correlation (GCC) and triangulation. For simplicity, this example is confined to a two-dimensional scenario consisting of one source and two receiving sensor arrays. You can extend this approach to more than two sensors or sensor arrays and to three dimensions.
Source localization differs from direction-of-arrival (DOA) estimation. DOA estimation seeks to determine only the direction of a source from a sensor. Source localization determines its position. In this example, source localization consists of two steps, the first of which is DOA estimation.
Estimate the direction of the source from each sensor array using a DOA estimation algorithm. For wideband signals, many well-known DOA estimation algorithms, such as Capon's method or MUSIC, cannot be applied because they rely on the phase difference between elements, which makes them suitable only for narrowband signals. In the wideband case, instead of phase information, you can use the difference in the signal's time of arrival among elements. To compute the time-of-arrival differences, this example uses the generalized cross-correlation with phase transformation (GCC-PHAT) algorithm. From the differences in time of arrival, you can compute the DOA. (For an example of narrowband DOA estimation algorithms, see High Resolution Direction of Arrival Estimation.)
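The core of GCC-PHAT is short enough to sketch in NumPy (an illustrative re-implementation, not the phased.GCCEstimator code): whiten the cross-spectrum so only phase remains, then pick the correlation peak.

```python
import numpy as np

def gcc_phat(x, y, fs):
    """Estimate the delay of x relative to y in seconds (negative means
    x arrives earlier). Minimal GCC-PHAT sketch."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-15          # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# An impulse at sample 10 versus the same impulse delayed to sample 13:
x = np.zeros(40); x[10] = 1.0
y = np.zeros(40); y[13] = 1.0
assert abs(gcc_phat(x, y, fs=1.0) - (-3.0)) < 1e-9
```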
Calculate the source position by triangulation. First, draw straight lines from the arrays along the directions-of-arrival. Then, compute the intersection of these two lines. This is the source location. Source localization requires knowledge of the position and orientation of the receiving sensors or sensor arrays.
The triangulation algorithm is based on simple trigonometric formulas. Assume that the sensor arrays are located at the 2-D coordinates (0,0) and (L,0) and that the unknown source location is (x,y). From knowledge of the sensor array positions and the two directions of arrival at the arrays, {\theta }_{1} and {\theta }_{2}, you can compute the (x,y) coordinates from

L=y\mathrm{tan}{\theta }_{1}+y\mathrm{tan}{\theta }_{2}

which you can solve for y,

y=L/\left(\mathrm{tan}{\theta }_{1}+\mathrm{tan}{\theta }_{2}\right)

and then for x:

x=y\mathrm{tan}{\theta }_{1}
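These formulas can be checked numerically in plain Python, independent of the toolbox, using the example geometry below (arrays at (0,0) and (50,0), source at (30,100)):

```python
import math

def triangulate(L, theta1_deg, theta2_deg):
    """Intersect the two bearing lines; angles are measured from each
    array's +y boresight toward the other array, in degrees."""
    t1 = math.tan(math.radians(theta1_deg))
    t2 = math.tan(math.radians(theta2_deg))
    y = L / (t1 + t2)
    return y * t1, y

# Ideal (noise-free) bearings for a source at (30, 100):
theta1 = math.degrees(math.atan2(30, 100))        # from the array at (0, 0)
theta2 = math.degrees(math.atan2(50 - 30, 100))   # from the array at (50, 0)
x, y = triangulate(50, theta1, theta2)
assert abs(x - 30) < 1e-9 and abs(y - 100) < 1e-9
```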
The remainder of this example shows how you can use the functions and System objects of the Phased Array System Toolbox™ to compute source position.
Set up two receiving 4-element ULAs aligned along the x-axis of the global coordinate system and spaced 50 meters apart. The phase center of the first array is (0,0,0). The phase center of the second array is (50,0,0). The source is located at (30,100) meters. As indicated in the figure, the receiving arrays point in the +y direction and the source transmits in the -y direction.
Specify the baseline between sensor arrays.
L = 50; % baseline in meters
Create a 4-element receiver ULA of omnidirectional microphones. You can use the same phased.ULA System object™ for the phased.WidebandCollector and phased.GCCEstimator System objects for both arrays.
N = 4;
rxULA = phased.ULA('Element',phased.OmnidirectionalMicrophoneElement,...
    'NumElements',N);
Specify the position and orientation of the first sensor array. When you create a ULA, the array elements are automatically spaced along the y-axis. You must rotate the local axes of the array by 90° to align the elements along the x-axis of the global coordinate system.
rxpos1 = [0;0;0];
rxvel1 = [0;0;0];
rxax1 = azelaxes(90,0);
Specify the position and orientation of the second sensor array. Choose the local axes of the second array to align with the local axes of the first array.
rxpos2 = [L;0;0];
rxvel2 = [0;0;0];
rxax2 = rxax1;
Specify the signal source as a single omnidirectional transducer.
srcpos = [30;100;0];
srcvel = [0;0;0];
srcax = azelaxes(-90,0);
srcULA = phased.OmnidirectionalMicrophoneElement;
Choose the source signal to be a wideband LFM waveform. Assume the operating frequency of the system is 300 kHz and set the bandwidth of the signal to 100 kHz. Assume a maximum operating range of 150 m. Then, you can set the pulse repetition interval (PRI) and the pulse repetition frequency (PRF). Assume a 10% duty cycle and set the pulse width. Finally, use a speed of sound in an underwater channel of 1500 m/s.
Set the LFM waveform parameters and create the phased.LinearFMWaveform System object.
fc = 300e3;   % 300 kHz
c = 1500;     % 1500 m/s
dmax = 150;   % 150 m
pri = (2*dmax)/c;
prf = 1/pri;
bw = 100.0e3; % 100 kHz
fs = 2*bw;    % sample rate (twice the bandwidth)
waveform = phased.LinearFMWaveform('SampleRate',fs,'SweepBandwidth',bw,...
    'PRF',prf,'PulseWidth',pri/10);
The transmit signal can then be generated as
signal = waveform();
Modeling the radiation and propagation for wideband systems is more complicated than modeling narrowband systems. For example, the attenuation depends on frequency. The Doppler shift as well as the phase shifts among elements due to the signal incoming direction also vary according to the frequency. Thus, it is critical to model those behaviors when dealing with wideband signals. This example uses a subband approach.
Set the number of subbands to 128.
nfft = 128;
Specify the source radiator and the sensor array collectors.
radiator = phased.WidebandRadiator('Sensor',srcULA,...
'CarrierFrequency',fc,'NumSubbands',nfft);
collector1 = phased.WidebandCollector('Sensor',rxULA,...
    'PropagationSpeed',c,'SampleRate',fs,'CarrierFrequency',fc,...
    'NumSubbands',nfft);
collector2 = clone(collector1);
Create the wideband signal propagators for the paths from the source to the two sensor arrays.
channel1 = phased.WidebandFreeSpace('PropagationSpeed',c,...
    'SampleRate',fs,'OperatingFrequency',fc,'NumSubbands',nfft);
channel2 = clone(channel1);
Determine the propagation directions from the source to the sensor arrays. Propagation directions are with respect to the local coordinate system of the source.
[~,ang1t] = rangeangle(rxpos1,srcpos,srcax);
[~,ang2t] = rangeangle(rxpos2,srcpos,srcax);
Radiate the signal from the source in the directions of the sensor arrays.
sigt = radiator(signal,[ang1t ang2t]);
Then, propagate the signal to the sensor arrays.
sigp1 = channel1(sigt(:,1),srcpos,rxpos1,srcvel,rxvel1);
sigp2 = channel2(sigt(:,2),srcpos,rxpos2,srcvel,rxvel2);
Compute the arrival directions of the propagated signal at the sensor arrays. Because the collector response is a function of the directions of arrival in the sensor array local coordinate system, pass the local coordinate axes matrices to the rangeangle function.
[~,ang1r] = rangeangle(srcpos,rxpos1,rxax1);
[~,ang2r] = rangeangle(srcpos,rxpos2,rxax2);
Collect the signal at the receive sensor arrays.
sigr1 = collector1(sigp1,ang1r);
sigr2 = collector2(sigp2,ang2r);
Create the GCC-PHAT estimators.
doa1 = phased.GCCEstimator('SensorArray',rxULA,'SampleRate',fs,...
    'PropagationSpeed',c);
doa2 = clone(doa1);
Estimate the directions of arrival.
angest1 = doa1(sigr1);
angest2 = doa2(sigr2);
Triangulate the source position using the formulas established previously. Because the scenario is confined to the x-y plane, set the z-coordinate to zero.
yest = L/(abs(tand(angest1)) + abs(tand(angest2)));
xest = yest*abs(tand(angest1));
zest = 0;
srcpos_est = [xest;yest;zest]
srcpos_est = 3×1
The estimated source location matches the true location to within 30 cm.
This example showed how to perform source localization using triangulation. In particular, the example showed how to simulate, propagate, and process wideband signals. The GCC-PHAT algorithm is used to estimate the direction of arrival of a wideband signal.
"XXI" and "Twenty-One" redirect here. For other uses, see 21.
21 (twenty-one) is the natural number following 20 and preceding 22. In mathematics, 21 is:
a composite number, its proper divisors being 1, 3 and 7, and a deficient number, as the sum of these divisors is less than the number itself.
a Fibonacci number as it is the sum of the preceding terms in the sequence, 8 and 13.[1]
the fifth Motzkin number.[2]
a triangular number,[3] because it is the sum of the first six natural numbers (1 + 2 + 3 + 4 + 5 + 6 = 21).
an octagonal number.[4]
a Padovan number, preceded by the terms 9, 12, 16 (it is the sum of the first two of these) in the Padovan sequence.[5]
a Blum integer, since it is a semiprime with both its prime factors being Gaussian primes.[6]
the sum of the divisors of the first 5 positive integers (i.e., 1 + (1 + 2) + (1 + 3) + (1 + 2 + 4) + (1 + 5))
a Harshad number.[7]
a repdigit in base 4 (111₄ = 21).
the smallest number of differently sized squares needed to square the square.[8]
the largest n with this property: for any positive integers a, b such that a + b = n, at least one of \tfrac{a}{b} and \tfrac{b}{a} is a terminating decimal. A brief proof follows.

A fraction in lowest terms is a terminating decimal exactly when its denominator has no prime factors other than 2 and 5. So a necessary condition on n is that for any a coprime to n, at least one of a and n - a has no prime factors other than 2 and 5. Let A(n) denote the quantity of numbers smaller than n that have no prime factors other than 2 and 5 and that are coprime to n; pairing each such a with n - a, the condition forces

\frac{\varphi(n)}{2} \le A(n).

For sufficiently large n,

A(n) \sim \frac{\log_2(n)\log_5(n)}{2} = \frac{\ln^2(n)}{2\ln(2)\ln(5)},

while

\varphi(n) \sim \frac{n}{e^{\gamma}\,\ln\ln n},

so A(n) = o(\varphi(n)) as n goes to infinity, and the inequality \frac{\varphi(n)}{2} \le A(n) fails for all sufficiently large n.

In fact, for every n > 2,

A(n) < 1 + \log_2(n) + \frac{3\log_5(n)}{2} + \frac{\log_2(n)\log_5(n)}{2}

and

\varphi(n) > \frac{n}{e^{\gamma}\,\log\log n + \frac{3}{\log\log n}},

from which \frac{\varphi(n)}{2} \le A(n) fails whenever n > 273 (actually, whenever n > 33). Checking the remaining candidates directly shows that the property holds only for n = 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 15, 21.
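The claim is also easy to confirm by brute force; this sketch checks every n below 300:

```python
from math import gcd

def terminates(a, b):
    """True iff a/b is a terminating decimal: after reducing the fraction,
    the denominator has no prime factors other than 2 and 5."""
    d = b // gcd(a, b)
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

def has_property(n):
    """Every split a + (n - a) yields at least one terminating ratio."""
    return all(terminates(a, n - a) or terminates(n - a, a)
               for a in range(1, n))

assert [n for n in range(2, 300) if has_property(n)] == \
       [2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 15, 21]
```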
The atomic number of scandium.
It is very often the day of the solstices in both June and December, though the precise date varies by year.
In thirteen countries, 21 is the age of majority. See also: Coming of age.
In eight countries, 21 is the minimum age to purchase tobacco products.
In seventeen countries, 21 is the drinking age.
In nine countries, it is the voting age.
21 is the minimum age at which a person may gamble or enter casinos in most states (since alcohol is usually provided).
21 is the minimum age to purchase a handgun or handgun ammunition under federal law.
21 is the age at which one can purchase multiple tickets to an R-rated film.
In some states, 21 is the minimum age to accompany a learner driver, provided that the person supervising the learner has held a full driver license for a specified amount of time. See also: List of minimum driving ages.
For retired sports figures who wore this number, see List of retired numbers.
Twenty-one is a variation of street basketball, in which each player, of which there can be any number, plays for himself only (i.e. not part of a team); the name comes from the requisite number of baskets.
In three-on-three basketball games held under FIBA rules, branded as 3x3, the game ends by rule once either team has reached 21 points.
In AFL Women's, the top-level league of women's Australian rules football, each team is allowed a squad of 21 players (16 on the field and five interchanges).
Building called "21" in Zlín, Czech Republic.
Detail of the building entrance
The Twenty-first Amendment repealed the Eighteenth Amendment, thereby ending Prohibition.
The number of spots on a standard cubical (six-sided) die (1+2+3+4+5+6)
The number of firings in a 21-gun salute honoring royalty or leaders of countries
"Twenty One", a 1994 song by an Irish rock band The Cranberries
"21 Guns", a 2009 song by the punk-rock band Green Day
Twenty One Pilots, an American musical duo
There are 21 trump cards of the tarot deck if one does not consider The Fool to be a proper trump card.
The standard TCP/IP port number for FTP connection
The Twenty-One Demands were a set of demands which were sent to the Chinese government by the Japanese government of Okuma Shigenobu in 1915
21 Demands of MKS led to the foundation of Solidarity in Poland.
In Israel, the number is associated with the profile 21 (the military profile designation granting an exemption from the military service)
Duncan MacDougall reported that 21 grams is the weight of the soul, according to an experiment.
The number of the French department Côte-d'Or
Twenty-One (card game), an ancient card game in which the key value and highest-winning point total is 21
Blackjack, a modern version of Twenty-One played in casinos
The number of shillings in a guinea.
The number of solar rays in the flag of Kurdistan.
Twenty-One, an American game show that became the center of the 1950s quiz show scandals when it was shown to be rigged.
The number on the logo for the American game show Catch 21
The total number of Bitcoin to be released is 21 million.
Twenty-One, a 1991 British-American drama film directed by Don Boyd and starring Patsy Kensit.
^ "Sloane's A000217 : Triangular numbers". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 2016-05-31.
|
Tea - Ring of Brodgar
Skill(s) Required: Plant Lore
Object(s) Required: Water (1.0 liters), Teapot; Green Tea Leaves or Black Tea Leaves x2
Craft > Food > Brew tea
Tea is a Drink made from Green Tea Leaves or Black Tea Leaves; it restores stamina and reduces Food Satiations.
With a Teapot, a boiling Cauldron containing at least 1 liter of water, and two Green Tea Leaves or Black Tea Leaves, open the cauldron's interface and craft the items together to receive 1 liter of Piping Hot Tea.
Piping Hot Tea lasts for about 1 hour before turning into regular Tea and provides approximately a +20 quality increase when drunk hot.
The proper way to drink tea is to use your Teapot on a Mug and drink from the mug, but tea can also be stored in Waterskins, Waterflasks, Buckets, and Barrels.
Note that not drinking out of a proper vessel (in this case, the mug) halves the effective quality of your tea.
Quality 10 tea recovers 10% stamina and drains 20% energy per 0.05L sip. Higher quality tea will decrease the energy drain but stamina recovery remains the same at all qualities.
This makes high quality tea useful for performing stamina draining tasks without having to eat as much to recover, thus preserving your hunger bonus. Though energy is always shown as a whole number, it is not simply rounded up or down. For example, Q11 tea will alternate between draining 20% and 19% energy at regular intervals. Other drinkable liquids follow the same rules and formula.
The percent of energy drained per 0.05 L sip is:
{\displaystyle \left({\frac {1}{\sqrt {\mathrm{quality}/10}}}+1\right)\times 10}
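Expressed as code, the formula above looks like this (a sketch; the function name is mine, and the 20% stamina recovery and drain values come from the wiki text):

```python
import math

def energy_drain_per_sip(quality):
    """Percent of energy drained per 0.05 L sip, per the wiki formula."""
    return (1 / math.sqrt(quality / 10) + 1) * 10

# Q10 tea drains 20% energy per sip; higher-quality tea drains less.
print(energy_drain_per_sip(10))  # 20.0
print(energy_drain_per_sip(40))  # 15.0
```

Stamina recovery stays at 10% per sip regardless of quality, so only the energy cost improves as quality rises.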
There is no difference between the tea created from Green and Black Tea Leaves. However, black tea leaves can end up at higher quality if your herbalist table is of higher quality than the leaves being dried (output quality is the average of the inputted leaves and the table quality): green tea leaves are dried once, but black tea leaves are dried twice on the herbalist table and therefore receive the quality boost twice.
|
Transverse Momentum Distributions in AuAu and dAu Collisions at √s_NN = 200 GeV
Li-Li Wang, "Transverse Momentum Distributions in AuAu and dAu Collisions at GeV", Advances in High Energy Physics, vol. 2014, Article ID 731864, 6 pages, 2014. https://doi.org/10.1155/2014/731864
Li-Li Wang1
We study the transverse momentum distributions of identified particles produced in Au + Au and d + Au collisions at √s_NN = 200 GeV. The Tsallis description is applied in the multisource model. The results are compared with the experimental data in detail. We obtain some information about the thermodynamic properties of the matter produced in the collisions. The difference between the transverse momentum distributions in Au + Au and d + Au collisions is not significant.
Nucleus-nucleus collisions at high energy are important experiments for studying matter at extreme temperatures. The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) is a valuable tool to probe the quark-gluon plasma (QGP) produced in the collisions. In order to understand the QGP more deeply, scientists have built the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN). In high-energy collisions, thousands of final-state particles are produced per event. The investigation of the identified particles produced in the collisions brings valuable insight into the properties of the QGP. In Au + Au collisions, final-state particle yields provided information on the temperature and chemical potential by using a statistical model [1]. The transverse momentum of a particle is defined as p_T = √(p_x² + p_y²), where p_x and p_y are the momentum components in the plane transverse to the beam. The transverse momentum distributions of the final-state particles are among the first observables in high-energy experiments. To describe such many-particle systems, statistical approaches have been widely used over the past few years.
In order to describe the transverse momentum spectra of identified particles, Tsallis statistics has been utilized to understand particle production in high-energy physics and has been used to describe the transverse momentum spectra measured at RHIC [2] and at the LHC [3, 4]. In analyses of experimental data, the Tsallis distribution has gained prominence for its very good descriptions. Recently, the Tsallis distribution was improved to satisfy thermodynamic consistency in the case of the relativistic high-energy quantum distribution [5]. By fitting the data observed at the LHC, the temperature T and the nonextensivity parameter q have been estimated. One-particle rapidity (or pseudorapidity) distributions measured at RHIC are well described by the Ornstein-Uhlenbeck process [6, 7].
In our previous work [8], inclusive transverse momentum spectra of the η meson in Au-Au, d-Au, and p-p collisions were studied in the framework of a thermalized cylinder model. In the region of high transverse momentum, the distributions of the η meson have a tail part at the maximum RHIC energy. To explain the wider transverse momentum spectra, we considered the relative importance of hard and soft processes in particle production. The experimental data of the PHENIX Collaboration have been described by the improved cylinder model, which contains two fundamental components. The multisource thermal model was developed from the cylinder model [9–11]. In this paper, we consider the different longitudinal rapidities of the emission sources produced in Au + Au and d + Au collisions at 200 GeV and extend the one-source Tsallis distribution to a multisource Tsallis distribution in the picture of the multisource thermal model.
2. The Distribution Law of Particles Produced in Au + Au and d + Au Collisions at 200 GeV
At high energy, the primary nucleon-nucleon collision may be treated as a few sources. The participant nucleons in the primary collisions have probabilities to collide with later nucleons in cascade collisions. Furthermore, the particles produced in primary or cascade nucleon-nucleon collisions have probabilities to take part in secondary collisions with later nucleons and other particles. Each cascade or secondary collision is also treated as an emission source or a few emission sources. The identified particles are emitted from the emission sources produced in Au + Au and d + Au collisions at RHIC. According to the improved Tsallis distribution [5], the total number of particles is
N=gV\int \frac{d^{3}p}{(2\pi)^{3}}\left[1+(q-1)\frac{E-\mu}{T}\right]^{-q/(q-1)},
where p, E, T, μ, V, and g are the momentum, the energy, the temperature, the chemical potential, the volume, and the degeneracy factor, respectively. The parameter q characterizes the degree of nonequilibrium. Then, we have the momentum distribution
E\frac{d^{3}N}{dp^{3}}=\frac{gVE}{(2\pi)^{3}}\left[1+(q-1)\frac{E-\mu}{T}\right]^{-q/(q-1)}.
At midrapidity y = 0, for zero chemical potential, the transverse momentum spectrum is given by
\frac{d^{2}N}{dp_{T}\,dy}\Big|_{y=0}=\frac{gVp_{T}m_{T}}{(2\pi)^{2}}\left[1+(q-1)\frac{m_{T}}{T}\right]^{-q/(q-1)},
where m_T = √(p_T² + m²) is the transverse mass. This distribution is contributed by an emission source at midrapidity y = 0. Considering the contributions of the sources at the different rapidities [13], the spectrum is obtained by summing the one-source distributions over the source rapidities.
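The shape of the one-source midrapidity spectrum can be sketched numerically. The snippet below assumes the thermodynamically consistent Tsallis form of [5], with illustrative (not fitted) values T = 0.1 GeV and q = 1.1, and sets the overall normalization gV/(2π)² to 1:

```python
import math

def tsallis_pt_spectrum(pt, m, T=0.1, q=1.1):
    """One-source d^2N/(dpT dy) at y = 0, mu = 0; normalization set to 1.

    pt, m, T are in GeV; q is the nonextensivity parameter.
    """
    mt = math.sqrt(pt**2 + m**2)  # transverse mass
    power = -q / (q - 1.0)
    return pt * mt * (1.0 + (q - 1.0) * mt / T) ** power

# The spectrum falls with pT, with a power-law tail (pion mass 0.140 GeV).
low, high = tsallis_pt_spectrum(0.5, 0.140), tsallis_pt_spectrum(3.0, 0.140)
print(low > high > 0)  # True
```

As q → 1 the bracket tends to the Boltzmann factor exp(−m_T/T), so q measures the departure from an equilibrated exponential spectrum.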
Figure 1 shows the invariant yields of positive and negative pions as a function of the transverse momentum p_T for five centrality classes in Au + Au collisions and five centrality classes in d + Au collisions at 200 GeV. The scattered symbols denote the experimental data measured by the PHENIX Collaboration [12]. The yields are scaled by arbitrary factors indicated in the figure for the sake of clarity and for keeping the collision species grouped together. The lines are the results calculated by the model. The parameters T and q used in the calculations and the corresponding χ²/dof are given in Table 1. The maximum χ²/dof is 0.495. Our results for positive and negative pions are in good agreement with the experimental data for all concerned centralities. The values of the temperature increase slowly with increasing centrality. The parameter q does not change significantly. In both Au + Au and d + Au, the trends of T and q are the same.
Particles Collision Centralities q T (GeV) χ²/dof
Au + Au
0–10% 1.110 0.752 0.427
d + Au
The parameters T and q in Figure 1.
Invariant yield of positive and negative pions as a function of p_T in Au + Au and d + Au collisions at √s_NN = 200 GeV. The scattered symbols denote the experimental data measured by the PHENIX Collaboration [12]; statistical uncertainties are too small to be seen. The solid lines denote the results of the model.
In Figure 2, we show the transverse momentum spectra of positive and negative kaons in Au + Au and d + Au collisions at 200 GeV. The scattered symbols denote the experimental data measured by the PHENIX Collaboration [12], and the solid lines are the results calculated by the formula of the multisource thermal model. The parameters T and q are given in Table 2 with the corresponding χ²/dof. The mass of the kaon is heavier than that of the pion, but for both kaon charges at all concerned centralities our results are also in good agreement with the experimental data. The maximum χ²/dof is 0.425. Similarly, the values of the temperature increase slowly with centrality, and the parameter q hardly changes in both Au + Au and d + Au collisions.
Au + Au 0–10% 1.105 0.108 0.392
d + Au 0–20% 1.104 0.103 0.212
Figure 3 presents the invariant yields of protons and antiprotons for five centrality classes in Au + Au collisions and five centrality classes in d + Au collisions at 200 GeV. The scattered symbols denote the experimental data [12] in the different centrality cuts indicated in the figure. The solid lines are our results calculated by the model. The parameters T and q are given in Table 3 with χ²/dof. The range of χ²/dof is 0.151–1.440. Therefore, the model can approximately describe the experimental data for protons and antiprotons at all concerned centralities in the Au + Au and d + Au systems. It is also found that the temperature increases slowly with increasing centrality and that the parameter q does not change significantly in both Au + Au and d + Au collisions.
Invariant yield of protons and antiprotons as a function of p_T in Au + Au and d + Au collisions at 200 GeV. The scattered symbols denote the data measured by the PHENIX Collaboration [12]; statistical uncertainties are too small to be seen. The solid lines denote the results of the model.
We have studied the invariant yields of positive and negative pions, positive and negative kaons, protons, and antiprotons produced in Au + Au and d + Au collisions at √s_NN = 200 GeV in the framework of the multisource model combined with Tsallis statistics. A formula was introduced to describe the transverse momentum distributions and to obtain the parameter q and the temperature T. For the two collision systems Au + Au and d + Au at high energy, the mechanism of particle production shares common inherent and fundamental laws, so the identified particles can be described in the same model. In recent years, particle production in high-energy ion collisions has attracted much attention as a way to understand the strongly coupled QGP (sQGP) by analyzing the production mechanisms [14, 15]. Thermal-statistical models have succeeded in describing particle yields in various collision systems at different energies [10, 11, 16]. In rapidity space, different sources of final-state particles stay at different positions due to the strong longitudinal flow [17].
In our previous work, the transverse momentum distributions of the η meson in Au + Au, d + Au, and p + p collisions were investigated in the framework of a thermalized cylinder model. There is a tail part in the transverse momentum distributions of η mesons at RHIC energies. To explain the wider transverse momentum spectra, hard and soft processes have been taken into account in particle production. The improved cylinder model with a two-component distribution is successful in describing η meson production, but it provides only an indirect association with the temperature of the emission sources. The multisource thermal model was improved from the cylinder model. In this paper, we considered the different longitudinal rapidities of the emission sources created in Au + Au and d + Au collisions at 200 GeV and extended the improved one-source Tsallis distribution to a multisource Tsallis distribution in the picture of the multisource thermal model. A relativistic treatment of the transverse direction would be needed as long as the stochastic approach is adopted. Our results are in agreement with the experimental data of the PHENIX Collaboration. Even more importantly, the model can quantitatively provide temperature information about the emission sources.
The author would like to thank Dr. J. H. Kang for her guidance throughout the work. The author thanks also Dr. B. C. Li for his improvements to the paper.
J. Adams, C. Adler, M. M. Aggarwal et al., “Identified particle distributions in pp and Au+Au collisions at √s_NN = 200 GeV,” Physical Review Letters, vol. 92, Article ID 112301, 2004. View at: Google Scholar
“… p+p collisions at √s = 200 and 62.4 GeV,” Physical Review C, vol. 83, Article ID 064903, 2011. View at: Google Scholar
V. Khachatryan, A. M. Sirunyan, A. Tumasyan et al., “Strange particle production in pp collisions at √s = 0.9 and 7 TeV,” Journal of High Energy Physics, vol. 1105, article 064, 2011. View at: Publisher Site | Google Scholar
G. Aad, B. Abbott, J. Abdallah et al., “Charged-particle multiplicities in pp interactions measured with the ATLAS detector at the LHC,” New Journal of Physics, vol. 13, Article ID 053033, 68 pages, 2011. View at: Google Scholar
“… √s = 0.9 TeV at the LHC,” Journal of Physics G: Nuclear and Particle Physics, vol. 39, Article ID 025006, 2012. View at: Publisher Site | Google Scholar
G. Wolschin, “Diffusion and local deconfinement in relativistic systems,” Physical Review C, vol. 69, Article ID 024906, 2004. View at: Publisher Site | Google Scholar
N. Suzuki and M. Biyajima, “Transverse momentum distribution with radial flow in relativistic diffusion model,” International Journal of Modern Physics E, vol. 16, p. 133, 2007. View at: Publisher Site | Google Scholar
L.-L. Wang, “Emission of η meson with high transverse momentum in Au-Au, d-Au and p-p collisions at √s_NN = 200 GeV,” Indian Journal of Physics, vol. 87, no. 6, pp. 575–579, 2013. View at: Publisher Site | Google Scholar
B.-C. Li, Y. Fu, L.-L. Wang, and F.-H. Liu, “Dependence of elliptic flows on transverse momentum and number of participants in Au+Au collisions at √s_NN = 200 GeV,” Journal of Physics G: Nuclear and Particle Physics, vol. 40, no. 2, Article ID 025104, 2013. View at: Publisher Site | Google Scholar
“… √s_NN = 200 GeV,” Physical Review C, vol. 88, Article ID 024906, 2013. View at: Google Scholar
B. C. Li, Y. Z. Wang, F. H. Liu, X. J. Wen, and Y. E. Dong, “Particle production in relativistic pp(p̄) collisions at RHIC and LHC energies with Tsallis statistics using the two-cylindrical multisource thermal model,” Physical Review D, vol. 89, Article ID 054014, 2014. View at: Google Scholar
A. Adare, S. Afanasiev, C. Aidala et al., “Azimuthal anisotropy of π⁰ and η mesons in Au + Au collisions at √s_NN = 200 GeV.”
B. B. Abelev, J. Adam, D. Adamova et al., “K_S⁰ and Λ production in Pb-Pb collisions at √s_NN = 2.76 TeV,” Physical Review Letters, vol. 111, Article ID 222301, 2013. View at: Google Scholar
P. Braun-Munzinger, J. Stachel, J. P. Wessels, and N. Xu, “Thermal equilibration and expansion in nucleus-nucleus collisions at the AGS,” Physics Letters B: Nuclear, Elementary Particle and High-Energy Physics, vol. 344, pp. 43–48, 1995. View at: Google Scholar
Copyright © 2014 Li-Li Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The publication of this article was funded by SCOAP3.
|
A Beginner’s Guide to Integrals | Outlier
Whereas we use derivatives in calculus to compute instantaneous rates of change of functions, integrals measure net change or total change of functions over an interval.
For example, if you were driving along an interstate highway and you had a function
f\left(t\right)
that measured your speed (in miles per hour) as a function of the time t (in hours) since you started driving, then the integral
\int_2^4f\left(t\right)\operatorname dt
would represent the total distance traveled between hours 2 and 4 of your trip.
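Numerically, the definite integral can be approximated by slicing the interval into many thin rectangles. Here is a sketch using a made-up speed function (60 + 5t mph, not from any real trip) and a left Riemann sum:

```python
def riemann_sum(f, a, b, n=100000):
    """Approximate the integral of f from a to b with a left Riemann sum."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

# Hypothetical speed function (mph). Distance traveled from hour 2 to hour 4:
speed = lambda t: 60 + 5 * t
print(riemann_sum(speed, 2, 4))  # close to the exact value, 150 miles
```

The exact value here is ∫₂⁴ (60 + 5t) dt = 150, and the sum converges to it as n grows.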
Integrals are used throughout physics, engineering, and math to compute quantities such as area, volume, mass, physical work, and more. In this article, we’ll explore the basics behind integrals, the difference between definite and indefinite integrals, and some basic strategies for computing them.
All of the various applications of integrals we mentioned in the previous section are examples of definite integrals. In general, a definite integral looks like this:
\int_a^bf\left(x\right)\operatorname dx
The mathematical definition of a definite integral is a little complicated, but in practice we can think of the notation above as representing the net change of the function
f
over the interval from
x=a
x=b
. This value happens to coincide with the signed area between the graph of
and the x-axis:
Indefinite integrals arise from the way that we typically compute definite integrals. Most of the time, we evaluate definite integrals using the Fundamental Theorem of Calculus (or FTC):
\int_a^bf\left(x\right)\operatorname dx=F\left(b\right)-F\left(a\right)\newline\text{ where }F'\left(x\right)=f\left(x\right)
In words, to evaluate the integral of a function
f
a\leq x\leq b
, we need to find a function
F
whose derivative is
f
; evaluate that function at the two endpoints of the interval; and then subtract those two values. For example, to compute an integral of the function
f\left(x\right)=x^2
, we could use the function
F\left(x\right)=\frac13x^3
, since the derivative of the latter is
x^2
\int_1^3x^2\operatorname dx=\frac13\left(3\right)^3-\frac13\left(1\right)^3=\frac{26}3
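We can sanity-check the FTC value against a direct numeric approximation (a sketch using the midpoint rule; the function name is mine):

```python
def midpoint_integral(f, a, b, n=10000):
    """Midpoint-rule approximation of the definite integral of f on [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# FTC: integral of x^2 from 1 to 3 is (1/3)*3^3 - (1/3)*1^3 = 26/3.
ftc_value = 3**3 / 3 - 1**3 / 3
approx = midpoint_integral(lambda x: x**2, 1, 3)
print(abs(ftc_value - approx) < 1e-6)  # True
```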
F
used in the FTC is called an antiderivative of
: it’s a function whose derivative is
f
. This is where indefinite integrals come in: we use the notation
\int f\left(x\right)\operatorname dx
for the general antiderivative of
. Based on the example above, we would have
\int x^2\operatorname dx=\frac13x^3+C
where the extra "+C" term is there to account for all of the functions whose derivative is x^2.
In summary: definite integrals—those with bounds on the integral symbol—are extremely useful for computing areas and net changes of functions, and we can extend them to compute other quantities, such as mass and volume. To actually evaluate a definite integral, however, we need to find an indefinite integral (i.e., an antiderivative).
The process of computing antiderivatives can be tricky, as it’s not as algorithmic as finding derivatives. Fortunately, to get us started using integrals, there are a few functions for which finding the antiderivative is not too difficult. Here are the indefinite integrals of some simple functions that you’ll encounter frequently.
Integrals of Power Functions
Just like derivatives, there’s a “Power Rule” for integrals. Can you guess what it is from the pattern in this table?
We'll cover the general rule in greater depth below, in the Power Rule section.
The exponential and logarithmic functions play key roles throughout both theoretical and applied mathematics. Notice that because
e^x
is its own derivative, it’s also its own antiderivative!
Trigonometric functions are useful in any situation that involves periodic behavior, where a function takes on the same values in repeating intervals.
Outlier Instructor Hannah Fry Explains Integration Rules
As with derivatives, there are a handful of rules we can use to find antiderivatives of complicated functions based on our knowledge of simpler antiderivatives. For each rule, we’ve provided a few examples to show how it works.
The Power Rule for integrals is something of an opposite to the usual Power Rule for differentiation:
\int x^k\operatorname dx=\frac1{k+1}x^{k+1}+C
Notice that this doesn’t work if
k=-1
, since then we have division by zero on the right side of the equation. (The antiderivative of
x^{-1}
is shown in the table in the previous section.) Here are a couple examples of the Power Rule:
\int x^5\operatorname dx=\frac1{5+1}x^{5+1}+C=\frac16x^6+C
\int x^\frac13\operatorname dx=\frac1{{\displaystyle\frac13}+1}x^{\frac13+1}+C=\frac34x^\frac43+C
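The Power Rule is mechanical enough to express as a tiny function. This sketch represents the antiderivative of x^k by its coefficient and exponent, restricted to integer k ≠ −1 so exact fractions can be used:

```python
from fractions import Fraction

def power_rule(k):
    """Antiderivative of x**k for integer k != -1: (coefficient, exponent)."""
    if k == -1:
        raise ValueError("the antiderivative of x**-1 is ln|x|, not a power")
    return Fraction(1, k + 1), k + 1

print(power_rule(5))  # (Fraction(1, 6), 6), i.e. (1/6) x^6 + C
```

Remember that the "+C" is implicit: the function returns only the power-function part of the antiderivative.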
Because derivatives and integrals are closely related, they obey some similar algebraic properties. In particular, integrals follow the same rules regarding sums, differences, and constant multiples of functions:
\int\left(f\left(x\right)+g\left(x\right)\right)\operatorname dx=\int f\left(x\right)\operatorname dx+\int g\left(x\right)\operatorname dx
\int\left(f\left(x\right)-g\left(x\right)\right)\operatorname dx=\int f\left(x\right)\operatorname dx-\int g\left(x\right)\operatorname dx
\int c\;f\left(x\right)\operatorname dx=c\int f\left(x\right)\operatorname dx
So the integral of a sum or difference is the same as the sum or difference of the integrals, and we can “pull” constants outside of integrals. Here are a couple examples of those three rules used together:
\int\left(e^x-\sin\;x\right)\operatorname dx=\int e^x\operatorname dx-\int\sin\;x\operatorname dx=e^x-\left(-\cos\;x\right)+C=e^x+\cos\;x+C
\int\left(6x^3+5x^2-4x\right)\operatorname dx
=6\int x^3\operatorname dx+5\int x^2\operatorname dx-4\int x\operatorname dx
=6\left(\frac14x^4\right)+5\left(\frac13x^3\right)-4\left(\frac12x^2\right)+C
=\frac32x^4+\frac53x^3-2x^2+C
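A quick way to check a result like this is to differentiate it numerically and compare with the original integrand (a sketch using a central difference; any constant C drops out of the derivative):

```python
def numeric_derivative(F, x, h=1e-6):
    """Central-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

F = lambda x: 1.5 * x**4 + (5/3) * x**3 - 2 * x**2  # candidate antiderivative
f = lambda x: 6 * x**3 + 5 * x**2 - 4 * x           # original integrand

print(all(abs(numeric_derivative(F, x) - f(x)) < 1e-4 for x in [0.5, 1.0, 2.0]))  # True
```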
This is probably the most useful rule for computing integrals, even though it takes some time to master it. On paper, the formula for substitution is, once again, an opposite of sorts, this time to the Chain Rule for derivatives:
\int f'\left(u\left(x\right)\right)u'\left(x\right)\operatorname dx=f\left(u\left(x\right)\right)+C
We can apply substitution to an integral whenever we can identify a piece
\left(u\left(x\right)\right)
of the integrand whose derivative \left(u'\left(x\right)\right) also appears in the integrand. Let’s work through an example to see how this goes. This time, we’ll find the indefinite integral of:
\int2x\;e^{x^2}\operatorname dx
Notice that the integrand
2x\;e^x
fits the criterion for using substitution: if we define
u\left(x\right)=x^2
, then the derivative
u'\left(x\right)=2x
also appears in the integrand. So we substitute
u=x^2
into this integral. We replace
x^2
u
2x\operatorname dx
du
\int2x\;e^{x^2}\operatorname dx\rightarrow\int e^u\operatorname du
This leaves us with a simpler integral that we can compute (using the tables above, for instance):
\int e^u\operatorname du=e^u+C
To finish the process, we plug
u=x^2
back into the antiderivative we found:
\int2x\;e^{x^2}\operatorname dx=e^{x^2}+C
(Remember, the goal with integrals is to find functions whose derivatives are the functions we are given. So we can always check our answer by computing
\frac d{\operatorname dx}\left(e^{x^2}+C\right)=2x\;e^{x^2}
via the Chain Rule.)
Here’s another example. Let’s compute
\int\frac x{\sqrt{x^2-1}}\operatorname dx
We’d like to substitute
u=x^2-1
here, but the derivative of that is
2x
, which doesn’t appear in the integrand. However, we can rewrite the integral in a clever way to make that happen:
\int\frac x{\sqrt{x^2-1}}\operatorname dx=\int\frac12\frac{2x}{\sqrt{x^2-1}}dx
u=x^2-1
\operatorname du=2x\operatorname dx
, which simplifies the integral into something we can compute using the Power Rule:
\int\frac x{\sqrt{x^2-1}}\operatorname dx=\int\frac12\frac{2x}{\sqrt{x^2-1}}dx=\int\frac12\frac1{\sqrt u}du=\int\frac12u^{-\frac12}\operatorname du=\frac12\left(2u^\frac12\right)+C=\sqrt u+C
To finish up, we need to substitute
u=x^2-1
back in to get our result in terms of
x
\int\frac x{\sqrt{x^2-1}}\operatorname dx=\sqrt{x^2-1}+C
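As before, the result can be verified by differentiation: d/dx √(x² − 1) should reproduce the integrand x/√(x² − 1). A numeric sketch (valid for x > 1, where the square root is defined):

```python
import math

def numeric_derivative(F, x, h=1e-6):
    """Central-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

F = lambda x: math.sqrt(x**2 - 1)      # antiderivative found by substitution
f = lambda x: x / math.sqrt(x**2 - 1)  # original integrand

print(all(abs(numeric_derivative(F, x) - f(x)) < 1e-6 for x in [1.5, 2.0, 5.0]))  # True
```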
|
\mathbf{F}=z \mathbf{i}-x \mathbf{j}-y \mathbf{k}
C
S
Figure 9.9.1(a) shows arrows of the curl field of F, along with the upper hemisphere
S
. The black arrow at the "north pole" is a representative normal taken on
S
The curl of F:
∇×\mathbf{F}=\left|\begin{array}{ccc}\mathbf{i}& \mathbf{j}& \mathbf{k}\\ {∂}_{x}& {∂}_{y}& {∂}_{z}\\ z& -x& -y\end{array}\right|
\left[\begin{array}{c}-1\\ 1\\ -1\end{array}\right]
A unit normal on
S
can be obtained by normalizing
{\mathbf{R}}_{x}×{\mathbf{R}}_{y}
, where R is a position-vector representation of
S
local F,p;
F:=VectorField(<z,-x,-y>);
p:=Flux(Curl(F),Surface(<x,y,sqrt(-x^2-y^2+1)>,x=-1..1,y=-sqrt(-x^2+1)..sqrt(-x^2+1)),output=plot,fieldoptions=[grid=[3,3,3]],scaling=constrained,caption="",tickmarks=[3,3,3],axes=frame,orientation=[155,85,0]);
∇×\mathbf{F}
S
Z=\sqrt{1-{x}^{2}-{y}^{2}}
be a Cartesian representation of the upper hemisphere
S
\mathbf{R}=\left[\begin{array}{c}x\\ y\\ Z\end{array}\right]
⇒
{\mathbf{R}}_{x}×{\mathbf{R}}_{y}
\left|\begin{array}{ccc}\mathbf{i}& \mathbf{j}& \mathbf{k}\\ 1& 0& -x/Z\\ 0& 1& -y/Z\end{array}\right|
\left[\begin{array}{c}x/Z\\ y/Z\\ 1\end{array}\right]
\mathbf{N}=\left[\begin{array}{c}x\\ y\\ Z\end{array}\right]
The element of surface area can be obtained from
\sqrt{1+{Z}_{x}^{2}+{Z}_{y}^{2}}=\sqrt{1+{\left(x/Z\right)}^{2}+{\left(y/Z\right)}^{2}}
1/Z
∥{\mathbf{R}}_{x}×{\mathbf{R}}_{y}∥=1/Z
. In either event,
\left(∇×\mathbf{F}\right)·\mathbf{N} \mathrm{dσ}=\left[\begin{array}{c}-1\\ 1\\ -1\end{array}\right]·\left[\begin{array}{c}x\\ y\\ Z\end{array}\right]\cdot \frac{1}{Z} dA=\left(\frac{y}{Z}-\frac{x}{Z}-1\right)\mathrm{dA}
∫{∫}_{S}\left(∇×\mathbf{F}\right)·\mathbf{N} \mathrm{dσ}
can be implemented in Cartesian coordinates as
{∫}_{-1}^{1}{∫}_{-\sqrt{1-{x}^{2}}}^{{\sqrt{1-{x}^{2}}}_{}}\left(\frac{y}{\sqrt{1-{x}^{2}-{y}^{2}}}-\frac{x}{\sqrt{1-{x}^{2}-{y}^{2}}}-1\right) \mathrm{dy} \mathrm{dx}=-\mathrm{π}
or in polar coordinates as
{∫}_{0}^{1}{∫}_{0}^{2 \mathrm{π}}r\left(\frac{r \mathrm{sin}\left(\mathrm{θ}\right)}{\sqrt{1-{r}^{2}}}-\frac{r \mathrm{cos}\left(\mathrm{θ}\right)}{\sqrt{1-{r}^{2}}}-1\right) \mathrm{dθ} \mathrm{dr}= -\mathrm{π}
The line integral around
C
, the unit circle centered at the origin, given by
{∳}_{C}\mathbf{F}·\mathbf{dr}
, can be evaluated if
C
\mathbf{r}=\mathrm{cos}\left(t\right) \mathbf{i}+\mathrm{sin}\left(t\right) \mathbf{j}+0 \mathbf{k}
C
\mathbf{F}·\mathbf{dr}=\left[\begin{array}{c}0\\ -\mathrm{cos}(t)\\ -\mathrm{sin}(t)\end{array}\right]·\left[\begin{array}{c}-\mathrm{sin}(t) \mathrm{dt}\\ \mathrm{cos}(t) \mathrm{dt}\\ 0\end{array}\right]= -{\mathrm{cos}}^{2}\left(t\right) \mathrm{dt}
{∳}_{C}\mathbf{F}·\mathbf{dr}
-{∫}_{0}^{2 \mathrm{π}}{\mathrm{cos}}^{2}\left(t\right) \mathrm{dt}= -\mathrm{π}
The parametrization chosen for
C
induces a counterclockwise traverse of the circle, an orientation consistent with the choice of an outward normal on
S
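Both sides of this instance of Stokes' theorem can be cross-checked numerically outside Maple. The sketch below (not part of the original worksheet) uses midpoint sums for the line integral of −cos²t over [0, 2π] and for the polar-coordinate flux integrand over the unit disk; both should come out near −π:

```python
import math

def line_integral(n=100000):
    """Midpoint sum of -cos(t)^2 dt over [0, 2*pi]."""
    dt = 2 * math.pi / n
    return sum(-math.cos((k + 0.5) * dt) ** 2 for k in range(n)) * dt

def surface_integral(nr=400, nt=400):
    """Midpoint sum of r*(r*sin(t)/sqrt(1-r^2) - r*cos(t)/sqrt(1-r^2) - 1)."""
    total = 0.0
    dr, dt = 1.0 / nr, 2 * math.pi / nt
    for i in range(nr):
        r = (i + 0.5) * dr
        root = math.sqrt(1 - r * r)
        for j in range(nt):
            t = (j + 0.5) * dt
            total += r * (r * math.sin(t) / root - r * math.cos(t) / root - 1) * dr * dt
    return total

print(line_integral())     # approximately -pi
print(surface_integral())  # approximately -pi
```

The sin and cos terms cancel over the full θ range, so the surface sum reduces to −∫₀¹ 2πr dr = −π, matching the line integral as Stokes' theorem requires.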
〈z,-x,-y〉
\left[\begin{array}{c}\textcolor[rgb]{0,0,1}{z}\\ \textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\\ \textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}\end{array}\right]
\stackrel{\text{to Vector Field}}{\to }
\left[\begin{array}{c}\textcolor[rgb]{0,0,1}{z}\\ \textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\\ \textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}\end{array}\right]
\stackrel{\text{assign to a name}}{\to }
\textcolor[rgb]{0,0,1}{F}
∇×\mathbf{F}
S
∇×\mathbf{F}
\left[\begin{array}{r}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}\end{array}\right]
\stackrel{\text{assign to a name}}{\to }
\textcolor[rgb]{0,0,1}{\mathrm{curlF}}
Table 9.9.1(a) contains a task template with which the flux of
∇×\mathbf{F}
S
is computed. Should the "Clear All and Reset" button in the Task Template be pressed, all the data that has been input to the template will be lost. In that event, the reader should simply re-launch the example to recover the appropriate inputs to the template.
Table 9.9.1(a) Flux of
∇×\mathbf{F}
S
Table 9.9.1(b) contains the calculation of the line integral
{∳}_{C}\mathbf{F}·\mathbf{dr}
C
is the circle capped by
S
(Complete the Line Integral Domain dialog as per Figure 9.9.1(b).)
Figure 9.9.1(b) Line Integral Domain dialog
\mathbf{F}
\left[\begin{array}{c}\textcolor[rgb]{0,0,1}{z}\\ \textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{x}\\ \textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}\end{array}\right]
\stackrel{\text{line integral}}{\to }
{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{\mathrm{π}}}{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{1}}\left(\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{r}\right)\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{r}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{t}
=
\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{π}}
Table 9.9.1(b) Line integral: tangential component of F along the circle
C
C
\mathbf{r}=\mathrm{cos}\left(t\right) \mathbf{i}+\mathrm{sin}\left(t\right) \mathbf{j}+0 \mathbf{k}
C
\mathbf{F}·\mathbf{dr}=\left[\begin{array}{c}0\\ -\mathrm{cos}(t)\\ -\mathrm{sin}(t)\end{array}\right]·\left[\begin{array}{c}-\mathrm{sin}(t) \mathrm{dt}\\ \mathrm{cos}(t) \mathrm{dt}\\ 0\end{array}\right]= -{\mathrm{cos}}^{2}\left(t\right) \mathrm{dt}
{∫}_{C}\mathbf{F}·\mathbf{dr}
-{∫}_{0}^{2 \mathrm{π}}{\mathrm{cos}}^{2}\left(t\right) \mathit{ⅆ}t
-\mathrm{\pi }
\mathrm{with}\left(\mathrm{Student}:-\mathrm{VectorCalculus}\right):
\mathrm{BasisFormat}\left(\mathrm{false}\right):
\mathbf{F}≔\mathrm{VectorField}\left(〈z,-x,-y〉\right):
Define the upper hemisphere as
Z=z\left(x,y\right)
Z≔\sqrt{1-{x}^{2}-{y}^{2}}:
∇×\mathbf{F}
S
\mathrm{Flux}\left(\mathrm{Curl}\left(\mathbf{F}\right),\mathrm{Surface}\left(〈x,y,Z〉,\left[x,y\right]=\mathrm{Circle}\left(〈0,0〉,1,\left[r,\mathrm{θ}\right]\right)\right),\mathrm{output}=\mathrm{integral}\right)
{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{\mathrm{π}}}\left(\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{r}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{r}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{\mathrm{cos}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{\mathrm{θ}}\right)\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{r}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{\mathrm{sin}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{\mathrm{θ}}\right)\textcolor[rgb]{0,0,1}{+}\sqrt{\textcolor[rgb]{0,0,1}{-}{\textcolor[rgb]{0,0,1}{r}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}}\right)}{\sqrt{\textcolor[rgb]{0,0,1}{-}{\textcolor[rgb]{0,0,1}{r}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}}}\right)\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{\mathrm{θ}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{r}
\mathrm{Flux}\left(\mathrm{Curl}\left(\mathbf{F}\right),\mathrm{Surface}\left(〈x,y,Z〉,\left[x,y\right]=\mathrm{Circle}\left(〈0,0〉,1,\left[r,\mathrm{\theta }\right]\right)\right)\right)
\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{π}}
Use the LineInt command to form and evaluate
{∳}_{C}\mathbf{F}·\mathbf{dr}
\mathrm{LineInt}\left(\mathbf{F},\mathrm{Circle3D}\left(〈0,0,0〉,1,〈0,0,1〉\right),\mathrm{output}=\mathrm{integral}\right)
{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{\mathrm{π}}}{\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{∫}}_{\textcolor[rgb]{0,0,1}{0}}^{\textcolor[rgb]{0,0,1}{1}}\left(\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{r}\right)\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{r}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{ⅆ}\textcolor[rgb]{0,0,1}{t}
\mathrm{LineInt}\left(\mathbf{F},\mathrm{Circle3D}\left(〈0,0,0〉,1,〈0,0,1〉\right)\right)
\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{π}}
Figure 9.9.1(a) can be obtained with the Flux command, provided the integration is implemented in Cartesian coordinates with the following syntax. The actual options applied in the figure can be seen in the code hidden in the table cell containing the figure.
\mathrm{Flux}\left(\mathrm{Curl}\left(\mathbf{F}\right),\mathrm{Surface}\left(〈x,y,Z〉,x=-1..1,y=-\sqrt{1-{x}^{2}}..\sqrt{1-{x}^{2}}\right),\mathrm{output}=\mathrm{plot}\right):
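Outside Maple, the value −π can be confirmed by approximating the line integral numerically; a minimal Python sketch (the helper name is illustrative):

```python
import math

# Approximate the line integral of F = (z, -x, -y) around the unit circle
# x^2 + y^2 = 1, z = 0, traversed counterclockwise: r(t) = (cos t, sin t, 0).
# The integrand reduces to F . dr = -cos^2(t) dt, so the exact value is -pi.
def line_integral(n=100_000):
    dt = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        t = k * dt
        x, y, z = math.cos(t), math.sin(t), 0.0
        fx, fy, fz = z, -x, -y                         # the vector field F
        dx, dy, dz = -math.sin(t) * dt, math.cos(t) * dt, 0.0
        total += fx * dx + fy * dy + fz * dz
    return total

print(line_integral())  # close to -pi = -3.14159...
```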
|
OrePoly Structure - Maple Help
Home : Support : Online Help : Mathematics : Algebra : Skew Polynomials : OreTools : OrePoly Structure
The OrePoly Structure
An Ore polynomial is represented by an OrePoly structure, which consists of the constructor OrePoly applied to a sequence of coefficients, starting with the coefficient of degree zero. For example, in the differential case with the differential operator D, OrePoly(2/x, x, x+1, 1) represents the operator 2/x+xD+(x+1)D^2+D^3.
\mathrm{with}\left(\mathrm{OreTools}\right):
A≔\mathrm{SetOreRing}\left(x,'\mathrm{differential}'\right)
\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{UnivariateOreRing}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{differential}}\right)
\mathrm{Poly}≔\mathrm{OrePoly}\left(\frac{2\left(3x+1\right)}{{x}^{2}\left(4+27x\right)},-\frac{2}{x\left(4+27x\right)},1\right)
\textcolor[rgb]{0,0,1}{\mathrm{Poly}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{OrePoly}}\textcolor[rgb]{0,0,1}{}\left(\frac{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)}{{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{27}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\right)}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{2}}{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{27}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\right)}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\right)
\mathrm{Apply}\left(\mathrm{Poly},f\left(x\right),A\right)
\frac{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)}{{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{27}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\right)}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\left(\frac{\textcolor[rgb]{0,0,1}{ⅆ}}{\textcolor[rgb]{0,0,1}{ⅆ}\textcolor[rgb]{0,0,1}{x}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}\textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)\right)}{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{27}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\right)}\textcolor[rgb]{0,0,1}{+}\frac{{\textcolor[rgb]{0,0,1}{ⅆ}}^{\textcolor[rgb]{0,0,1}{2}}}{\textcolor[rgb]{0,0,1}{ⅆ}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}\textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)
A≔\mathrm{SetOreRing}\left(n,'\mathrm{shift}'\right)
\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{UnivariateOreRing}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{shift}}\right)
\mathrm{Poly}≔\mathrm{OrePoly}\left(1,-2,-2,1\right)
\textcolor[rgb]{0,0,1}{\mathrm{Poly}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{OrePoly}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{-2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{-2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\right)
\mathrm{Apply}\left(\mathrm{Poly},s\left(n\right),A\right)
\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{n}\right)\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{2}\right)\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{3}\right)
Define the q-shift algebra.
A≔\mathrm{SetOreRing}\left([x,q],'\mathrm{qshift}'\right)
\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{UnivariateOreRing}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{qshift}}\right)
\mathrm{Poly}≔\mathrm{OrePoly}\left(-q\left(1-qx\right),1\right)
\textcolor[rgb]{0,0,1}{\mathrm{Poly}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{OrePoly}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{q}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{q}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}\right)
\mathrm{Apply}\left(\mathrm{Poly},s\left(x\right),A\right)
\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{q}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{q}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{q}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\right)
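For the shift case, the action of an OrePoly structure on a concrete sequence can be mimicked in plain Python; a small sketch (the function name is chosen here and is not part of OreTools):

```python
# Apply a shift-case Ore polynomial OrePoly(c0, c1, ..., ck) to a sequence s(n):
# the result is c0*s(n) + c1*s(n+1) + ... + ck*s(n+k).
def apply_shift_ore_poly(coeffs, s, n):
    return sum(c * s(n + i) for i, c in enumerate(coeffs))

# Check against the example above, with the concrete sequence s(n) = n^2:
s = lambda n: n ** 2
result = apply_shift_ore_poly([1, -2, -2, 1], s, 1)
# s(1) - 2*s(2) - 2*s(3) + s(4) = 1 - 8 - 18 + 16
print(result)  # -9
```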
|
Weißbier - Ring of Brodgar
Skill(s) Required Carpentry, Farming, Pottery, Metal Working
Object(s) Required Wheat Wort
Produced By Demijohn
Weißbier (pronounced "vice-beer"; German for "white beer") buffs Vegetables satiation by 0.5% and Fish satiation by 1.0% per gulp at Q10. It also gives a negative satiation against itself, which results in diminishing returns.
It is made the same way as normal Beer, except you must use Seeds of Wheat in all crafts that would normally require Seeds of Barley.
Place Seeds of Wheat on a Herbalist Table and wait for the seeds to germinate.
Roast Seeds of Sprouted Wheat in a Kiln to create the Malt.
Grind Malted Wheat at a Quern to create the Grist.
Boil Wheat Grist with Hop Cones in a Cauldron to make Wort.
Store Wheat Wort in a Demijohn until it becomes Weißbier.
The proper way to drink Weißbier is from a Tankard; drinking out of anything other than a proper vessel halves the effective quality of your Weißbier.
Quality 10 Weißbier recovers 10% stamina and drains 20% energy per 0.05L sip. Higher quality Weißbier will decrease the energy drain but stamina recovery remains the same at all qualities.
This makes high quality Weißbier useful for performing stamina draining tasks without having to eat as much to recover, thus preserving your hunger bonus. Though energy is always shown as a whole number, it is not simply rounded up or down. For example, Q11 Weißbier will alternate between draining 20% and 19% energy at regular intervals. Other drinkable liquids follow the same rules and formula.
Energy Drain=
{\displaystyle ({\frac {1}{\sqrt {quality/10}}}+1)*10}
Gives energy if Q >20
According to that formula, quality increases are most effective at low qualities, and the drain is always higher than 10% per 0.05 L sip. Stamina regenerates 10% per 10% of energy.
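The drain formula is easy to evaluate directly; a small Python sketch (the function name is ours, not from the game):

```python
import math

# Energy drained (%) per 0.05 L sip as a function of quality, per the wiki formula.
def energy_drain(quality):
    return (1 / math.sqrt(quality / 10) + 1) * 10

print(energy_drain(10))  # 20.0  (matches the stated Q10 behaviour)
print(energy_drain(40))  # 15.0  (the drain falls toward 10% as quality rises)
```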
Retrieved from "https://ringofbrodgar.com/w/index.php?title=Weißbier&oldid=93184"
Powders, Liquids, and Seeds
|
Branch - Ring of Brodgar
Object(s) Required None
Produced By Tree (must be sufficiently grown)
Required By Barkboat, Bone Arrow, Candy Apple, Cave Skewer, Centibab, Cone Cow, Drying Frame, Fire, Firebrand, Folded Roasting Spit, Hearth Fire, Hopped-up Cone Cow, Linen Crate, Loom, Metal Arrow, Mushroom Spit, Onion Skewer, Perched Perch, Rat-on-a-Stick, Roundpole Fence, Silk Thread, Stone Arrow, Stone Axe, Tarsticks, Two-Bird Skewer, Wicker Basket, Wicker Breadbasket, Wicker Picker
Stockpile Branch (60)
Right click on a sufficiently grown tree and select the appropriate option.
You can also obtain branches by splitting a block of wood: right-click the block and choose Split to get 5 branches.
The quality of branches obtained from a block of wood depends on both the quality of the axe and the quality of the block, using the formula:
{\displaystyle {\sqrt[{2}]{Axe*Block}}}
To get high quality branches, you need to grow a high quality tree: use a good quality treeplanter's pot, good quality soil, water, and a seed to sow the tree, then repeat the process with the tree you grew to get higher quality branches. Keep in mind that most trees have 7 branches.
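The split formula above can be evaluated directly (a sketch; the function name is illustrative):

```python
import math

# Branch quality from splitting a block of wood: sqrt(Axe * Block), per the wiki.
def branch_quality(axe_quality, block_quality):
    return math.sqrt(axe_quality * block_quality)

print(branch_quality(10, 40))  # 20.0
print(branch_quality(30, 30))  # 30.0
```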
Retrieved from "https://ringofbrodgar.com/w/index.php?title=Branch&oldid=92492"
|
Estelle is trying to find
x
in the triangle at right. She lost her scientific calculator, but luckily her teacher told her that
\sin23^\circ\approx0.391
\cos23^\circ\approx0.921
\tan23^\circ\approx0.424
Write an equation that Estelle could use to solve for
x
Use the Triangle Angle Sum Theorem to calculate the missing angle; all the angles in a triangle total
180^\circ
. Label added, third angle, theta = 23 degrees.
67^\circ + 90^\circ + θ = 180^\circ
Subtracting 157^\circ from both sides:
θ = 23^\circ
Place a "y" on the missing side. Write the trigonometric ratios for the angle; reference the Math Note box in Lesson 5.1.2 for extra help. Label added: the short side is y.
\sin23^\circ=\frac{y}{x}\ \ \ \ \cos23^\circ=\frac{18}{x}\ \ \ \tan23^\circ=\frac{y}{18}
Which equation can be used to solve for
x
{\cos 23^\circ=\frac{18}{x}}
Without a calculator, how could Estelle find
\sin67^\circ
\text{sin}(67^\circ)=\frac{\text{opposite}}{\text{hypotenuse}}=\frac{18}{x}
\text{sin}(67^\circ)=\frac{18}{x}=\text{cos}(23^\circ)
\cos23^\circ\approx0.921
\sin67^\circ=\cos23^\circ\approx0.921
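The identity can be verified numerically; a quick Python check using the values from the problem:

```python
import math

# Cofunction identity: sin(67 deg) equals cos(23 deg), so Estelle can reuse
# the value her teacher gave her and then solve for the hypotenuse x.
sin67 = math.sin(math.radians(67))
cos23 = math.cos(math.radians(23))
print(round(cos23, 3))        # 0.921
x = 18 / cos23                # from cos(23 deg) = 18 / x
print(round(x, 2))            # 19.55
```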
|
Came across a fun little problem over the past few weeks that is related to the topic of policy impact evaluation - a long-time interest of mine! Here's the setting: we have a large population of individuals and a number of treatments whose effectiveness we want to gauge. The treatments are not necessarily the same but are targeted towards certain sub-segments of the population. Examples of such situations include online ad targeting or marketing campaigns. This gives rise to the following 3 methods of selecting the treatment and control groups:
Apply the targeting rule to get a population subset. Split this group into treatment and control, run the treatment, and collect the results. In the next time period, keep those who remain in the control group as the control and top up the group with a random sample to maintain a similar proportion of treated and control individuals.
Randomly split the population into treatment and control. For each period, do not vary the control group; just administer the treatment on the treatment group. Evaluate the effectiveness of each period by applying the targeting rule to subset the relevant control population.
For each period and campaign, apply the targeting rule and randomise the group into treatment and control.
Would these methods give equivalent results? I will use the Neyman-Rubin causal framework to formalise the intended goal and outcomes. Let
Y_{i}
denote the outcome of an individual (e.g. total spending). The fundamental problem of causal inference is that one can never observe both the spending of an individual if he was administered the treatment,
Y_{1i}
, and if he was not,
Y_{0i}
.
Y_{1i}
and
Y_{0i}
are referred to as potential outcomes, as only one outcome can be observed but not the other.
The average effect of a treatment on an individual is given by:
E[Y_{1i} - Y_{0i}]
Let
D_{i}=1
denote being treated and
D_{i}=0
not being treated. We can look at the difference in average outcomes based on treatment status:
E[Y_{i} \vert D_{i}=1] - E[Y_{i} \vert D_{i}=0] = E[Y_{1i} \vert D_{i}=1] - E[Y_{0i} \vert D_{i}=0]
If the treatment is not randomly assigned (e.g. people can choose to take-up the treatment), the above expression can be written as:
\begin{aligned} E[Y_{i} \vert D_{i}=1] - E[Y_{i} \vert D_{i}=0] &= E[Y_{1i} \vert D_{i}=1] - E[Y_{0i} \vert D_{i}=1] \\ &+ E[Y_{0i} \vert D_{i}=1] - E[Y_{0i} \vert D_{i}=0] \end{aligned}
The first term on the right is the average treatment effect on treated while the second is the selection bias. For example, if the advertisement has a positive impact on spending then we would expect the second term to be positive leading to an upward bias in its estimated effect.
And that's precisely why, to evaluate the effectiveness of a treatment, we have to randomise people into treatment and control groups. Under randomisation, the potential outcomes are independent of the treatment,1
\{Y_{1i}, Y_{0i}\} \perp D_{i}
E[Y_{1i} \vert D_{i}=1] - E[Y_{0i} \vert D_{i}=0] = E[Y_{1i} - Y_{0i}]
This implies that taking the difference between the average across the treated and control group will give us the Average Treatment Effect (ATE). In many situations, we relax the assumption by only allowing the mean of non-treated individuals to be independent of treatment status:
E[Y_{1i} \vert D_{i}=1] - E[Y_{0i} \vert D_{i}=0] = E[Y_{1i} - Y_{0i} \vert D_{i}=1]
This gives the Average Treatment on Treated (ATT).
To consider the various scenarios outlined above, let me setup a little thought experiment. In my world, there are two types of customers, high type or low type, which I denote by
X_{i}
. Low type customers,
X_{i} = L
, spend
\alpha + \epsilon_{it}
dollars while high type customers,
X_{i} = H
, spend
\alpha + \beta + \epsilon_{it}
dollars, where
\epsilon_{it}
is drawn from a normal distribution. The treatment of interest is a marketing promotion which is targeted at high spending individuals. Assume low type customers are not affected by the marketing promotion while high type customers have a
p\%
probability of spending an additional
\delta
dollars, which persists for the rest of the periods. Having taken up the treatment, the high type individual will no longer subscribe to future promotions. I ignore any changes in spending across time periods, though in practice one way to account for such changes is to consider the first difference.
To check on the effectiveness of the 3 methods of selecting a control group, let's do a little simulation with the following parameters:
\begin{aligned} \alpha &= 3, \\ \beta &=2, \\ \delta &=1, \\ p &=0.3, \\ \epsilon_{it} &\sim N(0,1) ~\forall i \end{aligned}
To start, let's build a 3 period model with 100,000 people in the population (half high type and half low type). I consider observations in 3 periods,
t=1,2,3
and split the population into 80% treatment and 20% control. The treatment is targeted towards higher spending individuals. However, one cannot observe the underlying type distribution and has to segment the population by the amount which they spend. In the simulation, I use a spending rule (
Y_{i} > 4
), which covers approximately 50% of the initial population.
n= 1e5
df = data.frame(ind = seq(1, n),
type = rep(c(0,1), n/2),
epsilon = rnorm(n, 0, 1),
unif = runif(n, 0, 1),
unif2 = runif(n, 0, 1))
df$spend = ifelse(df$type==0, 3, 5) + df$epsilon
### Select treatment and control using unif
df$target = ifelse(df$spend>4 , 1, 0)
df$treat = ifelse(df$target==1 & df$unif<0.8, 1, 0)
df$control = ifelse(df$target==1 & df$unif>=0.8, 1, 0)
Despite covering 50% of the population, randomness in spending patterns implies that the target group would still consist of both low and high types. This means that the outcome of our experiment would only yield an ATT effect, or the effect on the sub-population who spend more than 4. Let us calculate this effect before using the simulation to verify the results. We are interested in finding the fraction of the population who are high type conditional on spending more than 4. First, let us calculate the probability that a high and a low type individual spend more than 4 using R's pnorm function before calculating the conditional probability:
1-pnorm(4, 3, 1)   # P(low type spends > 4)  = 0.159
1-pnorm(4, 5, 1)   # P(high type spends > 4) = 0.841
\begin{aligned} P(X_{i}=H \vert Y_{i}>4) &= \frac{P(X_{i}=H, Y_{i}>4)}{P(Y_{i} >4)} \\ &= \frac{0.841*0.5}{0.159*0.5 + 0.841 *0.5} \\ &= 0.841 \end{aligned}
Since only 0.841 of the sub-population would be affected by the treatment, we would expect the average treatment effect on the treated to be
0.841*0.3 \approx 0.25
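This back-of-the-envelope calculation can be reproduced without R; a Python sketch in which the helper mirrors R's upper-tail `1 - pnorm`:

```python
import math

# R's 1 - pnorm(x, mu, sigma): upper-tail probability of a normal distribution.
def norm_sf(x, mu, sigma):
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

p_low = norm_sf(4, 3, 1)    # P(low type spends > 4)  ~ 0.159
p_high = norm_sf(4, 5, 1)   # P(high type spends > 4) ~ 0.841

# P(high type | spend > 4), with each type being half of the population:
p_cond = p_high * 0.5 / (p_low * 0.5 + p_high * 0.5)
att = p_cond * 0.3 * 1      # take-up probability p = 0.3, effect delta = 1
print(round(p_cond, 3), round(att, 2))  # 0.841 0.25
```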
### Add in treatment effect to treated (p = 0.3 and d = 1 per the parameters above)
p = 0.3; d = 1
df$delta = ifelse(df$treat==1 & df$type==1 & runif(n, 0, 1)<=p, d, 0)
df$spend2 = df$delta + df$spend
### Average treatment effect on treated
df_subset = df[df$target==1,]
mean(df[df$treat==1,]$spend2) - mean(df[df$control==1,]$spend2)
lm(spend2 ~ treat, data=df_subset)
## lm(formula = spend2 ~ treat, data = df_subset)
## (Intercept) treat
More generally, a better approach to check our result would be to loop over many random samples and find the central tendency of the parameter estimate:
att <- function(n=1e5, p=0.3, d=1){
  ### regenerate df, the treatment assignment and df_subset here for each run
  mod <- lm(spend2 ~ treat, data=df_subset)
  return(coef(mod)["treat"])
}
coef_list = list()
for (b in 1:500) {
  coef_list[[b]] <- att()
}
hist(unlist(coef_list))
mean(unlist(coef_list))
Unsurprisingly, the empirical results tally with our mathematical derivation.
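The one-period setup can also be mirrored outside R; a compact Python sketch (the seed and loop structure are chosen here), whose estimate should land near the calculated 0.25:

```python
import random

random.seed(0)
n, p, delta = 100_000, 0.3, 1.0

treat, control = [], []
for i in range(n):
    high = i % 2 == 1                          # half high type, half low type
    spend = (5.0 if high else 3.0) + random.gauss(0, 1)
    if spend > 4:                              # targeting rule
        if random.random() < 0.8:              # 80% treatment group
            bump = delta if high and random.random() <= p else 0.0
            treat.append(spend + bump)
        else:                                  # 20% control group
            control.append(spend)

att_hat = sum(treat) / len(treat) - sum(control) / len(control)
print(round(att_hat, 2))  # close to the calculated 0.25
```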
2nd period ATT
Now, we are ready to evaluate the various proposed control groups. To keep things simple, the 2nd marketing promotion will be the same as the first and target individuals who spend above 4. However, this time to evaluate the results we need to consider 3 groups - low type, high type takers and high type non-takers - where takers and non-takers refer to whether they responded positively to the treatment in the first period. Repeating the above calculations and focusing on the share of non-takers in the sub-population:
\begin{aligned} P(X_{i,t=2}=H_{nt} \vert Y_{i,t=2}>4) &= P(X_{i,t=2}=H_{nt} \vert Y_{i,t=2}>4, i \in treated_{t=1})*0.8 + P(X_{i,t=2}=H_{nt} \vert Y_{i,t=2}>4, i \in control_{t=1})*0.2\\ &=\frac{P(X_{i,t=2}=H_{nt}, Y_{i,t=2}>4)}{P(Y_{i,t=2} >4)}*0.8 + 0.841*0.2 \\ &= \frac{0.841*0.5*0.7}{0.159*0.5 + 0.841*0.5*0.7 + 0.841*0.5*0.3}*0.8 + 0.841*0.2\\ &= 0.639 \\ \\ ATT_{t=2} &= 0.639*0.3 \\ &= 0.192 \end{aligned}
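The arithmetic can be checked in a few lines (variable names are illustrative):

```python
# Second-period share of high-type non-takers among spenders above 4,
# combining the 80% previously-targeted pool and the 20% control pool.
p_low, p_high, p_take = 0.159, 0.841, 0.3

pool_treated = (p_high * 0.5 * (1 - p_take)) / (
    p_low * 0.5 + p_high * 0.5 * (1 - p_take) + p_high * 0.5 * p_take
)
share_nt = pool_treated * 0.8 + p_high * 0.2
att_t2 = share_nt * p_take
print(round(share_nt, 3), round(att_t2, 3))  # 0.639 0.192
```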
The calculations make intuitive sense. With a smaller pool of customers who would respond positively to the treatment, the ATT in the second period is lower than the first.
Targeting rule with top-up
Here are a few lines of code to implement the idea of keeping the members of the control group relatively similar and doing a random top-up where necessary.
### 2nd time period
df$target2 = ifelse(df$spend2>4, 1, 0)
n_control2 = round(sum(df$target2) * 0.2)
n_control_remain = sum(df$target2 & df$control==1)
unif_threshold = (n_control2 - n_control_remain) / (sum(df$target2) - n_control_remain)
df$control2 = ifelse(df$target2==1 & (df$control==1 | df$unif2<=unif_threshold), 1, 0)
### Approximately fill up
df$treat2 = ifelse(df$target2==1 & df$control2==0, 1, 0)
df$delta2 = ifelse(df$treat2==1 & df$type==1 & df$delta==0 & runif(n, 0, 1)<=p, d, 0)
df$spend3 = df$delta2 + df$spend2 + rnorm(n,0,1)
### Average treatment effect 2 (Less than predicted!)
df_subset2 = df[df$target2==1,]
lm(spend3 ~ treat2, data=df_subset2)
I show the results from 500 runs of the above code extracting the coefficient of the supposed treatment effect as well as the proportion of high non-treated individuals from the treatment and control group.
mean(unlist(prop_control_list))
mean(unlist(prop_treat_list))
Notice that the proportion of high non-treated individuals is no longer the same across the groups, and the estimated effect is much larger than the calculated value. Almost no one in the control group has ever been treated. This leads to an upward bias in the estimated treatment effect, since the coefficient estimate combines the effects of the first and second treatments.
More generally, the extent and direction of bias cannot be so easily quantified. If one allows the spending amounts to have a component that evolves randomly across time, it is possible for the estimate to be smaller than its actual value.2
Method 2 of having a universal control group is actually a special case of the above problem, where the control group does not vary at all. Under the assumption that each treatment would have a positive effect, the estimated effect for each subsequent treatment would always be overstated.
Only method 3 would give us a sensible result across both periods of the treatment. Here's a fun little exercise - try to implement a random sample on the second period after subsetting the population using the targeting rule. Do you get a result similar to the calculated ATT above?
In short, when it comes to choosing a random control group in a policy evaluation setting with multiple treatments and periods, the best option is the simplest one: random assignment always works, so there is no need to overcomplicate things.
More accurately only mean independence is required. ↩
Keeping only members that were present in the control group of the previous time period introduces a selection bias. ↩
Rnotessimulationmetrics
|
Effect of Component Misalignment on Human Tissue Temperatures Associated With Recharging Neuromodulation Devices | J. Med. Devices | ASME Digital Collection
Ryan Lovik,
Eph M. Sparrow,
John P. Abraham,
Cody Zelmer,
Seong Oh,
Kyle Friend,
Lovik, R., Sparrow, E. M., Abraham, J. P., Zelmer, C., Oh, S., Friend, K., and Smith, D. K. (June 13, 2011). "Effect of Component Misalignment on Human Tissue Temperatures Associated With Recharging Neuromodulation Devices." ASME. J. Med. Devices. June 2011; 5(2): 027516. https://doi.org/10.1115/1.3589907
antennas, biological tissues, biomedical equipment, biothermics, neurophysiology, prosthetics
Biological tissues, Temperature, Artificial limbs, Biomedical equipment, Neurophysiology, Prostheses
A synergistic experimental and numerical investigation has provided quantitative information on the response of human tissue temperatures to misalignment of the implant and antenna of neuromodulation devices during recharging. It was found that misalignment increases tissue temperatures for all of the investigated devices. These increases ranged from 0.5 °C to 2.7 °C. Notwithstanding these increases, the lowest temperatures were attained by the Restore Ultra device for all operating conditions. The temperature levels achieved by the Precision Plus and Eon Mini devices were found to be greater than those for the Restore Ultra, but their relative rankings depend on the thermal boundary conditions and the duration of the recharging period.
|
Classical Mechanics Problem on Newton's Law of Gravity - Problem Solving: How many black holes can fit on the head of a pin? - David Mattingly | Brilliant
The problem asks how many black holes can fit on the head of a pin, using the Schwarzschild radius of a black hole of mass M,
R_s=2 GM/c^2
where G is the gravitational constant and c is the speed of light, for masses on the order of
1 ~\mu g
. Useful constants:
G=6.67 \times 10^{-11} \text{ m}^3\text{/kg s}^2
c=3 \times 10^8\text{ m/s}
h=6.63 \times 10^{-34} \text{ kgm}^2\text{/s}
1 ~\mu g = 10^{-6} \text{ g} = 10^{-9} \text{ kg}
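Plugging the given constants into the Schwarzschild radius formula gives a sense of scale:

```python
# Schwarzschild radius R_s = 2GM/c^2 for a 1 microgram black hole,
# using the constants given in the problem.
G = 6.67e-11        # m^3 / (kg s^2)
c = 3e8             # m/s
M = 1e-9            # kg (1 microgram)

R_s = 2 * G * M / c**2
print(R_s)  # about 1.5e-36 m -- far smaller than the head of any pin
```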
|
PreComprehensiveTriangularize - Maple Help
Home : Support : Online Help : Mathematics : Factorization and Solving Equations : RegularChains : ParametricSystemTools Subpackage : PreComprehensiveTriangularize
compute a pre-comprehensive triangular decomposition
PreComprehensiveTriangularize(sys, d, R)
The command PreComprehensiveTriangularize(sys, d, R) returns a pre-comprehensive triangular decomposition of sys, with respect to the last d variables of R.
A pre-comprehensive triangular decomposition is a refined triangular decomposition (in the Lazard sense) with additional properties, aiming at studying parametric polynomial systems.
Let
U
be the last d variables of R, which we regard as parameters. A finite set
S
of regular chains of R forms a pre-comprehensive triangular decomposition of F with respect to U if, for every parameter value
u
, there exists a subset
S\left(u\right)
of
S
such that: (1) the regular chains of
S\left(u\right)
specialize well at
u
; (2) after specialization at
u
, these chains form a triangular decomposition (in the Lazard sense) of the polynomial system
F
specialized at
u
. See the command DefiningSet for the term specialize well.
\mathrm{with}\left(\mathrm{RegularChains}\right):
\mathrm{with}\left(\mathrm{ConstructibleSetTools}\right):
\mathrm{with}\left(\mathrm{ParametricSystemTools}\right):
R≔\mathrm{PolynomialRing}\left([x,y,s]\right)
\textcolor[rgb]{0,0,1}{R}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{polynomial_ring}}
F≔[s-\left(y+1\right)x,s-\left(x+1\right)y]
\textcolor[rgb]{0,0,1}{F}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{-}\left(\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{-}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}]
A pre-comprehensive triangular decomposition of
F
consists of three regular chains.
\mathrm{pctd}≔\mathrm{PreComprehensiveTriangularize}\left(F,1,R\right)
\textcolor[rgb]{0,0,1}{\mathrm{pctd}}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{\mathrm{regular_chain}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{regular_chain}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{regular_chain}}]
\mathrm{map}\left(\mathrm{Info},\mathrm{pctd},R\right)
[[\left(\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{y}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{s}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{s}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{s}]]
Compare it with the output of Triangularize.
\mathrm{dec}≔\mathrm{Triangularize}\left(F,R,\mathrm{output}=\mathrm{lazard}\right)
\textcolor[rgb]{0,0,1}{\mathrm{dec}}\textcolor[rgb]{0,0,1}{≔}[\textcolor[rgb]{0,0,1}{\mathrm{regular_chain}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{regular_chain}}]
\mathrm{map}\left(\mathrm{Info},\mathrm{dec},R\right)
[[\left(\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{s}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{y}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{s}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{s}]]
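The decomposition can be spot-checked numerically; a Python sketch that substitutes a sample parameter value into the first regular chain (the value s = 2 is arbitrary):

```python
import math

# Spot-check the first regular chain [(y+1)*x - s, y^2 + y - s] against the
# input system F = [s - (y+1)*x, s - (x+1)*y] at a sample parameter value s.
s = 2.0
roots_y = [(-1 + math.sqrt(1 + 4 * s)) / 2, (-1 - math.sqrt(1 + 4 * s)) / 2]
for y in roots_y:          # solutions of y^2 + y - s = 0
    x = s / (y + 1)        # solution of (y+1)*x - s = 0
    assert abs(s - (y + 1) * x) < 1e-9
    assert abs(s - (x + 1) * y) < 1e-9
print("chain solutions satisfy F")
```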
|
1 Department of Electrical and Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh.
2 Department of Electrical and Electronic Engineering, Bangladesh Army University of Engineering & Technology, Natore, Bangladesh.
3 Institute of Information and Technology, Jahangirnagar University, Savar, Bangladesh.
Abstract: This paper presents the design of a high performance robust resonant controller for islanded single-phase microgrid operation under different load conditions. The controller is designed using results from the Negative Imaginary approach. The proposed controller has been found to be highly effective in tracking the instantaneous reference grid voltage. The simulation work has been done with the MATLAB/SimPower Systems toolbox, and shows that the proposed controller provides effective voltage control under uncertain load conditions.
Keywords: Controller Design, Islanded Microgrid, Robust Voltage Control
dq
{V}_{sw}=m\left(s\right)\ast {V}_{dc}
m\left(s\right)\in \left[-1,+1\right]
{V}_{dc}
{V}_{g}
{V}_{ref}
{I}_{L}={I}_{G}+{I}_{C}
{I}_{G}
{I}_{C}
{V}_{L}
{V}_{L}=L\frac{\text{d}{I}_{L}}{\text{d}t}
\frac{\text{d}{I}_{L}}{\text{d}t}=\frac{{V}_{L}}{L}=\frac{{V}_{sw}-{V}_{G}}{L}
{V}_{L}\left(s\right)=sL{I}_{L}\left(s\right)
s{I}_{L}\left(s\right)=\frac{{V}_{sw}\left(s\right)-{V}_{G}\left(s\right)}{L}
s{I}_{L}\left(s\right)=\frac{{V}_{sw}\left(s\right)}{L}-\frac{{V}_{G}\left(s\right)}{L}
{V}_{sw}
{V}_{dc}
{V}_{sw}=m\left(s\right){V}_{dc}
C\frac{\text{d}{V}_{G}}{\text{d}t}={I}_{C}
\frac{\text{d}{V}_{G}}{\text{d}t}=\frac{{I}_{C}}{C}
\frac{\text{d}{V}_{G}}{\text{d}t}=\frac{{I}_{L}-{I}_{G}}{C}=\frac{{I}_{L}}{C}-\frac{{I}_{G}}{C}
{I}_{C}
s{V}_{G}\left(s\right)=\frac{{I}_{L}\left(s\right)}{C}-\frac{{I}_{G}\left(s\right)}{C}
\frac{\text{d}}{\text{d}t}\left[\begin{array}{c}{I}_{L}\\ {V}_{G}\end{array}\right]=\left[\begin{array}{cc}0& -\frac{1}{L}\\ \frac{1}{C}& 0\end{array}\right]\left[\begin{array}{c}{I}_{L}\\ {V}_{G}\end{array}\right]+\left[\begin{array}{c}\frac{1}{L}\\ 0\end{array}\right]\left[{V}_{sw}\right]+\left[\begin{array}{c}0\\ -\frac{1}{C}\end{array}\right]\left[{I}_{G}\right]
y=\left[{V}_{G}\right]=\left[\begin{array}{cc}0& 1\end{array}\right]\left[\begin{array}{c}{I}_{L}\\ {V}_{G}\end{array}\right]
\frac{\text{d}x}{\text{d}t}=Ax+Bu
y=Cx+Du
A=\left[\begin{array}{cc}0& -\frac{1}{L}\\ \frac{1}{C}& 0\end{array}\right]
B=\left[\begin{array}{c}\frac{1}{L}\\ 0\end{array}\right]
C=\left[\begin{array}{cc}0& 1\end{array}\right]
D=0
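As a sanity check on the model, the eigenvalues of A give the filter's natural frequency; a Python sketch (the inductance value is assumed, since L is not stated in this excerpt; C = 5e-6 F as given later):

```python
import math

# Natural frequency of the LC filter model: the eigenvalues of
# A = [[0, -1/L], [1/C, 0]] solve lambda^2 + 1/(L*C) = 0, i.e. lambda = +/- j/sqrt(LC).
L = 2e-3    # filter inductance in H (assumed; not stated in the excerpt)
C = 5e-6    # filter capacitance in F (as given)

omega_n = 1 / math.sqrt(L * C)   # rad/s
f_n = omega_n / (2 * math.pi)    # Hz
print(round(f_n))  # 1592
```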
C\left(s\right)=-{k}_{f}\frac{{s}^{2}}{{s}^{2}+2{\zeta }_{f}{\omega }_{f}s+{\omega }_{f}^{2}}
{k}_{f},{\zeta }_{f}
{\omega }_{f}
C\left(s\right)
\stackrel{¯}{C}\left(s\right)={k}_{f}\frac{{\varphi }_{f}\left(s\right)}{{s}^{2}+2{\zeta }_{f}{\omega }_{f}s+{\omega }_{f}^{2}}
{\varphi }_{f}\left(s\right)
{\varphi }_{f}\left(s\right)
\left(s+2{\zeta }_{f}{\omega }_{f}\right)
\stackrel{¯}{C}\left(s\right)={k}_{f}\frac{s+2{\zeta }_{f}{\omega }_{f}}{{s}^{2}+2{\zeta }_{f}{\omega }_{f}s+{\omega }_{f}^{2}}
C\left(s\right)=-s\stackrel{¯}{C}\left(s\right)
C\left(s\right)=-{k}_{f}\frac{s\left(s+2{\zeta }_{f}{\omega }_{f}\right)}{{s}^{2}+2{\zeta }_{f}{\omega }_{f}s+{\omega }_{f}^{2}}
{\zeta }_{f}
C\left(s\right)
{G}_{cl}\left(s\right)=\frac{{G}_{i}\left(s\right)}{1-\left({G}_{i}\left(s\right)\ast C\left(s\right)\right)}
{G}_{i}\left(s\right)\ast C\left(s\right)
R=45\text{\hspace{0.17em}}\Omega
R=85\text{\hspace{0.17em}}\Omega
C=5\times {10}^{-6}\text{\hspace{0.17em}}\text{F}
{G}_{i}\left(s\right)\ast C\left(s\right)
C\left(s\right)
{G}_{i}\left(s\right)
Cite this paper: Bairagi, A. , Habib, A. , Rahman, R. , Rahman, M. and Jewel, M. (2018) Negative Imaginary Approached High Performance Robust Resonant Controller Design for Single-Phase Islanded Microgrid and Its Voltage Observation on Different Load Condition. Intelligent Control and Automation, 9, 52-63. doi: 10.4236/ica.2018.92004.
|
Patrias, Rebecca1; Pylyavskyy, Pavlo2
1 Laboratoire de Combinatoire et d’Informatique Mathématique Université du Québec à Montréal 201 Président-Kennedy Montréal, Québec H2X 3Y7, Canada
2 Department of Mathematics University of Minnesota 127 Vincent Hall 206 Church Street Minneapolis, MN 55455, USA
We define a
K
-theoretic analogue of Fomin’s dual graded graphs, which we call dual filtered graphs. The key formula in the definition is
DU-UD=D+I
. Our major examples are
K
-theoretic analogues of Young’s lattice, of shifted Young’s lattice, and of the Young–Fibonacci lattice. We suggest notions of tableaux, insertion algorithms, and growth rules whenever such objects are not already present in the literature. (See the table below.) We also provide a large number of other examples. Most of our examples arise via two constructions, which we call the Pieri construction and the Möbius construction. The Pieri construction is closely related to the construction of dual graded graphs from a graded Hopf algebra, as described in [, , ]. The Möbius construction is more mysterious but also potentially more important, as it corresponds to natural insertion algorithms.
Keywords: dual graded graphs, insertion algorithms,
K
-theory, symmetric functions
Patrias, Rebecca; Pylyavskyy, Pavlo. Dual filtered graphs. Algebraic Combinatorics, Volume 1 (2018) no. 4, pp. 441-500. doi : 10.5802/alco.21. https://alco.centre-mersenne.org/articles/10.5802/alco.21/
[1] Bergeron, Nantel; Lam, Thomas; Li, Huilan Combinatorial Hopf algebras and towers of algebras–dimension, quantization and functorality, Algebr. Represent. Theory, Volume 15 (2012) no. 4, pp. 675-696 | Article | MR: 2944437 | Zbl: 1281.16036
[2] Björk, Jan-Erik Rings of differential operators, North-Holland mathematical Library, 21, North-Holland, 1979, xvii+374 pages | MR: 549189
[3] Björner, Anders The Möbius function of subword order, Invariant theory and tableaux (Minneapolis, USA, 1988) (The IMA Volumes in Mathematics and its Applications), Volume 19, Springer, 1990, pp. 118-124 | Zbl: 0706.06007
[4] Björner, Anders; Stanley, Richard P. An analogue of Young’s lattice for compositions (2005) (https://arxiv.org/abs/math/0508043)
[5] Buch, Anders Skovsted A Littlewood–Richardson rule for the K-theory of Grassmannians, Acta Math., Volume 189 (2002) no. 1, pp. 37-78 | Article | MR: 1946917 | Zbl: 1090.14015
[6] Buch, Anders Skovsted; Kresch, Andrew; Shimozono, Mark; Tamvakis, Harry; Yong, Alexander Stable Grothendieck polynomials and K-theoretic factor sequences, Math. Ann., Volume 340 (2008) no. 2, pp. 359-382 | Article | MR: 2368984 | Zbl: 1157.14036
[7] Buch, Anders Skovsted; Samuel, Matthew J K-theory of minuscule varieties, J. Reine Angew. Math., Volume 719 (2016), pp. 133-171 | MR: 3552494 | Zbl: 06636676
[8] Clifford, Edward; Thomas, Hugh; Yong, Alexander K-theoretic Schubert calculus for OG
\left(n,2n+1\right)
and jeu de taquin for shifted increasing tableaux, J. Reine Angew. Math., Volume 690 (2014), pp. 51-63 | MR: 3200334 | Zbl: 1348.14127
[9] Fomin, Sergei Vladimirovich Generalized Robinson–Schensted–Knuth correspondence, J. Sov. Math., Volume 41 (1988) no. 2, pp. 979-991 | Article | MR: 869582 | Zbl: 0698.05003
[10] Fomin, Sergey Duality of graded graphs, J. Algebr. Comb., Volume 3 (1994) no. 4, pp. 357-404 | Article | MR: 1293822 | Zbl: 0810.05005
[11] Fomin, Sergey Schensted algorithms for dual graded graphs, J. Algebr. Comb., Volume 4 (1995) no. 1, pp. 5-45 | Article | MR: 1314558 | Zbl: 0817.05077
[12] Hamaker, Zachary; Keilthy, Adam; Patrias, Rebecca; Webster, Lillian; Zhang, Yinuo; Zhou, Shuqi Shifted Hecke insertion and the K-theory of OG
\left(n,2n+1\right)
[13] Knuth, Donald Permutations, matrices, and generalized Young tableaux, Pac. J. Math., Volume 34 (1970) no. 3, pp. 709-727 | Article | MR: 272654 | Zbl: 0199.31901
[14] Lam, Thomas Quantized dual graded graphs, Electron. J. Comb., Volume 17 (2010) no. 1, Paper no. R88, 11 pages | MR: 2661391 | Zbl: 1230.05163
[15] Lam, Thomas; Pylyavskyy, Pavlo Combinatorial Hopf algebras and
K
-homology of Grassmanians, Int. Math. Res. Not., Volume 2007 (2007) no. 24, Paper no. rnm125, 48 pages | Zbl: 1134.16017
[16] Lam, Thomas; Shimozono, Mark (unpublished)
[17] Lam, Thomas; Shimozono, Mark Dual graded graphs for Kac–Moody algebras, Algebra Number Theory, Volume 1 (2007) no. 4, pp. 451-488 | Article | MR: 2368957 | Zbl: 1200.05249
[18] Macdonald, Ian Grant Symmetric functions and Hall polynomials, Oxford Science Publications, Clarendon Press, 1998, x+475 pages | Zbl: 0899.05068
[19] Nzeutchap, Janvier Dual graded graphs and Fomin’s
r
-correspondences associated to the Hopf algebras of planar binary trees, quasi-symmetric functions and noncommutative symmetric functions (2006) in Formal Power Series and Algebraic Combinatorics (San Diego, 2006), available at http://garsia.math.yorku.ca/fpsac06/papers/53.pdf
[20] Ore, Oystein Theory of non-commutative polynomials, Ann. Math., Volume 34 (1933), pp. 480-508 | Article | MR: 1503119 | Zbl: 0007.15101
[21] Patrias, Rebecca; Pylyavskyy, Pavlo Combinatorics of K-theory via a K-theoretic Poirier–Reutenauer bialgebra, Discrete Mathematics, Volume 339 (2016) no. 3, pp. 1095-1115 | Article | MR: 3433916 | Zbl: 1328.05193
[22] Poirier, Stéphane; Reutenauer, Christophe Algèbres de Hopf de tableaux, Ann. Sci. Math. Qué., Volume 19 (1995) no. 1, pp. 79-90 | Zbl: 0835.16035
[23] Robinson, Gilbert de B. On the representations of the symmetric group, Am. J. Math., Volume 60 (1938), pp. 745-760 | Article | Zbl: 0019.25102
[24] Sagan, Bruce E. Shifted tableaux, Schur
Q
-functions, and a conjecture of R. Stanley, J. Comb. Theory, Ser. A, Volume 45 (1987) no. 1, pp. 62-103 | Article | MR: 883894 | Zbl: 0661.05010
[25] Schensted, Craige Longest increasing and decreasing subsequences, Classic Papers in Combinatorics (Modern Birkhäuser Classics), Birkhäuser, 2009, pp. 299-311 | Article | Zbl: 1154.05001
[26] Stanley, Richard P. Differential posets, J. Am. Math. Soc., Volume 1 (1988) no. 4, pp. 919-961 | Article | MR: 941434 | Zbl: 0658.05006
[27] Stanley, Richard P. Enumerative Combinatorics. Vol. 2, Cambridge Studies in Advanced Mathematics, 62, Cambridge University Press, 1999, xii+581 pages | MR: 1676282 | Zbl: 0928.05001
[28] Stanley, Richard P. Enumerative Combinatorics. Vol. 1, Cambridge Studies in Advanced Mathematics, 49, Cambridge University Press, 2012, xiii+626 pages | Zbl: 1247.05003
[29] Thomas, Hugh; Yong, Alexander A jeu de taquin theory for increasing tableaux, with applications to K-theoretic Schubert calculus, Algebra Number Theory, Volume 3 (2009) no. 2, pp. 121-148 | Article | MR: 2491941 | Zbl: 1229.05285
[30] Thomas, Hugh; Yong, Alexander The direct sum map on Grassmannians and jeu de taquin for increasing tableaux, Int. Math. Res. Not., Volume 2011 (2011) no. 12, pp. 2766-2793 | MR: 2806593 | Zbl: 1231.05280
[31] Thomas, Hugh; Yong, Alexander Longest increasing subsequences, Plancherel-type measure and the Hecke insertion algorithm, Adv. Appl. Math., Volume 46 (2011) no. 1-4, pp. 610-642 | Article | MR: 2794040 | Zbl: 1227.05262
[32] Worley, Dale Raymond A theory of shifted Young tableaux (1984) (Ph. D. Thesis) | MR: 2941073
[33] Young, Alfred Qualitative substitutional analysis (third paper), Proc. Lond. Math. Soc., Volume 28 (1927), pp. 255-292 | MR: 1575854 | Zbl: 54.0150.01
|
Corrosion - Course Hero
Corrosion is the process by which metals oxidize and return from a reduced form to their natural oxidation state. Common examples of corrosion are the rusting of iron and the tarnishing of copper and silver.
The corrosion process is similar to the process that underlies galvanic cells. The metal acts as the anode because it loses electrons and is oxidized, forming metal ions. Atmospheric oxygen acts as the cathode. The electrons reduce atmospheric oxygen. The metal ions and the oxygen combine to form corroded metal. This process happens in the presence of an electrolyte, an ionic solution that can be decomposed by electricity, which is typically water.
Corrosion of iron in the presence of water involves multiple reactions. The iron metal loses electrons and forms metal ions in the water.
{\rm{Fe}}(s)\rightarrow{\rm{Fe}}^{2+}(aq)+2{\rm{e}}^-
The electrons reduce the atmospheric oxygen, forming hydroxide ions.
\tfrac{1}{2}{\rm{O}}_2(g)+{\rm{H}}_2{\rm{O}}(l)+2{\rm{e}}^-\rightarrow2{\rm{OH}}^-(aq)
Iron ions and hydroxide ions react to form iron(II) hydroxide (Fe(OH)2).
{\rm{Fe}}^{2+}(aq)+2{\rm{OH}}^-(aq)\rightarrow{\rm{Fe(OH)}}_2(s)
In the presence of oxygen, the iron(II) hydroxide (Fe(OH)2) forms iron rust (Fe2O3).
4{\rm{Fe(OH)}}_2(s)+{\rm{O}}_2(g)\rightarrow2{\rm{Fe}}_2{\rm{O}}_3\!\cdot\!{\rm{H}}_2{\rm{O}}(s)+2{\rm{H}}_2{\rm{O}}(l)
Because the water sits on top of the metal, the metal ions can have one of two fates. They might go into solution in the water, leaving behind a pit in the metal surface. This allows more metal to become corroded in a process known as pitting. Alternatively, the metal ions can form a solid compound on the surface of the metal. This can form a barrier that prevents further corrosion of the metal surface. The path taken by the metal ion depends on the exact metal. Iron pits and rusts completely over time. Copper tarnish forms a barrier that prevents further contact with oxygen, stopping the corrosion.
Corrosion is a naturally occurring galvanic cell. The metal (in this case iron) acts as an anode, while atmospheric oxygen acts as a cathode in the presence of an electrolyte (usually water).
A great deal of energy and resources go into producing metals for human use, so many methods have been developed to protect them against corrosion. These methods generally rely on galvanic corrosion—corrosion between two different metals that favors one metal over the other. In galvanic corrosion, one of the metals acts as the anode and is oxidized more quickly than it would be on its own, while the other metal acts as the cathode, where oxygen and water are reduced to hydroxide (OH–) ions. The metal that forms the anode corrodes before the other metal corrodes. People have harnessed this phenomenon in a process called galvanization. Galvanization is the process of coating a metal with zinc so that the zinc serves as a sacrificial anode. A sacrificial anode is an anode made of a metal coupled to a more valuable metal, which it protects as part of a galvanic cell that undergoes galvanic corrosion. The zinc corrodes preferentially, protecting the metal underneath it. Thus, galvanization is a type of cathodic protection, a method of protecting a metal by sacrificing another metal as an anode in a galvanic cell.
Galvanization, a form of cathodic protection, uses zinc as a sacrificial anode. Zinc corrodes preferentially to iron when the two metals touch, slowing the corrosion of iron in favor of the corrosion of zinc.
|
Alexander and Santiago put five pattern blocks together with one vertex of each block touching, as shown in the diagram at right. Find the measure of the smaller angle of a beige rhombus.
Look at the information below from the Math Box in Lesson 8.3.2.
How are the angles in the diagram similar to the angles in the example?
If two angles have measures that add up to
180°
, they are called supplementary angles. For example, in the diagram at right,
\angle EFG
\angle GFH
are supplementary because together they form a straight angle.
Because all the angles together form a straight angle, they add up to
180°
, so you can set up the equation:
x+x+x+x+60°=180°
4x+60°=180°
4x=120°
x=30°
|
W and Z bosons - Simple English Wikipedia, the free encyclopedia
W and Z bosons are a group of elementary particles. They are bosons, which means that they have integer spin; the W and Z bosons each have spin 1. Both had been found in experiments by the year 1983. Together, they are responsible for a force known as the "weak force." The weak force is called weak because it is not as strong as the strong force. There are two W bosons with different charges: the W+ and its antiparticle, the W−. The Z boson is its own antiparticle.
W bosons are named after the weak force that they are responsible for. Physicists believe the weak force is responsible for the breaking down of some radioactive elements, in the form of beta decay. In the late 1960s, scientists managed to combine early versions of the weak force with electromagnetism, calling the combination the electroweak force.
Creation of W and Z bosons
In nature, W bosons are created during beta decay, a form of radioactive decay; both W and Z bosons can also be produced in high-energy particle collisions.
This is a diagram of Beta Decay. "udd" and "n" refer to a neutron, made of one up quark and two down quarks. "udu" and "p" refer to a proton, made of two up quarks and one down quark. W– refers to a W– boson, which decays into an e– (electron) and a ve with a line over it (an electron antineutrino). "t" refers to time.
Beta Decay
Beta decay occurs when there are too many neutrons in an atom's nucleus. In a simplified picture, a neutron corresponds to one proton plus one electron. When a nucleus has too many neutrons, one neutron splits into a proton and an electron. The proton stays where it is, and the electron is launched out of the atom. The resulting beta radiation is harmful to humans.
Weak force is believed to be able to change the flavour of a quark. For example, when it changes a down quark in a neutron into an up quark, the charge of the neutron becomes +1, since it would have the same arrangement of quarks as a proton. The three-quark neutron with a charge of +1 is no longer a neutron after this, as it fulfills all of the requirements to be a proton. Therefore, Beta decay will cause a neutron to become a proton (along with some other end-products).
W boson decay
When a quark changes flavour, as it does in beta decay, it releases a W boson. On average, W bosons last for only about 3×10−25 seconds before themselves decaying into other particles, which is why they were not discovered until 1983. Surprisingly, W bosons have a mass of about 80 times that of a proton, even though the neutron that the boson came from has almost the same mass as a proton. In the quantum world, it is not uncommon for a more massive particle to come from a less massive particle; the extra mass comes from stored energy via Einstein's famous formula,
{\displaystyle E=mc^{2}}
. After the 3×10−25 seconds has passed, a W boson decays into an electron and an electron antineutrino. Since neutrinos rarely interact with matter, we can ignore them from now on. The electron is propelled out of the atom at a high speed. The proton that was produced by the beta decay stays in the atom's nucleus and raises the atomic number by one.
Z boson decay
Z bosons are also predicted by the Standard Model of physics, which successfully predicted the existence of W bosons. Z bosons decay into a fermion and its antiparticle. Fermions, such as electrons and quarks, are particles whose spin is an odd multiple of half the reduced Planck constant.
Close, Frank (2004). Particle Physics. Oxford. ISBN 0-19-280434-0.
|
Week 3. Complexity Analysis | Algorithms and Data Structures
4 Week 3. Complexity Analysis
Reading 3 Goodrich, Tamassia, & Goldwasser: Chapter 4
Common functions with examples
Algorithm Theory focuses on infinitely large datasets: asymptotics
Recursion and loop
Mathematical Induction and Loop Invariants
TODO video on Induction and recursion
4.2.1 Wednesday Group Work
This week, I ask you not to start with the compulsory projects. The Wednesday Group Work section contains two warm-up problems, and then a discussion exercise which I want to discuss in the plenary debrief, after your group discussion.
You should be able to complete those three problems in the group session, and you may have time to proceed with the compulsory projects.
Problem 4.1 (Goodrich et al R-4.1 rephrased) The number of operations executed by algorithms A and B is
8n\mathrm{log}n
2{n}^{3}
, respectively. Determine
{n}_{0}
such that A is better than B for
n\ge {n}_{0}
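A brute-force search makes this concrete. The sketch below (Python) assumes base-2 logarithms; with a different log base the crossover point changes.

```python
import math

def ops_A(n):
    return 8 * n * math.log2(n)

def ops_B(n):
    return 2 * n ** 3

# Find the smallest n (starting from 2, where log2(n) > 0 and the
# operation counts tie) at which A performs strictly fewer operations than B.
n = 2
while ops_A(n) >= ops_B(n):
    n += 1
print(n)  # 3: A is better for all n >= 3
```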
Problem 4.2 (Goodrich et al R-4.7 rephrased) Order the following functions by asymptotic growth rate:
\[ 4n\mathrm{log}n+2n,{2}^{65},{2}^{\mathrm{log}n},4n+100\mathrm{log}n,4n,1.{1}^{n},{n}^{2}+10n,{n}^{3},n\mathrm{log}n,{n}^{32}. \]
Problem 4.3 (Goodrich et al C-4.43 rephrased) Claim: In any flock of sheep, all sheep have the same colour
An alleged proof goes as follows.
Base Case:
n=1
. A single sheep clearly has the same colour as itself.
Induction Step: Consider a flock of
n>1
sheep. Take one sheep
a
out. The remaining
n-1
sheep have the same colour by induction. Now, replace
a
and take a different sheep
b
out. Again, the remaining
n-1
sheep have the same colour by induction. Hence, all the sheep in the flock have the same colour.
In reality, we know that a flock with black and white sheep has been observed, so what is wrong with the argument?
Problem 4.4 (Goodrich et al C-4.49 rephrased) Let
p\left(x\right)
be a polynomial, i.e.
p\left(x\right)=\sum _{i=0}^{n}{a}_{i}{x}^{i}.
Describe an
O\left({n}^{2}\right)
-time algorithm to compute
p\left(x\right)
Improve the algorithm to
O(n\mathrm{log}n)
time by improving the calculation of
{x}^{i}
Now consider rewriting as
p\left(x\right)={a}_{0}+x\left({a}_{1}+x\left({a}_{2}+x\left({a}_{3}+\cdots +x\left({a}_{n-1}+x{a}_{n}\right)\cdots \phantom{\rule{0.17em}{0ex}}\right)\right)\right).
How many arithmetic operations do you need to calculate this (in Big-O notation).
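The variants can be compared directly. A sketch in Python: the naive loop recomputes x^i by repeated multiplication for each term (O(n²) multiplications in total), while Horner's rewriting needs only one multiplication and one addition per coefficient (O(n)).

```python
def eval_poly_naive(a, x):
    # O(n^2): x**i is rebuilt by repeated multiplication for every term.
    total = 0
    for i in range(len(a)):
        power = 1
        for _ in range(i):
            power *= x
        total += a[i] * power
    return total

def eval_poly_horner(a, x):
    # Horner's rule: a0 + x*(a1 + x*(a2 + ...)) -- O(n) operations.
    result = 0
    for coeff in reversed(a):
        result = result * x + coeff
    return result

a = [1, 2, 3]  # p(x) = 1 + 2x + 3x^2
print(eval_poly_naive(a, 2), eval_poly_horner(a, 2))  # 17 17
```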
Problem 4.5 (Goodrich et al R-4.31 rephrased) Al and Bob are arguing about their algorithms. Al claims his
O(n\mathrm{log}n)
-time algorithm is always faster than Bob’s
O\left({n}^{2}\right)
-time algorithm. To settle the issue they run a set of experiments. To Al’s dismay, they find that if
n<100
O\left({n}^{2}\right)
-algorithm runs faster, and only when
n\ge 100
O(n\mathrm{log}n)
-time one is better. Explain how this is possible.
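The resolution is that big-O notation hides constant factors. A toy sketch (Python) with hypothetical constants, 15·n·log₂n operations for Al versus n² for Bob, chosen so that the crossover lands near n = 100:

```python
import math

# Hypothetical constant factors hidden by the big-O bounds: suppose
# Al's algorithm performs 15*n*log2(n) operations and Bob's performs n*n.
def al(n):
    return 15 * n * math.log2(n)

def bob(n):
    return n * n

print(al(50) > bob(50))      # True: Bob's O(n^2) wins for small n
print(al(1000) < bob(1000))  # True: Al's O(n log n) wins for large n
```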
Problem 4.6 (Goodrich et al C-4.35 rephrased) Show that
\sum _{i=1}^{n}{i}^{2}=O\left({n}^{3}\right).
Problem 4.7 (Goodrich et al R-4.28 rephrased) Given an
n
-element array
X
. Algorithm A chooses
\mathrm{log}n
elements of
X
at random and executes an
O\left(n\right)
-time calculation for each. What is the worst-case running time of A?
Problem 4.8 (Goodrich et al R-4.30 rephrased) Given an
n
-element array
X
. Algorithm D calls Algorithm E on each element
{X}_{i}
. Algorithm E runs in
O\left(i\right)
time when called on
{X}_{i}
. What is the worst-case run time of D?
Problem 4.9 (Goodrich et al P-4.55 rephrased) Make an experimental analysis to test the hypothesis that Java’s Arrays.sort() method runs in
O\left(n\cdot logn\right)
time on average.
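A sketch of such an experiment in Python, with the built-in list.sort() standing in for Java's Arrays.sort() (a stand-in, but the methodology is the same): time the sort at doubling input sizes and check whether the ratios grow like n·log n.

```python
import random
import time

def time_sort(n, trials=3):
    """Average wall-clock time to sort n random floats."""
    total = 0.0
    for _ in range(trials):
        data = [random.random() for _ in range(n)]
        start = time.perf_counter()
        data.sort()
        total += time.perf_counter() - start
    return total / trials

# Under the n*log(n) hypothesis, doubling n should slightly more than
# double the running time (factor 2*log(2n)/log(n)).
for n in (10_000, 20_000, 40_000, 80_000):
    print(n, time_sort(n))
```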
|
Create inflationcurve object for interest-rate curve from dates and data - MATLAB - MathWorks América Latina
\begin{array}{l}I\left(0,{T}_{1Y}\right)=I\left({T}_{0}\right){\left(1+b\left(0;{T}_{0},{T}_{1Y}\right)\right)}^{1}\\ I\left(0,{T}_{2Y}\right)=I\left({T}_{0}\right){\left(1+b\left(0;{T}_{0},{T}_{2Y}\right)\right)}^{2}\\ I\left(0,{T}_{3Y}\right)=I\left({T}_{0}\right){\left(1+b\left(0;{T}_{0},{T}_{3Y}\right)\right)}^{3}\\ ...\\ I\left(0,{T}_{i}\right)=I\left({T}_{0}\right){\left(1+b\left(0;{T}_{0},{T}_{i}\right)\right)}^{{T}_{i}}\end{array}
I\left(0,{T}_{i}\right)
I\left({T}_{0}\right)
b\left(0;{T}_{0},{T}_{i}\right)
{f}_{i}=\frac{1}{\left({T}_{i}-{T}_{i-1}\right)}\mathrm{log}\left(\frac{I\left(0,{T}_{i}\right)}{I\left(0,{T}_{i-1}\right)}\right)
\begin{array}{l}I\left(0,{T}_{i}\right)=I\left({T}_{0}\right)\mathrm{exp}\left(\underset{{T}_{0}}{\overset{{T}_{i}}{\int }}f\left(u\right)\phantom{\rule{0.17em}{0ex}}du\right)\mathrm{exp}\left(\underset{{T}_{0}}{\overset{{T}_{i}}{\int }}s\left(u\right)\phantom{\rule{0.17em}{0ex}}du\right)\\ I\left(0,{T}_{i}\right)=I\left(0,{T}_{i-1}\right)\mathrm{exp}\left(\left({T}_{i}-{T}_{i-1}\right)\left({f}_{i}+{s}_{i}\right)\right)\end{array}
I\left(0,{T}_{i}\right)
I\left(0,{T}_{i-1}\right)
\left[{T}_{i-1},{T}_{i}\right]
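The bootstrapping of piecewise-constant forward inflation rates can be sketched numerically. The index values below are made-up illustrations, not market data:

```python
import math

# Hypothetical inflation index values I(0, T_i) at yearly pillars,
# with T_i measured in years from T_0 (illustrative numbers only).
T = [0.0, 1.0, 2.0, 3.0]
I = [100.0, 102.0, 104.5, 107.6]

# f_i = log(I(0,T_i) / I(0,T_{i-1})) / (T_i - T_{i-1})
f = [math.log(I[i] / I[i - 1]) / (T[i] - T[i - 1]) for i in range(1, len(T))]

# Reconstruct the index from the forwards (no seasonality, s_i = 0):
# I(0,T_i) = I(0,T_{i-1}) * exp((T_i - T_{i-1}) * f_i)
I_rebuilt = [I[0]]
for i in range(1, len(T)):
    I_rebuilt.append(I_rebuilt[-1] * math.exp((T[i] - T[i - 1]) * f[i - 1]))
print(I_rebuilt)  # matches I up to floating-point rounding
```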
|
Nuclear Research Institute, DaLat, Viet Nam.
Abstract: The outstanding advantage of digital signal processing (DSP) techniques and Field Programmable Gate Array (FPGA) technology is their ability to improve the quality of experimental measurements of nuclear radiation. In this article, a compact DMCA 8K was designed and manufactured using DSP techniques based on FPGA technology. In particular, the output of the preamplifier is processed entirely by digital techniques: samples obtained from the analog-to-digital converter (ADC) are used to calculate the baseline, DC offset, energy peaks, pile-up, and threshold discrimination, and then to form the energy spectrum. A Spartan-6 board is used as the hardware platform for the development of the digital multichannel analyzer (DMCA); it is equipped with a 14-bit AD6645 ADC with a 62.5 Msps sample rate. The application software for instrument control, data acquisition, and data processing was written in C++ Builder and communicates via the RS-232 interface. The designed DMCA system has been tested with an HPGe detector using gamma sources of 60Co and 137Cs and a reference pulser.
Keywords: Field Programmable Gate Arrays, Digital Multichannel Analyzer, Integral Nonlinearity, Digital Signal Processing
IN{L}_{DMCA}=\frac{\Delta {Y}_{\mathrm{max}}}{{Y}_{\mathrm{max}}}\times 100\%=\frac{22.354}{7777}=0.28\%
Cite this paper: Quy, D. , Tuan, P. and Dien, N. (2019) Design and Construction of a Digital Multichannel Analyzer for HPGe Detector Using Digital Signal Processing Technique. Journal of Analytical Sciences, Methods and Instrumentation, 9, 22-29. doi: 10.4236/jasmi.2019.92003.
|
Understanding Compound protocol's interest rates | Ian Macalinao
by Ian Macalinao on December 20, 2020
The Compound protocol is an unprecedented technological advancement in the history of finance: for the first time in history, one can borrow money and earn interest with no humans, governments, or credit involved.
Rates vary frequently, though, and it is important to understand how the protocol determines interest rates.
All interest rates in Compound are determined as a function of a metric known as the utilization rate. The utilization rate
U_a
of a money market
a
is defined as:
U_a = Borrows_a / (Cash_a + Borrows_a - Reserves_a)
where
Borrows_a
is the total amount of asset
a
borrowed from the market,
Cash_a
is the amount of
a
currently held by the market, and
Reserves_a
is the amount of
a
that Compound keeps as profit.
For example: given that reserves are 0, if Alice supplies $500 USDC and Bob supplies $500 USDC, but Charles borrows $100 USDC, the total borrows is $100 and the total cash is
500 + 500 - 100 = 900
, so the utilization rate is
100 / (900 + 100) = 10\%
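The definition and the example can be expressed in a few lines; a sketch in Python (mirroring the formula, not the protocol's actual Solidity code):

```python
def utilization(borrows, cash, reserves=0.0):
    """U_a = Borrows_a / (Cash_a + Borrows_a - Reserves_a)"""
    return borrows / (cash + borrows - reserves)

# Alice and Bob supply $500 USDC each; Charles borrows $100,
# which leaves $900 of cash sitting in the market.
print(utilization(borrows=100, cash=900))  # 0.1, i.e. 10%
```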
A high ratio signifies that a lot of borrowing is taking place, so interest rates go up to get more people to inject cash into the system. A low ratio signifies that demand for borrowing is low, so interest rates go down to encourage more people to borrow cash from the system. This follows economic theory's idea of price (the "price" of money is its interest rate) relative to supply and demand.
Borrow and supply rates
The supply rate is calculated as follows:
\text{Supply Interest Rate}_a = \text{Borrowing Interest Rate}_a * U_a * (1 - \text{Reserve Factor}_a)
where
U_a
is the utilization rate of
a
,
\text{Reserve Factor}_a
is the percentage of the spread between the supply and borrow rates that the protocol keeps as profit, and
\text{Borrowing Interest Rate}_a
is the interest rate that borrowers pay for
a
.
The Compound Standard Interest Rate Model
The borrowing rate's calculation depends on something called an interest rate model -- the algorithmic model to determine a money market's borrow and supply rates. In this post, I'll go over the interest rate model used for USDC and most other coins: the Compound Standard Interest Rate Model.
This interest rate model takes in two parameters:
Base rate per year, the minimum borrowing rate
Multiplier per year, the rate of increase in interest rate with respect to utilization
The graph is linear:
\text{Borrow Interest Rate} = \text{Multiplier} * \text{Utilization Rate} + \text{Base Rate}
Example: WBTC rates
The WBTC rate model.
Let's say the protocol has 10,000 WBTC supplied to it and users are borrowing 1,000 WBTC. The utilization rate is thus 10%. What should the rates be?
The cWBTC market uses a model where the base rate is 2% and the multiplier is 30%. To calculate our borrow interest rate:
\text{Borrow Interest Rate} = 30\% * 10\% + 2\% = 5.0\%
Next, let's assume the reserve factor of the WBTC market is 20%. We can now calculate the supply interest rate:
\text{Supply Interest Rate}_a = 5.0\% * 10\% * (1 - 20\%) = 0.4\%
For a sanity check, let's make sure that there is a net positive cash flow; that is, the protocol is not losing money:
\begin{aligned} \text{Supply} * \text{Supply Interest Rate} &\leq \text{Borrows} * \text{Borrow Interest Rate} \\ 10,000 * 0.4\% &\leq 1,000 * 5.0\% \\ 40 &\leq 50 \end{aligned}
You can view the effect of utilization rate on the supply using this interactive graph on the Compound WBTC market's summary page. Note that the numbers are slightly different due to the graph on the website including COMP rewards APY.
Here is the underlying smart contract of the rate model.
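The arithmetic above can be reproduced with a small sketch (Python, mirroring the formulas rather than the Solidity contract):

```python
def borrow_rate(u, base, multiplier):
    # Standard (linear) model: rate = multiplier * utilization + base
    return multiplier * u + base

def supply_rate(u, base, multiplier, reserve_factor):
    # Supply rate = borrow rate * utilization * (1 - reserve factor)
    return borrow_rate(u, base, multiplier) * u * (1 - reserve_factor)

# cWBTC-style parameters from the example: 2% base, 30% multiplier,
# 20% reserve factor, 10% utilization.
u = 0.10
br = borrow_rate(u, base=0.02, multiplier=0.30)
sr = supply_rate(u, base=0.02, multiplier=0.30, reserve_factor=0.20)
print(br, sr)  # ~0.05 (5.0% borrow), ~0.004 (0.4% supply)
```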
Example: Effects of a large borrow of WBTC on interest rates
Let's say that with the previous example, a whale borrows an additional 8,000 WBTC from the protocol, draining most of its cash. What happens to rates?
The new utilization rate is
(8,000+1,000)/10,000 = 90\%
, up 9x from 10%.
\text{Borrow Interest Rate} = 30\% * 90\% + 2\% = 29\%
\text{Supply Interest Rate}_a = 29\% * 90\% * (1 - 20\%) = 20.88\%
An interesting thing to note here is that the supply interest rate increased dramatically (up 5120%), but the borrow rate only increased by 480%. This is because the supply interest rate is proportional to the square of the utilization rate, whereas the borrow interest rate is only linearly proportional to the utilization rate.
The Jump Rate model
Some markets follow what is known as the "Jump Rate" model. This model has the standard parameters of the model above (a base rate and a multiplier),
but it also introduces two new parameters:
Kink, the point in the model in which the model follows the jump multiplier
Jump Multiplier per year, the rate of increase in interest rate with respect to utilization after the "kink"
\begin{aligned} \text{Borrow Interest Rate} &= \text{Multiplier} * min(U_a, \text{Kink}) \\ &+ \text{Jump Multiplier} * max(0, U_a - \text{Kink}) \\ &+ \text{Base Rate} \end{aligned}
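A sketch of the jump-rate formula above. The parameter values in the example calls are read off the worked USDC equation below (multiplier 5%, jump multiplier 109%, kink 80%, base 0%); they are illustrative, not live protocol parameters:

```python
def jump_borrow_rate(u, base, multiplier, jump_multiplier, kink):
    """Piecewise-linear 'jump rate' model: the slope steepens past the kink."""
    return (multiplier * min(u, kink)
            + jump_multiplier * max(0.0, u - kink)
            + base)

# Below the kink the model behaves like the plain linear model...
low = jump_borrow_rate(0.50, base=0.0, multiplier=0.05,
                       jump_multiplier=1.09, kink=0.80)   # 2.5%
# ...above it, the jump multiplier dominates.
high = jump_borrow_rate(0.90, base=0.0, multiplier=0.05,
                        jump_multiplier=1.09, kink=0.80)  # ≈ 14.9%
```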
Example: USDC rate model
The USDC rate model is a jump rate model.
The USDC market also has a reserve factor of 7%.
U_a = \$180M/(\$180M + \$20M) = 90\%
\text{Borrow Interest Rate} = 5\% * 80\% + 109\% * (90\% - 80\%) + 0\% = 14.9\%
\text{Supply Interest Rate}_a = 14.9\% * 90\% * (1 - 7\%) = 12.5\%
In Compound, the interest rate is not locked in at the time of borrowing: it continuously fluctuates based on changes in the utilization rate.
Another way interest rates could spike is if the Chief Economist decides that the interest rates should go up. This has already happened in the case of MakerDAO, where the stability fee has ranged between 0% and 8%. Fortunately, both Compound and MakerDAO have transparent processes when changing interest rates with beautiful governance dashboards and decisions voted on by governance token holders.
This is in stark contrast to the current quarterly speculation on the Fed/FOMC's decisions. Compound's decision making process on the other hand is transparent and decentralized, protecting the interests of financiers (pun intended).3
Predicting accrued interest
One can compute the total interest they will pay on a principal balance
P
for a duration in days
t
with the following equation:
\text{Total Interest} = P(1 + \frac{r}{B_y})^{B_y / 365 * t} - P
where
B_y = 2102400
is the number of blocks in a year and
r
is the expected value of the APR of the interest over the given period.
The number 2,102,400 assumes 15 second blocks.4
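A sketch of the formula above, with the accrued interest taken as the compounded balance minus the principal. The APR is assumed constant over the period, which real utilization-driven rates are not:

```python
BLOCKS_PER_YEAR = 2_102_400            # assumes 15-second blocks

def accrued_interest(principal, apr, days):
    """Interest accrued after `days` of per-block compounding at a fixed APR."""
    blocks = BLOCKS_PER_YEAR * days / 365
    return principal * (1 + apr / BLOCKS_PER_YEAR) ** blocks - principal

# Per-block compounding over a year is nearly continuous compounding:
# 1,000 at 10% APR accrues ~105.17, a bit above the simple-interest 100.
interest = accrued_interest(1_000, 0.10, 365)
```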
Hedging against rate hikes
Since your interest rate isn't locked in at the time of borrowing, you are vulnerable to interest rate changes. It is important to understand how the interest rate model of your selected currency works to ensure that there are no surprise massive interest rate changes.
There are several protocols attempting to build interest rate swaps for Compound, e.g. swaprate.finance and Opium Protocol. Interest rate swaps are essentially a way to lock in a borrowing interest rate (usually higher than spot) for a fixed period of time. This is useful if you want to remove more variables from a trading strategy.
Aave, another pool-based lending protocol, has implemented pseudo-fixed rate swaps using an oracle to determine a likely upper bound for what average interest rates could look like over the term of the loan. This has its own problems though and an independent swap market is likely the best solution to this problem.
Compound is a very powerful building block of the Ethereum DeFi ecosystem. Understanding the ways rates change is important in evaluating the potential performance of any leveraged position.
If you enjoyed this post and/or would like to hear more, please leave a comment below!
Source: Compound whitepaper↩
According to proposal 31, the Reserve Factor is a percentage of the borrowers paid interest which can be used by the governance or act as an insurance against borrower default which protects all the suppliers.↩
Technically, this system is not immune to insider trading. A member of governance can trade before voting. Also since COMP can be borrowed on Compound, one can stake a lot of ETH, borrow COMP, place a large vote, then repay all of the COMP.↩
At the time of writing Ethereum has been producing ~13 second blocks, so all annualized rates in Compound should be multiplied by approximately 15/13. Although this number is hardcoded into the smart contract, the rate model may be updated in the future via governance.↩
JPEG XR[4] (JPEG extended range[5]) is an image compression standard for continuous tone photographic images, based on the HD Photo (formerly Windows Media Photo) specifications that Microsoft originally developed and patented.[6] It supports both lossy and lossless compression, and is the preferred image format for Ecma-388 Open XML Paper Specification documents.
Support for the format was made available in Adobe Flash Player 11.0, Adobe AIR 3.0, Sumatra PDF 2.1, Windows Imaging Component, .NET Framework 3.0, Windows Vista, Windows 7, Windows 8, Internet Explorer 9, Internet Explorer 10, Internet Explorer 11, Pale Moon 27.2.[7][8][9] As of January 2021, there were still no cameras that shoot photos in the JPEG XR (.JXR) format.
Microsoft first announced Windows Media Photo at WinHEC 2006,[10] and then renamed it to HD Photo in November of that year. In July 2007, the Joint Photographic Experts Group and Microsoft announced HD Photo to be under consideration to become a JPEG standard known as JPEG XR.[11][12] On 16 March 2009, JPEG XR was given final approval as ITU-T Recommendation T.832 and starting in April 2009, it became available from the ITU-T in "pre-published" form.[1] On 19 June 2009, it passed an ISO/IEC Final Draft International Standard (FDIS) ballot, resulting in final approval as International Standard ISO/IEC 29199-2.[13][14] The ITU-T updated its publication with a corrigendum approved in December 2009,[1] and ISO/IEC issued a new edition with similar corrections on 30 September 2010.[15]
For support of images using an RGB color space, JPEG XR includes an internal conversion to the YCoCg color space, and supports a variety of bit depth and color representation packing schemes. These can be used with and without an accompanying alpha channel for shape masking and semi-transparency support, and some of them have much higher precision than what has typically been used for image coding. They include:
The shared-exponent floating point color format known as RGBE (Radiance) is also supported, enabling more faithful storage of high-dynamic-range (HDR) images.
Being TIFF-based, this format inherits all of the limitations of the TIFF format including the 4 GB file-size limit, which according to the HD Photo specification "will be addressed in a future update".[18]
JFIF and other typical image encoding practices specify a linear transformation from RGB to YCbCr, which is slightly lossy in practice because of roundoff error. JPEG XR specifies a lossless colorspace transformation, namely YCoCg-R,[21][22] given (for RGB) by:[23]
{\displaystyle V=B-R\,}
{\displaystyle U=G-R-\left\lceil {\frac {V}{2}}\right\rceil }
{\displaystyle Y=G-\left\lceil {\frac {U}{2}}\right\rceil }
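As a sketch, the three equations above can be checked for exact invertibility, the property that makes the transform suitable for lossless coding. The inverse below is derived by solving the forward equations in reverse order; it is an illustration, not the specification's pseudocode:

```python
def ceil_half(x):
    # Integer ceiling of x/2, exact for negative values too.
    return -((-x) // 2)

def forward(r, g, b):
    # The transform as given: V = B - R, U = G - R - ceil(V/2), Y = G - ceil(U/2)
    v = b - r
    u = g - r - ceil_half(v)
    y = g - ceil_half(u)
    return y, u, v

def inverse(y, u, v):
    # Solve the forward equations in reverse order.
    g = y + ceil_half(u)
    r = g - u - ceil_half(v)
    b = v + r
    return r, g, b

# A sample of 8-bit RGB triples must survive a round trip unchanged.
assert all(inverse(*forward(r, g, b)) == (r, g, b)
           for r in range(0, 256, 17)
           for g in range(0, 256, 17)
           for b in range(0, 256, 17))
```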
The DCT, the frequency transformation used by JPEG, is slightly lossy because of roundoff error. JPEG XR uses a type of integer transform employing a lifting scheme.[25] The required transform, called the Photo Core Transform (PCT), resembles a 4 × 4 DCT but is lossless (exactly invertible). In fact, it is a particular realization of a larger family of binary-friendly multiplier-less transforms called the binDCT.[26]
JPEG XR allows an optional overlap prefiltering step, called the Photo Overlap Transform (POT), before each of its 4 × 4 core transform PCT stages.[25] The filter operates on 4 × 4 blocks which are offset by 2 samples in each direction from the 4 × 4 core transform blocks. Its purpose is to improve compression capability and reduce block-boundary artifacts at low bitrates. At high bitrates, where such artifacts are typically not a problem, the prefiltering can be omitted to reduce encoding and decoding time. The overlap filtering is constructed using integer operations following a lifting scheme, so that it is also lossless. When appropriately combined, the POT and the PCT in JPEG-XR form a lapped transform.[27]
Software | Developer | Reads | Writes | Ref.
Capture One 7 or later | Phase One | Yes | Yes |
Corel Paint Shop Pro X2 or later | Corel | Yes | Yes | [28]
Paint.NET | Rick Brewster | Yes | Yes | [35]
Pale Moon (web browser) | Moonchild Productions | Yes | N/A | [36]
Zoner Photo Studio 13 or later | Zoner Software | Yes | Yes |
IrfanView 4.25 and later (HDP plug-in since version 4.26) | Irfan Skiljan | Yes | No | [43]
The 2011 video game Rage employs JPEG XR compression to compress its textures.[49]
AVIF, a compression format by Google, Mozilla and others in a group called the Alliance for Open Media[55]
JPEG, an image format used for lossy compression (JPEG XR lossy is comparable with it.)
JPEG XS, format for image and video with very low latency, more efficient for streaming high quality video
JPEG XL, a royalty-free raster-graphics file format that supports both lossy and lossless compression. It is designed to outperform existing raster formats and thus to become their universal replacement.
WebP, a format with lossy or lossless compression, proposed by Google in 2010
HEIF, a 2015 format based on MPEG-H Part 12 (ISO/IEC 23008-12) and HEVC. Implemented by Apple as the basis for their single-image format .HEIC on iPhone 7.
^ a b c d "Recommendation T.832 (06/2019): Information technology - JPEG XR image coding system - Image coding specification". International Telecommunication Union - Standardization sector (ITU-T). June 2019. Retrieved 3 March 2020.
^ a b "Microsoft Device Porting Kit Specification". Microsoft Corporation. 7 November 2006. Retrieved 8 November 2009.
^ "Provisional Standard Media Type Registry". IANA. 12 December 2014. Retrieved 12 January 2015.
^ Bill, Crow (17 November 2006). "Introducing HD Photo". Bill Crow's Digital Imaging & Photography Blog. Microsoft.
^ Bill, Crow (31 July 2007). "Industry Standardization for HD Photo". Bill Crow's Digital Imaging & Photography Blog. Microsoft.
^ "HD Photo, Version 1.0 (Windows Media Photo)". Digital Preservation. Library of Congress. 19 February 2008.
^ matthewu (31 January 2014). "Readme". jxrlib repo. Retrieved 15 March 2014 – via CodePlex. The JPEG XR format replaces the HD Photo/Windows Media™ Photo format in both Windows 8 and the Windows Image Component (WIC). WIC accompanies the Internet Explorer 10 redistributable packages for down-level versions of Windows.
^ "Platform update for Windows 7 Service Pack 1 (SP1) and Windows Server 2008 R2 SP1". Microsoft Support. 26 February 2013. Retrieved 3 June 2021.
^ "Pale Moon Release Notes". Moonchild Productions.
^ Microsoft shows off JPEG rival
^ "Microsoft's HD Photo Technology Is Considered for Standardization by JPEG". Microsoft Corporation. 31 July 2007. Archived from the original on 8 August 2010. Retrieved 31 July 2007.
^ "JPEG 2000 Digital Cinema Successes and Proposed Standardization of JPEG XR". Joint Photographic Experts Group. 6 July 2007. Archived from the original on 17 March 2009. Retrieved 31 July 2009.
^ a b Sharpe, Louis (17 July 2009). "Press Release – 49th WG1 Sardinia Meeting". Joint Photographic Experts Group. Archived from the original on 1 September 2009. Retrieved 24 October 2009.
^ "ISO/IEC 29199-2:2009 Information technology - JPEG XR image coding system - Part 2: Image coding specification". International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). 14 August 2009. Retrieved 18 December 2009.
^ "ISO/IEC 29199-2:2010 Information technology - JPEG XR image coding system - Part 2: Image coding specification". International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). 30 September 2010. Retrieved 18 December 2010.
^ Bill, Crow (30 July 2009). "JPEG XR is Now an International Standard". Microsoft Developer Network blogs, Bill Crow's blog. Microsoft Corporation. Retrieved 24 October 2009.
^ Crow, Bill (1 June 2006). "Pixel Formats (Part 1: Unsigned Integers)". Bill Crow's Digital Imaging & Photography Blog. Microsoft Developer Network. Retrieved 26 October 2009.
^ "Windows Media Photo Specification". Microsoft. Retrieved 5 October 2016.
^ "JPEG launches Innovations group, new book " JPEG 2000 Suite " published". jpeg.org. 19 March 2010. Archived from the original on 25 September 2010.
^ S. Srinivasan, C. Tu, S. L. Regunathan, and G. J. Sullivan, "HD Photo: A New Image Coding Technology for Digital Photography", SPIE Applications of Digital Image Processing XXX, SPIE Proceedings, volume 6696, paper 66960A, September 2007.
^ "Analysis of BSDL-based content adaptation for JPEG 2000 and HD Photo (JPEG XR)". 8 September 2010. p. 7.
^ "JPEG XR - Microsoft Research". Microsoft.
^ "Recommendation T.832 (06/2019)". p. 185 Table D.6 – Pseudocode for function FwdColorFmtConvert1().
^ "JPEG XR Device Porting Kit Specification". JPEGXR_DPK_Spec_1.0.doc. Microsoft. 2013. Retrieved 15 March 2014.
^ a b C. Tu, S. Srinivasan, G. J. Sullivan, S. Regunathan, and H. S. Malvar, "Low-complexity Hierarchical Lapped Transform for Lossy-to-Lossless Image Coding in JPEG XR / HD Photo", SPIE Applications of Digital Image Processing XXXI, SPIE Proceedings, volume 7073, paper 70730C, August 2008.
^ Liang, Jie; Trac D. Tran (2001). "Fast multiplierless approximations of the DCT with the lifting scheme". IEEE Transactions on Signal Processing. 49 (12): 3032–3044. Bibcode:2001ITSP...49.3032L. CiteSeerX 10.1.1.7.4480. doi:10.1109/78.969511.
^ Tran, Trac D.; Jie Liang; Chengjie Tu (2003). "Lapped transform via time-domain pre- and post-filtering". IEEE Transactions on Signal Processing. 51 (6): 1557–1571. Bibcode:2003ITSP...51.1557T. CiteSeerX 10.1.1.7.8314. doi:10.1109/TSP.2003.811222.
^ "Corel Paint Shop Pro® Photo X2 Introduces Integrated Support for the Microsoft HD Photo Format". 20 November 2007. Retrieved 14 July 2011.
^ "FastPictureViewer's format compatibility chart".
^ "ImageMagick Image Formatssite". ImageMagick Studio LLC. Retrieved 6 May 2013.
^ "Image Support". Microsoft Corporation. 2010. Archived from the original on 12 April 2010. Retrieved 29 May 2010.
^ Olivier, Frank (9 April 2010). "Benefits of GPU-powered HTML5". Microsoft Corporation. Retrieved 29 May 2010.
^ Crow, Bill (27 March 2007). "Expression Design Includes HD Photo Support". Microsoft Corporation. Retrieved 1 June 2010.
^ "Microsoft Research Image Composite Editor". Microsoft Research. Retrieved 9 March 2011.
^ "paint.net 4.2.1 is now available!". 7 August 2019. Retrieved 8 August 2019.
^ "Pale Moon 27.2 released!". Retrieved 18 March 2017.
^ "Advanced Features: HD Photo import". Xara Group. Retrieved 10 September 2010.
^ Gougelet, Pierre E. "Formats". Retrieved 10 September 2010.
^ Gougelet, Pierre E. "Added/Changed Features to XnView". Retrieved 11 May 2011.
^ "HD Photo Plug-ins for Photoshop are Released". Bill Crow's Digital Imaging & Photography Blog. MSDN Blogs. 6 December 2007. Retrieved 6 December 2007.
^ "JPEG XR File Format Plug-in for Photoshop". Microsoft Research. 30 January 2013. Retrieved 14 April 2013.
^ "chausner/gimp-jxr". GitHub. Retrieved 29 March 2018.
^ "IrfanView PlugIns". www.irfanview.com. Retrieved 29 March 2018.
^ "CodePlex Archive". CodePlex Archive. Retrieved 29 March 2018.
^ a b "Flash Player 11 and AIR 3 Release Notes for Adobe Labs" (PDF). 12 July 2011. Archived from the original (PDF) on 14 July 2011. Retrieved 14 July 2011.
^ Product Brief: Intel Integrated Performance Primitives 7.0, 2010.
^ JPEG XR Codec support in Intel IPP - an Introduction, features and advantages, 23 August 2010.
^ Carmack, John (29 October 2010). "John Carmack discusses RAGE on iPhone/iPad/iPod". Bethesda Blog. ZeniMax Media Inc. Retrieved 8 March 2011.
^ Stephen Shankland (23 January 2007). "Vista to give HD Photo format more exposure". CNET. Retrieved 9 March 2007.
^ a b "Microsoft Community Promise". Microsoft. Retrieved 16 July 2011.
^ "JPEG XR Photoshop Plugin and Source Code". Microsoft. 11 April 2013. Retrieved 6 July 2013.
^ "jxrlib JPEG-XR library". Microsoft. 1 April 2013. Retrieved 16 April 2013.
^ "HD Photo Device Porting Kit 1.0". Microsoft. 21 December 2006. Archived from the original on 7 February 2013. Retrieved 9 August 2007.
^ "Apple wants to shrink your photos, but a new format from Google and Mozilla could go even farther". CNET. 19 January 2018. Retrieved 1 February 2018.
"Download: HD Photo Feature Spec 1.0". Microsoft Download Center. Microsoft. 16 November 2006. Archived from the original (DOC) on 8 March 2012. Retrieved 19 March 2012.
"Download: Windows Imaging Component". Microsoft Download Center. Microsoft. 23 November 2009. Retrieved 19 March 2012.
"JPEG XR WIC Codec Overview". 3 February 2012. Retrieved 19 March 2012.
"JPEG XR Photoshop Plugin and Source Code". 11 April 2013. Retrieved 16 April 2013.
"JPEG XR Plug-in v1.1 for Photoshop (Windows)". Microsoft Research. 7 June 2013.
Joris Evers (24 May 2006). "Microsoft shows off JPEG rival". CNET. Retrieved 7 April 2016.
Retrieved from "https://en.wikipedia.org/w/index.php?title=JPEG_XR&oldid=1076511294"
Brick - Ring of Brodgar
Skill(s) Required None
Object(s) Required Any Clay
Required By Brickwall, Coade Clay, Crucible, Finery Forge, Mine Hole, Ore Smelter, Oven, Potter's Clay, Smoke Shed, Steel Crucible
Stockpile Brick (80)
Brick is a material commonly required for industrial constructions, such as Ore Smelters and Ovens. You must discover bricks by removing them from a Kiln before you are able to build those structures.
Bricks are produced by firing Clay in a Kiln. All types of clay can be fired, with these effects:
Ball Clay turns into
Acre Clay turns into
Gray Clay turns into
Cave Clay turns into
Bone Clay turns into
Pit Clay turns into
Potter's Clay turns into
Soap Clay turns into
Coade Clay turns into
A Kiln requires fuel equal to two branches to turn clay (excluding Coade Clay) into bricks, a process lasting approximately 9 minutes.
Brick Quality =
{\displaystyle {\frac {2\cdot {}_{q}Clay+{}_{q}Fuel+{}_{q}Kiln}{4}}}
Coade Clay seems to require notably more fuel than other clay types, reaching only 8% progression with two branches; the exact fuel requirement still needs to be verified.
Early testing shows about 12 coal (24 branches) are required to turn Coade Clay into Coade Stone, which takes about 1 hr 30 min of real time (feel free to corroborate/edit these values).
Bumble Pyre (2021-05-16) >"Added "Coade Stone", a highly specific clay/ceramic which, in its brick form, can be used as stone."
World 13 (2021-04-02) >"Acre- & Ball-brick colors where swapped. HnH Topic: Acre and Ball clay brick colors swtiched. (Apr 06, 2021)"
Retrieved from "https://ringofbrodgar.com/w/index.php?title=Brick&oldid=92791"
Matrix function - Simple English Wikipedia, the free encyclopedia
Function that maps matrices to matrices
In mathematics, a function maps an input value to an output value. In the case of a matrix function, the input and the output values are matrices. One example of a matrix function occurs with the Algebraic Riccati equation, which is used to solve certain optimal control problems.
Matrix functions are special functions whose input and output are matrices.[1]
Most functions like
{\displaystyle \exp(x)}
are defined as a solution of a differential equation.[2] But matrix functions are defined in a different way. Suppose
{\displaystyle z}
is a number and
{\displaystyle A}
is a square matrix. If you have a polynomial:
{\displaystyle f(z):=c_{0}+c_{1}z+\cdots +c_{m}z^{m}}
then it is reasonable to define
{\displaystyle f(A):=c_{0}I+c_{1}A+\cdots +c_{m}A^{m}.}
Let's use this idea. When you have
{\displaystyle f(z):=\sum _{k=0}^{\infty }c_{k}z^{k}}
then you can introduce
{\displaystyle f(A):=\sum _{k=0}^{\infty }c_{k}A^{k}.}
For example, the matrix version of the exponential function and the trigonometric functions are defined as follows:[1]
{\displaystyle \exp A:=\sum _{k=0}^{\infty }{\frac {1}{k!}}A^{k},}
{\displaystyle \sin A:=\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{(2k+1)!}}A^{2k+1},\quad \cos A:=\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{(2k)!}}A^{2k}.}
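As a sketch of the series definition, one can truncate the sum for exp A on a 2 × 2 example and compare with a known closed form: the exponential of the rotation generator [[0, −θ], [θ, 0]] is the rotation matrix [[cos θ, −sin θ], [sin θ, cos θ]]. Plain lists are used to stay dependency-free:

```python
import math

def mat_mul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(a, terms=30):
    """exp(A) via the truncated power series sum_k A^k / k!."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # k = 0 term: the identity
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, a)
        fact *= k
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] / fact
    return result

theta = 0.5
e = mat_exp([[0.0, -theta], [theta, 0.0]])
# e approximates [[cos θ, -sin θ], [sin θ, cos θ]]
```

In practice, libraries use more robust algorithms (scaling-and-squaring with Padé approximants) rather than the raw series, which is why "how to compute matrix functions" is a research topic in its own right.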
Matrix functions are used in numerical methods for ordinary differential equations[3][4][5] and in statistics.[1][6] This is why numerical analysts study how to compute them.[1] For example, the following functions are studied:
Matrix exponential[7][8][9][10][11]
Root of a matrix[12]
Matrix cosine and sine[13]
Logarithm of a matrix[14]
Validated numerics for the functions above[15][16][17]
Matrix version of the gamma function[18]
↑ 1.0 1.1 1.2 1.3 Higham, Nicholas J. (2008). Functions of matrices theory and computation. Philadelphia: Society for Industrial and Applied Mathematics.
↑ Andrews, G. E., Askey, R., & Roy, R. (1999). Special functions (Vol. 71). Cambridge University Press.
↑ Del Buono, N., & Lopez, L. (2003, June). A survey on methods for computing matrix exponentials in numerical schemes for ODEs. In International Conference on Computational Science (pp. 111-120). Springer, Berlin, Heidelberg.
↑ James, A. T. (1975). Special functions of matrix and single argument in statistics. In Theory and Application of Special Functions (pp. 497-520). Academic Press.
↑ Moler, C., & Van Loan, C. (1978). Nineteen dubious ways to compute the exponential of a matrix. SIAM review, 20(4), 801-836.
↑ Moler, C., & Van Loan, C. (2003). Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM review, 45(1), 3-49.
↑ Higham, N. J. (2005). The scaling and squaring method for the matrix exponential revisited. SIAM Journal on Matrix Analysis and Applications, 26(4), 1179-1193.
↑ Sidje, R. B. (1998). Expokit: A software package for computing matrix exponentials. ACM Transactions on Mathematical Software (TOMS), 24(1), 130-156.
↑ Yuka Hashimoto,Takashi Nodera, Double-shift-invert Arnoldi method for computing the matrix exponential, Japan J. Indust. Appl. Math, pp727-738, 2018.
↑ Hale, N., Higham, N. J., & Trefethen, L. N. (2008). Computing {\displaystyle A^{\alpha }}, {\displaystyle \log(A)}, and related matrix functions by contour integrals. SIAM Journal on Numerical Analysis, 46(5), 2505-2523.
↑ Joao R. Cardoso, Amir Sadeghi, Computation of matrix gamma function, BIT Numerical Mathematics, (2019)
A Survey of the Matrix Exponential Formulae with Some Applications (2016), Baoying Zheng, Lin Zhang, Minhyung Cho, and Junde Wu. J. Math. Study Vol. 49, No. 4, pp. 393-428.
Higham, N. J. (2006). Functions of matrices. Manchester Institute for Mathematical Sciences, School of Mathematics, The University of Manchester.
Retrieved from "https://simple.wikipedia.org/w/index.php?title=Matrix_function&oldid=7204192"
Stokes' theorem is a generalization of Green's theorem to higher dimensions. While Green's theorem equates a two-dimensional area integral with a corresponding line integral, Stokes' theorem takes an integral over an n-dimensional area and reduces it to an integral over an (n-1)-dimensional boundary, including the 1-dimensional case, where it is called the Fundamental Theorem of Calculus. This allows a proof by induction.
In many applications, "Stokes' theorem" is used to refer specifically to the classical Stokes' theorem, namely the case n = 3, which equates an integral over a two-dimensional surface (embedded in \mathbb{R}^3) with an integral over its one-dimensional boundary curve. This article follows that convention and focuses on the classical Stokes' theorem. A discussion of the generalized theorem is left to the references at the end of this article.
Suppose some surface S is bounded by a closed path C. Consider the line integral of a vector field \mathbf{F} taken about C,

\int_C \mathbf{F} \cdot d\mathbf{s},

known as the circulation of \mathbf{F} around C.
If the surface is divided into two regions, each bounded by a closed path, it is easy to see that the sum of the circulations about the two paths is the same as the circulation about the undivided path C. No matter where the "cut" is made to divide up the surface, the line integral over the boundary between the divided regions is traversed in opposite directions in the calculation of each circulation, so those contributions cancel. Meanwhile, the total line integral over the paths that were originally part of C is unchanged.
By the same reasoning, one can conjecture that the total circulation never changes, no matter how many divisions of the surface are made. For many closed paths C_i,

\int_C \mathbf{F} \cdot d\mathbf{s} = \sum_i \int_{C_i} \mathbf{F} \cdot d\mathbf{s}.
Clearly, the circulation of each patch depends to some extent on its size. One can continue to divide the surface indefinitely, so that the area a_i of each patch becomes arbitrarily small. As the area of each patch vanishes to a point, one obtains a local vector-valued property at each point on the surface called the curl, denoted \nabla \times \mathbf{F}:

(\nabla \times \mathbf{F}) \cdot \hat{\mathbf{n}} = \lim_{a_i \rightarrow 0} \left( \frac{1}{a_i} \int_{C_i} \mathbf{F} \cdot d\mathbf{s} \right).
The dot product with the unit normal vector \hat{\mathbf{n}} is taken so that one only has to be concerned with the normal component of the curl. In the limit of infinitely fine subdivision, the sum of the patch circulations (\nabla \times \mathbf{F}) \cdot \hat{\mathbf{n}} \, a_i becomes the surface integral

\int_S \nabla \times \mathbf{F} \cdot d\mathbf{a}.

But the sum over all of the circulations is simply the circulation over the boundary of the entire surface,

\int_C \mathbf{F} \cdot d\mathbf{s}.

Equating the two yields Stokes' theorem:

\boxed{\displaystyle\int_C \mathbf{F} \cdot d\mathbf{s} = \displaystyle\int_S \nabla \times \mathbf{F} \cdot d\mathbf{a}.}
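As a numerical sanity check of the boxed identity (a sketch, not part of the argument), take \mathbf{F} = (-y, x, 0) over the unit disk in the xy-plane. The curl is (0, 0, 2), so the surface integral is 2π, and a Riemann-sum line integral around the unit circle should agree:

```python
import math

N = 100_000
dt = 2 * math.pi / N

# Line integral of F = (-y, x, 0) around x = cos t, y = sin t.
line = 0.0
for k in range(N):
    t = k * dt
    x, y = math.cos(t), math.sin(t)
    fx, fy = -y, x
    dx, dy = -math.sin(t) * dt, math.cos(t) * dt   # components of ds
    line += fx * dx + fy * dy

surface = 2 * math.pi   # curl_z = 2 integrated over a disk of area pi
# line ≈ surface, as Stokes' theorem demands
```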
Suppose one has such a patch whose lower left corner is the point (x, y). To first order, the line integral of a vector field

\mathbf{F} = F_x \hat{\mathbf{x}} + F_y \hat{\mathbf{y}} + F_z \hat{\mathbf{z}}

over a horizontal path of length \Delta x is F_x \Delta x, and over a vertical path of length \Delta y it is F_y \Delta y; displacing the vertical path horizontally by \Delta x changes F_y, to first order, by (\partial F_y/\partial x) \Delta x, and likewise for the horizontal path displaced vertically. Thus, if one moves in a counterclockwise direction, traversing first \Delta x horizontally, \Delta y vertically, -\Delta x horizontally, and finally -\Delta y vertically, one finds that the line integral over the boundary of the patch reduces to

\left( \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right) \Delta x \Delta y.
The same computation done over all three directions x, y, and z yields

\nabla \times \mathbf{F} = \left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z} \right) \hat{\mathbf{x}} + \left( \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right) \hat{\mathbf{y}} + \left( \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right) \hat{\mathbf{z}},

which is the expression for the curl in Cartesian coordinates. In matrix notation, the curl of \mathbf F can also be written as the formal determinant

\nabla \times \mathbf F = \begin{vmatrix} \hat{\mathbf x} & \hat{\mathbf y} & \hat{\mathbf z}\\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z}\\ F_x & F_y & F_z\\ \end{vmatrix}
The reason for the choice of notation \nabla \times lies in a slight abuse of notation. One can "define" \nabla as the following quantity:

\nabla = \frac{\partial}{\partial x} \hat{\mathbf{x}} + \frac{\partial}{\partial y} \hat{\mathbf{y}} + \frac{\partial}{\partial z} \hat{\mathbf{z}},

in which case the curl can be computed by taking the "cross product" of \nabla with \mathbf{F}.
Find the curl of

\mathbf{F}(x,y,z) = (x+y) \hat{\mathbf{x}} + (y+z) \hat{\mathbf{y}} + (x+z) \hat{\mathbf{z}}.

Applying the formula componentwise,

\nabla \times \mathbf{F} = (0 - 1) \hat{\mathbf{x}} + (0 - 1) \hat{\mathbf{y}} + (0 - 1) \hat{\mathbf{z}} = -(\hat{\mathbf{x}} + \hat{\mathbf{y}} + \hat{\mathbf{z}}).
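The hand computation above can be spot-checked with central finite differences (a numerical sketch; the evaluation point and step size are arbitrary choices for illustration):

```python
def F(x, y, z):
    # The field from the example: (x + y, y + z, x + z)
    return (x + y, y + z, x + z)

def curl_fd(F, x, y, z, h=1e-5):
    """Central-difference approximation of the curl of F at (x, y, z)."""
    def d(i, j):
        # dF_i / dx_j
        args_p = [x, y, z]
        args_m = [x, y, z]
        args_p[j] += h
        args_m[j] -= h
        return (F(*args_p)[i] - F(*args_m)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2),   # dFz/dy - dFy/dz
            d(0, 2) - d(2, 0),   # dFx/dz - dFz/dx
            d(1, 0) - d(0, 1))   # dFy/dx - dFx/dy

c = curl_fd(F, 0.3, -1.2, 2.5)
# c ≈ (-1, -1, -1) at any point, matching the hand computation
```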
Find the curl of

\mathbf{F}(x, y) = - \frac{y}{x^2 + y^2} \hat{\mathbf{x}} + \frac{x}{x^2 + y^2} \hat{\mathbf{y}}.

By inspection, the curl is zero in the \hat{\mathbf{x}} and \hat{\mathbf{y}} components. It remains to compute the \hat{\mathbf{z}} component:

\left( \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right) \hat{\mathbf{z}} = \left(\frac{y^2 - x^2}{(x^2 + y^2)^2} - \frac{y^2 - x^2}{(x^2 + y^2)^2} \right) \hat{\mathbf{z}} = 0.

Thus \nabla \times \mathbf{F} = 0 wherever \mathbf{F} is defined. (In general, for a two-dimensional field, it suffices to check whether \partial F_y/\partial x = \partial F_x/\partial y.)
Let C be a counterclockwise circular path of radius R in the xy-plane, centered about the origin, and let \mathbf{F} be the field defined by

\mathbf{F}(x, y) = - \frac{y}{x^2 + y^2} \hat{\mathbf{x}} + \frac{x}{x^2 + y^2} \hat{\mathbf{y}}.

Compute

\displaystyle\int_C \mathbf{F} \cdot d\mathbf{s}.

It is tempting to apply Stokes' theorem,

\displaystyle\int_C \mathbf{F} \cdot d\mathbf{s} = \displaystyle\int_S \nabla \times \mathbf{F} \cdot d\mathbf{a},

and conclude from the previous example that, since \nabla \times \mathbf{F} = 0, the integral evaluates to zero. Direct computation tells a different story: parameterizing C as (R \cos t, R \sin t) gives \mathbf{F} \cdot d\mathbf{s} = dt, so the line integral equals 2\pi regardless of R. There is no contradiction, because \mathbf{F} is undefined at the origin, which lies on every surface bounded by C, so Stokes' theorem does not apply.
Ampère's law states that the line integral over the magnetic field \mathbf{B} is proportional to the total current I_\text{encl} that passes through the path over which the integral is taken:

\int_{\text{loop}} \mathbf{B} \cdot d\mathbf{s} = \mu_0 I_\text{encl}.

Applying Stokes' theorem to the left-hand side,

\int_\text{loop} \mathbf{B} \cdot d\mathbf{s} = \int_\text{surface} \nabla \times \mathbf{B} \cdot d\mathbf{a}.

Writing the enclosed current as the flux of the current density \mathbf{J} through the surface,

\int_\text{loop} \mathbf{B} \cdot d\mathbf{s} = \mu_0 I_\text{encl} = \mu_0 \int_\text{surface} \mathbf{J} \cdot d\mathbf{a}.

Since this holds for any loop and any surface it bounds, the integrands must be equal:

\nabla \times \mathbf{B} = \mu_0 \mathbf{J}.
Similarly, Faraday's law states the following relationship between the electric field \mathbf{E} and the magnetic field \mathbf{B}, which varies in time t:

\int_\text{loop} \mathbf{E} \cdot d\mathbf{s} = - \frac{d}{dt} \int_S \mathbf{B} \cdot d\mathbf{a}.

Applying Stokes' theorem to the left-hand side,

\int_S \nabla \times \mathbf{E} \cdot d\mathbf{a} = - \frac{d}{dt} \int_S \mathbf{B} \cdot d\mathbf{a}.

Again, one argues that since the relationship must hold true for any arbitrary surface S,

\nabla \times \mathbf{E} = -\frac{d\mathbf{B}}{dt}.
NauruGraph - Maple Help
construct Nauru graph
The NauruGraph() command returns the Nauru graph, a symmetric bipartite cubic graph with 24 vertices and 36 edges.
It is named after the nation of Nauru, the flag of which features a twelve-pointed star.
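The same properties can be checked outside Maple. The Nauru graph is the generalized Petersen graph GP(12, 5), so a dependency-free sketch can construct it and verify the vertex/edge counts, 3-regularity, and bipartiteness stated above:

```python
from collections import deque

# Generalized Petersen graph GP(12, 5): an outer 12-cycle on u_i, inner
# vertices v_i joined as v_i -- v_{i+5}, plus spokes u_i -- v_i.
n, k = 12, 5
edges = set()
for i in range(n):
    edges.add(frozenset({("u", i), ("u", (i + 1) % n)}))   # outer cycle
    edges.add(frozenset({("v", i), ("v", (i + k) % n)}))   # inner star
    edges.add(frozenset({("u", i), ("v", i)}))             # spokes

adj = {}
for e in edges:
    a, b = tuple(e)
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def is_bipartite(adj):
    """BFS 2-coloring; succeeds iff the graph has no odd cycle."""
    color, queue = {("u", 0): 0}, deque([("u", 0)])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in color:
                color[y] = 1 - color[x]
                queue.append(y)
            elif color[y] == color[x]:
                return False
    return len(color) == len(adj)

# Expect: 24 vertices, 36 edges, every vertex of degree 3, bipartite.
```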
\mathrm{with}\left(\mathrm{GraphTheory}\right):
\mathrm{with}\left(\mathrm{SpecialGraphs}\right):
G≔\mathrm{NauruGraph}\left(\right)
G ≔ Graph 1: an undirected unweighted graph with 24 vertices and 36 edge(s)
\mathrm{IsBipartite}\left(G\right)
true
\mathrm{DrawGraph}\left(G\right)
"Nauru graph", Wikipedia. http://en.wikipedia.org/wiki/Nauru_graph
The GraphTheory[SpecialGraphs][NauruGraph] command was introduced in Maple 2018.
Kiln - Ring of Brodgar
Object(s) Required Clay x45
Required By Ashes, Bone Ash, Brick, Clay Jar, Clay Pipe, Earthenware Platter, Fishwrap, Fruitroast, Garden Pot, Hand Impression, Laurel-Crowned Roast, Malted Barley, Malted Wheat, Mug, Mushroom-Burst Glutton, Nutjerky, Porcelain Plate, Stoneware Vase, Teapot, Toy Chariot, Treeplanter's Pot, Urn
Repaired With Clay
Build > Buildings & Construction > Furnaces & Fireplaces > Kiln
A kiln is required mainly to fire clay goods into ceramics. It will hold up to 25 (5 x 5) unburnt items.
Keep in mind that you need certain ceramics for prospecting, cheesemaking, masonry and planting trees, and other kiln products to make tea and glass and to process iron.
Pottery skill
A 3x3 paved (see Lay Stone in Terraforming) or grassland area to place it
Plow a 3x3 area.
Lay Stone on the plowed area.
Acquire clay. Keep in mind that the kiln has a quality value, determined as the average of the used clay quality.
Build -> Buildings & Construction -> Furnaces and Fireplaces -> Kiln to place the construction site.
When built, the Kiln will need fuel for the fire; constructing your Kiln near trees is a good idea.
Many Kiln products require further clay. As clay quality may vary within a short travel distance, both building a permanent settlement next to a high-quality clay node and running clay-digging trips are worth considering.
The quality of the kiln is the average quality of the clay used to make it, and is not softcapped.
The quality of a kiln's products is determined by a weighted average:
Product Quality =
{\displaystyle {\frac {2\cdot {}_{q}UnburntProduct+{}_{q}Fuel+{}_{q}Kiln}{4}}}
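A sketch of that weighted average as a small helper (the quality values in the example are illustrative, not game data):

```python
def product_quality(q_product, q_fuel, q_kiln):
    """Kiln output quality: the unburnt product counts double in the average."""
    return (2 * q_product + q_fuel + q_kiln) / 4

# Firing a q40 unburnt pot with q10 branches in a q30 kiln:
q = product_quality(40, 10, 30)   # (80 + 10 + 30) / 4 = 30.0
```

Because the product's own quality is weighted twice, raising the input item's quality pays off more than raising fuel or kiln quality.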
A Kiln must be loaded with fuel, which burns away when the kiln is lit. To add this fuel:
Left-click fuel, e.g. a Branch in your inventory.
Right-click the Kiln on the map. (Hold down Shift to avoid repeating step one.)
Continue adding fuel until you have as much as you'll need. Different items require different amounts of fuel: bricks require two ticks (branches), and most other things require more.
Right-click the Kiln to see the Kiln's inventory and fuel indicator (the bar on the right).
Within this inventory you may place unburnt objects which you wish to fire in the kiln. These include:
Objects like Pot, Treeplanter's Pot, Urn, Clay Jar, Toy Chariot
Raw clay, which becomes colored bricks
Bones , which become Bone Ash
The amount of objects in the kiln has no effect on the amount of fuel needed, but the type of objects may require different firing times.
Product | Real Time | In-Game Time | Fuel (branches) | Notes
Ashes | 0:54:43 | 3:00:01 | 12 | made from Pitbaked Goods
Ashes | 0:13:41 | 0:45:00 | 3 | made from Board
Ashes | 0:36:29 | 2:00:00 | 8 | made from Block of Wood
Bone Ash | 0:25:12 | 1:22:54 | 6 |
Branding Iron | 0:04:33 | 0:14:58 | 1 | warm up
Brick | 0:08:58 | 0:29:30 | 2 | Coade Clay excluded
Brick | 1:49:25 | 6:00:00 | 23 | made from Coade Clay
Clay Jar | 0:54:33 | 2:59:28 | 12 |
Clay Pipe | 0:21:07 | 1:09:28 | 5 |
Earthenware Platter | 0:36:18 | 1:59:26 | 8 |
Fishwrap | 0:18:07 | 0:59:36 | 4 |
Fruitroast | 0:18:07 | 0:59:36 | 4 |
Garden Pot | 1:49:25 | 6:00:00 | 23 |
Hand Impression | 0:21:10 | 1:09:38 | 5 |
Malted Barley | 0:04:33 | 0:14:58 | 1 |
Malted Wheat | 0:04:33 | 0:14:58 | 1 |
Mushroom-Burst Glutton | 0:18:07 | 0:59:36 | 4 |
Mug | 0:54:33 | 2:59:28 | 12 |
Nutjerky | 0:18:07 | 0:59:36 | 4 |
Pot | 1:49:25 | 6:00:00 | 23 |
Porcelain Plate | 0:36:18 | 1:59:26 | 8 |
Stoneware Vase | 0:36:18 | 1:59:26 | 8 |
Stuffed Bird | 0:18:07 | 0:59:36 | 4 |
Teapot | 0:54:33 | 2:59:28 | 12 |
Toy Chariot | 0:21:10 | 1:09:38 | 5 |
Treeplanter's Pot | 0:36:18 | 1:59:26 | 8 |
Urn | 1:49:25 | 6:00:00 | 23 |
Kilns made from every kind of clay.
Merry Igloo! (2021-12-19) >"While holding an item and right-clicking a container, you can now hold Alt to only take from the container, rather than first attempting to transfer to the container. Useful, for example, when you want to light a branch on a fireplace without first filling the fireplace up with fuel (e.g. branches)."
Pink Angler (2020-07-19) >"You can now burn woodblocks and boards to produce ash in a kiln or fireplace."
Market Garden (2016-10-05) >"Added/Re-added Gray, Cave, and Bone clays from Legacy, and implemented variable materials for existing clay objects -- pot, urn, kiln, and clay cauldron -- along with that."
Retrieved from "https://ringofbrodgar.com/w/index.php?title=Kiln&oldid=92566"
|
Hyperbolic sector - Knowpia
A hyperbolic sector is a region of the Cartesian plane {(x,y)} bounded by rays from the origin to two points (a, 1/a) and (b, 1/b) and by the rectangular hyperbola xy = 1 (or the corresponding region when this hyperbola is rescaled and its orientation is altered by a rotation leaving the center at the origin, as with the unit hyperbola). A hyperbolic sector in standard position has a = 1 and b > 1.
Hyperbolic sectors are the basis for the hyperbolic functions.
Hyperbolic sector area is preserved by squeeze mapping, shown squeezing rectangles and rotating a hyperbolic sector
The area of a hyperbolic sector in standard position is the natural logarithm of b.
Proof: Integrate under 1/x from 1 to b, add triangle {(0, 0), (1, 0), (1, 1)}, and subtract triangle {(0, 0), (b, 0), (b, 1/b)}. [1]
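A numerical sketch of this proof (the midpoint-rule integration is an illustration, not from the source): the two triangles both have area 1/2 and cancel, so the sector area reduces to the integral of 1/x from 1 to b.

```python
import math

def sector_area(b, n=100_000):
    """Area of the hyperbolic sector in standard position (a = 1):
    integral of 1/x on [1, b], plus triangle (0,0),(1,0),(1,1),
    minus triangle (0,0),(b,0),(b,1/b); the triangles cancel."""
    h = (b - 1) / n
    # midpoint-rule approximation of the integral of 1/x on [1, b]
    integral = sum(1.0 / (1 + (i + 0.5) * h) for i in range(n)) * h
    return integral + 0.5 - 0.5

print(sector_area(math.e))  # close to 1.0 (= ln e), matching Euler's unit area
```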
When in standard position, a hyperbolic sector corresponds to a positive hyperbolic angle at the origin, with the measure of the latter being defined as the area of the former.
Hyperbolic triangle
Hyperbolic triangle (yellow) and hyperbolic sector (red) corresponding to hyperbolic angle u, to the rectangular hyperbola (equation y = 1/x). The legs of the triangle are √2 times the hyperbolic cosine and sine functions.
When in standard position, a hyperbolic sector determines a hyperbolic triangle, the right triangle with one vertex at the origin, base on the diagonal ray y = x, and third vertex on the hyperbola
xy = 1,
with the hypotenuse being the segment from the origin to the point (x, y) on the hyperbola. The length of the base of this triangle is
\sqrt{2}\cosh u,
and the altitude is
\sqrt{2}\sinh u,
where u is the appropriate hyperbolic angle.
The analogy between circular and hyperbolic functions was described by Augustus De Morgan in his Trigonometry and Double Algebra (1849).[2] William Burnside used such triangles, projecting from a point on the hyperbola xy = 1 onto the main diagonal, in his article "Note on the addition theorem for hyperbolic functions".[3]
Hyperbolic logarithm
Unit area when b = e as exploited by Euler.
It is known that f(x) = xp has an algebraic antiderivative except in the case p = –1 corresponding to the quadrature of the hyperbola. The other cases are given by Cavalieri's quadrature formula. Whereas quadrature of the parabola had been accomplished by Archimedes in the third century BC (in The Quadrature of the Parabola), the hyperbolic quadrature required the invention in 1647 of a new function: Gregoire de Saint-Vincent addressed the problem of computing the areas bounded by a hyperbola. His findings led to the natural logarithm function, once called the hyperbolic logarithm since it is obtained by integrating, or finding the area, under the hyperbola.[4]
Before 1748 and the publication of Introduction to the Analysis of the Infinite, the natural logarithm was known in terms of the area of a hyperbolic sector. Leonhard Euler changed that when he introduced transcendental functions such as 10x. Euler identified e as the value of b producing a unit of area (under the hyperbola or in a hyperbolic sector in standard position). Then the natural logarithm could be recognized as the inverse function to the transcendental function ex.
When Felix Klein wrote his book on non-Euclidean geometry in 1928, he provided a foundation for the subject by reference to projective geometry. To establish hyperbolic measure on a line, he noted that the area of a hyperbolic sector provided visual illustration of the concept.[5]
Hyperbolic sectors can also be drawn to the hyperbola
y = \sqrt{1 + x^2}
. The area of such hyperbolic sectors has been used to define hyperbolic distance in a geometry textbook.[6]
^ V.G. Ashkinuse & Isaak Yaglom (1962) Ideas and Methods of Affine and Projective Geometry (in Russian), page 151, Ministry of Education, Moscow
^ Augustus De Morgan (1849) Trigonometry and Double Algebra, Chapter VI: "On the connection of common and hyperbolic trigonometry"
^ William Burnside (1890) Messenger of Mathematics 20:145–8, see diagram page 146
^ Martin Flashman The History of Logarithms from Humboldt State University
^ Felix Klein (1928) Vorlesungen über Nicht-Euklidische Geometrie, p. 173, figure 113, Julius Springer, Berlin
^ Jürgen Richter-Gebert (2011) Perspectives on Projective Geometry, p. 385, ISBN 9783642172854 MR2791970
|
Topic: Multiplexers
Data selectors, more commonly called multiplexers (or just muxes), function by connecting one of their input signals to their output signal, as directed by their select or control input signals. Muxes have N data inputs and
log₂(N)
select inputs, and a single output. In operation, the select inputs determine which data input drives the output, and whatever voltage appears on the selected input is driven on the output. All non-selected data inputs are ignored. As an example, if the select inputs of a 4:1 mux are ‘1’ and ‘0’, then the output Y will be driven to the same voltage present on input I2.
Common mux sizes are 2:1 (1 select input), 4:1 (2 select inputs), and 8:1 (3 select inputs). The truth table in Fig. 1 below specifies the behavior of a 4:1 mux. Note the use of entered variables in the truth table; if entered variables were not used, the truth table would require six columns and 2⁶ = 64 rows. In general, when entered-variable truth tables are used to define a circuit, control inputs are shown as column-heading variables, and data inputs are used as entered variables.
Figure 1. Truth Table, Logic Graph, and Block Diagram of a 4-to-1 Multiplexer
The truth table can easily be modified for muxes that handle different numbers of inputs by adding or removing control input columns. A minimal mux circuit can be designed by transferring the information in the truth table to a K-map, or by simply inspecting the truth table and writing an SOP equation directly. A minimal equation for the 4:1 mux is as follows (you are encouraged to verify that this is a minimal equation):
Y = \overline{S1} \cdot \overline{S0} \cdot I0 + \overline{S1} \cdot S0 \cdot I1 + S1 \cdot \overline{S0} \cdot I2 + S1 \cdot S0 \cdot I3
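The minimal equation above can be verified exhaustively with a short sketch that compares the SOP form against the selection behavior:

```python
from itertools import product

def mux4(s1, s0, i0, i1, i2, i3):
    """4:1 mux computed from the minimal SOP equation above."""
    return ((not s1) and (not s0) and i0) or \
           ((not s1) and s0 and i1) or \
           (s1 and (not s0) and i2) or \
           (s1 and s0 and i3)

# Check all 2^6 = 64 input combinations against the expected selection:
for s1, s0, i0, i1, i2, i3 in product([0, 1], repeat=6):
    selected = (i0, i1, i2, i3)[2 * s1 + s0]
    assert bool(mux4(s1, s0, i0, i1, i2, i3)) == bool(selected)
print("SOP equation matches 4:1 mux behavior")
```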
An N-input mux is a simple SOP circuit constructed from N AND gates each with
log₂(N) + 1
inputs, and a single output OR gate. The AND gates combine the
log₂(N)
select inputs with a data input, such that only one AND gate output is asserted at any time, and the OR output stage simply combines the outputs of the AND gates (you will complete the sketch for a mux circuit in the exercises). As an example, to select input I2 in a 4 input mux, the two select lines are set to S1 = 1 and S0 = 0, and the input AND stage would use a three input AND gate combining S1, not (S0), and I2.
Mux circuits often use an enable input in addition to the other inputs. The enable input functions as a sort of global on/off switch, driving the output to logic '0' when it is de-asserted, and allowing normal mux operation when it is asserted. Figure 2 below shows the block diagram of a mux with enable.
Figure 2. Block Diagram of Mux With Enable
Figure 3. Bus Mux
Since this most common application of multiplexers is beyond our current presentation, we will consider a less common, somewhat contrived application. Consider the K-map representation of a given logic function, where each K-map cell contains a '0', '1', or an entered-variable expression. Each unique combination of K-map index variables selects a particular K-map cell (e.g., cell 6 of an 8-cell K-map is selected when A=1, B=1, C=0). Now consider a mux, where each unique combination of select inputs selects a particular data input to be passed to the output (e.g., I6 of an 8-input mux is selected by setting the select inputs to A=1, B=1, C=0). It follows that if the input signals of a given logic function are connected to the select inputs of a mux, and those same input signals are used as K-map index variables, then each cell in the K-map corresponds to a particular mux data input.

This suggests a mux can be used to implement a logic function by connecting the K-map cell contents to the data lines of the mux, and connecting the K-map index variables to the select lines of the mux. Mux data inputs are connected to: '0' (or ground) when the corresponding K-map cell contains a '0'; '1' (or Vdd) when the corresponding K-map cell contains a '1'; and, if a K-map cell contains an entered-variable expression, a circuit implementing that expression is connected to the corresponding mux data input. Note that when a mux is used to implement a logic circuit directly from a truth table or K-map, logic minimization is not performed. This saves design time, but usually creates a less efficient circuit (however, a logic synthesizer would remove the inefficiencies before such a circuit was implemented in a programmable device).
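As a sketch of this idea, a mux used this way reduces to a table lookup indexed by the select inputs. The 3-input majority function below is a hypothetical example, not one from the text:

```python
def mux8(select, data):
    """8:1 mux: 'select' is (A, B, C) with A the MSB; the mux simply
    passes the addressed data input to the output."""
    a, b, c = select
    return data[4 * a + 2 * b + c]

# Wire each truth-table / K-map cell value of the majority function
# f(A,B,C) = AB + AC + BC to the corresponding data input; no logic
# minimization is needed:
majority_cells = [0, 0, 0, 1, 0, 1, 1, 1]   # f for ABC = 000 .. 111

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert mux8((a, b, c), majority_cells) == int(a + b + c >= 2)
print("8:1 mux lookup implements the majority function")
```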
|
Here are the problems and rules of the contest in English and in Russian.
Problem 1. 123456789 ⟶
Problem 10. A t×n matrix X with t > n, in which each element is zero or one, is such that each column contains exactly s+1 ones...
Problem 10. n > (s+1)² ⟶ n ≥ (s+1)²
The olympiad is mainly aimed at undergraduate students, but it is also open to other participants (including high-school students).
We recommend sending solutions in PDF format. Please write your name, email, university and year of university education (if you are a student) on the first page of the document with solutions. The solutions can be sent to comb.olymp@phystech.edu before 15.05.2019.
The full solution of each problem will be graded out of 10 points; partial solutions will also be graded.
If you have any questions, please email us at comb.olymp@phystech.edu.
The results will be available here: https://polyanskii.com/other/combolymp/
The olympiad is organized by the Department of Discrete Mathematics of Moscow Institute of Physics and Technology (State University). Here is information about our international master’s programs and other opportunities:
Advanced Combinatorics: https://advcombi.org/
Contemporary Combinatorics: https://comb-mipt.ru/
Computer Science: https://cs-mipt.ru/
Deep Learning School: https://www.dlschool.org/?lang=en
If you have any questions about programs, please email Prof. Andrei Michailovich Raigorodskii at mraigor@yandex.ru.
|
numeric values specifying how large the interval between computed values should be along each dimension of the data
\mathrm{points},\mathrm{data}≔\mathrm{Interpolation}:-\mathrm{Kriging}:-\mathrm{GenerateSpatialData}\left(\mathrm{Spherical}\left(1,10,1\right)\right)
\textcolor[rgb]{0,0,1}{\mathrm{points}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{data}}\textcolor[rgb]{0,0,1}{≔}\begin{array}{c}[\begin{array}{cc}\textcolor[rgb]{0,0,1}{0.814723686393179}& \textcolor[rgb]{0,0,1}{0.706046088019609}\\ \textcolor[rgb]{0,0,1}{0.905791937075619}& \textcolor[rgb]{0,0,1}{0.0318328463774207}\\ \textcolor[rgb]{0,0,1}{0.126986816293506}& \textcolor[rgb]{0,0,1}{0.276922984960890}\\ \textcolor[rgb]{0,0,1}{0.913375856139019}& \textcolor[rgb]{0,0,1}{0.0461713906311539}\\ \textcolor[rgb]{0,0,1}{0.632359246225410}& \textcolor[rgb]{0,0,1}{0.0971317812358475}\\ \textcolor[rgb]{0,0,1}{0.0975404049994095}& \textcolor[rgb]{0,0,1}{0.823457828327293}\\ \textcolor[rgb]{0,0,1}{0.278498218867048}& \textcolor[rgb]{0,0,1}{0.694828622975817}\\ \textcolor[rgb]{0,0,1}{0.546881519204984}& \textcolor[rgb]{0,0,1}{0.317099480060861}\\ \textcolor[rgb]{0,0,1}{0.957506835434298}& \textcolor[rgb]{0,0,1}{0.950222048838355}\\ \textcolor[rgb]{0,0,1}{0.964888535199277}& \textcolor[rgb]{0,0,1}{0.0344460805029088}\\ \textcolor[rgb]{0,0,1}{⋮}& \textcolor[rgb]{0,0,1}{⋮}\end{array}]\\ \hfill \textcolor[rgb]{0,0,1}{\text{30 × 2 Matrix}}\end{array}\textcolor[rgb]{0,0,1}{,}\begin{array}{c}[\begin{array}{c}\textcolor[rgb]{0,0,1}{-1.31317888309841}\\ \textcolor[rgb]{0,0,1}{3.78399452938781}\\ \textcolor[rgb]{0,0,1}{-4.07906747556730}\\ \textcolor[rgb]{0,0,1}{2.81033657021080}\\ \textcolor[rgb]{0,0,1}{3.07159908082332}\\ \textcolor[rgb]{0,0,1}{0.128958765233144}\\ \textcolor[rgb]{0,0,1}{-3.21737272238246}\\ \textcolor[rgb]{0,0,1}{0.707245165710619}\\ \textcolor[rgb]{0,0,1}{0.0877877303791926}\\ \textcolor[rgb]{0,0,1}{0.937296621856498}\\ \textcolor[rgb]{0,0,1}{⋮}\end{array}]\\ \hfill \textcolor[rgb]{0,0,1}{\text{30 element Vector[column]}}\end{array}
k≔\mathrm{Interpolation}:-\mathrm{Kriging}\left(\mathrm{points},\mathrm{data}\right)
\textcolor[rgb]{0,0,1}{k}\textcolor[rgb]{0,0,1}{≔}\left(\begin{array}{c}\textcolor[rgb]{0,0,1}{Krⅈgⅈng ⅈntⅇrpolatⅈon obȷⅇct wⅈth 30 samplⅇ poⅈnts}\\ \textcolor[rgb]{0,0,1}{Varⅈogram: Sphⅇrⅈcal(1.25259453854482,13.6487615617247,.5525536774)}\end{array}\right)
\mathrm{SetVariogram}\left(k,\mathrm{Spherical}\left(1,10,1\right)\right)
\left(\begin{array}{c}\textcolor[rgb]{0,0,1}{Krⅈgⅈng ⅈntⅇrpolatⅈon obȷⅇct wⅈth 30 samplⅇ poⅈnts}\\ \textcolor[rgb]{0,0,1}{Varⅈogram: Sphⅇrⅈcal(1,10,1)}\end{array}\right)
\mathrm{ComputeGrid}\left(k,[0..5,0..5],0.1,\mathrm{output}=\mathrm{plot}\right)
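The spherical variogram model used above can be mirrored in a quick sketch. The (range, sill, nugget) parameter order for Maple's Spherical(1, 10, 1), and the function name, are assumptions here, not confirmed by the source:

```python
def spherical_variogram(h, rng=1.0, sill=10.0, nugget=1.0):
    """Spherical variogram model gamma(h): rises from the nugget at h = 0+
    and flattens at nugget + sill once h reaches the range."""
    if h <= 0:
        return 0.0
    if h >= rng:
        return nugget + sill
    t = h / rng
    return nugget + sill * (1.5 * t - 0.5 * t**3)

print(spherical_variogram(0.5))  # -> 7.875, partway up the curve
print(spherical_variogram(2.0))  # -> 11.0, flat at nugget + sill
```

In the Maple session above, SetVariogram(k, Spherical(1, 10, 1)) fixes these parameters instead of fitting them from the sample points.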
|
Walsh permutation; bit permutation - Wikiversity
The term bit permutation was chosen by the author for Walsh permutations whose compression matrices are permutation matrices.
Ordered in rows like the corresponding finite permutations ( A055089) they form the infinite array A195665.
An extract of the table of 3-bit Walsh permutations:
Applying a permutation p_n of 0,...,3 on the reversed binary digits of the numbers 0,...,15 gives a permutation P_n of these 16 elements.
p_23 permutes (0,1,2,3) into (3,2,1,0). P_23 = wp(2^3, 2^2, 2^1, 2^0) = wp(8,4,2,1) is the 4-bit bit-reversal permutation.
Applying all permutations p_0,...,p_23 of 0,...,3 on the digits of 0,...,15 gives the bit permutations P_0,...,P_23, which also form the symmetric group S_4.
But the big permutations have to be composed in a different way, to get corresponding results:
{\displaystyle p_{a}*p_{b}*...=p_{x}~~~~\Leftrightarrow ~~~~...*P_{b}*P_{a}=P_{x}}
See example calculations.
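A minimal sketch of these bit permutations, assuming the convention that output bit i of the result takes input bit perm[i] (the function name is mine, not from the page):

```python
def apply_bit_perm(perm, x, width=4):
    """Apply a permutation of bit positions to x: output bit i of the
    result takes input bit perm[i] of x."""
    y = 0
    for i in range(width):
        if (x >> perm[i]) & 1:
            y |= 1 << i
    return y

# p_23 reverses the bit positions (3,2,1,0), giving the 4-bit
# bit-reversal permutation P_23 = wp(8,4,2,1):
P23 = [apply_bit_perm((3, 2, 1, 0), x) for x in range(16)]
print(P23)  # [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]
```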
Cycle graphs of S4
The 4×4 matrices of the permutations p_n are the transposed compression matrices of the Walsh permutations P_n, here represented by their 16×16 matrices.
With compression vectors
Magnification of permutation 7
Cycle graph of S4 with permutations Pn
This is the top left 24x16 submatrix of
Below the dual matrix.
0 2 1 3 4 6 5 7 8 10 9 11 12 14 13 15
0 2 4 6 1 3 5 7 8 10 12 14 9 11 13 15
0 1 2 3 8 9 10 11 4 5 6 7 12 13 14 15
0 2 1 3 8 10 9 11 4 6 5 7 12 14 13 15
0 2 4 6 8 10 12 14 1 3 5 7 9 11 13 15
0 1 8 9 2 3 10 11 4 5 12 13 6 7 14 15
0 2 8 10 1 3 9 11 4 6 12 14 5 7 13 15
0 2 8 10 4 6 12 14 1 3 9 11 5 7 13 15
0 8 1 9 2 10 3 11 4 12 5 13 6 14 7 15
0 8 2 10 1 9 3 11 4 12 6 14 5 13 7 15
0 8 2 10 4 12 6 14 1 9 3 11 5 13 7 15
Walsh permutation; inversions#Bit permutations
Retrieved from "https://en.wikiversity.org/w/index.php?title=Walsh_permutation;_bit_permutation&oldid=2394917"
|
Composition of Substances and Solutions - Vocabulary - Course Hero
General Chemistry/Composition of Substances and Solutions/Vocabulary
solution in which the solvent is water
number of particles of a substance per mole of that substance (in atoms, ions, or molecules), equal to
6.022\times{10}^{23}
to let the solvent evaporate from a solution to increase its concentration
formation of a solid in which the particles form a highly organized structure
to decrease a solution's concentration by adding more solvent to it
less concentrated solution made by diluting a solution
to incorporate a substance into a liquid so as to form a solution
sum of the atomic weights of all atoms in a compound
analytical method in which a compound is vaporized and separated into its components
concentration of a solution expressed as the mass of the solute (in grams) divided by the mass of the solvent (in grams), multiplied by 100 to get a percentage
concentration of a solution expressed as the mass of the solute (in grams) divided by the volume of the solution (in milliliters), multiplied by 100 to get a percentage
number of moles of a solute dissolved in 1 liter of water
amount of a substance that contains as many particles as 12 grams of pure carbon-12, equal to
6.022\times{10}^{23}
sum of the atomic weights of all atoms in a molecule
concentration expressed as mass of solute divided by mass of solution, multiplied by 10⁹. For an aqueous solution, the parts per billion concentration is the mass (in micrograms, µg) of solute per liter of solution.
concentration expressed as mass of solute divided by mass of solution, multiplied by 10⁶. For an aqueous solution, the parts per million concentration is the mass (in milligrams, mg) of solute per liter of solution.
percentage by mass of each element present in a compound
solution that contains the maximum amount of dissolved solute normally possible at a certain temperature
solution that contains more than the maximum amount of dissolved solute normally possible at a certain temperature
solution that contains less than the maximum amount of dissolved solute normally possible at a certain temperature
concentration of a solution expressed as the volume of the solute (in liters) divided by the volume of the solution (in liters), multiplied by 100 to get a percentage
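A few of the concentration measures defined above can be sketched as simple functions (the example quantities are hypothetical):

```python
def molarity(moles_solute, liters_solution):
    """Moles of solute per liter of solution (mol/L)."""
    return moles_solute / liters_solution

def mass_volume_percent(g_solute, ml_solution):
    """(grams of solute / milliliters of solution) x 100."""
    return g_solute / ml_solution * 100

def ppm(g_solute, g_solution):
    """(mass of solute / mass of solution) x 10^6."""
    return g_solute / g_solution * 1e6

print(molarity(0.5, 2.0))           # 0.5 mol in 2 L of solution -> 0.25
print(mass_volume_percent(3, 100))  # 3 g of solute in 100 mL of solution
print(ppm(0.002, 1000.0))           # 2 mg of solute in 1 kg of solution
```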
|
Bayesian inference from single spikes | BMC Neuroscience | Full Text
Bayesian inference from single spikes
Travis Monk1 &
Michael Paulin1
Spiking neurons appear to have evolved concurrently with the advent of animal-on-animal predation, near the onset of the Cambrian explosion 543 million years ago. We hypothesize that strong selection pressures of predator-prey interactions can explain the evolution of spiking neurons. The fossil record and molecular phylogeny indicate that animals existed without neurons for at least 100 million years prior to the Cambrian explosion. The first animals with nervous systems may have been derived sponge larvae that started feeding in the water column [1].
We use models and computer simulations of predator-prey interactions to show that thresholding prey proximity detectors can greatly improve a predator's performance under certain ecological conditions. If a prey produces a stimulus, then there is a critical stimulus level at which a predator's expected energetic return for striking exceeds the expected return for not striking. A predator with a mechanism for triggering a strike when the stimulus reaches this critical level has a massive advantage over predators lacking such a mechanism. We suggest that the first neurons were threshold-detecting devices that served this function.
According to our model, neurons evolved as proximity detectors. We show that although these spiking detectors evolved to maximize the rate of prey capture, it is possible to use spikes to determine the location of prey by Bayesian inference. We show that inferring prey location from individual spikes has higher utility than using inter-spike intervals or rates. The conditional probability density function (pdf) of prey location given the output of such a detector necessarily has smaller entropy on average than the marginal pdf of prey location (Figure 1). Therefore, a single spike (or non-spike) from such a neuron can be interpreted not only as an assertion about the presence of prey in the vicinity, or as a command to strike, but also as an assertion about the location of the prey at that time. It follows that individual spikes from threshold-detectors can, in principle, be used to infer prey location by Bayes' rule [2] as shown in Figure 1. Bayes' rule is the best strategy to modify beliefs based on imperfect evidence or data, and any animal that evolved the capacity to infer prey location from streaming neuronal spikes would have outperformed its competitors.
Simulation of prey detection by spiking sensors. The state x is the distance from predator to prey on a time step. The predator has a sensor that fires spikes with an intensity that depends on the strength of some stimulus produced by the prey; when x is small, the predator's sensor fires with higher intensity (and vice versa, left subplot). In general, we can approximate this intensity from the biophysics of some real sensor and stimulus, but for simplicity we assume that the intensity falls as
1/x^2
. Pr(S = 1|X) and Pr(S = 0|X) are the probabilities of the sensor firing in some small window of time given x, and we consider sufficiently small time windows such that no more than one spike can occur in that window. Given these conditional pdfs, it is possible to infer prey location using the sensor's spikes by Bayes' rule (right subplot). The prior distribution of x is Pr(X). We can then calculate the posterior distribution of x given the output of a spiking sensor, Pr(X|S = 1) and Pr(X|S = 0). On the next time step, this posterior becomes the new prior and the process repeats. When spikes and non-spikes are streamed very quickly, these posterior pdfs are updated almost continuously and in real-time.
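A minimal sketch of this update, assuming (as in the simulation) an intensity falling as 1/x² and a time window dt small enough for at most one spike; the grid and gain values are illustrative assumptions:

```python
import numpy as np

xs = np.linspace(0.5, 5.0, 200)          # candidate prey distances
prior = np.full(xs.shape, 1 / len(xs))   # flat prior Pr(X)
dt, gain = 1e-3, 1.0                     # window small enough for <= 1 spike

def p_spike(x):
    """Pr(S=1 | X=x): spike probability in one window; intensity ~ 1/x^2."""
    return 1.0 - np.exp(-(gain / x**2) * dt)

def update(prior, spiked):
    """One Bayes-rule step; the posterior becomes the next window's prior."""
    likelihood = p_spike(xs) if spiked else 1.0 - p_spike(xs)
    post = likelihood * prior
    return post / post.sum()

post = update(prior, spiked=True)
# A single spike shifts belief toward small distances:
print((xs * post).sum() < (xs * prior).sum())  # -> True
```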
Deneve S: Bayesian spiking neurons I: Inference. Neural Comput. 2008, 20: 91-117. 10.1162/neco.2008.20.1.91.
Travis Monk & Michael Paulin
Correspondence to Travis Monk.
Monk, T., Paulin, M. Bayesian inference from single spikes. BMC Neurosci 14, P278 (2013). https://doi.org/10.1186/1471-2202-14-S1-P278
|
Chemistry Of Poisons | Brilliant Math & Science Wiki
Poisons are chemicals that can cause illness or death. The sixteenth-century physician Paracelsus introduced the idea that all substances can be toxic in sufficient quantity, giving rise to this popular adage: "The dose makes the poison."
Poisons come in many different forms including liquids, gases, metals, and organic compounds. They can be absorbed through the skin, digestive system, or lungs. Some poisons work quickly, killing their victims within minutes. Others are undetectable and cause no immediate discomfort, but cause cancer later in life.
Drugs often have a lethal dose. Sometimes, that dose is very close to the therapeutic dose, making it difficult for a patient to get the benefit of their medication without risking adverse effects or organ damage.
Poisons are often organic compounds. Many animals, such as rattlesnakes, use venom to kill and digest their prey. Others, such as poison dart frogs, use a combination of toxic compounds and bright colors as a defense system. Predators who eat these poisonous species become sick enough that they don't make the same mistake a second time. All spiders are carnivorous and venomous. Most are small enough not to cause any trouble for humans, but the bites of recluse spiders (genus
\textit{Loxosceles}
) and black widows (genus
\textit{Latrodectus}
) can cause painful medical complications.[1]
Antidotes and Antivenoms
Mustard gas,
\ce{(ClCH2CH2)2S}
, is an alkylating compound that disrupts the guanine nucleotides in DNA, inhibiting its replication and causing the damaged cell to self-destruct. Mustard gas was used as a weapon during World War I. Though soldiers wore gas masks, their exposed skin absorbed the mustard gas, causing oozing blisters. Their festering wounds were often a death sentence in the pre-antibiotic era.
Lithium has medical uses. It acts as a mood stabilizer in patients with psychiatric illnesses like schizophrenia and bipolar disorder. If the dose becomes too high, lithium starts causing damage to the kidneys.
Mercury is a unique metal that forms a liquid at room temperature. It acts as a neurotoxin. The phrase mad hatter originates from mercury poisoning. In 19th century England, felted material for hats was made by rubbing mercury into animal pelts. The hatters absorbed increasing amounts of mercury over time, leading to neurological dysfunction.
Heavy metals are a problematic environmental toxin, because they do not degrade into less toxic materials. Instead, they tend to concentrate over time. A famous case of mass mercury poisoning happened in the Japanese town of Minamata in the mid-20th century. In the early 20th century, Minamata was a scenic and sleepy fishing village, but by the 1930's, the town had become more industrialized; factories manufacturing acetaldehyde for plastics moved into the area and started pumping their waste, which included mercury, into the bay. The mercury was consumed by the fish, building up in higher concentrations over time, particularly in bigger fish that were higher up in the food chain. Species that ate the fish were also exposed to the mercury. Fishermen told stories of cats exhibiting bizarre behavior, jerking uncontrollably, and then falling into the ocean and drowning. By the 1950's, humans were starting to exhibit similar symptoms: constant tremors, stumbling as they walked, or being unable to perform simple tasks like hold a pencil or fasten a button. It took about 20 years for symptoms of mercury poisoning to emerge in humans.
Tetrodotoxin is a poison found in the liver, sex organs, and skin of pufferfish. Tetrodotoxin blocks the voltage-gated ion channels in the nervous system, inhibiting communication between the brain and the muscles. This substance has a very low lethal dose (micrograms can be deadly), and eating incorrectly prepared fugu can lead to paralysis, heart arrhythmia, and death within minutes. Despite these risks, fugu is considered a delicacy in Japan, and some aficionados consider the tingling sensation of ingesting small amounts of tetrodotoxin an exciting part of the experience.
Pufferfish, or fugu, is usually served sashimi-style.[2]
Crotaline snakes are a group of venomous snakes found in North America. They are also called pit vipers, because they have a heat-sensing organ called a pit under their noses that they use to detect the rodents they prey upon. Copperheads, cottonmouths, and all 20+ species of rattlesnakes are pit vipers. Their venom, a complex mixture of proteins, peptides, and enzymes, is neurotoxic and hemolytic. In other words, it allows the snake to paralyze its prey, and start breaking down the mouse's blood cells, essentially injecting its meal with digestive juices. The venom can do major damage to humans, as well as to mice.
Snake handling, including "snake kissing" and "snake sacking" competitions, is popular in some parts of the United States. These practices explain why so many snake bites occur on the arms and face, while relatively few involve the legs or feet. [3]
Rattlesnake bites are common, and the most typical cases involve young males bitten on the hands, arms, or face (suggesting the human is at least as responsible as the snake is for the encounter). Especially when small blood vessels are involved (such as those in the fingers), the victim is at high risk for extensive tissue death and may lose a digit or an appendage. Additionally, many snakebite victims have an allergic reaction to the snake's saliva or venom, which can lead to a life-threatening reaction.
Antidotes are often portrayed as a magic cure-all in fairytales and fantasy novels--after chasing a witch to the top of a mountain and begging her to brew a rare flower into a glowing potion, the hero takes a long drink and is immediately saved from impending organ failure, suffering no ill consequences from the antidote.
The reality of antidotes is a bit more complicated.
There is an antivenom for crotaline bites called CroFab. It is produced by injecting sheep with small amounts of rattlesnake venom until their immune systems start making an antibody that can bind to the venom and remove it from the body. CroFab is also a large protein, so it produces life-threatening allergic reactions in patients as well. Nearly everyone who receives it will develop an itchy, miserable rash at the very least.
Sometimes, the pros and cons of using an antivenom must be carefully weighed. Consider a black widow spider bite: the toxins cause excess neurotransmitter release, leading to excruciating muscle cramping, nausea, a racing heart, and breathing difficulties that can last for hours. The antivenom cures all of these symptoms. Despite the extreme pain caused by these spider bites, there are no confirmed deaths from black widows' venom. The only confirmed death related to a black widow bite took place when a patient died after having a severe allergic reaction to the antivenom.
Centers for Disease Control. Image 5449. Retrieved from http://phil.cdc.gov/phil/details.asp
t-mizo. Torafugu. Retrieved from https://www.flickr.com/photos/tmizo/6600128203/in/photolist-b4eoaR
That Other Paper. Snakes gone wild. Retrieved from https://www.flickr.com/photos/austins_only_paper/446693434/in/album-72157600048399037/
Cite as: Chemistry Of Poisons. Brilliant.org. Retrieved from https://brilliant.org/wiki/chemistry-of-poisons/
|
Notes on Graphs and Spectral Properties
Here is the first series of a collection of notes which I jotted down over the past 2 months as I tried to make sense of algebraic graph theory. This one focuses on the basic definitions and some properties of matrices related to graphs. Having all the symbols and main properties in a single page is a useful reference as I delve deeper into the applications of the theories. Also, it saves me time from googling and checking the relationship between these objects.
Let $n$ be the number of vertices and $m$ the number of edges. Then the adjacency matrix $A$ is an $n \times n$ matrix where $a_{ij} = 1$ if there is an edge from vertex $i$ to vertex $j$, and zero otherwise. For a weighted adjacency matrix $W$, we replace 1 with the weights $w_{ij}$.
Here we consider the case of undirected graphs. This means that the adjacency matrix is symmetric, which implies it has a complete set of real eigenvalues (not necessarily positive) and an orthogonal eigenvector basis. The set of eigenvalues ($\alpha_1 \geq \alpha_2 \geq \dots \geq \alpha_n$) is known as the spectrum of a graph. The greatest eigenvalue, $\alpha_1$, is bounded by the maximum degree.
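The degree bound on the greatest eigenvalue can be checked numerically, e.g. for the star graph K_{1,3} (a sketch of my own, not an example from the notes):

```python
import numpy as np

# Star graph K_{1,3}: center vertex 0 joined to vertices 1, 2, 3
A = np.zeros((4, 4))
A[0, 1:] = A[1:, 0] = 1

eigenvalues = np.linalg.eigvalsh(A)      # real, since A is symmetric
max_degree = int(A.sum(axis=1).max())
print(eigenvalues.max() <= max_degree)   # -> True (sqrt(3) <= 3)
```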
Given two graphs with adjacency matrices $A_{1}$ and $A_{2}$, the graphs are isomorphic iff there exists a permutation matrix $P$ such that $PA_{1}P^{-1}=A_{2}$. This implies they share the same eigenvalues, eigenvectors, determinant, trace, etc. Note: two graphs may be isospectral (same set of eigenvalues) but NOT isomorphic.
An incidence matrix $\tilde{D}$ is an $n \times m$ matrix with $\tilde{D}_{ij}=1$ if $e_{j} = (v_{i},v_{k})$, $-1$ if $e_{j} = (v_{k},v_{i})$, and zero otherwise. In other words, each column represents an edge, showing the vertex it is emitting from (1) and the vertex it is pointing to (-1).
For an undirected graph, there are two kinds of incidence matrix: oriented and unoriented. In the unoriented incidence matrix, we just put 1 for any vertex that is connected to an edge. The oriented incidence matrix is like that of a directed graph (1 and -1), and is unique up to negation of the columns.
The Laplacian matrix is defined as $L = D - A = \tilde{D}\tilde{D}'$: the degree matrix $D$ minus the adjacency matrix $A$, where $\tilde{D}$ is the oriented incidence matrix. Hence, the diagonal entries are the degrees, while $L_{ij}=-1$ if $v_{i}$ and $v_{j}$ are connected, else 0.
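The factorisation $L=\tilde{D}\tilde{D}'$ with the oriented incidence matrix can be checked numerically. A short numpy sketch on a 3-vertex path graph of my own choosing (not from the notes):

```python
import numpy as np

# Path graph 0-1-2 with edges e0 = (0,1) and e1 = (1,2).
# Oriented incidence matrix: +1 where the edge leaves a vertex, -1 where it enters.
D_inc = np.array([[ 1,  0],
                  [-1,  1],
                  [ 0, -1]])

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # Laplacian

# L = D_inc @ D_inc' holds for the oriented incidence matrix.
print(np.array_equal(D_inc @ D_inc.T, L))  # True
```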
The degree matrix is defined as $D = \operatorname{diag}(W \cdot \mathbf{1})$; for a weighted graph, the diagonal element is $d(i,i) = \sum_{j:(i,j)\in E} w_{ij}$. Note that the conventional ordering of the Laplacian eigenvalues is opposite to that of the adjacency matrix ($0=\lambda_{1} \leq \lambda_{2} \leq \dots \leq \lambda_{n}$).
A walk on a graph is an alternating sequence of vertices and edges from one vertex to another. A walk between two vertices $u$ and $v$ is called a $u$-$v$ walk. Its length is the number of edges.
Cool fact: take the adjacency matrix and raise it to the $n$-th power; $a^{(n)}_{ij}$, an entry of the $A^{n}$ matrix, gives the number of $i$-$j$ walks of length $n$. Normalising by the degrees instead, the $i,j$ entry of $(D^{-1}A)^{n}$ gives the probability that a random walk starting from $i$ ends up at $j$ after $n$ steps.
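The walk-counting fact is easy to check with numpy. A sketch on a triangle graph (my own toy example):

```python
import numpy as np

# Adjacency matrix of a triangle (3-cycle): every pair of vertices connected.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

# Entry (i, j) of A^n counts i-j walks of length n.
A2 = np.linalg.matrix_power(A, 2)
print(A2[0, 0])  # 2 walks of length 2 from vertex 0 to itself: 0-1-0 and 0-2-0

# Normalising by degrees: entry (i, j) of (D^{-1} A)^n is the probability
# that a random walk starting at i is at j after n steps.
D_inv = np.diag(1.0 / A.sum(axis=1))
P2 = np.linalg.matrix_power(D_inv @ A, 2)
print(P2[0, 0])  # 0.5
```

Each row of `P2` sums to 1, as a probability distribution over end vertices should.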
Matrices as operators on the vertices
The adjacency and Laplacian matrices can be interpreted as operators on functions on a graph. That is, in the product $Ax$, the vector $x$ can be interpreted as a function on the vertices, while $A$ is a linear mapping of that function:

Ax(i) = \sum_{j:(i,j)\in E} x_{j}

In other words, it is the sum of the entries of $x$ at the vertices connected to vertex $i$. It can also be viewed as a quadratic form:

x'Ax = \sum_{e_{ij}} x_{i}x_{j}
Similarly, expressing the weighted laplacian matrix as an operator:
\begin{aligned} Lx(i) &= Dx(i) - Wx(i) \\ &= \sum_{j:(i,j)\in E} w_{ij} x_{i} - \sum_{j:(i,j)\in E} w_{ij}x_{j} \\ &= \sum_{j:(i,j)\in E} w_{ij}(x_{i}-x_{j}) \end{aligned}
As a quadratic form
\begin{aligned} x'Lx &= x'Dx - x'Wx \\ &= \sum w_{ij}x_{i}^{2} - \sum_{e_{ij}} x_{i}w_{ij}x_{j} \\ &= \frac{1}{2}(\sum w_{ij}x_{i}^{2} - 2\sum_{e_{ij}} x_{i}w_{ij}x_{j} + \sum w_{ij}x_{j}^{2}) \\ &= \frac{1}{2}\sum_{e_{ij}} w_{ij}(x_{i}-x_{j})^{2} \end{aligned}
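The last identity can be checked numerically. A short numpy sketch on a 3-vertex weighted graph with made-up weights:

```python
import numpy as np

# Small weighted undirected graph: edges 0-1 (weight 2) and 1-2 (weight 3).
W = np.array([[0.0, 2.0, 0.0],
              [2.0, 0.0, 3.0],
              [0.0, 3.0, 0.0]])
D = np.diag(W.sum(axis=1))
L = D - W

x = np.array([1.0, -2.0, 0.5])

# Left-hand side: the quadratic form x' L x.
quad = x @ L @ x

# Right-hand side: (1/2) * sum over ordered pairs of w_ij (x_i - x_j)^2.
pairwise = 0.5 * sum(W[i, j] * (x[i] - x[j]) ** 2
                     for i in range(3) for j in range(3))

print(np.isclose(quad, pairwise))  # True
```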
The symmetric normalised Laplacian matrix is defined as

L^{sym} = D^{-1/2}LD^{-1/2} = I - D^{-1/2}AD^{-1/2}

Since the degree matrix is diagonal, $D^{-1/2}$ is just the $D$ matrix with each diagonal entry replaced by the reciprocal of its square root.

Properties of $L$: it is symmetric because $W$ is symmetric; $\mathbf{1}$ is an eigenvector (every row of $L$ sums to 0), with $L\mathbf{1} = 0\cdot\mathbf{1}$, hence 0 is the smallest eigenvalue; and all the eigenvalues $0=\lambda_{1} \leq \lambda_{2} \leq \dots \leq \lambda_{n}$ are real and non-negative.
Laplacian Matrix and Connectedness
Define a path as a walk without any repeated vertices. A graph is connected if any two of its vertices are contained in a path.
In a connected graph, $\lambda_{2}>0$. Proof that the only eigenvector for eigenvalue 0 is $\mathbf{1}$: let $x$ be an eigenvector associated with the eigenvalue 0. From the quadratic form:

x'Lx = x' \cdot 0 = 0 = \sum_{e_{ij}} w_{ij}(x_{i}-x_{j})^{2}

This implies that for any $\{i,j\} \in E$, $x_{i} = x_{j}$. Since there exists a path between any two vertices, $x_{i} = x_{j}$ for all $i,j \in V$, i.e.

x = \alpha \left[\begin{array}{c} 1 \\ 1 \\ \vdots \\ 1 \end{array}\right]

Hence, the multiplicity (number of linearly independent eigenvectors) of eigenvalue 0 is 1, and $\lambda_{2} > 0$.
In fact, the multiplicity of the eigenvalue 0 tells us the number of connected components in the graph. For example, for a graph with two connected components (the adjacency matrix and the Laplacian matrix will have a block-diagonal structure), you get two eigenvectors associated with the eigenvalue 0, something like $[1~1~1~0~0~0]'$ and $[0~0~0~1~1~1]'$. To summarise, the number of connected components is equal to the multiplicity of eigenvalue 0, which is equal to the dimension of the null space of $L$.
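Counting components via the multiplicity of eigenvalue 0 can be sketched with numpy (the 5-vertex graph below is my own toy example):

```python
import numpy as np

def num_components(A, tol=1e-9):
    """Connected components = multiplicity of eigenvalue 0 of the Laplacian."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    eigvals = np.linalg.eigvalsh(L)   # L is symmetric, so eigvalsh applies
    return int(np.sum(eigvals < tol)) # count (numerically) zero eigenvalues

# Two disjoint edges plus an isolated vertex: 3 components.
A = np.zeros((5, 5))
A[0, 1] = A[1, 0] = 1.0
A[2, 3] = A[3, 2] = 1.0
print(num_components(A))  # 3
```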
Normalised Symmetric Laplacian and Random walk matrix
The normalised symmetric Laplacian is defined as:

L_{sym} = I - D^{-1/2}WD^{-1/2} = D^{-1/2}LD^{-1/2}

In other words, it has 1 on the diagonal and $-\frac{1}{\sqrt{\deg(v_{i})\deg(v_{j})}}$ wherever $v_{i}$ and $v_{j}$ are connected.
The random walk matrix is defined as:

L_{rw} = D^{-1}L = I - D^{-1}W = D^{-1/2}L_{sym}D^{1/2}

so $L_{sym}$ and $L_{rw}$ are similar matrices.

Properties of $L$, $L_{sym}$ and $L_{rw}$: all three are positive semidefinite. $L$ and $L_{sym}$ are symmetric; $L_{rw}$ in general is not, but being similar to $L_{sym}$ it shares the same real, non-negative eigenvalues.
$u$ is an eigenvector of $L_{rw}$ with eigenvalue $\lambda$ iff $D^{1/2}u$ is an eigenvector of $L_{sym}$ with eigenvalue $\lambda$. Equivalently, $u$ is a solution of the generalised eigenvalue problem $Lu = \lambda Du$ iff $D^{1/2}u$ is an eigenvector of $L_{sym}$ for the eigenvalue $\lambda$, iff $u$ is an eigenvector of $L_{rw}$ with eigenvalue $\lambda$.
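That $L_{sym}$ and $L_{rw}$ share a spectrum is quick to verify with numpy; a sketch on a weighted path graph of my own choosing:

```python
import numpy as np

# Weighted path graph 0-1-2 (all degrees non-zero, so D is invertible).
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],
              [0.0, 2.0, 0.0]])
d = W.sum(axis=1)
L = np.diag(d) - W

L_sym = np.diag(d ** -0.5) @ L @ np.diag(d ** -0.5)
L_rw = np.diag(1.0 / d) @ L

# Similar matrices share eigenvalues; compare the sorted spectra.
ev_sym = np.sort(np.linalg.eigvalsh(L_sym))    # symmetric solver
ev_rw = np.sort(np.linalg.eigvals(L_rw).real)  # general solver
print(np.allclose(ev_sym, ev_rw))  # True
print(abs(ev_sym[0]) < 1e-9)       # smallest eigenvalue is 0 (connected graph)
```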
A similar connection between the connected components and $L$ can be made with $L_{sym}$ and $L_{rw}$.
|
617.3 Material Inspection for Sec 617 - Engineering_Policy_Guide
This article establishes procedures for inspecting, accepting and reporting protective coatings for concrete bents and piers (urethane and epoxy), concrete and masonry protective systems, sacrificial graffiti protection systems and temporary coatings for concrete bents and piers (weathering steel), as well as for inspecting and reporting concrete median barriers. Reinforcing steel, concrete curing material, mortar for grout and portland cement concrete shall be inspected in accordance with the corresponding sections of the Engineering Policy Guide. Refer to Sec 617 for MoDOT's specifications. Fabrication of cast-in-place median barriers is inspected and reported by Construction and Materials.
(a) Micrometer capable of measuring to 0.0001 in. and accurate to within at least 0.001 in.
(c) Magnetic gauge, reading range 0-40 mils (0-1000 μm).
(d) OK - MoDOT stamp.
(e) Weather resistant marking materials.
Acceptance of precast concrete median barriers is to be based on the inspection of materials being incorporated into the sections, compressive tests on cured concrete cylinders and inspection of the finished section, including amount and placement of reinforcement.
The inspector shall ensure that only inspected and approved materials are used in the fabrication of concrete median barriers. The fabricator shall submit a proposed portland cement concrete mixture for approval. The District Construction and Materials Engineer shall ensure that the proposed mix meets the designated requirements.
617.3.2.2 Forms
The inspector shall ensure that forms and formwork comply with the applicable requirements of Sec 0617, Sec 0703, Standard Plan 617.10 and Standard Plan 617.20.
617.3.2.3 Placement of Reinforcement
The inspector shall ensure that the placing and tying of reinforcing bars conform to the applicable requirements of Sec 706, Standard Plan 617.10 and Standard Plan 617.20, and that all reinforcing steel is properly placed with the required clearances. All reinforcing steel shall be checked to ensure that it is free from form oil or other material that might serve as a bond breaker.
617.3.2.4 Placing and Consolidating Concrete
The inspector shall ensure that the placing, consolidating, and finishing of concrete is in accordance with the applicable provisions of Sec 703.
617.3.2.5 Concrete Testing
During the placement of the portland cement concrete, tests shall be performed to ensure that the requirements of Sec 617 are met. Portland cement concrete shall be inspected and reported in accordance with EPG 501.1 Construction Inspection and EPG 501.2 Materials Inspection. Tests for determining consistency and air entrainment shall be performed. At least 2 sets of specimens for compressive strength for each pour shall be prepared in accordance with EPG 501.1 Construction Inspection and cured in the same manner as the units.
617.3.2.6 Curing
The inspector shall observe the curing operation to verify that the requirements of Sec 617 and Sec 1026 are being met. Type 1-D liquid membrane-forming compound may be used in accordance with Sec 1055.2. The Type 1-D pigmented cure shall be applied at the same rate as for paving. Type 2, white pigmented curing compound, is not allowed.
617.3.2.7 Form Removal
Precast concrete members shall be cured a minimum of 12 hours after pouring before forms can be removed. The inspector at the fabricating plant is responsible for making and testing standard compressive specimens to determine when forms may be removed and curing discontinued in accordance with Sec 617. Compressive tests should be performed at the fabricating plant or an approved commercial testing laboratory. At the discretion of the District Construction and Materials Engineer, the specimens may be tested using district equipment. If questionable results are being obtained, the compressive specimens shall be submitted to the Laboratory for testing. At least one set of regular 28-day cylinders should be submitted to the Laboratory for each project to check the portland cement concrete mix design with a minimum of one set per month of operation of precast plant or as often deemed necessary. Form C-701 is to be used as an identification sheet with the distribution and title modified as required.
617.3.2.8 Marking
The inspector shall ensure that each precast unit is identified with the manufacturer, location of manufacturer and year of manufacture clearly and permanently marked by indentation, plates or other suitable methods. The permanent markings should be identifiable after installation. New barriers are also required to be marked with the day and month of manufacture for yard tracking. The day and month markings may be by permanent marker that will wear off over time and will not be required when the barrier is re-used. These markings shall be located as shown on the plans. If the location of the markings is not shown on the plans, markings shall be located in such a manner that they will not be obvious or objectionable to the traveling public after the unit has been placed in its final position. The barriers shall not be accepted or shipped new from the yard if the year, month and day of manufacture are not identifiable. The barriers shall not be accepted or used on the project if the manufacturer, manufacturer location, and year of manufacture are not identifiable.
617.3.2.9 Inspection of Completed Units
The finished units are to be examined for conformance to dimensions, workmanship, and marking in accordance with Sec 617, Standard Plan 617.10 and Standard Plan 617.20. Accepted sections are to be stamped with an "OK - MoDOT" by the inspector. Minor patching will be permitted to obtain the specified texture so long as the defects to be patched do not harm the structural integrity of the unit.
617.3.2.10 Records
The plant inspector shall maintain a complete file of all data pertaining to the manufacturer of the concrete units, either at the manufacturing plant or in the district office. Complete and accurate records of each manufacturing operation should be kept in a field book. All pertinent data that in any way affects or influences the construction procedures or completed members shall also be recorded in the field book. Data shall be entered in the field book as soon as it is known. Field notes shall not be copied but shall be kept exactly as they are originally recorded.
Materials shall be reported as shown in the applicable sections of this article. Appropriate remarks, as described in EPG 106.20 Reporting are to be included in the report to clarify conditions of acceptance or rejection.
When precast median barriers are delivered to a given project, the units shall be reported through AASHTOWARE Project (AWP) and shall contain one of the following statements as appropriate:
"These median barriers have been surface sealed."
"These median barriers have not been surface sealed."
Report copies normally forwarded to the Resident Engineer shall be furnished to the Inspector at the precast manufacturing plant. The reports shall be marked "General" and the name of the precast concrete manufacturer must be shown. The material is to be shown for use in "Precast Concrete Units." The plant inspector located at the concrete proportioning plant shall furnish a plant inspector's daily report through AWP for each day's pour identifying the units manufactured from that pour. The name of the manufacturer or set-up must be shown. The original copy of this report shall be forwarded to the State Construction and Materials Engineer. The duplicate copy shall be furnished to the District Construction and Materials Engineer supervising operations at the manufacturing plant. The plant inspector shall retain the triplicate copy of the plant records.
When precast concrete units are delivered to a given project, the units shall be reported through AWP with distribution, per Class A in EPG 106.20.8 Rejected Reports.
Retrieved from "https://epg.modot.org/index.php?title=617.3_Material_Inspection_for_Sec_617&oldid=50201"
|
How can NIR spectroscopy be used to analyze beverages? | Hamamatsu Photonics
How can NIR spectroscopy be used to analyze beverages?
Jay R. Powell, PhD, Analytical Answers Inc.
Spectroscopy is a branch of the physical sciences which studies and applies the interactions between electromagnetic radiation and matter. This field is further divided based on the wavelength range, such as X-ray spectroscopy, visible spectroscopy, infrared spectroscopy, and microwave spectroscopy, to name just a few. Hamamatsu offers instruments and components covering both the visible range, i.e., light we can "see" (300-700 nm), and the near-infrared range (700-2500 nm). This last near-infrared, or NIR, range has found numerous applications in a wide range of different chemical, material, environmental, and other analyses.
In the visible range, light can be absorbed by atoms and molecules through electronic transitions. Absorption of light of the right wavelength or energy by an atom or molecule leads to a decrease in the amount of light transmitted. In the condensed phase (liquids and solids), visible light absorption occurs from complex conjugated double bonds and aromatic rings in organic molecules, although many aqueous transition metal ions, through complexation with associated water molecules, can also form strongly-light-absorbing species. This absorption of light through electronic transitions produces the broad range of colors we perceive from solids and liquids.
In the NIR range, the energy carried by the NIR light is too low to excite the electronic transitions noted above, and thus the intensely colored species observed in the visible range have less of an influence in the NIR. In the NIR range, light absorption occurs through vibrational transitions, where absorption of light of the right wavelength or energy increases the vibrational mode between two atoms bonded together in a molecule. As NIR absorption does not require conjugated bonds or solvent complexes as in the visible, it is more sensitive to both the structure of the molecule and how that molecule interacts with nearby molecules. It is this sensitivity to the molecular structure and molecular environment which gives NIR spectroscopy its power.
The most common solvent used, in fact often referred to as the "universal solvent," is water. Water, through interaction with O-H stretching and H-O-H "scissoring" vibrational modes, is a weak NIR absorber. Figure 1 shows the NIR absorbance profile of water, covering approximately 800 to almost 1400 nm in wavelength, as measured in a 1 cm (10 mm) thick quartz cuvette. The Y axis shows the absorbance value, commonly abbreviated "A," which is the negative base 10 log of the percent of the light transmitted at that wavelength (A = -log10(%T/100), where %T, a.k.a. %Transmittance, is the percentage of light measured by the instrument with the sample in place relative to without it, or %T = 100 × I/I0). Absorbance values are often preferred in such applications, as a simple relationship between absorbance, concentration, and sample thickness (or pathlength) is given by:
A_{\lambda} = a_{\lambda} b c
where Aλ = measured Absorbance at a given wavelength λ, c = concentration, b = sample thickness or "pathlength," and aλ = absorptivity of the species of interest at the given wavelength λ. Assuming that a constant pathlength cell is used, such as the common 1 cm cuvette described above, then the pathlength term can be ignored and the measured absorbance can be directly related to the concentration of the species of interest.
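The two relationships above (absorbance from %T, then concentration from the Beer-Lambert law) can be sketched in a few lines of Python; the absorptivity value below is purely illustrative, not from the article:

```python
import math

def absorbance(percent_T):
    """A = -log10(%T / 100), with %T = 100 * I / I0."""
    return -math.log10(percent_T / 100.0)

def concentration(A, absorptivity, pathlength_cm=1.0):
    """Beer-Lambert: A = a * b * c  =>  c = A / (a * b)."""
    return A / (absorptivity * pathlength_cm)

# 10% of the light transmitted corresponds to an absorbance of 1.0.
A = absorbance(10.0)
print(A)

# With an illustrative absorptivity of 0.5 L/(g*cm) in a 1 cm cuvette,
# that absorbance corresponds to a concentration of 2.0 g/L.
print(concentration(A, absorptivity=0.5))
```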
In Figure 1, we note three areas where water shows stronger absorbances: a broad band around 970 nanometers (nm), a stronger broad band around 1200 nm, and a very strong absorbance starting around 1300 nm and increasing to the end of the range near 1400 nm. These absorbance areas arise from the interactions between the O-H and H-O-H scissoring vibrations of the water molecules, and the broadness of the bands is due to the extended interactions, or "hydrogen bonding" between adjacent water molecules. These hydrogen bonding interactions can be used in characterizing aqueous systems.
Figure 1. NIR absorbance profile of water
In Figure 2, three plots are presented showing deionized (DI) water, a diet beverage, and a regular beverage sweetened with a relatively high loading (ca 10% w/w) of one or more sugars. Note that in contrast to Figure 1, the strong absorbances in the DI water at 970, 1200, and 1300+ are no longer observed here, as these spectra were collected ratioed to DI water as a background reference. Using DI water as the background reference, if there is no difference in the interactions between water molecules between a sample and the DI water reference, little to no change will be observed in these regions. Adding more components to the (mostly) water beverages causes larger and larger changes in these absorbance bands, as seen by comparing the traces from the regular (sugar sweetened) beverage and diet (no sugars). Thus, one simple application of NIR spectroscopy using Hamamatsu instrumentation is to differentiate between sugar-sweetened and artificially sweetened beverages.
Figure 2. NIR absorbance profiles of water (blue curve), diet beverage (purple curve), and beverage with sugar(s) (red curve)
Instrumentation components for a beverage analyzer
For NIR absorbance measurements, there are four major components required. An overview of these components is presented in Figure 3.
Figure 3. Components of Hamamatsu beverage analyzer
First, a strong and stable source of NIR light is required. A strong or "bright" source is desired, in order to minimize the impact of light transfer and sampling losses. A stable source, that is a source whose output does not measurably change over time, is necessary as such changes in source output from one measurement to the next will be indistinguishable from the differences we are trying to measure in our samples. Here, we have used the Hamamatsu L7893 D2 / Quartz Halogen UV-Vis-NIR light source. This switch-selectable light source allows selection of the D2 lamp for UV-Vis sampling, or the Quartz Halogen NIR source, which is used here.
Next, a method to transfer the light from the source to the sample is needed. Here, creativity in optical coupling can be used, often based on optical tables, mirrors, and focusing lenses to move the light from the source to the sample, and from the sample onwards. One easy method to do this in the NIR range is to use dedicated NIR fiber optics, which are based on fine silica fibers formulated to minimize light scattering and loss from other foreign species, such as hydroxyl (-O-H) groups. In addition, as a reproducible sample thickness or pathlength is required, a standard 1 cm (10 mm) pathlength quartz cuvette is used, which is held in a dedicated cuvette holder with fiber optic connections. Here, we have used a pair of ThorLabs TP01195070 optical fibers with industry-standard SMA connectors to take light from the Hamamatsu L7893 source (also with industry standard SMA connectors) to a ThorLabs CVH100 cuvette holder with a pair of SMA connectors (one for light going in, one for light going out), with our samples and water reference solutions held in Pike Technologies 162-0223 glass cuvettes with covers. This is shown in more detail in Figure 4. These cuvettes are the most common visible to NIR range sampling accessory, are available from a wide variety of sources, are easy to clean, and are relatively inexpensive.
Figure 4. Quartz cuvettes, holder, and fiber light guides
Finally, the NIR light, after passing through the sample, has to be measured. A dispersive spectrometer collects the light, and separates or disperses it into the individual wavelengths. Here, we have used a Hamamatsu C11118GA dispersive spectrometer, with a thermoelectrically cooled integrated array detector, covering the range of 850 nm out to 2,500 nm (or 2.5 μm, which is the edge of the mid-infrared region of 2.5 μm to 25 μm). Here, our spectrometer and detector range is well beyond the range needed for our beverage analyzer. However, substitution of other source, light transfer, and sampling components allows easy reconfiguration of this spectrometer for other applications.
Controlling the beverage analyzer spectrometer
While the selected hardware components offer a very powerful and capable combination, these components on their own will not supply us with the differentiation between the artificial and sugar sweetened beverages we wish to demonstrate. Here, additional communications, control, and data acquisition of the C11118GA system is provided through a Hamamatsu software interface package. This interface package allows easy definition, access, control, and acquisition of the C11118GA spectrometer through common development tools, such as Microsoft Visual Studio’s Visual Basic (VB). Figure 5 shows an example of using these interfacing tools in VB, where the program calls up the function to acquire the spectral data from the spectrometer.
Figure 5. Example of "GetSpectrum" data acquisition from Hamamatsu's spectrometer interface
VB then provides the tools to easily lay out windows, screens, menus, and function buttons as part of a dedicated graphical user interface (GUI) for our beverage analyzer (Figure 6).
Figure 6. Visual Basic layout for beverage analyzer
In modern programming environments, program control occurs through a combination of objects, properties, events, and methods. For example, Figure 7 shows code written for the "btnScan_Click," an event which is "fired" when the user clicks on the "Scan" button seen in Figure 6. Here, Figure 7 shows that only a small amount of code is needed to first define variables to hold information (the Dim statements), reset the display, and then get a spectrum from the C11118GA "GetSpectrum" method. The remainder of the code shown then ratios the collected spectrum to the DI water background, and calculates the absorbance spectrum from the ratio spectrum.
Figure 7. Calling Hamamatsu's data acquisition in Visual Basic
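The ratio-then-absorbance step described above (sample spectrum divided by the DI water background, then converted to absorbance) can be sketched in Python; the detector counts here are made-up numbers, not real acquisitions:

```python
import numpy as np

def absorbance_spectrum(sample_counts, background_counts):
    """Ratio the sample spectrum to the DI-water background, then
    convert the transmittance ratio to absorbance, A = -log10(T)."""
    T = np.asarray(sample_counts, dtype=float) / np.asarray(background_counts, dtype=float)
    return -np.log10(T)

# Hypothetical counts for three pixels: more absorption -> fewer counts.
sample = [800.0, 500.0, 100.0]
background = [1000.0, 1000.0, 1000.0]
A = absorbance_spectrum(sample, background)
print(A)  # roughly [0.097, 0.301, 1.0]
```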
Once the spectral data has been acquired, the programmer can then manipulate the data as necessary to extract the desired chemical information from the data array. For example, Figure 2 shows the NIR spectra of sugar-sweetened and artificially sweetened beverages (along with water), and Figure 8 shows the programmer’s "cross-check" on the data acquired, where the data has been exported into an Excel format and plotted to verify the program calculation steps are operating correctly. Note that in Figure 8, the peaks located at the X-axis values of 15 and 40 correspond to the peaks shown in Figure 2 at 970 and 1200 nanometers. Thus, the acquired spectrum can now be measured by selecting important peaks (based on pixel number or wavelength), baselines determined, band areas calculated, and converted to meaningful information for presentation to the operator.
Figure 8. Excel cross-check on captured spectra
Once Hamamatsu’s interface package has been defined in a VB project, the developer can then concentrate on program design, flow, data manipulation, and the user interface. For our beverage analyzer, we first want to prompt the operator to collect a blank distilled or deionized (DI) water sample, which will be used as the background reference. This is shown in Figure 9, where all the other dialog box components shown in Figure 6 are automatically hidden until the DI water background is collected.
Figure 9. Initial screen, prompting operator to collect DI water background spectrum
Once the DI water background is collected, the operator can then choose if they believe the beverage is sweetened with sugar(s) or artificial sweetener(s), shown in Figure 10.
Figure 10. Prompt to choose sweetener and start scan
The spectrum of the beverage is then collected and measured based on the selected pixels or wavelengths as shown in Figure 8 and Figure 2. If the measurements show large peaks at these locations, the program can then display a message indicating sugar(s) present (Figure 11), or alternately, if the measurements show minor or no peaks in those locations, the program can display a message indicating no sugar(s) present (Figure 12).
Figure 11. Operator chose sugar(s) and the beverage analyzer also finds sugar(s)
Figure 12. Operator chose sugar(s) but the beverage analyzer detected no sugar(s)
Although the creation of this "beverage analyzer" was not intended to be a commercial product, it does show the relative ease in using a Hamamatsu spectrometer system and interface package. Here, in conjunction with common optical components and simple programming tools, relatively sophisticated UV-Vis-NIR analyzers can be easily designed and implemented.
|
Revision as of 08:23, 17 February 2017 by Smithk (talk | contribs) (→B3. Estimated Quantities Tables: Removed, per BR)
(2) For bridges and retaining walls use the minimum "45 lb/cf" unless the Ø angle requires using a larger value. For box culverts use "30 lb/cf (min.), 60 lb/cf (max.)".
(A1.3) Design Unit Stresses:
Class B Concrete (Substructure) fc = 1,200 psi f'c = 3,000 psi
Class B Concrete (Retaining Wall) fc = 1,200 psi f'c = 3,000 psi
Class B-2 Concrete (Drilled Shafts & Rock Sockets) fc = 1,600 psi f'c = 4,000 psi
Class B-1 Concrete (Superstructure) fc = 1,600 psi f'c = 4,000 psi
Median Barrier Curb) fc = 1,600 psi f'c = 4,000 psi
Class B-1 Concrete (Substructure) fc = 1,600 psi f'c = 4,000 psi
Class B-1 Concrete (Box Culvert) fc = 1,600 psi f'c = 4,000 psi
Safety Barrier and Median Barrier Curb) fc = 1,600 psi f'c = 4,000 psi (1)
Reinforcing Steel (Grade 40) fs = 20,000 psi fy = 40,000 psi
Structural Carbon Steel(ASTM A709 Grade 36) fs = 20,000 psi fy = 36,000 psi
Structural Steel (ASTM A709 Grade 50) fs = 27,000 psi fy = 50,000 psi
Structural Steel (ASTM A709 Grade 50W) fs = 27,000 psi fy = 50,000 psi
Structural Steel (ASTM A709 Grade HPS50W) fs = 27,000 psi fy = 50,000 psi
Steel Pile (ASTM A709 Grade 36 or 50) fb = (3) psi fy = 36,000 or 50,000 psi (2)
Steel Pipe Pile (ASTM A252 Grade 2 or 3) fy = 35,000 or 45,000 psi (2)
(2) Indicate higher grade and strength only when required by design.
(3) 6,000 / 9,000 / 12,000. Design bearing for point bearing piles which are to be driven to rock or other point bearing material shall be designed for 9,000 psi, unless the Design Layout specifies otherwise.
Field Coat(s): The color of the field coat(s) shall be Gray (Federal Standard #26373) Brown (Federal Standard #30045) Black (Federal Standard #17038) Dark Blue (Federal Standard #25052) Bright Blue (Federal Standard #25095). The cost of the intermediate field coat will be considered completely covered by the contract lump sum unit price per sq. foot for "Intermediate Field Coat (System G)". The cost of the finish field coat will be considered completely covered by the contract lump sum unit price per sq. foot for "Finish Field Coat (System G)" "Finish Field Coat (System I)".
Surface Preparation: Surface preparation of the existing steel shall be in accordance with Sec 1081 for "Recoating of Structural Steel (System G, H or I)". The cost of surface preparation will be considered completely covered by the contract lump sum unit price per sq. foot for "Surface Preparation for Recoating Structural Steel".
Prime Coat: The cost of the prime coat will be considered completely covered by the contract lump sum unit price per sq. foot for “Field Application of Inorganic Zinc Primer". Tint of the prime coat for System G I shall be similar to the color of the field coat to be used.
Prime Coat: The cost of the prime coat will be considered completely covered by the contract lump sum unit price per sq. foot tons for "Calcium Sulfonate Primer".
Surface Preparation: Surface preparation of the existing steel shall be in accordance with Sec 1080 and Sec 1081 for "Recoating of Structural Steel (System G, H or I)". The cost of surface preparation will be considered completely covered by the contract lump sum unit price per sq. foot for "Surface Preparation for Recoating Structural Steel".
Prime Coat: The cost of the prime coat will be considered completely covered by the contract lump sum unit price per sq. foot for “Field Application of Inorganic Zinc Primer". Tint of the prime coat for System H shall be similar to the color of the field coat to be used.
Field Coats: The color of the field coats shall be Brown (Federal Standard #30045). The cost of the intermediate field coat will be considered completely covered by the contract lump sum unit price per sq. foot for "Intermediate Field Coat (System H)". The cost of the finish field coat will be considered completely covered by the contract lump sum unit price per sq. foot for "Finish Field Coat (System H)".
All reinforcement in the end bents (except detached wing walls) is included in the Estimated Quantities for Slab on Steel Slab on Concrete I-Girder Slab on Concrete Bulb-Tee Girder Slab on Concrete NU-Girder Slab on Concrete Beam Reinforced Concrete Slab Overlay.
Cost of channel shear connectors (Pile Anchors) C4 x 5.4 (ASTM A709 Grade 36) in place will be considered completely covered by the contract unit price for Structural Steel Piles ( 10 in. 12 in. 14 in.).
(B3.22) Stay-in-place forms are not an option with panels.
Form sheets shall not rest directly on the top of girder beam or floorbeam flanges. Sheets shall be securely fastened to form supports with a minimum bearing length of one inch on each end. Form supports shall be placed in direct contact with the flange. Welding on or drilling holes in the girder beam or floorbeam flanges will not be permitted. All steel fabrication and construction shall be in accordance with Secs 1080 and 712. Certified field welders will not be required for welding of the form supports.
The design of stay-in-place corrugated steel forms is per the manufacturer and shall be in accordance with Sec 703 for falsework and forms. The maximum actual weight of corrugated steel forms allowed shall be 4 psf, as assumed for girder loading.
(E2.1) [MS Cell] (E2.1A, E2.1B, E2.1C and E2.1D) (Example: Use the underlined parts in the bent headings for bridges having detached wing walls at end bents only. Remove the rows CIP Type and CIP Standard Plan if CIP piles are not used.)
(On the plans, report the following definition(s) just below the foundation data table for the specific method(s) used.)
Manufactured pile point reinforcement shall be used on all piles in this structure at Bent(s) No. and .
Closure plates shall be required for tips of pipe piles and shall not project beyond the outside diameter of the pipe piles. Satisfactory weldments may be made by beveling tip ends of pipe or by use of inside backing rings. In either case, proper gaps shall be used to obtain weld penetration full thickness of pipe. Payment for furnishing and installing closure plates will be considered completely covered by the contract unit price for Cast-in-Place Concrete Piles.
Bolts for intermediate diaphragms and cross frames that connect girders/stringers under different construction staged slab pours shall be installed snug tight, then tightened after both adjacent slab pours are completed.
Existing bolts/rivets on intermediate diaphragms and cross frames that connect girders/stringers under different construction staged slab pours shall be removed and replaced with new in-kind high strength bolts installed snug tight and in accordance with Sec 712. The high strength bolts shall be tightened after both adjacent slab pours are completed. Cost will be considered incidental to other pay items.
(*) At the contractor's option, rectangular fill plates may be used in lieu of diamond fill plates as shown in Optional Detail "B".
Use notes H2c1.10 and H2c1.11 when steel intermediate diaphragms are present.
For details of diaphragms, see Sheet No. __.
For location of coil ties and #6 bars, see Sheets No. __ and __.
The bracket assembly shall be galvanized in accordance with ASTM A123.
The drains shall be galvanized in accordance with ASTM A123.
(H7.3) Use with new wearing surface over new slab.
The drains pieces "A" and "B", (*) coil inserts, spacer and bracket assembly shall be galvanized in accordance with ASTM A123.
(H7.8) Use “coil insert required” for prestressed girders, “coil inserts required” for prestressed beams and “bolt hole” for steel structures.
Place the following notes (H7.10) and (H7.11) with prestressed girder and prestressed beam slab drain details.
(H7.11) Prestressed box and slab beams require two bolts.
The bolt hole for the bracket assembly attachment shall be located on the plate girder shop drawings.
Use the following notes when Fiberglass Reinforced Polymer (FRP) slab drains are used.
The color of the slab drain shall be Gray (Federal Standard #26373). The color shall be uniform throughout the resin and any coating used.
No additional payment will be made for this substitution.
(H10.20) Add K13 bars with two different wing lengths. More bars will need to be added if more than two different wing lengths exist.
Use a minimum lap of 2'-0" between K9 and K10 or K13 bars.
Minimum 6" diameter perforated PVC or PE pipe, unless larger pipes are required by the wall manufacturer's design.
Physics/Essays/Fedosin/Quantum Gravitational Resonator
A Quantum Gravitational Resonator (QGR) is a closed topological object in three-dimensional space, in the general case a cavity of arbitrary form with a definite surface and thickness. Owing to its quantum properties, a QGR can sustain indefinitely long, phase-shifted oscillations of the gravitational field strength and the gravitational torsion field.
Since the theory of the gravitational resonator is based on the Maxwell-like gravitational equations and on the Quantum Electromagnetic Resonator (QER), the history of the QGR is closely connected with that of the QER.
Classical gravitational resonator
The gravitational LC circuit can be composed by analogy with the electromagnetic LC circuit; the gravitational field strength and gravitational torsion field oscillate in the circuit as a result of an oscillating mass current. The voltage on the gravitational inductance and the current through the gravitational capacitance are:
{\displaystyle V_{gL}=-L_{g}\cdot {\frac {dI_{gL}}{dt}}.\ }
{\displaystyle I_{gC}=C_{g}\cdot {\frac {dV_{gC}}{dt}}.\ }
Differentiating these relations gives:
{\displaystyle {\frac {dV_{gL}}{dt}}=-L_{g}{\frac {d^{2}I_{gL}}{dt^{2}}},\qquad {\frac {dI_{gC}}{dt}}=C_{g}{\frac {d^{2}V_{gC}}{dt^{2}}}.}
Considering the following relationships for voltages and currents:
{\displaystyle V_{gL}=V_{gC}=V_{g},\qquad I_{gL}=I_{gC}=I_{g},\ }
we obtain the oscillation equations:
{\displaystyle ~{\frac {d^{2}I_{g}}{dt^{2}}}+{\frac {1}{L_{g}C_{g}}}I_{g}=0,\qquad {\frac {d^{2}V_{g}}{dt^{2}}}+{\frac {1}{L_{g}C_{g}}}V_{g}=0.\quad \quad \quad \quad \quad (1)\ }
Furthermore, considering the following relationships between voltage and mass, current and flux of gravitational torsion field:
{\displaystyle m=C_{g}V_{g},\qquad \Phi =L_{g}I_{g}}
the above oscillation equation can be rewritten in the form:
{\displaystyle {\frac {d^{2}m}{dt^{2}}}+{\frac {1}{L_{g}C_{g}}}m=0.\quad \quad \quad \quad \quad (2)\ }
The solution of this equation is:
{\displaystyle m(t)=m_{0}\sin(\omega _{g}t),\ }
where
{\displaystyle \omega _{g}={\frac {1}{\sqrt {L_{g}C_{g}}}}\ }
is the resonance frequency, and
{\displaystyle \rho _{LC}={\sqrt {\frac {L_{g}}{C_{g}}}},\ }
is the gravitational characteristic impedance.
For the sake of completeness we can present the differential equation for the flux of gravitational torsion field in the form:
{\displaystyle {\frac {d^{2}\Phi }{dt^{2}}}+{\frac {1}{L_{g}C_{g}}}\Phi =0.\quad \quad \quad \quad \quad (3)\ }
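Equations (1)–(3) are ordinary harmonic-oscillator equations, so they can be checked numerically. Below is a minimal sketch (the circuit values L_g and C_g are illustrative assumptions, not from the text) that integrates equation (2) with a symplectic Euler step and compares the result with the closed-form solution m(t) = m0 sin(ω_g t):

```python
import math

# Hypothetical gravitational inductance and capacitance (illustration only)
L_g, C_g = 2.0, 0.5
omega = 1.0 / math.sqrt(L_g * C_g)   # resonance frequency
rho = math.sqrt(L_g / C_g)           # characteristic impedance

# Integrate d^2 m/dt^2 + m/(L_g C_g) = 0 with m(0) = 0, m'(0) = omega * m0
m0_amp = 1.0
dt, t = 1e-4, 0.0
m, v = 0.0, omega * m0_amp
while t < 1.0:
    v += -m / (L_g * C_g) * dt   # symplectic (semi-implicit) Euler
    m += v * dt
    t += dt

# Compare with the closed-form solution m(t) = m0 * sin(omega * t)
exact = m0_amp * math.sin(omega * t)
assert abs(m - exact) < 1e-2
```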
The realization of the gravitational LC circuit is described in the article on Maxwell-like gravitational equations.
Quantum general approach
Quantum gravitational LC circuit oscillator
Inductance momentum quantum operator in the electric-like gravitational mass space can be presented in the following form:
{\displaystyle {\hat {p}}_{gm}=-i\hbar {\frac {d}{dm}},\quad \quad \quad \quad \quad {\hat {p}}_{gm}^{*}=i\hbar {\frac {d}{dm}},\quad \quad \quad \quad \quad (4a)\ }
{\displaystyle \hbar \ }
is the reduced Planck constant,
{\displaystyle {\hat {p}}_{gm}^{*}\ }
is the complex-conjugate momentum operator,
{\displaystyle m\ }
is the induced mass.
Capacitance momentum quantum operator in the magnetic-like gravitational mass space can be presented in the following form:
{\displaystyle {\hat {p}}_{g\Phi }=-i\hbar {\frac {d}{d\Phi }},\quad \quad \quad \quad \quad {\hat {p}}_{g\Phi }^{*}=i\hbar {\frac {d}{d\Phi }},\quad \quad \quad \quad \quad (4b)\ }
{\displaystyle \Phi \ }
is the induced torsion field flux, which is generated by the electric-like gravitational mass current (
{\displaystyle i_{g}\ }
{\displaystyle \Phi =L_{g}\cdot i_{g}.\ }
We can introduce a third quantum momentum operator, defined on the current space:
{\displaystyle {\hat {p}}_{gi}=-{\frac {i\hbar }{L_{g}}}{\frac {d}{di_{g}}},\quad \quad \quad \quad \quad {\hat {p}}_{gi}^{*}={\frac {i\hbar }{L_{g}}}{\frac {d}{di_{g}}},\quad \quad \quad \quad \quad (4c)\ }
These quantum momentum operators define three Hamiltonian operators:
{\displaystyle {\hat {H}}_{gLm}=-{\frac {\hbar ^{2}}{2L_{g}}}\cdot {\frac {d^{2}}{dm^{2}}}+{\frac {L_{g}\omega _{0}^{2}}{2}}{\hat {m}}^{2}\quad \quad \quad \quad \quad (5a)\ }
{\displaystyle {\hat {H}}_{gC\Phi }=-{\frac {\hbar ^{2}}{2C_{g}}}\cdot {\frac {d^{2}}{d\Phi ^{2}}}+{\frac {C_{g}\omega _{0}^{2}}{2}}{\hat {\Phi }}^{2}\quad \quad \quad \quad \quad (5b)\ }
{\displaystyle {\hat {H}}_{gLi}=-{\frac {\hbar ^{2}\omega _{0}^{2}}{2L_{g}}}\cdot {\frac {d^{2}}{di_{g}^{2}}}+{\frac {L_{g}\omega _{0}}{2}}{\hat {i}}_{g}^{2},\quad \quad \quad \quad \quad (5c)\ }
{\displaystyle \omega _{0}={\frac {1}{\sqrt {L_{g}C_{g}}}}\ }
is the resonance frequency. We consider the case without dissipation (
{\displaystyle R_{g}=0\ }
). The only difference between the gravitational charge and current spaces and the traditional 3D coordinate space is that they are one-dimensional (1D). The Schrödinger equation for the gravitational quantum LC circuit can be written in three forms:
{\displaystyle -{\frac {\hbar ^{2}}{2L_{g}}}{\frac {d^{2}\Psi }{dm^{2}}}+{\frac {L_{g}\omega _{0}^{2}}{2}}m^{2}\Psi =W\Psi \quad \quad \quad \quad \quad (6a)\ }
{\displaystyle -{\frac {\hbar ^{2}}{2C_{g}}}{\frac {d^{2}\Psi }{d\Phi ^{2}}}+{\frac {C_{g}\omega _{0}^{2}}{2}}\Phi ^{2}\Psi =W\Psi \quad \quad \quad \quad \quad (6b)\ }
{\displaystyle -{\frac {\hbar ^{2}\omega _{0}^{2}}{2L_{g}}}{\frac {d^{2}\Psi }{di_{g}^{2}}}+{\frac {L_{g}\omega _{0}}{2}}i_{g}^{2}\Psi =W\Psi .\quad \quad \quad \quad \quad (6c)\ }
To solve these equations we introduce the following dimensionless variables:
{\displaystyle \xi _{m}={\frac {m}{m_{0}}};\quad \quad m_{0}={\sqrt {\frac {\hbar }{L_{g}\omega _{0}}}};\quad \quad \lambda _{m}={\frac {2W}{\hbar \omega _{0}}}\quad \quad (7a)\ }
{\displaystyle \xi _{\Phi }={\frac {\Phi }{\Phi _{0}}};\quad \quad \Phi _{0}={\sqrt {\frac {\hbar }{C_{g}\omega _{0}}}};\quad \quad \lambda _{\Phi }={\frac {2W}{\hbar \omega _{0}}}\quad \quad (7b)\ }
{\displaystyle \xi _{i}={\frac {i_{g}}{i_{g0}}};\quad \quad i_{g0}={\sqrt {\frac {\hbar \omega _{0}}{L_{g}}}};\quad \quad \lambda _{i}={\frac {2W}{\hbar \omega _{0}}}.\quad \quad (7c)\ }
where
{\displaystyle m_{0}\ }
is the scaling induced electric-like gravitational mass,
{\displaystyle \Phi _{0}\ }
is the scaling induced gravitational torsion field flux, and
{\displaystyle i_{g0}\ }
is the scaling induced mass current.
Then the Schrödinger equation takes the form of the Chebyshev-Hermite differential equation:
{\displaystyle \left({\frac {d^{2}}{d\xi ^{2}}}+\lambda -\xi ^{2}\right)\Psi =0.\ }
The eigenvalues of the Hamiltonian are:
{\displaystyle W_{n}=\hbar \omega _{0}(n+1/2),\quad \quad n=0,1,2,\dots \ }
where at
{\displaystyle n=0\ }
we have the zero-point oscillation:
{\displaystyle W_{0}=\hbar \omega _{0}/2.\ }
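The spectrum above is the standard equidistant harmonic-oscillator ladder, which a few lines of code can illustrate (setting ħ and ω0 to 1 is a dimensionless-units assumption for the sketch):

```python
# Energy levels W_n = hbar * omega0 * (n + 1/2) of the quantum LC oscillator
hbar, omega0 = 1.0, 1.0   # dimensionless units (assumption)
W = [hbar * omega0 * (n + 0.5) for n in range(4)]

assert W[0] == 0.5 * hbar * omega0   # zero-point energy W_0 = hbar*omega0/2
# successive levels are separated by exactly one quantum hbar*omega0
assert all(abs((W[n + 1] - W[n]) - hbar * omega0) < 1e-12 for n in range(3))
```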
In the general case the scaling mass and torsion flux can be rewritten in the form:
{\displaystyle m_{0}={\sqrt {\frac {\hbar }{L_{g}\omega _{0}}}}={\frac {m_{P}}{\sqrt {4\pi }}}={\sqrt {\frac {\hbar c}{4\pi G}}},\ }
{\displaystyle \Phi _{0}={\sqrt {\frac {\hbar }{C_{g}\omega _{0}}}}={\frac {h}{m_{P}{\sqrt {\pi }}}}={\sqrt {\frac {4\pi G\hbar }{c}}},\ }
{\displaystyle m_{P}\ }
is the Planck mass,
{\displaystyle c\ }
is the speed of light, and
{\displaystyle G\ }
is the gravitational constant.
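These scaling relations can be checked numerically. Using CODATA constants (an assumption about the intended values), m0 should equal m_P/√(4π), and the product m0Φ0 should equal ħ, consistent with ω0 = 1/√(L_gC_g):

```python
import math

# CODATA values, SI units (assumption for the check)
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3/(kg*s^2)

m_P = math.sqrt(hbar * c / G)                    # Planck mass
m0 = math.sqrt(hbar * c / (4 * math.pi * G))     # scaling induced mass
Phi0 = math.sqrt(4 * math.pi * G * hbar / c)     # scaling torsion flux

# m0 = m_P / sqrt(4*pi), and m0 * Phi0 = hbar exactly
assert abs(m0 - m_P / math.sqrt(4 * math.pi)) / m0 < 1e-12
assert abs(m0 * Phi0 - hbar) / hbar < 1e-12
```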
These three equations (4) form the basis of nonrelativistic quantum gravidynamics, which considers elementary particles from the intrinsic point of view. Note that standard quantum electrodynamics considers elementary particles from the external point of view.
Gravitational resonator as quantum LC circuit
Following the Luryi density of states (DOS) approach, we can define the gravitational quantum capacitance as:
{\displaystyle C_{g}=m_{g}^{2}\cdot D_{2D}\cdot S_{g},\ }
and quantum inductance as:
{\displaystyle L_{g}=\Phi _{g}^{2}\cdot D_{2D}\cdot S_{g},\ }
{\displaystyle S_{g}\ }
is the resonator surface area,
{\displaystyle D_{2D}={\frac {m_{0}}{\pi \hbar ^{2}}}\ }
is two dimensional (2D) DOS,
{\displaystyle m_{0}\ }
is the carrier mass,
{\displaystyle m_{g}\ }
is the induced gravitational mass, and
{\displaystyle \Phi _{g}\ }
is the gravitational torsion field flux.
Energy stored on quantum capacitance is:
{\displaystyle W_{Cg}={\frac {m_{g}^{2}}{2C_{g}}}={\frac {1}{2D_{2D}S_{g}}}.\ }
Energy stored on quantum inductance is:
{\displaystyle W_{Lg}={\frac {\Phi _{g}^{2}}{2L_{g}}}={\frac {1}{2D_{2D}S_{g}}}=W_{Cg}.\ }
Resonator angular frequency is:
{\displaystyle \omega _{gR}={\frac {1}{\sqrt {L_{g}C_{g}}}}={\frac {1}{m_{g}\Phi _{g}D_{2D}S_{g}}}.\ }
Energy conservation law for zero oscillation is:
{\displaystyle W_{gR}={\frac {1}{2}}\hbar \omega _{gR}={\frac {\hbar }{2m_{g}\Phi _{g}D_{2D}S_{g}}}=W_{Cg}=W_{Lg}.\ }
{\displaystyle m_{g}\Phi _{g}=\hbar .\ }
Characteristic gravitational resonator impedance is:
{\displaystyle \rho _{g}={\sqrt {\frac {L_{g}}{C_{g}}}}={\frac {\Phi _{g}}{m_{g}}}=2\alpha {\frac {\Phi _{g0}}{m_{S}}}=\rho _{g0},\ }
{\displaystyle \alpha \ }
is the fine structure constant,
{\displaystyle \Phi _{g0}=h/m_{S}\ }
is the gravitational torsion flux quantum,
{\displaystyle h\ }
is the Planck constant,
{\displaystyle m_{S}\ }
is the Stoney mass,
{\displaystyle \rho _{g0}\ }
is the gravitational characteristic impedance of free space.
Considering the above equations, we find the following induced mass and induced gravitational torsion flux:
{\displaystyle m_{g}={\frac {m_{S}}{\sqrt {4\pi \alpha }}},\ }
{\displaystyle \Phi _{g}={\sqrt {\frac {\alpha }{\pi }}}{\frac {h}{m_{S}}}.\ }
Note that these induced quantities maintain the energy balance between the resonator oscillation energy and the total energy on the capacitance and inductance:
{\displaystyle \hbar \omega _{gR}=W_{gL}(t)+W_{gC}(t).\ }
Since capacitance oscillations are phase shifted (
{\displaystyle \psi =\pi /2\ }
) with respect to the inductance oscillations, we get:
{\displaystyle W_{gL}={\begin{cases}0,&{\mbox{at }}t=0;\psi =0{\mbox{ and}}\,t={\frac {T_{R}}{2}};\psi =\pi \\W_{L},&{\mbox{at }}t={\frac {T_{R}}{4}};\psi ={\frac {\pi }{4}}{\mbox{ and}}\,t={\frac {3T_{R}}{4}};\psi ={\frac {3\pi }{4}}\end{cases}}\ }
{\displaystyle W_{gC}={\begin{cases}W_{C},&{\mbox{at }}t=0;\psi =0{\mbox{ and}}\,t={\frac {T_{R}}{2}};\psi =\pi \\0,&{\mbox{at }}t={\frac {T_{R}}{4}};\psi ={\frac {\pi }{4}}{\mbox{ and}}\,t={\frac {3T_{R}}{4}};\psi ={\frac {3\pi }{4}}\end{cases}}\ }
{\displaystyle T_{R}={\frac {2\pi }{\omega _{gR}}}\ }
is the oscillation period.
Planckion resonator
Planckion radius is:
{\displaystyle r_{P}={\frac {\lambda _{P}}{2\pi }},\ }
{\displaystyle \lambda _{P}={\frac {h}{m_{P}c}}\ }
is the Compton wavelength of planckion,
{\displaystyle c\ }
is the speed of light, and
{\displaystyle m_{P}\ }
is the Planck mass.
Planckion surface scaling parameter is:
{\displaystyle S_{P}=2\pi r_{P}^{2}={\frac {\lambda _{P}^{2}}{2\pi }}.\ }
Planckion angular frequency is:
{\displaystyle \omega _{P}={\frac {m_{P}c^{2}}{\hbar }}={\frac {2\pi c}{\lambda _{P}}}.\ }
Planckion density of states is:
{\displaystyle D_{P}={\frac {1}{S_{P}W_{P}}}={\frac {1}{S_{P}\hbar \omega _{P}}}={\frac {m_{P}}{2\pi \hbar ^{2}}}.\ }
Standard DOS quantum resonator approach yields the following values for the gravitational reactive quantum parameters:
{\displaystyle C_{P}=m_{g}^{2}D_{P}S_{P}={\frac {m_{S}^{2}}{4\pi \alpha }}{\frac {m_{P}}{2\pi \hbar ^{2}}}{\frac {\lambda _{P}^{2}}{2\pi }}={\frac {\varepsilon _{g}\lambda _{P}}{2\pi }}={\frac {m_{P}}{4\pi c^{2}}},\ }
{\displaystyle \varepsilon _{g}={\frac {1}{4\pi G}}\ }
is the gravitoelectric gravitational constant in the set of selfconsistent gravitational constants, and
{\displaystyle L_{P}=\Phi _{g}^{2}D_{P}S_{P}={\frac {\Phi _{0}^{2}}{4\pi \beta }}D_{P}S_{P}={\frac {\alpha h^{2}}{\pi m_{S}^{2}}}D_{P}S_{P}={\frac {\alpha h^{2}}{\pi m_{S}^{2}}}{\frac {m_{P}}{2\pi \hbar ^{2}}}{\frac {\lambda _{P}^{2}}{2\pi }}={\frac {\mu _{g}\lambda _{P}}{2\pi }},\ }
{\displaystyle \mu _{g}={\frac {4\pi G}{c^{2}}}\ }
is the gravitomagnetic gravitational constant of selfconsistent gravitational constants,
{\displaystyle \beta ={\frac {1}{4\alpha }}\ }
is the gravitational torsion coupling constant, which is equal to magnetic coupling constant.
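These reactive parameters can be verified numerically: with C_P = ε_gλ_P/(2π) and L_P = μ_gλ_P/(2π), the LC resonance frequency 1/√(L_PC_P) should reproduce the planckion frequency ω_P = m_Pc²/ħ. A sketch using CODATA constants (an assumption about the intended values):

```python
import math

hbar = 1.054571817e-34
h = 2 * math.pi * hbar
c = 2.99792458e8
G = 6.67430e-11

m_P = math.sqrt(hbar * c / G)          # Planck mass
lam_P = h / (m_P * c)                  # Compton wavelength of the planckion
omega_P = m_P * c**2 / hbar            # planckion angular frequency
eps_g = 1 / (4 * math.pi * G)          # gravitoelectric gravitational constant
mu_g = 4 * math.pi * G / c**2          # gravitomagnetic gravitational constant

C_P = eps_g * lam_P / (2 * math.pi)    # planckion quantum capacitance
L_P = mu_g * lam_P / (2 * math.pi)     # planckion quantum inductance

# the LC resonance frequency reproduces the planckion frequency ...
assert abs(1 / math.sqrt(L_P * C_P) - omega_P) / omega_P < 1e-12
# ... and C_P also equals m_P/(4*pi*c^2), as stated in the text
assert abs(C_P - m_P / (4 * math.pi * c**2)) / C_P < 1e-12
```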
Thus, the so-called free planckion can be considered as a discoid quantum resonator of radius
{\displaystyle r_{P}\ }
Bohr atom as a gravitational quantum resonator
The gravitational quantum capacitance is:
{\displaystyle C_{\Gamma }=m_{R}^{2}D_{B}S_{B}=\varepsilon _{\Gamma }a_{B},\ }
{\displaystyle a_{B}\ }
is the Bohr radius,
{\displaystyle S_{B}=\pi a_{B}^{2}\ }
is the flat surface area,
{\displaystyle ~m_{R}={\frac {\sqrt {m_{p}m_{e}}}{2{\sqrt {\pi }}}}}
is the induced gravitational mass,
{\displaystyle D_{B}={\frac {m_{e}}{\pi \hbar ^{2}}}\ }
is the density of states,
{\displaystyle \varepsilon _{\Gamma }={\frac {1}{4\pi \Gamma }}}
is the gravitoelectric gravitational constant of selfconsistent gravitational constants in the field of strong gravitation,
{\displaystyle \Gamma }
is the strong gravitational constant, and
{\displaystyle m_{p}\ }
{\displaystyle m_{e}\ }
are the masses of the proton and the electron.
The gravitational quantum inductance is:
{\displaystyle L_{\Gamma }=\phi _{\Gamma }^{2}D_{B}S_{B}=\mu _{\Gamma }a_{B},\ }
{\displaystyle \mu _{\Gamma }={\frac {4\pi \Gamma }{c^{2}}}}
is the gravitomagnetic gravitational constant of selfconsistent gravitational constants in the field of strong gravitation, and the induced gravitational torsion flux is:
{\displaystyle \phi _{\Gamma }={\frac {\alpha h}{\sqrt {\pi m_{p}m_{e}}}}={\frac {2\alpha {\sqrt {m_{e}}}}{\sqrt {\pi m_{p}}}}\sigma _{e}={\frac {2\alpha {\sqrt {m_{p}}}}{\sqrt {\pi m_{e}}}}\Phi _{\Gamma }={\frac {2{\sqrt {m_{p}}}}{\alpha {\sqrt {\pi m_{e}}}}}\Phi _{\Omega },}
{\displaystyle \sigma _{e}\ }
is the velocity circulation quantum,
{\displaystyle \Phi _{\Gamma }={\frac {h}{2m_{p}}}}
is the torsion flux quantum in strong gravitation, and
{\displaystyle m_{p}}
is the proton mass.
Here the strong gravitational electron torsion flux for the first energy level is:
{\displaystyle \Phi _{\Omega }=\Omega S_{B}={\frac {\mu _{\Gamma }m_{e}}{4\pi a_{B}}}\sigma _{e}={\frac {\Gamma m_{e}}{c^{2}a_{B}}}\sigma _{e}={\frac {\Gamma h}{2c^{2}a_{B}}}={\frac {\pi \alpha \Gamma m_{e}}{c}}=\alpha ^{2}\Phi _{\Gamma },\ }
{\displaystyle \Omega \ }
is the gravitational torsion field of strong gravitation in electron disc.
The characteristic impedance of the resonator is:
{\displaystyle \rho _{\Gamma }={\sqrt {\frac {L_{\Gamma }}{C_{\Gamma }}}}={\sqrt {\frac {\mu _{\Gamma }}{\varepsilon _{\Gamma }}}}={\frac {4\pi \Gamma }{c}}=6.346\cdot 10^{21}\,\mathrm {m^{2}/(s\cdot kg)} .\ }
The resonance frequency is:
{\displaystyle \omega _{\Gamma }={\frac {1}{\sqrt {L_{\Gamma }C_{\Gamma }}}}={\frac {c}{a_{B}}}={\frac {\omega _{B}}{\alpha }},\ }
{\displaystyle \omega _{B}={\frac {c\alpha }{a_{B}}}\ }
is the angular frequency of electron rotation in atom.
The energy stored on the capacitance is:
{\displaystyle W_{C}={\frac {m_{R}^{2}}{2C_{\Gamma }}}={\frac {\hbar \omega _{B}}{2}}={\frac {\alpha \hbar \omega _{\Gamma }}{2}}=W_{B},\ }
and the energy stored on the inductance is:
{\displaystyle W_{L}={\frac {\phi _{\Gamma }^{2}}{2L_{\Gamma }}}=W_{B}.\ }
{\displaystyle W_{B}\ }
{\displaystyle m_{Bmin}={\frac {W_{B}}{c^{2}}}={\frac {\hbar \omega _{B}}{2c^{2}}}={\frac {\alpha ^{2}}{2}}m_{e}<<m_{e},\ }
{\displaystyle m_{Bmin}}
Remote Sensing | Free Full-Text | Quantitative Retrieval of Volcanic Sulphate Aerosols from IASI Observations
Academic Editor: Juan Antonio Bravo-Aranda
(This article belongs to the Special Issue Remote Observation of Volcanic Emissions and Their Impacts on the Atmosphere, Biosphere and Environment)
We developed a new retrieval algorithm based on the Infrared Atmospheric Sounding Interferometer (IASI) observations, called AEROIASI-H2SO4, to measure the extinction and mass concentration of sulphate aerosols (binary solution droplets of sulphuric acid and water), with moderate random uncertainties (typically ∼35% total uncertainty for column mass concentration estimations). The algorithm is based on a self-adapting Tikhonov–Phillips regularization method. It is here tested over a moderate-intensity eruption of Mount Etna volcano (18 March 2012), Italy, and is used to characterise this event in terms of the spatial distribution of the retrieved plume. Comparisons with simultaneous and independent aerosol optical depth observations from MODIS (Moderate Resolution Imaging Spectroradiometer), SO₂ plume observations from IASI and simulations with the CHIMERE chemistry/transport model show that AEROIASI-H2SO4 correctly identifies the volcanic plume horizontal morphology, thus providing crucial new information towards the study of volcanic emissions, the volcanic sulphur cycle in the atmosphere, plume evolution processes, and their impacts. Insights are given on the possible spectroscopic evidence of the presence in the plume of larger-sized particles than previously reported for secondary sulphate aerosols from volcanic eruptions.
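The abstract mentions a self-adapting Tikhonov-Phillips regularization. Stripped of the radiative-transfer forward model, the core idea is damped least squares: solve (XᵀX + λI)c = Xᵀy instead of the bare normal equations. A toy sketch with hypothetical data (the actual AEROIASI-H2SO4 scheme chooses λ adaptively and operates on IASI spectra):

```python
# Zeroth-order Tikhonov regularization for a linear model y ~ c0 + c1*t:
# solve (X^T X + lam*I) c = X^T y. Two parameters, so the 2x2 normal
# equations are inverted in closed form.
def tikhonov_fit(ts, ys, lam):
    s00, s01 = len(ts), sum(ts)
    s11 = sum(t * t for t in ts)
    b0 = sum(ys)
    b1 = sum(t * y for t, y in zip(ts, ys))
    a, b, d = s00 + lam, s01, s11 + lam      # damped normal matrix [[a,b],[b,d]]
    det = a * d - b * b
    return ((d * b0 - b * b1) / det, (a * b1 - b * b0) / det)

ts = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]                    # hypothetical data: exactly y = 1 + 2t
c0, c1 = tikhonov_fit(ts, ys, 0.0)           # lam = 0 reduces to least squares
assert abs(c0 - 1.0) < 1e-9 and abs(c1 - 2.0) < 1e-9
c0r, c1r = tikhonov_fit(ts, ys, 10.0)        # damping shrinks the solution
assert abs(c1r) < abs(c1)
```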
Keywords: volcanic plumes; IASI; sulphate aerosols; inverse problems in Earth observations
Guermazi, H.; Sellitto, P.; Cuesta, J.; Eremenko, M.; Lachatre, M.; Mailler, S.; Carboni, E.; Salerno, G.; Caltabiano, T.; Menut, L.; Serbaji, M.M.; Rekhiss, F.; Legras, B. Quantitative Retrieval of Volcanic Sulphate Aerosols from IASI Observations. Remote Sens. 2021, 13, 1808. https://doi.org/10.3390/rs13091808
On Fractional Order Dengue Epidemic Model
Hamed Al-Sulami, Moustafa El-Shahed, Juan J. Nieto, Wafa Shammakh, "On Fractional Order Dengue Epidemic Model", Mathematical Problems in Engineering, vol. 2014, Article ID 456537, 6 pages, 2014. https://doi.org/10.1155/2014/456537
Hamed Al-Sulami,1 Moustafa El-Shahed,1,2 Juan J. Nieto,1,3 and Wafa Shammakh1
2Department of Mathematics, Faculty of Art and Sciences, Qassim University, P.O. Box 3771, Unaizah-Qassim 51911, Saudi Arabia
3Departamento de Análisis Matemático, Facultad de Matemáticas, Universidad de Santiago de Compostela, 15782 Santiago de Compostela, Spain
This paper deals with the fractional order dengue epidemic model. The stability of the disease-free and positive fixed points is studied. The Adams-Bashforth-Moulton algorithm has been used to solve and simulate the system of differential equations.
Dengue is a major public health problem in tropical and subtropical countries. It is a vector-borne disease transmitted by Aedes aegypti and Aedes albopictus mosquitoes. Four different serotypes can cause dengue fever. A human infected by one serotype, on recovery, gains total immunity to that serotype and only partial and transient immunity to the other three.
Dengue can vary from mild to severe. The more severe forms of dengue include shock syndrome and dengue hemorrhagic fever (DHF). Patients who develop these more serious forms of dengue fever usually need to be hospitalized. The full life cycle of dengue fever virus involves the role of the mosquito as a transmitter (or vector) and humans as the main victim and source of infection. Preventing or reducing dengue virus transmission depends entirely on the control of mosquito vectors or interruption of human vector contact [1, 2].
In this paper we study the fractional order dengue epidemic model. The stability of the equilibrium points is studied, and numerical solutions of the model are given. We argue that fractional order equations are more suitable than integer order ones for modeling biological, economic, and social systems (generally, complex adaptive systems) where memory effects are important. The Adams-Bashforth-Moulton algorithm has been used to solve and simulate the system of differential equations.
Esteva and Vargas [3] developed a dengue fever transmission model by assuming that, once a person recovers from the disease, he or she will not be reinfected by the disease. The model also assumes that the host population is constant, that is, the death rate and the birth rate equal . The host-vector model for the dengue transmission of Esteva and Vargas [3] is as follows: where is the recruitment rate of the host population, is the recruitment rate of the vector population, is the number of susceptible in the host population, is the number of infective in the host population, is the number of immunes in the host population, is the vector population, is the number of susceptible in the vector population, is the number of infective in the vector population, is the death rate in the vector population, is the transmission probability from vector to host, is the transmission probability from host to vector, is the recovery rate in the host population, is the biting rate of the vector.
The notion of fractional calculus was anticipated by Leibniz, one of the founders of standard calculus, in a letter written in 1695. Recently, considerable attention has been given to models based on FDEs in different areas of research. The most essential property of these models is their nonlocality, which integer-order differential operators do not possess. By this property we mean that the next state of a model depends not only upon its current state but also upon all of its historical states. There are many definitions of fractional derivatives [4, 5]. Perhaps the best known is the Riemann-Liouville definition. The Riemann-Liouville derivative of order is defined as where () is the gamma function and is an integer. An alternative definition was introduced by Caputo as follows, which is a sort of regularization of the Riemann-Liouville derivative: Pooseh et al. [6] introduced the notion of fractional derivative in the sense of Riemann-Liouville to reformulate the dynamics of the classical model (1) in terms of fractional derivatives. They applied a recent approximation technique to obtain numerical solutions to the fractional model. The system in this paper will be in the sense of the Caputo fractional derivative, given by the following set of fractional order differential equations: Because model (4) monitors the dynamics of human populations, all the parameters are assumed to be nonnegative. Furthermore, it can be shown that all state variables of the model are nonnegative for all time (see, for instance, [7–9]).
Lemma 1. The closed set is positively invariant with respect to model (4).
Proof. The fractional derivative of the total population, obtained by adding all the equations of model (4), is given by
The solution to (5) is given by , where is the Mittag-Leffler function. Considering the fact that the Mittag-Leffler function has an asymptotic behavior [4, 10],
One can observe that as . The proof for the vector population case is completely similar to that for the host population and is therefore omitted. One can observe that . Therefore, all solutions of the model with initial conditions in remain in for all > 0. Thus, the region is positively invariant with respect to model (4).
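The proof above relies on the Mittag-Leffler function E_α(z) = Σ_k z^k/Γ(αk + 1). For small arguments it can be evaluated directly from its series; a minimal sketch follows (the truncation length is an arbitrary choice, and the asymptotic expansion cited in [4, 10] is needed for large |z|):

```python
import math

# Truncated series for the Mittag-Leffler function
# E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1), adequate for small |z|.
def mittag_leffler(alpha, z, terms=60):
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

# Sanity checks: E_1(z) = e^z, and E_alpha(0) = 1 for any alpha
assert abs(mittag_leffler(1.0, -0.5) - math.exp(-0.5)) < 1e-12
assert mittag_leffler(0.8, 0.0) == 1.0
```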
In the following, we will study the dynamics of system (4).
To evaluate the equilibrium points let
Then . By (4), a positive equilibrium satisfies The Jacobian matrix for the system given in (4) evaluated at the disease-free equilibrium is as follows:
Theorem 2. The disease-free equilibrium is locally asymptotically stable if and is unstable if .
Proof. The disease-free equilibrium is locally asymptotically stable if all the eigenvalues, , of the Jacobian matrix satisfy the following condition [11–14]:
The eigenvalues of the Jacobian matrix are , , and ; the other two roots are determined by the quadratic equation where . Hence is locally asymptotically stable if and is unstable if .
The quantity is called the basic reproductive number of the disease, since it represents the average number of secondary cases that one case can produce if introduced into a susceptible population.
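The eigenvalue condition used in the proof of Theorem 2, namely |arg(λ)| > απ/2 for every eigenvalue λ of the Jacobian [11–14], is easy to check mechanically. The sketch below uses hypothetical eigenvalues and illustrates how lowering the order α can stabilize an equilibrium that is unstable in the integer-order case:

```python
import cmath
import math

# Fractional-order stability test: an equilibrium is locally asymptotically
# stable if |arg(lambda)| > alpha*pi/2 for every Jacobian eigenvalue lambda.
def is_stable(eigenvalues, alpha):
    return all(abs(cmath.phase(lam)) > alpha * math.pi / 2
               for lam in eigenvalues)

# Hypothetical eigenvalues: one negative real, one at argument pi/4
eigs = [complex(-1.0, 0.0), cmath.exp(1j * math.pi / 4)]
assert not is_stable(eigs, 1.0)   # unstable as an integer-order system
assert is_stable(eigs, 0.4)       # a small enough alpha stabilizes it
```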
We now discuss the asymptotic stability of the endemic (positive) equilibrium of the system given by (4). The Jacobian matrix evaluated at the endemic equilibrium is given as
The characteristic equation of is where
If . Let denote the discriminant of a polynomial ; then
Following [14–18], we have Proposition 3.
Proposition 3. One assumes that exists in .(i)If the discriminant of , , is positive and Routh-Hurwitz are satisfied, that is, , , , and , then is locally asymptotically stable.(ii)If , , , , and , then is locally asymptotically stable.(iii)If , , , and , then is unstable.(iv)The necessary condition for the equilibrium point , to be locally asymptotically stable, is .
4. Numerical Methods and Simulations
Since most fractional order differential equations do not have exact analytic solutions, approximation and numerical techniques must be used. Several analytical and numerical methods have been proposed for solving fractional order differential equations. For the numerical solution of system (4) one can use the generalized Adams-Bashforth-Moulton method. To give the approximate solution by means of this algorithm, consider the following nonlinear fractional differential equation [19]: This equation is equivalent to the Volterra integral equation:
Diethelm et al. used the predictor-corrector scheme [15, 16, 20] based on the Adams-Bashforth-Moulton algorithm to integrate (17). By applying this scheme to the fractional order dengue epidemic model and setting , , and , (17) can be discretized as follows [19]: where
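For a scalar Caputo equation D^α y = f(t, y), y(0) = y0, the predictor-corrector scheme can be sketched as below. The weights are the standard fractional Adams-Bashforth-Moulton ones; the test problem f(t, y) = -y is an illustrative assumption (for α = 1 the scheme reduces to an Euler predictor with trapezoid corrector, and the exact solution is e^(-t)):

```python
import math

# Fractional Adams-Bashforth-Moulton predictor-corrector for the scalar
# Caputo equation D^alpha y = f(t, y), y(0) = y0, on [0, T] with N steps.
def fabm(f, y0, alpha, T, N):
    h = T / N
    ga1, ga2 = math.gamma(alpha + 1), math.gamma(alpha + 2)
    ys, fs = [y0], [f(0.0, y0)]
    for n in range(N):
        tn1 = (n + 1) * h
        # predictor: weights b_{j,n+1} = (n+1-j)^alpha - (n-j)^alpha
        pred = y0 + h**alpha / ga1 * sum(
            ((n + 1 - j)**alpha - (n - j)**alpha) * fs[j]
            for j in range(n + 1))
        # corrector: weights a_{j,n+1} of the fractional trapezoid rule
        acc = (n**(alpha + 1) - (n - alpha) * (n + 1)**alpha) * fs[0]
        for j in range(1, n + 1):
            acc += ((n - j + 2)**(alpha + 1) + (n - j)**(alpha + 1)
                    - 2 * (n - j + 1)**(alpha + 1)) * fs[j]
        y = y0 + h**alpha / ga2 * (acc + f(tn1, pred))
        ys.append(y)
        fs.append(f(tn1, y))
    return ys

# Illustrative test problem (assumption): D^alpha y = -y with alpha = 1,
# whose exact solution is exp(-t)
ys = fabm(lambda t, y: -y, 1.0, 1.0, 1.0, 200)
assert abs(ys[-1] - math.exp(-1.0)) < 1e-3
```

For the full dengue model (4), f becomes the vector field of the five state equations and the same weights are applied componentwise.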
In this paper, we have considered a fractional calculus model for dengue disease. Following [21], Figure 1 shows that drops significantly in a relatively small period of time. Both and increase significantly during the period of 30 days and then eventually oscillate around the endemic state (0.09529, 0.00029, and 0.00058). This seems unrealistic in nature: with a constant population of mosquitoes, such fluctuation over a short period of time cannot happen in nature [21]. As mentioned by [6], Figures 2 and 3 show that even a simple fractional model may give surprisingly good results. However, the transformation of a classical model into a fractional one makes it very sensitive to the order of differentiation : a small change in may result in a big change in the final result. From the numerical results in Figures 2 and 3, it is clear that the approximate solutions depend continuously on the fractional derivative .
The approximate solutions , , and are displayed in Figures 4 and 5 for different values of the fractional order. In each figure, three different values of the order are considered. When the order equals 1, system (4) reduces to the classical integer-order system (1). In Figure 4, the variation of versus time is shown for different values of the order with the other parameters fixed. It is revealed that does not drop significantly over a relatively short period of time for small values of the order. Figure 5 depicts versus time . As noted in [22, 23], although the equilibrium points are the same for both the integer-order and fractional-order models, the solution of the fractional-order model tends to the fixed point over a longer period of time. One should also mention that, when dealing with real-life problems, the order of the system can be determined from collected data.
This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, under Grant no. 3-130/1433HiCi. The authors, therefore, acknowledge with thanks DSR technical and financial support.
H. S. Rodrigues, M. T. Monteiro, and D. F. M. Torres, “Sensitivity analysis in a dengue epidemiological model,” Conference Papers in Mathematics, vol. 2013, Article ID 721406, 7 pages, 2013.
WHO, Dengue: Guidelines for Diagnosis, Treatment, Prevention and Control, World Health Organization, Geneva, Switzerland, 2nd edition, 2009.
L. Esteva and C. Vargas, “Analysis of a dengue disease transmission model,” Mathematical Biosciences, vol. 150, no. 2, pp. 131–151, 1998.
A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, Elsevier, Amsterdam, The Netherlands, 2006.
I. Podlubny, Fractional Differential Equations, Academic Press, New York, NY, USA, 1999.
S. Pooseh, H. S. Rodrigues, and D. F. M. Torres, “Fractional derivatives in dengue epidemics,” in Proceedings of the International Conference on Numerical Analysis and Applied Mathematics (ICNAAM '11), pp. 739–742, September 2011.
R. Anderson and R. May, Infectious Diseases of Humans: Dynamics and Control, Oxford University Press, Oxford, UK, 1995.
E. H. Elbasha, C. N. Podder, and A. B. Gumel, “Analyzing the dynamics of an SIRS vaccination model with waning natural and vaccine-induced immunity,” Nonlinear Analysis: Real World Applications, vol. 12, no. 5, pp. 2692–2705, 2011.
H. Hethcote, M. Zhien, and L. Shengbing, “Effects of quarantine in six endemic models for infectious diseases,” Mathematical Biosciences, vol. 180, pp. 141–160, 2002.
R. Gorenflo, J. Loutschko, and Y. Luchko, “Computation of the Mittag-Leffler function {E}_{\alpha ,\beta }\left(z\right) and its derivatives,” Fractional Calculus and Applied Analysis, vol. 5, pp. 491–518, 2002.
E. Ahmed, A. M. A. El-Sayed, and H. A. A. El-Saka, “On some Routh-Hurwitz conditions for fractional order differential equations and their applications in Lorenz, Rössler, Chua and Chen systems,” Physics Letters A: General, Atomic and Solid State Physics, vol. 358, no. 1, pp. 1–4, 2006.
D. Matignon, “Stability results for fractional differential equations with applications to control processing,” in Proceedings of the Computational Engineering in Systems Applications, vol. 2, pp. 963–968, Lille, France, 1996.
K. Diethelm, “An algorithm for the numerical solution of differential equations of fractional order,” Electronic Transactions on Numerical Analysis, vol. 5, pp. 1–6, 1997.
Y. Ding and H. Ye, “A fractional-order differential equation model of HIV infection of CD4+ T-cells,” Mathematical and Computer Modelling, vol. 50, no. 3-4, pp. 386–392, 2009.
H. Ye and Y. Ding, “Nonlinear dynamics and chaos in a fractional-order HIV model,” Mathematical Problems in Engineering, vol. 2009, Article ID 378614, 12 pages, 2009.
C. Li and C. Tao, “On the fractional Adams method,” Computers and Mathematics with Applications, vol. 58, no. 8, pp. 1573–1588, 2009.
K. Diethelm, N. J. Ford, and A. D. Freed, “A predictor-corrector approach for the numerical solution of fractional differential equations,” Nonlinear Dynamics, vol. 29, no. 1–4, pp. 3–22, 2002.
E. Soewono and A. K. Supriatna, “A two-dimensional model for transmission of dengue fever disease,” Bulletin of the Malaysian Mathematical Sciences Society, vol. 24, pp. 48–57, 2001.
E. Demirci, A. Unal, and N. Özalp, “A fractional order SEIR model with density dependent death rate,” Hacettepe Journal of Mathematics and Statistics, vol. 40, no. 2, pp. 287–295, 2011.
N. Özalp and E. Demirci, “A fractional order SEIR model with vertical transmission,” Mathematical and Computer Modelling, vol. 54, no. 1-2, pp. 1–6, 2011.
Copyright © 2014 Hamed Al-Sulami et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
Generate VEC Model Impulse Responses - MATLAB & Simulink - MathWorks Deutschland
This example shows how to generate impulse responses from this vector error-correction model containing the first three lags (VEC(3), see [139], Ch. 6.7):
\begin{array}{rcl}\Delta {y}_{t}& =& \left[\begin{array}{cc}0.24& -0.08\\ 0& -0.31\end{array}\right]\Delta {y}_{t-1}+\left[\begin{array}{cc}0& -0.13\\ 0& -0.37\end{array}\right]\Delta {y}_{t-2}+\left[\begin{array}{cc}0.20& -0.06\\ 0& -0.34\end{array}\right]\Delta {y}_{t-3}\\ & +& \left[\begin{array}{c}-0.07\\ 0.17\end{array}\right]\left[\begin{array}{cc}1& -4\end{array}\right]{y}_{t-1}+{\epsilon }_{t}\end{array}
{y}_{t}
is a 2-D time series.
\Delta {y}_{t}={y}_{t}-{y}_{t-1}
{\epsilon }_{t}
is a 2-D series of mean zero Gaussian innovations with covariance matrix
\Sigma =1{0}^{-5}\left[\begin{array}{rc}2.61& -0.15\\ -0.15& 2.31\end{array}\right].
Specify the VEC(3) model autoregressive coefficient matrices
{B}_{1}
{B}_{2}
{B}_{3}
, the error-correction coefficient matrix
C
, and the innovations covariance matrix
\Sigma
B1 = [0.24 -0.08;
      0    -0.31];
B2 = [0    -0.13;
      0    -0.37];
B3 = [0.20 -0.06;
      0    -0.34];
C = [-0.07; 0.17]*[1 -4];
Sigma = [ 2.61 -0.15;
-0.15 2.31]*1e-5;
Compute the autoregressive coefficient matrices in the VAR(4) model that is equivalent to the VEC(3) model.
B = {B1; B2; B3};
A = vec2var(B,C);
A is a 4-by-1 cell vector containing the 2-by-2 VAR(4) model autoregressive coefficient matrices. Cell A{j} contains the coefficient matrix for lag j in difference-equation notation. The VAR(4) model is in terms of
{y}_{t}
\Delta {y}_{t}
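For readers outside MATLAB, the conversion performed by vec2var follows from substituting Δy_t = y_t − y_{t−1} into the VEC(p) equation: A_1 = I + C + B_1, A_j = B_j − B_{j−1} for j = 2, …, p, and A_{p+1} = −B_p. A NumPy sketch (the function name is mine, not a library API):

```python
import numpy as np

def vec_to_var(B, C):
    """Map VEC(p) short-run matrices B = [B1, ..., Bp] and error-correction
    matrix C to the autoregressive matrices of the equivalent VAR(p+1)."""
    p = len(B)
    k = B[0].shape[0]
    A = [np.eye(k) + C + B[0]]                    # A1 = I + C + B1
    A += [B[j] - B[j - 1] for j in range(1, p)]   # Aj = Bj - B(j-1)
    A.append(-B[p - 1])                           # A(p+1) = -Bp
    return A
```

A quick consistency check on the conversion: the VAR matrices must satisfy sum(A) = I + C, since the differences telescope.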
Compute the forecast error impulse responses (FEIRs) for the VAR(4) representation. That is, accept the default identity matrix for the innovations covariance. Store the impulse responses for the first 20 periods.
numObs = 20;
IR = cell(2,1); % Preallocation
IR{1} = armairf(A,[],'NumObs',numObs);
IR{1} is a 20-by-2-by-2 array of impulse responses of the VAR representation of the VEC model. Element t,j,k is the impulse response of variable k at time t - 1 in the forecast horizon when variable j received a shock at time 0.
To compute impulse responses, armairf filters a one-standard-deviation innovation shock from one series to itself and all other series. In this case, the magnitude of the shock is 1 for each series.
Compute orthogonalized impulse responses, and supply the innovations covariance matrix. Store the impulse responses for the first 20 periods.
IR{2} = armairf(A,[],'InnovCov',Sigma,'NumObs',numObs);
For orthogonalized impulse responses, the innovations covariance governs the magnitude of the filtered shock. IR{2} is commensurate with IR{1}.
Plot the FEIR and the orthogonalized impulse responses for all series.
type = {'FEIR','Orthogonalized'};
for j = 1:2
    figure;
    imp = IR{j};
    subplot(2,1,1)
    plot(imp(:,1,1))
    title(sprintf('%s: y_{1,t}',type{j}));
    subplot(2,1,2)
    plot(imp(:,1,2))
    title(sprintf('%s: y_{1,t} \\rightarrow y_{2,t}',type{j}));
end
Because the innovations covariance is almost diagonal, the FEIR and orthogonalized impulse responses have similar dynamic behaviors ([139], Ch. 6.7). However, the scale of each plot is markedly different.
vec2var | armairf
|
Correspondence of Low Mean Shear and High Harmonic Content in the Porcine Iliac Arteries | J. Biomech Eng. | ASME Digital Collection
Heather A. Himburg,
Heather A. Himburg
Himburg, H. A., and Friedman, M. H. (May 16, 2006). "Correspondence of Low Mean Shear and High Harmonic Content in the Porcine Iliac Arteries." ASME. J Biomech Eng. December 2006; 128(6): 852–856. https://doi.org/10.1115/1.2354211
Background. Temporal variations in shear stress have been suggested to affect endothelial cell biology. To better quantify the range of dynamic shear forces that occur in vivo, the frequency content of shear variations that occur naturally over a cardiac cycle in the iliac arteries was determined. Method of Approach. Computational fluid dynamic calculations were performed in six iliac arteries from three juvenile swine. Fourier analysis of the time-varying shear stress computed at the arterial wall was performed to determine the prevalence of shear forces occurring at higher frequencies in these arteries. Results. While most of each artery experienced shear forces predominantly at the frequency of the heart rate, the frequency spectra at certain regions were dominated by shear forces at higher frequencies. Regions whose frequency spectra were dominated by higher harmonics generally experienced lower mean shear stress. The negative correlation between shear and dominant harmonic was significant
(p = 0.002). Conclusions. Since lesion development typically occurs in regions experiencing low time-average shear stress, this result suggests that the frequency content of the shear exposure may also be a contributing factor in lesion development. A better understanding of the vascular response to shear components of different frequencies might help rationalize the notion of "disturbed flow" as a hemodynamic entity.
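The Fourier analysis described in the abstract can be sketched as follows: sample the wall shear waveform over an integer number of cardiac cycles, take its discrete Fourier transform, and report the harmonic (in multiples of the heart rate) with the largest magnitude. A minimal illustration (the function name and the synthetic waveform are mine, not the authors' pipeline):

```python
import numpy as np

def dominant_harmonic(tau, fs, heart_rate_hz):
    """Harmonic of the heart rate with the largest Fourier magnitude,
    ignoring the mean (zero-frequency) component."""
    spectrum = np.abs(np.fft.rfft(tau - np.mean(tau)))
    freqs = np.fft.rfftfreq(len(tau), d=1.0 / fs)
    k = spectrum[1:].argmax() + 1  # skip the DC bin
    return freqs[k] / heart_rate_hz

# Synthetic shear waveform: weak fundamental plus a strong third harmonic.
f0 = 1.2                                  # heart rate, Hz (illustrative)
t = np.arange(0, 5, 1 / 120)              # 5 s sampled at 120 Hz
tau = 0.2 * np.sin(2 * np.pi * f0 * t) + np.sin(2 * np.pi * 3 * f0 * t)
print(dominant_harmonic(tau, 120, f0))    # ~3: the third harmonic dominates
```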
haemodynamics, blood vessels, shear strength, cellular biophysics, cardiology, computational fluid dynamics, Fourier analysis, diseases, computational fluid dynamics, shear stress, frequency, harmonic content, atherosclerosis, swine, arteries
Computational fluid dynamics, Flow (Dynamics), Shear (Mechanics), Shear stress, Fourier analysis, Hemodynamics, Endothelial cells
|
Time Value of Money Formula - Course Hero
Present versus Future Value of Money
Calculating the Present and Future Values of a Single Sum
The future or present value of money can be calculated given the variables of the number of periods (time), interest or discount rates, the amount invested, or the future value of money.
The future value (FV) calculation allows investors to predict, with a very high degree of accuracy, the amount of profit that can be generated by varying investments. The amount of growth earned by holding a given amount in cash will most likely be different than if that same amount were invested in stocks or other equities. The FV formula is used to compare multiple options and scenarios.
When applying the FV formula, the present value (PV) must first be determined, along with the rate of return and the number of periods over which interest is earned, or compounded. The PV is then multiplied by a factor of 1 plus the rate of return, raised to the power of the number of compounding periods.
There is a formula for calculating future value.
\text{FV}=\text{PV}\;(1+r)^n
\begin{aligned}\text{FV}&=\text{Future Value}\\\text{PV}&=\text{Present Value}\\r&=\text{Interest Rate}\\n&=\text{Time Period}\end{aligned}
For example, assume Mary Nelson wants to calculate the future value of an investment and puts $1,000 into a bank for five years at 5 percent interest compounding annually, or
\$1{,}000\;(1+0.05)^5=\$1{,}276.28
. This means that Mary knows that making the investment at 5 percent will leave her with $1,276.28 at the end of five years. There is a formula for present value.
\text{PV}=\frac{\text {FV}}{(1+r)^n}
\begin{aligned}\text{PV}&=\text{Present Value}\\\text{FV}&=\text{Future Value}\\r&=\text{Interest Rate}\\n&= \text{Time Period}\end{aligned}
The present value calculation is useful in planning for a future expense, such as college tuition. For example, Jim and Susan Smith might want to know how much they need to invest in a certificate of deposit (CD) that pays 5 percent interest compounding annually to have $25,000 at the end of 10 years, which hypothetically is what will be needed to pay for their child's first year of college at that time. Present value can be used to determine the amount that needs to be invested now to have the necessary funds in 10 years.
\begin{aligned}\text{PV}&=\frac{\$25{,}000}{(1\;+\;0.05)^{10}}\\\\&=\$15{,}347.83\end{aligned}
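Both single-sum formulas translate directly into code. A short Python sketch (the function names are illustrative) reproduces the two worked examples above:

```python
def future_value(pv, r, n):
    """Future value of a single sum: FV = PV * (1 + r)^n."""
    return pv * (1 + r) ** n

def present_value(fv, r, n):
    """Present value of a single sum: PV = FV / (1 + r)^n."""
    return fv / (1 + r) ** n

print(round(future_value(1000, 0.05, 5), 2))     # Mary's deposit: 1276.28
print(round(present_value(25000, 0.05, 10), 2))  # the Smiths' CD: 15347.83
```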
Investors can calculate the future value or present value of money to better understand their investment options or to make a decision on whether or not to take out a loan. For example, consider the time value of money impact of saving for retirement if the retiree wants to have a certain lump sum saved at their retirement age of 65. An investor that saves $2,400 annually from age 19 to 26 and then stops investing new money but lets the current investment continue to grow at a 12 percent annual growth rate will have a lump sum of $2,523,474 at age 65. The growth of the investment for ages 19 to 26 can be calculated using a spreadsheet by inputting numbers within the future value formula. Calculations can also be completed using a financial calculator or manually by using the future value formula.
Calculating Future Value Using Spreadsheets
Jan 1 − Age 19 $2,400
=\$2{,}400\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$2{,}688\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$5{,}698.56\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$9{,}070.39\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$12{,}846.84\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$17{,}076.46\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$21{,}813.64\;(1+0.12)^1
A variety of formulas can be used in spreadsheets that simplify calculations such as determining future value.
An investor who saves $2,400 annually from age 27 to age 65 and earns the same 12 percent annual growth rate will have a total of $1,838,619 by age 65. Even with many more years of investing than the early investor, the late investor will have earned
\$2{,}523{,}474-\$1{,}838{,}619=\$684{,}855
less than if they had invested earlier. The growth of the investment for ages 27 to 65 can be calculated using a spreadsheet by inputting numbers within the future value formula. Calculations can also be completed using a financial calculator or manually by using the future value formula.
Using Spreadsheets to Calculate Cumbersome Future Value Calculations
=\$2{,}400\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$2{,}688\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$5{,}698.56\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$9{,}070.39\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$12{,}846.84\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$17{,}076.46\;(1+0.12)^1
[As time passes, the formula is the same each year.]
=\$2{,}400\;(1+0.12)^1+\$1{,}461{,}193.28\;(1+0.12)^1
=\$2{,}400\;(1+0.12)^1+\$1{,}639{,}224.47\;(1+0.12)^1
Spreadsheets are an often used method to calculate cumbersome future value calculations to help investors make investment decisions.
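The early-versus-late comparison can be reproduced with a short loop. This sketch assumes a particular reading of the schedule: deposits of $2,400 are made at the start of each year, the early saver makes seven deposits (ages 19 through 25, stopping at 26) and then lets the balance compound untouched through age 65, and the late saver makes thirty-nine deposits (ages 27 through 65). The function name is illustrative.

```python
def future_value_stream(deposit, rate, deposit_years, total_years):
    """Balance after total_years annual compounding periods, with a
    beginning-of-year deposit in each of the first deposit_years years."""
    balance = 0.0
    for year in range(total_years):
        if year < deposit_years:
            balance += deposit
        balance *= 1 + rate
    return balance

early = future_value_stream(2400, 0.12, 7, 47)   # 7 deposits, then 40 growth years
late = future_value_stream(2400, 0.12, 39, 39)   # 39 deposits, ages 27-65
print(round(early))  # ~2,523,474
print(round(late))   # ~1,838,619
```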
Calculating the Present and Future Values of an Annuity
The annual payout of an annuity can be calculated as long as the principal, the interest rate, and the number of payment periods are known.
Calculating the future value of an annuity can also be helpful. An annuity is a fixed, regular payment stream paid in the same amount over a period of time. This is similar to how Social Security monthly payments work. When retirement planning, some people want to know how much they need to save to invest in an annuity in order to have a guaranteed monthly income stream at some point in the future.
There is a formula for calculating the annual payout of an annuity.
\text{AP}=\frac{\text{Principal}\times r\times(1+r)^n}{\lbrack(1+r)^n-1\rbrack}
\begin{aligned}\text{AP}&=\text{Annual Payout}\\\text{Principal}&=\text{Amount Initially Invested}\\r&=\text{Rate}\\n&=\text{Time Period}\end{aligned}
As an example, assume that a retiree would like to invest $100,000 and receive it in equal annual payments for the next 10 years. Also, assume that the funds currently return a 5 percent rate of interest.
\begin{aligned}\text {Annual Payout}&=\frac{\$100{,}000\times0.05\times(1+0.05)^{10}}{\lbrack(1+0.05)^{10}-1\rbrack}\\\\&=\$12{,}950.46\end{aligned}
The retiree can invest $100,000 now and withdraw $12,950.46 annually for the next 10 years while in retirement.
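The annuity payout formula is also a one-liner in code. A sketch (function name is illustrative) reproducing the retiree example:

```python
def annuity_payout(principal, r, n):
    """Equal annual withdrawal that exhausts `principal` in n years at rate r."""
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

print(round(annuity_payout(100_000, 0.05, 10), 2))  # 12950.46
```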
Using Spreadsheets to Perform Financial Calculations
Investors can use financial calculators to perform various financial calculations. These calculations can also be done easily using spreadsheets.
Using spreadsheets allows investors to try different savings amounts and interest rates to see how certain choices can affect their ultimate return. To use spreadsheets for these calculations, investors need to know which spreadsheet functions are equivalent to the financial calculator keys.
Formula Being Calculated
Determine future value FV (rate, nper, pmt, pv, type) FV
Determine present value PV (rate, nper, pmt, fv, type) PV
Determine annuity payment PMT (rate, nper, pv, fv, type) PMT
Spreadsheets can be used as financial calculators and can make it easier for investors to evaluate various investments.
For example, if Mary has $500 to invest for five years at an interest rate of 8 percent per year, how much will she accumulate by the end of the period? A spreadsheet can be used to easily calculate the result.
1 Present Value $500
2 # of Years 5
3 APR 8%
4 Future Value $735 = FV (B3, B2, B1)
Spreadsheets can be used to easily calculate future value from the present value, the number of years, and the APR. In some situations the “type” argument, which specifies whether payments are due at the beginning or end of each period, is also included in future value calculations, but it is not relevant when payment due dates are not being considered.
Mary is able to make this calculation using the formula FV (rate, nper, pmt, pv, type). Typing in the formula, selecting the appropriate cells, and pressing Enter returns the result. By using a spreadsheet, Mary can easily see the impact of a different annual rate, a different present value, or saving for more years. This aids the decision-making process by allowing Mary to substitute different numbers and try out different scenarios when making an investment decision.
|
Thermal-Hydraulic Performance of MEMS-based Pin Fin Heat Sink | J. Heat Transfer | ASME Digital Collection
Ali Koşar,
e-mail: pelesy@rpi.edu
Koşar, A., and Peles, Y. (August 8, 2005). "Thermal-Hydraulic Performance of MEMS-based Pin Fin Heat Sink." ASME. J. Heat Transfer. February 2006; 128(2): 121–131. https://doi.org/10.1115/1.2137760
An experimental study on heat transfer and pressure drop of de-ionized water over a bank of shrouded staggered micro pin fins
243μm
long with hydraulic diameter of
99.5μm
has been performed. Average heat transfer coefficients have been obtained for effective heat fluxes ranging from 3.8 to
167W∕cm2
and Reynolds numbers from 14 to 112. The results were used to derive the Nusselt numbers, total thermal resistances, and friction factors. It has been found that for Reynolds numbers below
∼50
long tube correlations overpredicted the experimental Nusselt number, while at higher Reynolds numbers existing correlations predicted the results moderately well. Endwall effects, which diminish at high Reynolds numbers, and a delay in flow separation for compact pin fins were attributed to the obtained trend.
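As a plausibility check on the quoted operating range, the Reynolds number based on hydraulic diameter is Re = ρuD_h/μ. A small sketch; the water properties and velocity below are illustrative assumptions, not values reported in the paper:

```python
def reynolds_number(rho, u, d_h, mu):
    """Reynolds number based on hydraulic diameter: Re = rho*u*d_h/mu."""
    return rho * u * d_h / mu

# Assumed properties of water near room temperature (not from the paper).
rho = 998.0    # density, kg/m^3
mu = 1.0e-3    # dynamic viscosity, Pa*s
d_h = 99.5e-6  # hydraulic diameter of the micro pin fins, m
print(reynolds_number(rho, 1.13, d_h, mu))  # ~112, the upper end of the tested range
```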
microfluidics, heat sinks, laminar flow, thermal resistance, friction, pipe flow, boundary layers, flow separation, MEMS, heat sink, pin fin, cross flow, friction factor, microchannel
Fins, Flow (Dynamics), Friction, Heat sinks, Heat transfer, Heat transfer coefficients, Microelectromechanical systems, Pressure drop, Reynolds number, Thermal resistance, Flow separation, Microchannels, Manufacturing, Heat, Temperature, Water, Uncertainty analysis
|
Partition Data Using Spectral Clustering - MATLAB & Simulink - MathWorks France
Estimate Number of Clusters and Perform Spectral Clustering
Perform Spectral Clustering on Data
This topic provides an introduction to spectral clustering and an example that estimates the number of clusters and performs spectral clustering.
Spectral clustering is a graph-based algorithm for partitioning data points, or observations, into k clusters. The Statistics and Machine Learning Toolbox™ function spectralcluster performs clustering on an input data matrix or on a similarity matrix of a similarity graph derived from the data. spectralcluster returns the cluster indices, a matrix containing k eigenvectors of the Laplacian matrix, and a vector of eigenvalues corresponding to the eigenvectors.
spectralcluster requires you to specify the number of clusters k. However, you can verify that your estimate for k is correct by using one of these methods:
Count the number of zero eigenvalues of the Laplacian matrix. The multiplicity of the zero eigenvalues is an indicator of the number of clusters in your data.
Find the number of connected components in your similarity matrix by using the MATLAB® function conncomp.
Spectral clustering is a graph-based algorithm for finding k arbitrarily shaped clusters in data. The technique involves representing the data in a low dimension. In the low dimension, clusters in the data are more widely separated, enabling you to use algorithms such as k-means or k-medoids clustering. This low dimension is based on the eigenvectors corresponding to the k smallest eigenvalues of a Laplacian matrix. A Laplacian matrix is one way of representing a similarity graph that models the local neighborhood relationships between data points as an undirected graph. The spectral clustering algorithm derives a similarity matrix of a similarity graph from your data, finds the Laplacian matrix, and uses the Laplacian matrix to find k eigenvectors for splitting the similarity graph into k partitions. You can use spectral clustering when you know the number of clusters, but the algorithm also provides a way to estimate the number of clusters in your data.
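The pipeline just described can be illustrated in a language-neutral way. This NumPy sketch makes simplifying assumptions (a Gaussian similarity kernel with a fixed scaling factor, the unnormalized Laplacian L = D − S, and a basic Lloyd's k-means with deterministic farthest-point seeding); it is not the spectralcluster implementation.

```python
import numpy as np

def spectral_cluster(X, k, sigma=1.0, iters=100):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    S = np.exp(-d2 / sigma ** 2)                         # similarity matrix
    np.fill_diagonal(S, 0.0)
    L = np.diag(S.sum(axis=1)) - S                       # unnormalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)                 # ascending eigenvalues
    U = eigvecs[:, :k]                                   # low-dimensional embedding
    # Farthest-point initialization: deterministic, one seed per separated group.
    centers = [U[0]]
    for _ in range(k - 1):
        d = np.min([((U - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(U[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):                               # Lloyd's iterations
        labels = ((U[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([U[labels == j].mean(axis=0) for j in range(k)])
    return labels, eigvals
```

The returned eigenvalues also demonstrate the cluster-counting heuristic: for well-separated data the k smallest eigenvalues are near zero.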
By default, the algorithm for spectralcluster computes the normalized random-walk Laplacian matrix using the method described by Shi and Malik [1]. spectralcluster also supports the unnormalized Laplacian matrix and the normalized symmetric Laplacian matrix, which uses the Ng-Jordan-Weiss method [2]. The spectralcluster function implements clustering as follows:
1. For each pair of data points i and j, compute the pairwise distance Dis{t}_{i,j}.
2. Convert each distance to a similarity measure {S}_{i,j}=\mathrm{exp}\left(-{\left(\frac{Dis{t}_{i,j}}{\sigma }\right)}^{2}\right), where σ is a scaling factor that controls the width of the neighborhood.
3. Form the Laplacian matrix of the resulting similarity graph and compute the matrix V\in {ℝ}^{n×k} whose columns {v}_{1},\dots ,{v}_{k} are the eigenvectors corresponding to the k smallest eigenvalues of the Laplacian matrix.
4. Treat each row of V as a point and cluster the n rows using k-means or k-medoids clustering. The row assignments are the cluster assignments of the original data points.
This example demonstrates two approaches for performing spectral clustering.
The first approach estimates the number of clusters using the eigenvalues of the Laplacian matrix and performs spectral clustering on the data set.
The second approach estimates the number of clusters using the similarity graph and performs spectral clustering on the similarity matrix.
Randomly generate a sample data set with three well-separated clusters, each containing 20 points.
rng('default'); % For reproducibility
n = 20;
X = [randn(n,2)*0.5;            % cluster near (0,0)
     randn(n,2)*0.5 + 5;        % cluster near (5,5)
     randn(n,2)*0.5 + [5 -5]];  % cluster near (5,-5); offsets are illustrative
Estimate the number of clusters in the data by using the eigenvalues of the Laplacian matrix, and perform spectral clustering on the data set.
Compute the five smallest eigenvalues (in magnitude) of the Laplacian matrix by using the spectralcluster function. By default, the function uses the normalized random-walk Laplacian matrix.
[~,V_temp,D_temp] = spectralcluster(X,5)
V_temp = 60×5
Perform spectral clustering on observations by using the spectralcluster function. Specify k=3 clusters.
k = 3;
[idx1,V,D] = spectralcluster(X,k)
idx1 = 60×1
The spectralcluster function correctly identifies the three clusters in the data set.
Instead of using the spectralcluster function again, you can pass V_temp to the kmeans function to cluster the data points.
idx2 = kmeans(V_temp(:,1:3),3);
The order of cluster assignments in idx1 and idx2 is different even though the data points are clustered in the same way.
Estimate the number of clusters using the similarity graph and perform spectral clustering on the similarity matrix.
Construct the similarity matrix from the pairwise distances and confirm that the similarity matrix is symmetric.
dist = squareform(pdist(X)); % Pairwise Euclidean distances
S = exp(-dist.^2);           % Gaussian similarity with unit scaling factor
issymmetric(S)
Limit the similarity values to 0.5 so that the similarity graph connects only points whose pairwise distances are smaller than the search radius.
S_eps = S;
S_eps(S_eps<0.5) = 0;
Create a graph object from S_eps.
G_eps = graph(S_eps);
Visualize the similarity graph.
plot(G_eps)
Identify the number of connected components in graph G_eps by using the unique and conncomp functions.
unique(conncomp(G_eps))
The similarity graph shows three sets of connected components. The number of connected components in the similarity graph is a good estimate of the number of clusters in your data. Therefore, k=3 is a good choice for the number of clusters in X.
Perform spectral clustering on the similarity matrix derived from the data set X.
idx3 = spectralcluster(S_eps,k,'Distance','precomputed');
spectralcluster | pdist2 | pdist | squareform | adjacency | conncomp
|
Vector boson asymmetry - RHIC Spin Group
Last modified by Sfazio on 23-02-2015
Primary authors: Salvatore Fazio and Dmitri Smirnov
Production of W boson in proton-proton collisions
When protons accelerated by RHIC collide at
{\displaystyle {\sqrt {s}}\simeq 500~{\text{GeV}}}
the Z and W bosons can be produced at STAR. In this analysis we measure the asymmetry of the W (and Z) bosons produced in pp collisions with the spin of the protons perpendicular to the beam. During the 2011 run, STAR collected such data for the first time. The bosons cannot be detected directly, but their kinematics can be reconstructed from their decay products. The Z bosons can easily be reconstructed in the di-electron channel. However, the reconstruction of W's decaying into a
{\displaystyle e\nu _{e}}
pair is challenging because the neutrino escapes the detector undetected. The W had never been reconstructed with the STAR detector before; this analysis is the first attempt to reconstruct the kinematics of the W bosons at STAR.
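Because the neutrino is only visible as missing transverse momentum, W candidates are typically characterized by the transverse mass rather than the invariant mass. A small illustrative sketch (not STAR analysis code):

```python
import math

def transverse_mass(pt_e, phi_e, pt_nu, phi_nu):
    """W transverse mass (GeV) from the electron transverse momentum and the
    missing transverse momentum attributed to the neutrino."""
    dphi = phi_e - phi_nu
    return math.sqrt(2.0 * pt_e * pt_nu * (1.0 - math.cos(dphi)))

# A back-to-back e-nu pair with pT = 40 GeV each sits at the Jacobian edge
# near the W mass:
print(transverse_mass(40.0, 0.0, 40.0, math.pi))  # 80.0
```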
The latest version of the analysis note is available from
1.2 Reading output files
2 MC Studies
2.1 Request for W Monte-Carlo for Run 11
4 2016 RHIC Projections
First, one needs to reconstruct jets with stana:
$./stana -f filelist.lis -j
If the jet files are in the current directory one can run the analysis code as
$./stana -f filelist.lis
There is a job template available for running jobs on the farm. We use the following script to submit jobs
$./scripts/submit_jobs.sh
to check out transversely polarised pp2pp runs for the 2011 data taking year:
$get_file_list.pl -keys 'path,filename' -cond 'path!~long,filename~st_W,production=P11id,filetype=daq_reco_mudst,storage=hpss ' -limit 0 -delim '/'
Reading output files
The root file with the tree can be navigated after loading the appropriate libraries:
gSystem->Load(".sl53_gcc432/lib/libutils.so");
gSystem->Load(".sl53_gcc432/lib/libStVecBosAna.so");
TFile *_file0 = TFile::Open("R1210603X_test.Wtree.root")
MC Studies
Embedded MC - embedded MC exists for the 2009 geant geometry configuration. To check the file list:
> get_file_list.pl -keys 'path,filename' -cond 'filename~MuDst,runnumber=2000010010, storage=HPSS' -limit 0 -delim '/'
'runnumber' for the relevant MC data set can be found on the web page: [1]
The W events generated by Pythia (in pythia format) can be found in
/eicdata/eic0004/PYTHIA/pp/
Request for W Monte-Carlo for Run 11
STAR simulations requests page: http://drupal.star.bnl.gov/STAR/starsimrequest
Run 9 W embedding request details:
Official request: http://drupal.star.bnl.gov/STAR/blog/seelej/2010/sep/17/embedding-request-run9-w-xsec-analysis
Run 9 W analysis request: http://drupal.star.bnl.gov/STAR/blog/balewski/2009/dec/29/pp500-w-m-c-request-wo-embedding
Run 9 W analysis request addendum: http://www.star.bnl.gov/HyperNews-star/protected/get/starspin/3962.html
Run 9 W cross section request: http://drupal.star.bnl.gov/STAR/blog/balewski/2009/oct/29/pp500-embedding-revised-request-w-paper
W, Z and background MC samples for the Run 11 W/Z asymmetry analysis with embedding
Essential detectors are: TPC, BTOW, and ETOW
Nice to have BSMD, BPRS, ESMD, EPRS
Geometry y2011 (latest revision for Run 11)
STAR library SL11d
Use PPV vertex finder with beam line constraints, allow PPV to find all vertices
Physical processes (Can we use the latest Pythia version 8.1? What other MC generators are available at STAR?)
Sample 1: W + Pythia events, 30k
Sample 2: W- Pythia events, 10k
Sample 3: W(+,-) ->Tau pythia events, 10k
Sample 4: Z0->e+e- without interference term, 5k
Sample 5: Z0->anything except e+e-, 10k
Can we produce samples 4 and 5 with the interference term and apply a cut {\displaystyle q^{2} > ?} to get more Z's?
Sample 6: QCD, 100k, partonic pt>35 GeV/c, not critical since we plan to use data for background estimation
Total: 165k events
Embedding events: zero bias events for Pythia samples (high lumi fills? low lumi fills?)
get_file_list.pl -keys runnumber,events -cond 'filename~st_zerobias_%,storage=hpss,trgsetupname=pp500_production_2011||pp500_production_2011_noeemc||pp500_production_2011_fms,trgname=zerobias,createtime~2011-%' -alls -delim "," -limit 0
Of this sample we selected the transverse production with 100 < events < 1500:
get_file_list.pl -keys runnumber,events -cond 'filename~st_zerobias_%,storage=hpss,trgsetupname=pp500_production_2011,trgname=zerobias,createtime~2011-%,events>100 && events<1500' -alls -delim "/" -limit 0
Vertex and beam line constraints with the proper parameters resembling the data:
z width determined from data
z offset determined from data
lateral offset to be done with the beam line offset
BFC options (the proper one for the relevant run period ?)
Magnetic Field = Reversed Full-Field
Zero bias runs for embedding:
Two lists available at the moment (runNumber, events)
runlists/run11_zerobias: all zero bias runs in Run 11
runlists/run11_zerobias_events50+: zero bias runs with more than 50 events per run
runlists/run11_zerobias_events500+: zero bias runs with more than 500 events per run
Available at DIS 2014 preliminaries
2016 RHIC Projections
Available at 2016 projections plots
Theoretical paper relevant for this analysis
"Test the time-reversal modified universality of the Sivers function", http://arxiv.org/abs/0903.3629v1
"Helicity Parton Distributions from Spin Asymmetries in W-Boson Production at RHIC", http://arxiv.org/pdf/1003.4533.pdf
Two internal STAR notes on a similar W analysis
"Longitudinal single-spin asymmetry A_L for W+ and W- production in polarized p+p collisions at \sqrt{s}=500~GeV, Run 9 data", http://drupal.star.bnl.gov/STAR/starnotes/private/psn0516
"Measurement of the W and Z Production Cross Sections at Mid-rapidity in Proton-Proton Collisions at \sqrt{s} = 500 GeV in Run 9", http://drupal.star.bnl.gov/STAR/starnotes/private/psn0546
Another paper from CDF on W reconstruction (CDF RunI data)
"Measurement of the polar-angle distribution of leptons from W boson decay as a function of the W transverse momentum in p \bar{p} collisions at \sqrt{s}=1.8TeV"; PRD 70, 032004 (2004)
Retrieved from "https://wiki.bnl.gov/rhicspin/index.php?title=Vector_boson_asymmetry&oldid=4276"
|
Compute estimate of autoregressive (AR) model parameters using covariance method - Simulink - MathWorks France
Covariance AR Estimator
Compute estimate of autoregressive (AR) model parameters using covariance method
The Covariance AR Estimator block uses the covariance method to fit an autoregressive (AR) model to the input data. This method minimizes the forward prediction error in the least squares sense.
The input must be a column vector or an unoriented vector, which is assumed to be the output of an AR system driven by white noise. This input represents a frame of consecutive time samples from a single-channel signal. The block computes the normalized estimate of the AR system parameters, A(z), independently for each successive input frame.
H\left(z\right)=\frac{G}{A\left(z\right)}=\frac{G}{1+a\left(2\right){z}^{-1}+\dots +a\left(p+1\right){z}^{-p}}
The order, p, of the all-pole model is specified by the Estimation order parameter. To guarantee a valid output, you must set the Estimation order parameter to be less than or equal to half the input vector length.
The top output, A, is a column vector of length p+1 with the same frame status as the input, and contains the normalized estimate of the AR model coefficients in descending powers of z.
The scalar gain, G, is provided at the bottom output (G).
The order of the AR model, p. To guarantee a nonsingular output, you must set p to be less than or equal to half the input length. Otherwise, the output might be singular.
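For reference, the covariance-method fit described above (least-squares minimization of the forward prediction error) can be sketched in a few lines of NumPy. This is an illustrative reimplementation under the stated definition, not the Simulink block's code, and the block's exact normalization of A and G may differ.

```python
import numpy as np

def ar_covariance(x, p):
    """Covariance-method AR(p) fit: minimize the forward prediction error
    sum_{n=p}^{N-1} (x[n] + a(2)x[n-1] + ... + a(p+1)x[n-p])^2
    in the least-squares sense. Returns (A, G), A = [1, a(2), ..., a(p+1)]."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Each column holds one of the p past samples for every predicted sample
    M = np.column_stack([x[p - k:N - k] for k in range(1, p + 1)])
    b = x[p:N]
    a, *_ = np.linalg.lstsq(M, -b, rcond=None)
    residual = b + M @ a
    G = np.sqrt(np.mean(residual ** 2))  # gain estimate from residual power
    return np.concatenate(([1.0], a)), G

# Recover a known AR(2) model driven by white noise
rng = np.random.default_rng(1)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for n in range(2, 5000):
    x[n] = 0.5 * x[n - 1] - 0.25 * x[n - 2] + e[n]
A, G = ar_covariance(x, 2)
print(A)  # close to [1, -0.5, 0.25]
```

Because the true model satisfies x[n] - 0.5x[n-1] + 0.25x[n-2] = e[n], the estimated coefficient vector approaches [1, -0.5, 0.25] and the gain approaches the driving-noise standard deviation.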
Modified Covariance AR Estimator DSP System Toolbox
arcov Signal Processing Toolbox
|
A Plasma-Switch Impedance Tuner for Real-Time, Frequency-Agile, High-Power Radar Transmitter Reconfiguration | IEEE Conference Publication | IEEE Xplore
A Plasma-Switch Impedance Tuner for Real-Time, Frequency-Agile, High-Power Radar Transmitter Reconfiguration
Caleb Calabrese; Justin Roessler; Austin Egbert; Alden Fisher; Charles Baylis; Zach Vander Missen; Mohammad Abu Khater; Dimitrios Peroulis; Robert J. Marks
Prototype Switched-Stub Tuner Design and Measurements
Impedance Optimization Search Results
Abstract:
Recent policy changes to the United States radar S-band allocation require that radar transmitters share the 3.45 to 3.7 GHz range with fifth-generation (5G) wireless communication systems. As more radar spectrum is designated for sharing, reconfigurable high-power radar transmitters are needed that can maintain performance while quickly adjusting operating frequency. At the heart of a frequency-agile radar transmitter is a reconfigurable matching network, placed between the power amplifier and the antenna, that can maintain optimal range performance in real time by adaptively matching the antenna to the amplifier. Our impedance tuner consists of laser diodes illuminating semiconductor-plasma switches to selectively expose six microstrip stubs and has an octave tuning range from 2 to 4 GHz. Using an advanced tuning algorithm, the tuner can optimize output power in approximately 260 µs under control of a software-defined radio platform, an improvement of three orders of magnitude over presently available high-power tuning technologies.
Published in: 2021 IEEE MTT-S International Microwave Symposium (IMS)
Spectrum sharing has never been more important for radar systems. A recent ruling known as "America's Mid-Band Initiative" in the United States has re-allocated 100 MHz of the former S-band radar allocation to fifth-generation (5G) wireless as the new primary user [1]. Radar transmitters must be able to adaptively and reconfigurably share the spectrum previously allocated solely for their use. Frequency-agile radar transmitters can be enabled by placing reconfigurable impedance tuners between the power amplifier and the antenna to provide adaptive matching, as shown in Fig. 1. The matching is adjusted upon changes in operating frequency and scan angle to re-match the amplifier to the antenna to maximize detection range and power efficiency. High-power, high-speed tuners are needed to enable frequency-agile radars.
Previous impedance tuners have typically shown either high-power or high-speed capabilities, but not both, as evident in [2]–[4], using technologies such as MEMS switches, varactors, and ferroelectric materials. Additionally, computer-automated reconfiguration of impedance tuners has been demonstrated in [5]–[7]. Semnani [8] has developed an evanescent-mode cavity tuner capable of handling 90 W; however, it is limited in speed by the mechanical tuning process, which is performed by M3-L linear actuators.
Simple block diagram representing use scenario for reconfigurable matching network between power amplifier and antenna array element. Reprinted from [11].
With this 90 W tuner, Dockendorf demonstrates searches performed in 2–10 seconds without prior information and in near 100 ms with prior information available [9], making use of a software-defined radar spectrum-sharing application [10]. Faster tuning is needed for real-time radar adaptation, prompting the recent examinations of electrical high-power tuning capabilities. A low-power prototype switched-stub impedance tuner demonstrated by Calabrese, capable of full optimization searches in under 25 µs [11], is the design base for the high-power tuner in the present paper. Switches using fiber-coupled laser excitation of semiconductor plasma in custom chiplets are demonstrated by Fisher [12]. While the chiplets have been shown to be able to handle 35 W [13], high-power testing of the tuner has not yet been performed, although a test plan is currently in development. The present paper demonstrates the use of high-power semiconductor plasma switches in creating a fast impedance tuner using the 35 W switches, capable of optimizing output power in approximately 260 µs, an improvement of over three orders of magnitude from presently available high-power tuners [9].
The tunable matching network design hinges upon the ability of the laser diodes to illuminate the silicon chiplets, with the help of gradient index (GRIN) rod and spherical lenses, creating semiconductor plasma and effectively closing the switches. A two-board design is used for implementation of control circuitry with the laser diodes. The RF board, shown in Fig. 2, utilizes the same radial matching stub topology of the Calabrese low-power prototype [11]. The radial stubs were chosen for their bandwidth performance and were empirically adjusted to achieve favorable RF performance. Each of the six switches is a custom silicon chiplet which is placed over a via, bridging the gap between the RF feedline and a stub, exposed whenever the switch is closed. The lenses are placed inside the vias to focus the laser light and provide a higher power density across the gap of the chiplet, decreasing the resistance and total loss of each semiconductor plasma switch. The other board, the control board, shown in Fig. 3, includes six commercially available 500 mW, 808 nm laser diodes with circuitry allowing control by a microcontroller or software-defined radio (SDR). The boards are attached and spaced apart using nylon screws and spacers, allowing the laser diodes to illuminate through the vias on the RF board and excite the silicon, and are ultimately placed in a PCTG housing, shown in Fig. 4, to contain any stray near-infrared (NIR) beams.
With six matching stubs, the tuner can achieve 2^6 = 64 unique matching states. Tuner characterization was performed using a vector network analyzer. Fig. 5 shows the S_{11} coverage of the tuner across all 64 possible tuner states at 2, 3, and 4 GHz. The design shows significant coverage of the Smith Chart across this 2–4 GHz octave tuning bandwidth. Fig. 5 shows that as the operating frequency of the tuner increases, the variability of the reflection coefficient increases. At 2 GHz, the tuner can extend far out on the Smith Chart but has some overlap in the specific reflection coefficients it can obtain. At 4 GHz, the tuner has more variety in points and extends nearly as far out on the Smith Chart as at 2 GHz.
(a) Bottom of RF board. Arrows point to 3.075 mm × 0.5 mm silicon chiplets. (b) Top of RF board. Arrows point to 1 mm diameter vias which allow illumination of the silicon chiplets. The lenses sit in these vias.
(a) Top of control board (b) bottom of control board with laser diodes attached at the top
Completed tuner boards, attached and in housing
An RF oscilloscope was used to measure the time taken for the RF signal to respond to the switches opening and closing. A microcontroller was used to turn the laser diodes on and off and also to trigger the oscilloscope measurements. Fig. 6 shows the off/open timing measurements, run by a microcontroller and measured by an oscilloscope. The lossiest tuner state was used for the timing measurements, so the RF tone, shown in blue in the figures, decreases in magnitude when the silicon chiplet switches are closed and increases when they open. Using the metric of 10–90% of the voltage to determine the switching time, the on-time was calculated to be near 2 µs, and the off-time near 20 µs. Impedance optimization searches, discussed later, often do not need to wait the full 20 µs for accurate convergence to the optimal tuner state for a given frequency and "antenna" reflection coefficient pair.
Applying the discrete search algorithm presented by Calabrese for the low-power switched-stub tuner [11] in a field-programmable gate array (FPGA), impedance optimization searches were performed with the tuner. The six switched stubs are represented by a binary 1 or 0, and the 64 tuner states are cycled through one bit at a time. If output power improves, the switch that was most recently toggled on or off keeps its current form, which provided the improvement, open or closed. These steps continue until an entire pass through all stubs provides no further performance improvement. Fig. 7 shows an example of an algorithm step, using the tuning stubs as a visual. The tuner begins at state 34 here and leaves the fifth switch closed due to a performance improvement. The next step is to retune to state 35 by closing the sixth switch. The output power would be remeasured. If an improvement is seen, the sixth switch would remain closed; otherwise, it would be opened. The first switch would then be revisited, and passes through toggling all switches continued until a complete pass results in no further power improvements.
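The pass-until-no-improvement toggle search described above can be sketched in a few lines; this is an illustrative reading of the description, not the FPGA implementation, and `measure_power` is a hypothetical stand-in for the SDR power measurement. For simplicity the sketch starts from the all-open state.

```python
def tune(measure_power, n_stubs=6):
    """Greedy bit-toggle search: flip each stub in turn, keep the flip
    only if the measured output power improves, and stop after a full
    pass through all stubs yields no improvement."""
    state = [0] * n_stubs
    best = measure_power(tuple(state))
    improved = True
    while improved:
        improved = False
        for i in range(n_stubs):
            state[i] ^= 1                     # toggle stub i
            power = measure_power(tuple(state))
            if power > best:
                best = power                  # keep the toggle
                improved = True
            else:
                state[i] ^= 1                 # revert the toggle
    return tuple(state), best

# Toy stand-in: power peaks at state 100000 (switch one closed)
target = (1, 0, 0, 0, 0, 0)
toy_power = lambda s: -sum(a != b for a, b in zip(s, target))
print(tune(toy_power))
```

With the toy power function the search keeps the first toggle and reverts the rest, converging in two passes; on hardware each `measure_power` call corresponds to one SDR measurement after the switch settling time.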
Fig. 8 shows the test setup. The software-defined radio (SDR) generates the 3 dBm input signal and measures output power. A Skyworks 65017-70LF InGaP amplifier was used as the amplifier device, and the plasma tuner was placed at the output of the device. A commercial impedance tuner from Maury Microwave was used to emulate the changing antenna impedance that could result from varying array scan angle.
Table 1 shows five examples of optimization searches using the laser diode-silicon chiplet tuner. For example, the search on the center row was performed at 3 GHz and an "antenna" reflection coefficient \Gamma_{ant} = 0.70\underline{/-45^{\circ}}, and completes in just under 200 µs. The optimum, state 32, corresponding to binary 100000 (switch one closed, remaining switches open), was reached on the second measurement in 23.54 µs.
Smith chart S_{11} coverage at 2, 3, and 4 GHz (from top to bottom)
Timing measurements for switch close (on) time and open (off) time of laser diode/silicon chiplet switches
Reconfiguration search step visualization
Test bench for FPGA impedance optimization searches
The loss of the tuner is significantly dependent on the tuning state selected. A loss of near 1 dB can be obtained for some of the zero- and one-stub states. Typically, closing more switches (exposing more stubs) raises the loss, explaining the limited states appearing in the results of Table 1. At times, although a certain state can provide a good match, the additional loss is too high for that state to be chosen. As such, some of the end power values are low because a tuner state providing both a good match and low loss does not exist. We are currently investigating a single packaged switch including the chiplets and laser that would eliminate light divergence and slight alignment inconsistencies, reducing the loss significantly. An intermediate solution that could provide further loss improvement of states with more exposed stubs would include a high-precision housing for the diodes, chiplets, and lenses.
Table I. Optimization search examples
Design and fast optimization of a 2–4 GHz S-band impedance tuner using 35-W semiconductor plasma switches has been presented. The tuner can provide complete optimization of impedance in approximately 260 µs from a software-defined radio platform, helping a radar system maintain optimal output power and detection range. In addition to enabling spectrum-sharing radar in the S-band, the design will also be useful in other high-power transmitter applications, including satellite communications and cellular phone base stations.
1."White House and DOD Announce Additional Mid-Band Spectrum Available for 5G by the End of the Summer", United States Department of Defense.
2.J.-S. Fu et al., "A Ferroelectric-Based Impedance Tuner for Adaptive Matching Applications", 2008 IEEE MTT-S Int'l Microwave Symp., pp. 955-958, 2008.
3.T. Singh et al., "Monolithically Integrated Reconfigurable RF MEMS Based Impedance Tuner on SOI Substrate", 2019 IEEE MTT-S Int'l Microwave Symp., pp. 790-792, 2019.
4.Y. Lu et al., "High-Power MEMS Varactors and Impedance Tuners for Millimeter-Wave Applications", IEEE Trans. Microwave Theory Tech., vol. 53, no. 11, pp. 3672-3678, Nov. 2005.
5.Y. Sun et al., "Adaptive Impedance Matching and Antenna Tuning for Green Software-Defined and Cognitive Radio", 54th IEEE Int'l Midwest Symp. Circuits Syst., 2011.
6.D. Qiao et al., "An Intelligently Controlled RF Power Amplifier With a Reconfigurable MEMS-Varactor Tuner", IEEE Trans. Microwave Theory Tech., vol. 53, no. 3, pp. 1089-1095, Mar. 2005.
7.A. van Bezooijen et al., "Adaptive Impedance-Matching Techniques for Controlling L Networks", IEEE Trans. Circuits Syst. I, vol. 57, no. 2, pp. 495-505, Feb. 2010.
8.A. Semnani et al., "High-Power Impedance Tuner Utilising Substrate-Integrated Evanescent-Mode Cavity Technology and External Linear Actuators", IET Microwaves Ant. & Prop., vol. 13, no. 12, pp. 2067-2072, 2019.
9.A. Dockendorf et al., "Fast Optimization Algorithm for Evanescent-Mode Cavity Tuner Optimization and Timing Reduction in Software-Defined Radar Implementation", IEEE Trans. Aero. Elec. Sys., vol. 56, no. 4, pp. 2762-2778, Aug. 2020.
10.B. Kirk et al., "Cognitive Software Defined Radar for Time-Varying RFI Avoidance", 2018 IEEE Radar Conf., Apr. 2018.
11.C. Calabrese et al., "Fast Switched-Stub Impedance Tuner Reconfiguration for Frequency and Beam Agile Radar and Electronic Warfare Applications", 2020 IEEE Radar Conf., Apr. 2020.
12.A. Fisher et al., "A Low-Loss 1–4 GHz Optically-Controlled Silicon Plasma Switch", IEEE Wireless and Microwave Technology Conference, Apr. 2021.
13.A. Fisher et al., "A Fiber-Free DC-7 GHz 35 W Integrated Semiconductor Plasma Switch", 2021 IEEE MTT-S Int'l Microwave Symp., Jun. 2021.
|
Analyze signals in the frequency and time-frequency domains - MATLAB pspectrum - MathWorks Deutschland
The first channel has unit amplitude and a normalized sinusoid frequency of
\pi /4
rad/sample
The second channel has an amplitude of
1/\sqrt{2}
and a normalized frequency of
\pi /2
rad/sample.
Compute the power spectrum of each channel and plot its absolute value. Zoom in on the frequency range from
0.15\pi
rad/sample to
0.6\pi
rad/sample. pspectrum scales the spectrum so that, if the frequency content of a signal falls exactly within a bin, its amplitude in that bin is the true average power of the signal. For a complex exponential, the average power is the square of the amplitude. Verify by computing the discrete Fourier transform of the signal. For more details, see Measure Power of Deterministic Periodic Signals.
\frac{1}{100}<\frac{\text{Median time interval}}{\text{Mean time interval}}<100.
{\text{RBW}}_{\text{theory}}=\frac{\text{ENBW}}{{t}_{\mathrm{max}}-{t}_{\mathrm{min}}}.
{\text{RBW}}_{\text{performance}}=4×\frac{{f}_{\text{span}}}{4096-1},
\text{RBW}=\mathrm{max}\left({\text{RBW}}_{\text{theory}},{\text{RBW}}_{\text{performance}}\right).
\text{Segment length}=\frac{{f}_{\text{Nyquist}}×\text{ENBW}}{\text{RBW}},
\text{Stride length}\equiv \text{Segment length}-\text{Overlap}=\frac{\text{Segment length}}{2×\text{ENBW}-1},
\left(1-\frac{\text{1}}{2×\text{ENBW}-1}\right)×100,
{\text{RBW}}_{\text{performance}}=4×\frac{{f}_{\text{span}}}{1024-1}.
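Collected in one place, the resolution-bandwidth and segment-length rules quoted above can be sketched as a small helper. The function and parameter names are illustrative, not the toolbox API; the formulas are transcribed directly from the equations above.

```python
def spectrogram_params(enbw, t_min, t_max, f_span, f_nyquist,
                       min_rbw_samples=4096):
    """Sketch of the rules above: RBW is the larger of the theoretical
    and performance limits, and segment/stride lengths follow from RBW
    and the equivalent noise bandwidth (ENBW)."""
    rbw_theory = enbw / (t_max - t_min)
    rbw_performance = 4 * f_span / (min_rbw_samples - 1)
    rbw = max(rbw_theory, rbw_performance)
    segment_length = f_nyquist * enbw / rbw
    stride_length = segment_length / (2 * enbw - 1)
    overlap_percent = (1 - 1 / (2 * enbw - 1)) * 100
    return rbw, segment_length, stride_length, overlap_percent

# Example: 1 s of data, ENBW = 1.5, 500 Hz span and Nyquist frequency
print(spectrogram_params(1.5, 0.0, 1.0, 500.0, 500.0))
```

With these inputs the theoretical RBW limit dominates, giving a 500-sample segment, a 250-sample stride, and 50% overlap.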
|
In mathematics, the domain of a function is the set of inputs accepted by the function. It is sometimes denoted by
{\displaystyle \operatorname {dom} (f)}
, where f is the function.
More precisely, given a function
{\displaystyle f\colon X\to Y}
, the domain of f is X. Note that in modern mathematical language, the domain is part of the definition of a function rather than a property of it.
In the special case that X and Y are both subsets of
{\displaystyle \mathbb {R} }
, the function f can be graphed in the Cartesian coordinate system. In this case, the domain is represented on the x-axis of the graph, as the projection of the graph of the function onto the x-axis.
For a function {\displaystyle f\colon X\to Y}, the set Y is called the codomain, and the set of values attained by the function (which is a subset of Y) is called its range or image.
Any function can be restricted to a subset of its domain. The restriction of {\displaystyle f\colon X\to Y} to {\displaystyle A}, where {\displaystyle A\subseteq X}, is written as {\displaystyle \left.f\right|_{A}\colon A\to Y}.
Natural domain
For example, the function {\displaystyle f} defined by {\displaystyle f(x)={\frac {1}{x}}} cannot be evaluated at 0. Therefore, the natural domain of {\displaystyle f}
is the set of real numbers excluding 0, which can be denoted by
{\displaystyle \mathbb {R} \setminus \{0\}}
or {\displaystyle \{x\in \mathbb {R} :x\neq 0\}}.
The piecewise function
{\displaystyle f}
{\displaystyle f(x)={\begin{cases}1/x&x\not =0\\0&x=0\end{cases}},}
has as its natural domain the set
{\displaystyle \mathbb {R} }
The square root function
{\displaystyle f(x)={\sqrt {x}}}
has as its natural domain the set of non-negative real numbers, which can be denoted by
{\displaystyle \mathbb {R} _{\geq 0}}
, the interval
{\displaystyle [0,\infty )}
, or the set
{\displaystyle \{x\in \mathbb {R} :x\geq 0\}}.
The tangent function, denoted
{\displaystyle \tan }
, has as its natural domain the set of all real numbers which are not of the form
{\displaystyle {\tfrac {\pi }{2}}+k\pi }
for some integer
{\displaystyle k}
; this domain can be denoted by
{\displaystyle \mathbb {R} \setminus \{{\tfrac {\pi }{2}}+k\pi :k\in \mathbb {Z} \}}.
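The natural-domain examples above can be probed numerically; the sketch below is a crude illustration (the helper name is an assumption), testing whether an expression evaluates to a finite real value at a point, not a formal definition of the domain.

```python
import math

def in_natural_domain(f, x):
    """Return True if f can be evaluated at x to a finite real value."""
    try:
        return math.isfinite(f(x))
    except (ZeroDivisionError, ValueError):
        return False

assert not in_natural_domain(lambda x: 1 / x, 0.0)       # 0 excluded for 1/x
assert in_natural_domain(lambda x: math.sqrt(x), 0.0)    # sqrt defined on [0, inf)
assert not in_natural_domain(lambda x: math.sqrt(x), -1.0)
```

Note that a floating-point check like this cannot detect the tangent function's isolated singularities exactly, since math.tan returns a large finite value near them.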
The word "domain" is used with other related meanings in some areas of mathematics. In topology, a domain is a connected open set.[1] In real and complex analysis, a domain is an open connected subset of a real or complex vector space. In the study of partial differential equations, a domain is the open connected subset of the Euclidean space
{\displaystyle \mathbb {R} ^{n}}
where a problem is posed (i.e., where the unknown function(s) are defined).
Set theoretical notions
Retrieved from "https://en.wikipedia.org/w/index.php?title=Domain_of_a_function&oldid=1077236642"
|
Summability criterion - Electowiki
The summability criterion is a criterion about the vote-counting process of electoral systems, which describes how precinct-summable a voting method is. Unlike most other voting system criteria, it does not relate to the end result, only to the process.
This is important for elections with many voting jurisdictions, which must be able to practically transmit their vote totals for tabulation. Summability also makes it possible to report real-time combined vote totals in an understandable way. Some non-summable methods instead require that the individual ballot images be transmitted to a centralized counting location to find the combined result.
3 Summability of various voting methods
4.1 Summable methods
4.1.1 Points-scoring methods
4.1.1.1 Positional methods
4.1.1.1.1 Median methods
4.1.1.2 Cardinal methods
4.1.2 Pairwise methods
4.1.2.1 Condorcet methods
4.2 Non-summable methods
5 Importance of summability
6 Multi-winner generalizations and results
7.1 Amount of vote-counting work
7.2 Number of data value types versus number of data values
7.3 Counting first choices
Compliance
Back in 2009, English Wikipedia stated the criterion as follows:[1]
Each vote should be able to be mapped onto a summable array, such that its size at most grows polynomially with respect to the amount of candidates, the summation operation is associative and commutative and the winner could be determined from the array sum for all votes cast alone.[2]
Here at electowiki, we believe the following methods comply with the summability criterion:
Plurality voting (also known as "choose-one voting") — In plurality voting, the number of ballots for each candidate may be counted, and these totals reported from each precinct.
Approval voting — Though each ballot may contain votes for more than one candidate, the sum of all values for each candidate may be found at each precinct and reported.
Borda count — Though each ballot contains votes for more than one candidate, and these votes may have different values, the sum of all values for each candidate may be found at each precinct and reported.
Score voting — Though each ballot contains votes for more than one candidate, and these votes may have different values, the sum of all values for each candidate may be found at each precinct and reported.
Most Condorcet methods (e.g. Schulze method, Ranked Pairs) — these can generally be added into a two-dimensional array
Some Condorcet hybrids (e.g. Nanson's method, Majority Choice Approval)
As noted in William Poundstone's book Gaming the Vote, Instant-Runoff Voting does not comply.[3]
In many Condorcet methods, each ballot can be represented as a two-dimensional square array referred to as a pairwise matrix. The sum of these matrices may be reported from each precinct.
Informally speaking, the amount of data that has to be transmitted from the precincts should be less than the amount of data on the ballots themselves. In other words, it must be more efficient to count the votes in precincts than to bring the votes to a centralized location.
Mathematical requirements
Each vote should map onto a summable array, where the summation operation is associative and commutative, and the winner should be determined from the array sum for all votes cast. An election method is kth-order summable if there exists a constant c such that in any election with n candidates, the required size of the array is at most cn^k. If there is no value of k for which the method is kth-order summable, the method is non-summable.
Strictly speaking, a method is kth-order summable if an election involving
{\displaystyle V}
voters and
{\displaystyle c}
candidates can be stored in a data structure (a summary) that requires
{\displaystyle O(\log(V) \cdot c^k)}
bits in total, where there exists a summation operator that takes any two such summaries and produces a third for the combined election, and the election method itself can use these summaries instead of ballot sets to produce the same results. This definition closes the obvious loophole of using a few very large numbers to store more data than would otherwise be permitted.
Summability of various voting methods
Methods and their summability levels.
non-summable
Plurality-based Party list PR
Most forms of MCA
most Condorcet methods,
Borda-elimination (Baldwin[4] and Nanson[5])
MCA-IR, and some forms of MCA-AR
STAR voting[6]
Benham's method
Descending Acquiescing Coalitions
Most methods providing party-agnostic Proportional representation
Summable methods
Points-scoring methods
Positional methods
In plurality voting, each vote is equivalent to a one-dimensional array with a 1 in the element for the selected candidate, and a 0 for each of the other candidates. The sum of the arrays for all the votes cast is simply a list of vote counts for each candidate.
Any weighted positional method can be summed this way, but with different one-dimensional arrays depending on the method.
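As a sketch, a single ballot maps to a one-dimensional summable array like this, with the weight vector selecting the positional method (plurality and Borda weights shown; the helper name is illustrative).

```python
def positional_array(ranking, weights):
    """One-dimensional summable array for a weighted positional method:
    the candidate at rank r contributes weights[r] to its own entry."""
    array = [0] * len(ranking)
    for rank, candidate in enumerate(ranking):
        array[candidate] = weights[rank]
    return array

ballot = [2, 0, 1]                          # candidate 2 first, then 0, then 1
print(positional_array(ballot, [1, 0, 0]))  # plurality weights
print(positional_array(ballot, [2, 1, 0]))  # Borda weights
```

Precinct totals are then just element-wise sums of these arrays, which is exactly what first-order summability requires.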
Median methods
Alternatively, precincts may sum up the number of times each candidate was ranked at each of the
{\displaystyle c}
possible ranks (or grades). This positional matrix can then be used to compute the result for any weighted positional method after the fact, or for median-based methods like graded Bucklin methods. This shows a contrast between median methods and point-scoring methods, where the grade level doesn't matter, only the strength/quality/degree of the grade (i.e. in points-scoring methods, two 1/5s are equivalent to one 2/5).
Cardinal methods
Approval voting is the same as plurality voting except that more than one candidate can get a 1 in the array for each vote. Each of the selected or "approved" candidates gets a 1, and the others get a 0.
For example, with Score voting, a voter who votes A:10 B:6 C:3 D:1 is treated as giving a 10 to A, a 6 to B, etc. Comparisons across different score scales can be made by dividing the score by the max score (i.e. instead of a 6, treat it as a 6/10=0.6, etc.) so that a voter who scores a candidate a 3 out of 5 and a voter who scores a candidate a 6 out of 10 can have their scores treated and counted the same without any issues.
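A one-liner makes the rescaling concrete (the helper name is illustrative):

```python
def normalized_score(score, max_score):
    """Normalize a score to [0, 1] so ballots cast on different scales
    can be summed together, as described above."""
    return score / max_score

# 3 out of 5 and 6 out of 10 count the same after normalization
print(normalized_score(3, 5), normalized_score(6, 10))
```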
Pairwise methods
Main article: Pairwise counting
Some voting methods, such as STAR voting, are precinct-summable using voters' pairwise preference orders alongside the total score received by each candidate.
Condorcet methods
In Schulze and many other summable Condorcet methods, each vote is equivalent to a two-dimensional array referred to as a pairwise matrix. If candidate A is ranked above candidate B, then the element in the A row and B column gets a 1, while the element in the B row and A column gets a 0. The pairwise matrices for all the votes are summed, and the winner is determined from the resulting pairwise matrix sum. The precincts' matrices may be added together to get the matrix for the whole electorate, just like a precinct's voters' matrices may be added together to get the matrix for that precinct.
For example, a voter who ranks all of the candidates A>B=C>D is treated as, in a matrix, giving:
     A    B    C    D
A   ---   1    1    1
B    0   ---   0    1
C    0    0   ---   1
D    0    0    0   ---
If some other voter ranked B above A, then that would be added into this matrix by adding a 1 to the B>A cell (i.e. increasing it from 0 to 1), etc.
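The matrix above can be generated mechanically; a minimal sketch (the rank-number encoding of ties is an implementation choice, not prescribed by the article):

```python
def pairwise_matrix(ranks, candidates):
    """One ballot's pairwise matrix: cell [a][b] is 1 iff a is ranked above b.
    `ranks` maps candidates to rank numbers (lower = better); equal numbers
    encode a tie, which scores 0 both ways, as with B and C above."""
    return {a: {b: int(a != b and ranks[a] < ranks[b]) for b in candidates}
            for a in candidates}

def add_matrices(m1, m2):
    """Cell-by-cell sum: combine ballots into precinct totals, and precinct
    totals into the electorate-wide matrix."""
    return {a: {b: m1[a][b] + m2[a][b] for b in m1[a]} for a in m1}

# The A>B=C>D ballot from the example:
m = pairwise_matrix({"A": 1, "B": 2, "C": 2, "D": 3}, ["A", "B", "C", "D"])
```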
Non-summable methods
Instant-runoff voting
IRV does not comply with the summability criterion. In the IRV system, a count can be maintained of identical votes, but votes do not correspond to a summable array. The total possible number of unique votes grows factorially with the number of candidates.
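A quick computation illustrates the factorial growth (counting full strict rankings only; allowing truncated or equal rankings increases the count further):

```python
import math

def unique_strict_rankings(candidates):
    """Number of distinct full strict rankings of the given number of candidates."""
    return math.factorial(candidates)

# A summable method needs O(c) or O(c^2) tallies; IRV's vote types explode:
print([unique_strict_rankings(c) for c in (3, 5, 10)])
# → [6, 120, 3628800]
```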
Importance of summability
The summability criterion addresses implementation logistics. Election methods with lower summability levels are substantially easier to implement with integrity than methods with higher summability levels or methods that are non-summable. Summability also indicates how simple it is to understand how voters' support for candidates influences who wins under the voting method.
Suppose, for example, that the number of candidates is ten.
Under first-order summable methods like plurality or Approval voting, the votes at any level (precinct, ward, county, etc.) can be compressed into a list of ten numbers.
For Schulze, a 10×10 matrix is needed (although only 10×9 = 90 data values are actually kept).
In an IRV system, however, each precinct would first send a list of ten numbers: the number of first-place votes for each candidate. The central system would then tell each precinct which candidate to eliminate. Each precinct would then return the first-place votes for each of the nine remaining candidates and receive another candidate to eliminate. This round trip would be repeated up to nine times, far more communication than the other methods require.
IRV therefore requires more data transfer and storage than the other methods. The biggest challenge in using computers for public elections will always be security and integrity. If N-1 times more data needs to be transferred and stored, verification becomes more difficult and the potential for fraudulent tampering becomes slightly greater.
To illustrate this point, consider the verification of a vote tally for a national office. In a plurality election, each precinct verifies its vote count; this can be an open process. The counts for each precinct in a county can then be added to determine the county totals, and anyone with a calculator or computer can verify that the totals are correct. The same process is then repeated at the state level and the national level. If the votes are verified at the lowest (precinct) level, the numbers are available to anyone for independent verification, and election officials could never get away with "fudging" the numbers. Of course, if verified images of all the ballots are available to the public, then the whole counting process is available to anyone for independent verification, for any voting system.
Recounts
In first-order summable election systems, adding new ballots to the count (say, ballots that were found after the initial count, or late absentee ballots, or ballots that were initially ruled invalid) is as simple as "summing" the original result with the newly-found ballots. Under non-summable systems, though, finding new ballots means all ballots might possibly need to be recounted. This is not a big problem for computer recounts, but manual recounts can be extremely time-consuming and expensive.
Multi-winner generalizations and results
Most block voting methods based on summable single-winner methods retain the same degree of summability in the multi-winner case.
Generally speaking, aside from proportional FPTP-based voting methods (which notably include party list and SNTV), there are no seriously used summable PSC-compliant voting methods.
Ebert's method is summable in O(c^2) for any number of seats.[7]
Academic results
Forest Simmons has constructed a color-proportional method that's summable in O(log(V)·c) for any number of seats.[8] The same approach can be generalized to make a Droop-proportional method that's fixed-parameter summable in O(log(V)·c^s), where s is the number of seats, by keeping a separate count for each solid coalition of size s. It's unknown whether it's possible to construct a Droop-proportional method that's summable in O(log(V)·c^k·s^n) for constant k and n.
Many voting methods that are summable to some degree can also be summed by hand, though with more effort. For example, Score voting can be counted using a form of pairwise counting that takes degree of preference into account.
Amount of vote-counting work
Summability focuses on the amount of data that has to be captured, but not necessarily the amount of work required to capture it. For example, when doing pairwise counting, an election featuring a ballot that ranks a candidate last requires as many marks to count as if the same ballot had been cast without the last-ranked candidate. Yet in practice, the vote-counters must still take some time to check that that candidate is indeed ranked last, meaning some work is done even though no data is produced.
Number of data value types versus number of data values
Summability focuses to a large extent on the number of data value types, not just the amount of data overall that has to be captured. This can make a difference in certain cases; for example, the regular pairwise counting approach only requires (n^2-n) data value types to be captured for all ballots, whereas the Negative vote-counting approach for pairwise counting requires (n^2) value types. This is because the latter not only records preferences in each pairwise matchup, but also the number of ballots ranking each candidate. Yet, depending on implementation, the negative counting approach actually has the same upper bound on the number of data values to capture as the regular approach, and in practice could require fewer.
Counting first choices
Some voting methods can be counted like Approval voting when counting:
Approval-style ballots (ballots that maximally support some candidates and give no support to any others).
More generally, the 1st choices of any ballot that ranks one (or possibly more) candidates 1st but shows some support for other candidates (the ballot's support for its non-1st-choice candidates may still be harder to count).
This reduces the amount of work otherwise necessary to count them. For example, Condorcet methods can have this done using a certain implementation of the Negative vote-counting approach for pairwise counting, or simply by using Pairwise counting#Counting first choices separately.
Two-way communication
Some non-summable methods can be counted using two-way communication, which is when the precincts both transmit and receive information to and from the central vote-counting authorities during the counting process.
Most sequential cardinal PR methods require less two-way communication and/or centralized counting work than most other PR methods.
↑ An article titled "Summability criterion" was deleted from English Wikipedia in 2009 (see the AfD discussion: https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Summability_criterion). Before the page was deleted from Wikipedia, it was copied to Electowiki. Those with the correct permissions can see the edit history on English Wikipedia.
↑ Note that this blockquote was copied to Electowiki before it was deleted from English Wikipedia. Other changes may have been made after the article was copied from wikipedia:Summability criterion to Summability criterion (Wikipedia version). Please see the edit histories of each page to determine who authored the passage you are interested in.
↑ Gaming the Vote, Why Elections Aren't Fair (and What We Can Do About It), William Poundstone, New York: Hill and Wang, 2008, p. 170.
↑ Nanson, E. J. (1882). "Methods of election". Transactions and Proceedings of the Royal Society of Victoria. 19: 197–240.
↑ "Compare STAR and IRV - Equal Vote Coalition". Equal Vote Coalition. Retrieved 2018-11-12.
↑ "proof that sum of squared loads is precinct-summable". 2020-01-14. Retrieved 2020-04-29.
↑ "answer to puzzle 15". RangeVoting.org. 2007-02-01. Retrieved 2020-02-11.
This page was migrated from the "Summability_criterion" page on wiki.electorama.com. To view the authors prior to the migration, view the "Summability_criterion" page edit history prior to 2018-10-01
Retrieved from "https://electowiki.org/w/index.php?title=Summability_criterion&oldid=15297"
|
106.3.2.29 TM-29, Determination of Nitrogen in Liquid Fertilizers - Engineering_Policy_Guide
106.3.2.29 TM-29, Determination of Nitrogen in Liquid Fertilizers
This method determines the percent of nitrogen in liquid fertilizers.
106.3.2.29.1 Reagents
(a) 0.1 Normal Sodium Hydroxide Solution.
Dissolve 4 g NaOH, Reagent Grade, in H2O and dilute to 1000 ml. Standardize against Reagent Grade Potassium Acid Phthalate.
(b) 0.1 Normal Sulfuric Acid Solution.
Dilute 3 ml H2SO4 Reagent Grade to 1000 ml. Standardize against the 0.1 N NaOH.
(c) Sodium Sulfide Solution.
Dissolve 4 g Na2S, Reagent Grade in 100 ml H2O.
(d) Sodium Hydroxide Solution.
Dissolve 500 g NaOH, Reagent Grade in 1000 ml H2O.
106.3.2.29.2 Procedure
Weigh, to the nearest 0.1 mg, 3-3.5 g of the sample from a dropping bottle into a 500 ml volumetric flask. Dilute to volume and pipette 50 ml into an 800 ml Kjeldahl flask. Add 5 g of finely powdered reduced Iron and 60 ml of H2SO4 (1-1). Swirl the flask to mix the contents and let stand until visible reaction ceases. Digest over low heat for 5-10 minutes then add 0.7 g HgO, Reagent Grade. Continue the digestion until most of the liquid is gone and the contents cling to the sides of the flask. Cool, add about 300 ml H2O, and cool to room temperature. Add about 0.5 g of granular Zinc and 25 ml of Na2S solution. Pipette 50 ml of 0.1 N H2SO4 into a 400 ml beaker and place the beaker under a condenser so that the tip of the condenser extends below the surface of the acid. Tilt the flask, slowly add 100 ml of NaOH solution, and immediately connect the flask to the condenser by means of a Kjeldahl connecting bulb. Distill until 150-200 ml has been collected in the beaker. Titrate the excess acid with 0.1 N NaOH solution, using Methyl Red as the indicator.
106.3.2.29.3 Calculations
Adjust the volumes of the H2SO4 and NaOH solutions to exactly 0.1000 Normal, and make the following calculations:
% N =
{\displaystyle 100\times {\frac {(\text{ml }0.1\,N\ \text{H}_{2}\text{SO}_{4}-\text{ml }0.1\,N\ \text{NaOH})\times 0.014}{\text{Sample Wt.}}}}
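The back-titration arithmetic can be sketched as follows (the titration volumes and sample weight below are illustrative, not measured values from the method):

```python
def percent_nitrogen(ml_h2so4, ml_naoh, sample_wt_g):
    """TM-29 calculation: 0.014 g N per milliequivalent of 0.1000 N acid
    consumed by the distilled ammonia (14 g/mol N / 1000 meq per eq)."""
    return 100.0 * (ml_h2so4 - ml_naoh) * 0.014 / sample_wt_g

# 50.0 ml of 0.1 N acid charged, 42.5 ml of 0.1 N NaOH used in the
# back-titration, 0.35 g of sample in the 50 ml aliquot (3.5 g in 500 ml):
print(round(percent_nitrogen(50.0, 42.5, 0.35), 1))  # → 30.0
```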
Retrieved from "https://epg.modot.org/index.php?title=106.3.2.29_TM-29,_Determination_of_Nitrogen_in_Liquid_Fertilizers&oldid=23405"
|
Eigenvalue Problem for Nonlinear Fractional Differential Equations with Integral Boundary Conditions
Guotao Wang, Sanyang Liu, Lihong Zhang, "Eigenvalue Problem for Nonlinear Fractional Differential Equations with Integral Boundary Conditions", Abstract and Applied Analysis, vol. 2014, Article ID 916260, 6 pages, 2014. https://doi.org/10.1155/2014/916260
Guotao Wang ,1 Sanyang Liu,1 and Lihong Zhang2
1Department of Applied Mathematics, Xidian University, Xi'an, Shaanxi 710071, China
By employing the well-known Guo-Krasnoselskii fixed point theorem, we investigate the eigenvalue interval for the existence and nonexistence of at least one positive solution of a nonlinear fractional differential equation with integral boundary conditions.
Fractional calculus has been receiving more and more attention in view of its extensive applications in the mathematical modelling coming from physical and other applied sciences; see books [1–5]. Recently, the existence of solutions (or positive solutions) of nonlinear fractional differential equation has been investigated in many papers (see [6–28] and references cited therein). However, in terms of the eigenvalue problem of fractional differential equation, there are only a few results [29–33].
To the best of the authors' knowledge, no paper has considered the eigenvalue problem of the following nonlinear fractional differential equation with integral boundary conditions: where , is the Caputo fractional derivative, and is a continuous function.
Our proof is based upon the properties of the Green function and Guo-Krasnoselskii’s fixed point theorem given in [34]. Our purpose here is to give the eigenvalue interval for nonlinear fractional differential equation with integral boundary conditions. Moreover, according to the range of the eigenvalue , we establish some sufficient conditions for the existence and nonexistence of at least one positive solution of the problem (1).
For the convenience of the readers, we first present some background materials.
Definition 1. For a function , the Caputo derivative of fractional order is defined as where denotes the integer part of the real number .
Definition 2. The Riemann-Liouville fractional integral of order for a function is defined as provided that such integral exists.
Lemma 3. Let ; then for some , , .
Lemma 4 (see [34]). Let be a Banach space, and let be a cone. Assume that , are open subsets of with , , and let be a completely continuous operator such that(i), , and , , or(ii), , and , .
Lemma 5. Let , , , and . Assume ; then the unique solution of the problem is given by the expression where
Proof. It is well known that the equation can be reduced to an equivalent integral equation: for some .
By the conditions and , we can get that and Hence, we have
Put ; then, from (10), we deduce that which implies that
Replacing this value in (10), we obtain the following expression for function : This completes the proof.
Lemma 6. Let be the Green function, which is given by the expression (7). For , the following property holds:
The proof is similar to that of Lemma 2.4 in [7], so we omit it.
Consider the Banach space with general norm Define the cone .
Suppose is a solution of (1). It is clear from Lemma 5 that
Define the operator as follows:
Proof. Since , it is obvious that . So we have Therefore, . The other proof is similar to that in [7], so we omit it.
For convenience, we list the denotation:
Next, we will establish some sufficient conditions for the existence and nonexistence of positive solution for problem (1).
Theorem 8. Let be a constant. Then for each problem (1) has at least one positive solution.
Proof. First, for any , from (20) we have
On the one hand, by the definition of , there exists such that, for any , we have Choose . For , we have
On the other hand, by the definition of , there exists such that, for any , we have Take . For , we have According to (23), (25), and Lemma 4, has at least one fixed point with , which is a positive solution of (1).
Remark 9. If and , then we can get Theorem 8 implies that, for , problem (1) has at least one positive solution.
Theorem 10. Let be a constant. Then for each problem (1) has at least one positive solution.
Proof. First, it follows from (27) that, for any ,
By the definition of , there exists such that, for any , we have Choose . For , we have . Similar to the proof in Theorem 8, it holds from (28) and (29) that
Note . There exists , such that We consider the problem on two cases. (I) Suppose is bounded. There exists , such that , . Choose . Let . For , we have
(II) Suppose is unbounded. There exists such that
Let . For , we have Combining (I) and (II), take ; here, . Then for , we have
Hence, (30) and (42) together with Lemma 4 imply that has at least one fixed point with , which is a positive solution of (1).
Theorem 11. Assume and . Problem (1) has no positive solution provided where is a constant defined in (38).
Proof. Since and , together with the definitions of and , there exist positive constants , , , and satisfying such that Take
It follows that for any . Suppose that is a positive solution of (1). That is, In sequence, which is a contradiction. Hence, (1) has no positive solution.
Proof. Since and , together with the definitions of and , there exist positive constants , , , and satisfying such that Take It follows that for any . Suppose that is a positive solution of (1). That is, In sequence, which is a contradiction. Hence, (1) has no positive solution.
Example 13. Consider the fractional differential equation In this example, take Obviously, we have
Since and , through a computation, we can get
Choose ; we have Theorem 8 implies that, for , , the problem (46) has at least one positive solution.
Remark 14. In particular, if we take in Example 13, then and . Remark 9 implies that problem (46) has at least one positive solution for .
This work is supported by the NNSF of China (no. 61373174) and the Natural Science Foundation for Young Scientists of Shanxi Province, China (no. 2012021002-3).
A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, vol. 204, Elsevier Science B.V., Amsterdam, The Netherlands, 2006. View at: MathSciNet
G. Wang, R. P. Agarwal, and A. Cabada, “Existence results and the monotone iterative technique for systems of nonlinear fractional differential equations,” Applied Mathematics Letters, vol. 25, no. 6, pp. 1019–1024, 2012. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
G. Wang, D. Baleanu, and L. Zhang, “Monotone iterative method for a class of nonlinear fractional differential equations,” Fractional Calculus and Applied Analysis, vol. 15, no. 2, pp. 244–252, 2012. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
G. Wang, A. Cabada, and L. Zhang, “Integral boundary value problem for nonlinear differential equations of fractional order on an unbounded domain,” Journal of Integral Equations and Applications. In press. View at: Google Scholar
G. Wang, S. Liu, and L. Zhang, “Neutral fractional integro-differential equation with nonlinear term depending on lower order derivative,” Journal of Computational and Applied Mathematics, vol. 260, pp. 167–172, 2014. View at: Publisher Site | Google Scholar | MathSciNet
S. Liu, G. Wang, and L. Zhang, “Existence results for a coupled system of nonlinear neutral fractional differential equations,” Applied Mathematics Letters, vol. 26, pp. 1120–1124, 2013. View at: Publisher Site | Google Scholar
L. Zhang, B. Ahmad, G. Wang, R. P. Agarwal, M. Al-Yami, and W. Shammakh, “Nonlocal integrodifferential boundary value problem for nonlinear fractional differential equations on an unbounded domain,” Abstract and Applied Analysis, vol. 2013, Article ID 813903, 5 pages, 2013. View at: Publisher Site | Google Scholar | MathSciNet
Y. Liu, B. Ahmad, and R. P. Agarwal, “Existence of solutions for a coupled system of nonlinear fractional differential equations with fractional boundary conditions on the half-line,” Advances in Difference Equations, vol. 2013, article 46, 2013. View at: Publisher Site | Google Scholar | MathSciNet
M. Benchohra, A. Cabada, and D. Seba, “An existence result for nonlinear fractional differential equations on Banach spaces,” Boundary Value Problems, vol. 2009, Article ID 628916, 2009. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
S. Zhang, “Positive solutions for boundary-value problems of nonlinear fractional differential equations,” Electronic Journal of Differential Equations, vol. 2006, pp. 1–12, 2006. View at: Google Scholar | Zentralblatt MATH | MathSciNet
M. Feng, X. Zhang, and W. Ge, “New existence results for higher-order nonlinear fractional differential equation with integral boundary conditions,” Boundary Value Problems, vol. 2011, Article ID 720702, 2011. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
H. A. H. Salem, “Fractional order boundary value problem with integral boundary conditions involving Pettis integral,” Acta Mathematica Scientia B, vol. 31, no. 2, pp. 661–672, 2011. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
Y. Zhou, F. Jiao, and J. Li, “Existence and uniqueness for fractional neutral differential equations with infinite delay,” Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 7-8, pp. 3249–3256, 2009. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
Z. Bai, “On positive solutions of a nonlocal fractional boundary value problem,” Nonlinear Analysis: Theory, Methods & Applications, vol. 72, no. 2, pp. 916–924, 2010. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
C. S. Goodrich, “Existence of a positive solution to systems of differential equations of fractional order,” Computers & Mathematics with Applications, vol. 62, no. 3, pp. 1251–1268, 2011. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
X. Xu, D. Jiang, and C. Yuan, “Multiple positive solutions for the boundary value problem of a nonlinear fractional differential equation,” Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 10, pp. 4676–4688, 2009. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
W. Jiang, “Eigenvalue interval for multi-point boundary value problems of fractional differential equations,” Applied Mathematics and Computation, vol. 219, no. 9, pp. 4570–4575, 2013. View at: Publisher Site | Google Scholar | MathSciNet
G. Wang, S. K. Ntouyas, and L. Zhang, “Positive solutions of the three-point boundary value problem for fractional-order differential equations with an advanced argument,” Advances in Difference Equations, vol. 2011, article 2, 2011. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
Z. Bai, “Eigenvalue intervals for a class of fractional boundary value problem,” Computers & Mathematics with Applications, vol. 64, no. 10, pp. 3253–3257, 2012. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
S. Sun, Y. Zhao, Z. Han, and J. Liu, “Eigenvalue problem for a class of nonlinear fractional differential equations,” Annals of Functional Analysis, vol. 4, no. 1, pp. 25–39, 2013. View at: Google Scholar | MathSciNet
D. J. Guo and V. Lakshmikantham, Nonlinear Problems in Abstract Cones, vol. 5, Academic Press, Boston, Mass, USA, 1988. View at: MathSciNet
Copyright © 2014 Guotao Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
M.C. Escher and Tessellations
math+art
Escher's Circle Limit I.
Many of the drawings of Dutch artist Maurits Cornelis (M.C.) Escher closely connect with the mathematical concepts of infinity and contradiction. While these concepts lead to many themes, tessellations of the plane appear particularly often in Escher's work.
A tessellation (or tiling) of the plane is a construction that fills a flat surface completely with geometric shapes, usually called tiles. Escher often explored symmetric tessellations that were formed by repeatedly duplicating and rearranging only a single tile through translation, rotation and reflection. A simple example of such a tessellation is one which uses only squares — you can imagine the following pattern repeated to completely cover an arbitrarily large surface.
Square-based tessellation.
Or, we might have a slightly more complex construction using triangles.
Triangle-based tessellation.
From the mathematical perspective, these constructs are often quite rigid as in the above examples, but Escher discovered more complex tessellations that bring visual interest and aesthetic beauty. This complexity is found in, for example, Escher's Lizard.
Escher's Lizard.
The lizard's shape, while on its own not particularly symmetric, combines with copies of itself to create symmetry on a larger scale. Escher uses tessellation to highlight the artistic relationship between positive and negative space. In Lizard, there isn't a clear distinction between the subject and its background. If you view the lighter-colored lizards as the subject, then the negative space consists of the darker-colored lizards, and vice versa. In a sense, the subject is its own negative space through the symmetric tiling.
Transformations Between the Rigid and the Organic
Escher's Liberation.
Escher was probably aware of the perceived rigidity of tessellations; in contrast, a few of his works depict the transformation from rigid geometric tilings into organic, natural imagery. Liberation shows a triangular tessellation (similar to the example above) that gradually transforms into an array of birds mid-flight.
Which are being liberated — the birds or the triangles? On the one hand, the birds seem to be breaking free from the rigid geometric form of the triangles. But on the other hand, the triangles exist in a world of mathematical perfection which has a beauty of its own.
The transformation demonstrates one technique for constructing complex tilings by making small perturbations to an existing tiling. This process is detailed at the end of the post.
Escher's Circle Limit III.
Escher's Circle Limit drawings — Circle Limit I is pictured at the beginning of this post and Circle Limit III is pictured above — display a different kind of tiling than the ones we've talked about so far. Up until this point, the tilings we've seen could be extended outward toward infinity on a plane. The Circle Limit drawings instead seem to converge toward a circle. In fact, these are still plane tilings, but in a different kind of geometry, namely hyperbolic geometry.
In Euclidean geometry, the parallel postulate roughly states: for any line L and any point P not on L, there is exactly one line through P parallel to L. This matches our intuitive notion of parallel lines from our everyday experiences (which Euclidean geometry models). But it is not necessary for geometry to include the parallel postulate, and hyperbolic geometry is the consequence of modifying the postulate to allow multiple lines to be parallel to L and pass through P.
Both Circle Limit I and Circle Limit III are based on the Poincaré hyperbolic disc. In this model of hyperbolic geometry, we work within a circular disc, and lines are represented as circular arcs that meet the disc's boundary at right angles. Circle Limit III highlights these arcs in the white stripes running along the fish. Two hyperbolic lines are considered parallel as long as their arcs don't intersect. Under this definition, we can confirm that multiple parallel lines can run through the same point; consider the following annotated version of Circle Limit III.
Escher's Circle Limit III - annotated.
The red and pink arcs exemplify lines of this hyperbolic space. Call the red line L, and call the labeled point where the fish mouths meet P, as in the modified parallel postulate. We then see that there are indeed multiple lines parallel to L through P, by observing the three pink arcs passing through P yet not intersecting L.
Interestingly, all the figures in Circle Limit I are the same size with respect to their hyperbolic geometry, and the same goes for the fish in Circle Limit III. This is because the notion of measurement in the Poincaré model differs from measurement in the Euclidean plane in such a way that equally spaced objects (from the hyperbolic perspective) appear closer to each other (from our eyes' perspective) as you move toward the perimeter of the disc.
There are many other interesting and unintuitive facts that emerge in hyperbolic geometry which you can read more about here.
Want to create your own tessellation? One technique takes inspiration from Escher's Liberation. Start with a simple tessellation (like the geometric examples above), and make a small modification. Let's start with the square-based one.
Step 1. Start with a known tessellation.
And then let's cut out a part of one of the tiles.
Step 2. Modify one of the tiles.
In order to maintain the symmetry, we must then copy our modification over to the other tiles (making sure to apply the appropriate translations, reflections and rotations).
Step 3. Copy the modification to the other tiles.
Example construction after multiple modifications.
From this process, you can arrive at some interesting planar tessellations and gain more appreciation for Escher's art. If you're up for the challenge, you could try your hand at constructing a hyperbolic tiling in a similar way.
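The three steps above can be sketched numerically: represent a tile edge as vertices (in integer tenths of a unit, to keep the arithmetic exact), cut a notch into one edge, and give the opposite edge the same profile so translated copies still mesh. The specific notch is illustrative.

```python
def translate(points, dx, dy):
    """Shift every vertex of an edge or tile by (dx, dy)."""
    return [(x + dx, y + dy) for x, y in points]

# Steps 1-2: start from a unit square's straight top edge, then cut a
# triangular notch into it (coordinates in tenths of a unit).
top_edge = [(0, 10), (4, 10), (5, 8), (6, 10), (10, 10)]

# Step 3: copy the modification so the bottom edge carries the same profile.
bottom_edge = translate(top_edge, 0, -10)

# The tile one row up is this tile translated by a full unit; its bottom
# edge must coincide with our top edge for the tiling to stay gap-free.
assert translate(bottom_edge, 0, 10) == top_edge
```

The same check, applied to every edge pair related by a translation, rotation, or reflection of the tiling, is what guarantees the modified tile still covers the plane.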
Copyright © 2020 Robert Adkins. All rights reserved.
|
Department of Materials and Metallurgical Engineering, Federal University of Technology, Owerri, Nigeria.
Abstract: Bauxite deposits for production of alumina are lacking in Nigeria, yet there is an aluminium smelter plant in the country which requires alumina for its operation. Development of an alternative alumina resource using clays that are abundant in the country is the focus of this paper. The thermal activation of Ibere clay from southeastern Nigeria for optimal leaching of alumina was investigated. The clay assayed 28.52% Al2O3 and 51.6% SiO2, comprising mainly kaolinite mineral and quartz or free silica. The alumina locked up in the clay structure was rendered acid-soluble by thermal activation, which transformed the clay from its crystalline nature to an amorphous, anhydrous phase, or metakaolinite. The clay samples were heated at calcination temperatures of 500°C, 600°C, 700°C, 800°C, and 900°C at holding times of 30, 60, and 90 minutes. Uncalcined clay samples and samples calcined at 1000°C (holding for 60 minutes) were used in the control experiments. The results of leaching the clay calcines in 1 M hydrochloric acid solution at room temperature showed that the calcines produced at 600°C (holding for 60 minutes) responded most to leaching. Samples calcined for 60 minutes also responded better than those held for 30 or 90 minutes. Based on activation energy studies, it was observed that calcines produced at 600°C (for 60 minutes) had both the highest leaching response (50.27% after 1 hour at a leaching temperature of 100°C) and the lowest activation energy of 24.26 kJ/mol. It is concluded, therefore, that Ibere kaolinite clay is best calcined for alumina dissolution by heating up to 600°C and holding for 60 minutes at that temperature. The clay deposit has potential for use as an alternative resource for alumina production in Nigeria, where bauxite is scarce.
Keywords: Ibere Clay, Bauxite, Kaolinite, Alumina, Calcination, Thermal Activation, Leaching
{\left({\text{Si}}_{2}{\text{O}}_{5}\right)}^{2-}
{\text{Al}}_{2}{\left(\text{OH}\right)}_{4}^{2+}
{\text{Al}}_{\text{2}}{\left(\text{OH}\right)}_{\text{4}}\cdot \left({\text{Si}}_{\text{2}}{\text{O}}_{\text{5}}\right)
{\text{Al}}_{2}{\text{Si}}_{2}{\text{O}}_{5}{\left(\text{OH}\right)}_{4}
{\text{Al}}_{2}{\text{O}}_{3}\cdot 2{\text{SiO}}_{2}\cdot 2{\text{H}}_{2}\text{O}
\underset{\text{Kaolinite}}{\underset{︸}{{\text{Al}}_{2}{\text{O}}_{3}\cdot 2{\text{SiO}}_{2}\cdot 2{\text{H}}_{2}\text{O}}}\underset{\begin{array}{l}\text{ }\text{Dehydration}\\ \text{above}600˚\text{C}\end{array}}{\to }\underset{\text{Metakaolinite}}{\underset{︸}{{\text{Al}}_{2}{\text{O}}_{3}\cdot 2{\text{SiO}}_{2}}}+\underset{\begin{array}{l}\text{\hspace{0.17em}}\text{Constitutional}\\ \text{WaterRemoved}\end{array}}{\underset{︸}{2{\text{H}}_{2}\text{O}}}
\underset{\text{Metakaolinite}}{\underset{︸}{4\left({\text{Al}}_{2}{\text{O}}_{3}\cdot 2{\text{SiO}}_{2}\right)}}\underset{950˚\text{C}\text{\hspace{0.17em}}\text{-}\text{\hspace{0.17em}}980˚\text{C}}{\to }\underset{\text{Primary Mullite}}{\underset{︸}{3{\text{Al}}_{2}{\text{O}}_{3}\cdot 2{\text{SiO}}_{2}}}+\underset{\begin{array}{l}\text{Gamma-Alumina}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(\text{fcc}\right)\end{array}}{\underset{︸}{\gamma {\text{-Al}}_{2}{\text{O}}_{3}}}+\underset{\begin{array}{l}\text{Amorphous}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{Silica}\end{array}}{\underset{︸}{6{\text{SiO}}_{2}}}
\underbrace{3\left(\gamma\text{-Al}_2\text{O}_3\right)}_{\text{Gamma-alumina (fcc)}} + \underbrace{3\text{SiO}_2}_{\text{Amorphous silica}} \xrightarrow{1000\,^\circ\text{C}\,\text{-}\,1400\,^\circ\text{C}} \underbrace{3\text{Al}_2\text{O}_3\cdot 2\text{SiO}_2}_{\text{Secondary mullite}} + \underbrace{\text{SiO}_2}_{\text{Tridymite}}
\underbrace{3\text{Al}_2\text{O}_3\cdot 2\text{SiO}_2}_{\text{Secondary mullite}} + \underbrace{\text{SiO}_2}_{\text{Tridymite}} \xrightarrow{1400\,^\circ\text{C}\,\text{-}\,1580\,^\circ\text{C}} \underbrace{3\text{Al}_2\text{O}_3\cdot 2\text{SiO}_2}_{\text{Mullite}} + \underbrace{\text{SiO}_2}_{\text{Cristobalite}}
{W}_{d}
{W}_{f}
\left(\frac{W_d - W_f}{W_d}\right)\times 100
k=A{\text{e}}^{-\left(\frac{{E}_{a}}{RT}\right)}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{ln}k=-\left(\frac{{E}_{a}}{R}\right)\cdot \frac{1}{T}+\mathrm{ln}A
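As a numerical illustration of the Arrhenius relation above, the sketch below recovers E_a from a least-squares fit of ln k against 1/T. The rate constants, temperatures, and pre-exponential factor here are synthetic values chosen only for the demonstration; only the activation-energy figure of 24.26 kJ/mol is taken from this study.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def fit_arrhenius(temps_K, ks):
    """Least-squares fit of ln k = ln A - (Ea/R)*(1/T); returns (Ea, A)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R, math.exp(ybar - slope * xbar)

# Synthetic rate constants generated with Ea = 24.26 kJ/mol (A is hypothetical)
Ea_true, A_true = 24260.0, 1.5e3
temps = [303.0, 323.0, 343.0, 363.0, 373.0]
ks = [A_true * math.exp(-Ea_true / (R * T)) for T in temps]
Ea_fit, A_fit = fit_arrhenius(temps, ks)
```

Because the synthetic data satisfy the Arrhenius law exactly, the fit returns E_a and A to within floating-point error.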
\text{HCl}+{\text{H}}_{2}\text{O}⇌{\text{H}}_{3}{\text{O}}^{+}{}_{\text{aq}}+{\text{Cl}}^{-}{}_{\text{aq}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{or}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\text{HCl}}_{\text{aq}}⇌{\text{H}}^{+}{}_{\text{aq}}+{\text{Cl}}^{-}{}_{\text{aq}}
\underbrace{\text{Al}_2\text{O}_3\cdot 2\text{SiO}_2}_{\substack{\text{Metakaolinite}\\ \text{(clay calcine)}}} + \underbrace{6\left(\text{H}^{+}\text{Cl}^{-}\right)}_{\substack{\text{Hydrochloric acid}\\ \text{solution}}} \to \underbrace{\left(2\text{Al}^{3+} + 6\text{Cl}^{-}\right) + 3\text{H}_2\text{O}}_{\substack{\text{Leach solution}\\ \text{(to be in filtrate)}}} + \underbrace{2\text{SiO}_2}_{\text{Residue}}
{X}_{\text{Al}}=\frac{\text{concentration of Al ions in the solution}}{\text{concentration of Al ions in the original clay sample}}
%\text{Al}=100\cdot {X}_{\text{Al}}
Cite this paper: Mark, U., Anyakwo, C., Onyemaobi, O. and Nwobodo, C. (2019) Effect of Calcination Condition on Thermal Activation of Ibere Clay and Dissolution of Alumina. International Journal of Nonferrous Metallurgy, 8, 9-24. doi: 10.4236/ijnm.2019.82002.
|
AssignTransformationType(φ)
φ - a transformation
Let E → M and F → N be two fiber bundles, and let π^k : J^k(E) → M and π^k : J^k(F) → N be the associated bundles of k-jets.
[i] A map φ : E → F which sends the fibers of E to the fibers of F (and hence covers a map φ_0 : M → N) is called a projectable transformation.
[ii] A map φ : E → F is called a point transformation.
[iii] A transformation φ : J^1(E) → J^1(F) is called a contact transformation if the fiber dimensions of E and F are 1 and φ pulls back the contact form on J^1(F) to a multiple of the contact form on J^1(E).
[iv] If a map φ : J^k(E) → F covers the identity map M → N, then φ is called a differential substitution.
[v] A map φ : J^k(E) → F is called a generalized differential substitution.
The command AssignTransformationType(φ) returns the transformation φ, but with its internal representation changed to encode its transformation type. The type of a transformation and its prolongation order can be determined with the command DGinfo and the keyword "TransformationType".
with(DifferentialGeometry):
with(JetCalculus):
DGsetup([x, y], [u], E, 4):
DGsetup([z], [v], F, 4):
DGsetup([p, q], [w], K, 4):
Case 1. Projectable transformations from E to F.
Φ1 := Transformation(E, F, [z = A(x, y), v[] = B(x, y, u[])])
  Φ1 := [z = A(x, y), v[] = B(x, y, u[])]
Tools:-DGinfo(Φ1, "TransformationType")
  []
Now assign the transformation Φ1 a type.
newPhi1 := AssignTransformationType(Φ1)
  newPhi1 := [z = A(x, y), v[] = B(x, y, u[])]
Tools:-DGinfo(newPhi1, "TransformationType")
  ["projectable", 0]
Φ2 := Transformation(E, F, [z = A(x, y, u[]), v[] = B(x, y, u[])])
  Φ2 := [z = A(x, y, u[]), v[] = B(x, y, u[])]
newPhi2 := AssignTransformationType(Φ2)
  newPhi2 := [z = A(x, y, u[]), v[] = B(x, y, u[])]
Tools:-DGinfo(newPhi2, "TransformationType")
  ["point", 0]
Φ3 := Transformation(E, K, [p = -u[1], q = y, w[] = -u[1]*x + u[], w[1] = x, w[2] = u[2]])
  Φ3 := [p = -u[1], q = y, w[] = -u[1] x + u[], w[1] = x, w[2] = u[2]]
newPhi3 := AssignTransformationType(Φ3)
  newPhi3 := [p = -u[1], q = y, w[] = -u[1] x + u[], w[1] = x, w[2] = u[2]]
Tools:-DGinfo(newPhi3, "TransformationType")
  ["contact", 1]
By the conventions adopted here, a contact transformation need not be a local diffeomorphism, so that, in particular, the dimensions of the bundles E and F need not coincide.
Φ4 := Transformation(F, E, [x = z, y = 1, u[] = v[], u[1] = v[1], u[2] = 0])
  Φ4 := [x = z, y = 1, u[] = v[], u[1] = v[1], u[2] = 0]
newPhi4 := AssignTransformationType(Φ4)
  newPhi4 := [x = z, y = 1, u[] = v[], u[1] = v[1], u[2] = 0]
Tools:-DGinfo(newPhi4, "TransformationType")
  ["contact", 1]
vars := x, y, u[], u[1], u[2], u[1,1], u[1,2], u[2,2]
  vars := x, y, u[], u[1], u[2], u[1,1], u[1,2], u[2,2]
Φ5 := Transformation(E, K, [p = x, q = y, w[] = A(vars)])
  Φ5 := [p = x, q = y, w[] = A(x, y, u[], u[1], u[2], u[1,1], u[1,2], u[2,2])]
newPhi5 := AssignTransformationType(Φ5):
Tools:-DGinfo(newPhi5, "TransformationType")
  ["differentialSubstitution", 0]
Φ5 := Transformation(E, F, [z = A(vars), v[] = B(vars)])
  Φ5 := [z = A(x, y, u[], u[1], u[2], u[1,1], u[1,2], u[2,2]), v[] = B(x, y, u[], u[1], u[2], u[1,1], u[1,2], u[2,2])]
newPhi5 := AssignTransformationType(Φ5):
Tools:-DGinfo(newPhi5, "TransformationType")
  ["generalizedDifferentialSubstitution", 0]
Φ6 := Transformation(E, F, [z = u[1]*y, v[] = u[2] + x*u[], v[1] = y])
  Φ6 := [z = u[1] y, v[] = x u[] + u[2], v[1] = y]
newPhi6 := AssignTransformationType(Φ6)
Tools:-DGinfo(newPhi6, "TransformationType")
  ["generic", "NA"]
|
Soundex - Maple Help
implements the classical Soundex algorithm
Soundex( s )
The Soundex(s) command implements the classical Soundex algorithm.
The Soundex algorithm is intended to hash words into a small space using a model that approximates the sound of the word when spoken by an English speaker. Each word is reduced to a four-character string (a Soundex key), in which the first character is an uppercase letter and the remaining three are digits. Soundex keys have the property that words which are pronounced similarly produce the same Soundex key, and they can thus be used to simplify searches in databases where the pronunciation, but not the spelling, is known.
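The reduction just described is straightforward to express directly. Below is a minimal Python sketch of the classical Soundex coding, offered as an independent illustration rather than Maple's implementation; it follows the common variant in which H and W are transparent, so they do not separate letters with equal codes:

```python
def soundex(word: str) -> str:
    """Classical Soundex: keep the first letter, code the rest as digits."""
    mapping = {}
    for letters, digit in [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")]:
        for ch in letters:
            mapping[ch] = digit
    word = word.upper()
    key = word[0]
    prev = mapping.get(word[0], "")
    for ch in word[1:]:
        if ch in "HW":
            continue  # H and W are transparent: they do not break a run
        digit = mapping.get(ch, "")  # vowels code to "" and reset the run
        if digit and digit != prev:
            key += digit
        prev = digit
    return (key + "000")[:4]  # pad with zeros / truncate to four characters

print(soundex("James"))  # J520
print(soundex("Ghosh"))  # G200
```

The keys agree with the Maple outputs shown below: "James" → "J520", and "Gauss" and "Ghosh" both hash to "G200".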
with(StringTools):
Soundex("James")
  "J520"
Soundex("Barb")
  "B610"
Soundex("Gauss")
  "G200"
Soundex("Goethe")
  "G300"
Soundex("Ghosh")
  "G200"
Soundex("Kline")
  "K450"
Soundex("Cline")
  "C450"
Soundex("Vallis")
  "V420"
Soundex("Fallis")
  "F420"
Knuth, Donald. The Art of Computer Programming, Volume 3: Sorting and Searching. Reading, Massachusetts: Addison-Wesley, 1973, pp. 391-392.
StringTools[Metaphone]
|
Counting f such that f ∘ g = g ∘ f
P. Bouchard, Y. Fong, W. F. Ke, Y. N. Yeh
Results in Mathematics > 1997 > 31 > 1-2 > 14-27
In this paper, we derive an algorithm for finding all the mappings f of a finite set A which commute with a fixed mapping g: A → A. This algorithm is then applied to finding all the infra-endomorphisms of the groups ℤn and Dn.
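The paper derives a dedicated algorithm; purely as a brute-force baseline (not the authors' method), the commuting maps on a small finite set can be enumerated directly in Python, with A = {0, …, n−1} and each map encoded as a tuple of images:

```python
from itertools import product

def commuting_maps(g):
    """All maps f: A -> A with f(g(x)) = g(f(x)) for every x in A,
    where A = {0, ..., n-1} and maps are encoded as tuples of images."""
    n = len(g)
    return [f for f in product(range(n), repeat=n)
            if all(f[g[x]] == g[f[x]] for x in range(n))]

# With g the identity on a 3-element set, every one of the 27 maps commutes;
# with g a 3-cycle, only the three rotations f(x) = x + c (mod 3) do.
```

This n^n enumeration is only feasible for very small sets, which is exactly why a structural algorithm such as the one in the paper is of interest.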
On Dilatations and Substantial Boundary Points of Homeomorphisms of Jordan Curves
Results in Mathematics > 1997 > 31 > 1-2 > 180-188
We study the relation between the dilatations K_h and K_h* of a homeomorphism h of Jordan curves. We show that if K_h = K_h*, then either h is induced by an affine map or there is a substantial boundary point for h. In particular, we prove that if h is symmetric (in the sense of Gardiner and Sullivan), then K_h* > K_h. This is quite contrary to a previously conjectured relation between K_h and K_h*.
On Sums of Vector Fields
We discuss one case where the integration of a sum of vector fields is reducible to the integration of the summands. Applications include the construction of a class of additive group actions on affine space and a proof that these are stably tame, and also the explicit solution of a class of differential equations from mathematical biology.
Interpolation Functors In Weak-Type Interpolation
F. Fehér, M. J. Strauss
Results in Mathematics > 1997 > 31 > 1-2 > 95-104
For interpolation in the diagonal case, i.e. with respect to the two couples (X, X) and (Y, Y), there exists a natural relation between weak-type and strong-type interpolation. Indeed, weak-type interpolation is related to the “M-couples” (ΛX, MX) and (ΛY, MY) of the Lorentz spaces of X and Y. Since ΛZ ⊂ MZ for any space Z, any weak-type interpolation space also has the (strong-type) interpolation...
A Maximal Inequality and a Functional Central Limit Theorem for set-indexed empirical processes
For the tail probabilities of a general set-indexed empirical process in an arbitrary sample space a maximal inequality is derived. In the case that the class of sets by which the process is indexed possesses a total ordering, the application of our inequality yields an elementary proof for a functional central limit theorem without involving such advanced techniques as symmetrization, stratification,...
Existence and Uniqueness Theorem for Slant Immersions and Its Applications
Bang-yen Chen, Luc Vrancken
A slant immersion is an isometric immersion from a Riemannian manifold into an almost Hermitian manifold with constant Wirtinger angle. In this paper we establish the existence and uniqueness theorem for slant immersions into complex-space-forms. By applying this result, we prove in this paper several existence and nonexistence theorems for slant immersions. In particular, we prove the existence theorems...
Asymptotic Growth of Hermite Series and an Application to the Theory of the Riemann Zeta Function
On local approximation methods for multivariate polynomial spline surfaces
Hans-Jörg Wenz
We present a construction method for quasiinterpolants using the multivariate splines of Dahmen, Micchelli, and Seidel [7]. The key instrument is the concept of polar forms. The quasiinterpolants apply to continuous functions and are shown to have optimal rates of convergence.
Two Parameter Asymptotic Spectra in the Uniformly Elliptic Case
P. A. Binding, P. J. Browne, K. Seddighi
Results in Mathematics > 1997 > 31 > 1-2 > 1-13
In this article we study the abstract two parameter eigenvalue problem…
Unendliche und endliche Orthogonalsysteme von Continuous Hahnpolynomen
Peter A. Lesky
The continuous Hahn polynomials go back to R. Askey [1], where, however, quasi-definite orthogonality is also admitted. In the sense of the classical orthogonal polynomials, the present account is restricted to positive definite orthogonality. In [2] and [6], continuous Hahn polynomials are introduced via eigenvalue problems with complex second-order difference equations. By contrast…
On a Functional Equation Associated with Simpson’s Rule
P. L. Kannappan, T. Riedel, P. K. Sahoo
In this paper, we determine the general solution of the functional equation…
The problem of defining abstract bivectors
F. Sommen
In this paper we investigate the problem of defining bivectors on a purely abstract level. This leads to algebras of symbolic vectors and bivectors.
Two-Parameter Eigenvalue Problems in Nonlinear Second Order Differential Equations
Two-parameter nonlinear second order differential equations are studied. By using a variational method we characterize the variational eigenvalues μ = μ(λ) and study the properties of μ(λ). Furthermore, asymptotic formulas of μ(λ) as λ → ±∞ are established.
Bonnet Surfaces and Isothermic Surfaces
Weihuan Chen, Haizhong Li
By the study of the relations between Bonnet surfaces and isothermic surfaces, we obtain classification results of Bonnet surfaces in 3-dimensional space form R3(c) and of the spacelike Bonnet surfaces in indefinite space form R1 3(c), which generalize the results in Bobenko’s [1] and Peng-Lu’s [11]. It is remarkable that there exist always Bonnet surfaces which are not Weingarten surfaces, if the...
Quadratic Differences that Depend on the Product of Arguments
J. K. Chung, B. R. Ebanks, C. T. Ng, P. K. Sahoo
In this paper, we determine all functions ƒ, defined on a field K (belonging to a certain class) and taking values in an abelian group, such that the quadratic difference ƒ(x + y) + ƒ(x − y) − 2ƒ(x) − 2ƒ(y) depends only on the product xy for all x, y ∈ K. Using this result, we find the general solution of the functional equation ƒ1(x + y) + ƒ2(x − y) = ƒ3(x) + ƒ4(y) + g(xy).
Analytic extension of non quasi-analytic Whitney jets of Roumieu type
Jean Schmets, Manuel Valdivia
Let (M_r)_{r∈ℕ₀} be a logarithmically convex sequence of positive numbers which verifies M₀ = 1 as well as M_r ≥ 1 for every r ∈ ℕ and defines a non quasi-analytic class. Let moreover F be a closed proper subset of ℝⁿ. Then for every function ƒ on ℝⁿ belonging to the non quasi-analytic (M_r)-class of Roumieu type, there is an element g of the same class which is analytic on ℝⁿ \ F and such that Dᵅƒ(x) =...
σ - Complete Fuzzy Riesz Spaces
We define σ-complete fuzzy Riesz spaces and then study some of their interesting properties.
A Proof of Kühnel's Conjecture for n ⩾ k² + 3k
Eric Sparla
In this note we show that an Upper Bound Conjecture made by Kühnel for combinatorial 2k-manifolds holds for fixed k if the number of vertices n satisfies n ⩾ k² + 3k. Together with known results this provides a simple proof of the conjecture for k = 1 and k = 2.
Remarks on the "Riemann example"
{\sum\limits^\infty_{n=1}\ \ {\rm sin}\ n^2x\over n^2}
of a continuous, non-differentiable function
According to statements made by K. Weierstraß, B. Riemann claimed already in 1861 that the function \(\sum_{n=1}^{\infty} \frac{\sin n^2 x}{n^2}\) ...
Remarks on Nonlinear Neumann Problems in Periodic Domains
K. Pflüger
We study a semilinear elliptic equation Au = f(x, u) with nonlinear Neumann boundary condition Bu = φ(ξ, u) in an unbounded domain Ω ⊂ ℝn, the boundary of which is defined by periodic functions. We assume that f and φ and the coefficients of the operators are asymptotically periodic in the space variables. Our main result states the existence of an asymptotically decaying, nontrivial solution of this...
|
Test Statistics: Definition, Formulas & Examples | Outlier
Test Statistics: Definition, Formulas & Examples
This article explains what a test statistic is, how to complete one with formulas, and how to find the value for t-tests.
What is a Standardized Test Statistic?
The General Formula for Calculating Test Statistics
Types of Test Statistics with Formulas
Difference Between T-Tests and Z-Tests and When to Use Each
How to Interpret a Test Statistic
A test statistic is a standardized score used in hypothesis testing. It tells you how likely the results obtained from your sample data are under the assumption that the null hypothesis is true. The more unlikely your results are under this assumption, the easier it becomes to reject the null hypothesis in favor of an alternative hypothesis. The more likely your results are, the harder it becomes to reject the null hypothesis.
There are different kinds of test statistics, but they all work the same way. A test statistic maps the value of a particular sample statistic (such as a sample mean or a sample proportion) to a value on a standardized distribution, such as the Standard Normal Distribution or the t-distribution. This allows you to determine how likely or unlikely it is to observe the particular value of the statistic you obtained.
As a quick example, say you have a null hypothesis that the average wait time to get seated at your favorite restaurant—at a table for two without a reservation on a Friday night—is 45 minutes. You select a random sample of 100 parties that got seated under these conditions and ask them what their wait times were. You find that the average wait time for your sample is 55 minutes (
\bar{x}
= 55 minutes). A test statistic will convert this sample statistic
\bar{x}
into a standardized number that helps you answer this question:
“Assuming that my null hypothesis is true—assuming that the average wait time at the restaurant actually is 45 minutes—what is the likelihood that I found an average wait time of 55 minutes for my randomly drawn sample?”
Remember, the lower the likelihood of observing your sample statistic, the more confident you can be in rejecting the null hypothesis.
The type of test statistic you use in a hypothesis test depends on several factors including:
The type of statistic you are using in the test
The size of your sample
Assumptions you can make about the distribution of your data
Assumptions you can make about the distribution of the statistic used in the test
The formula for calculating test statistics takes the following general form:
\text{Test Statistic} = \frac{\text{Statistic} - \text{Parameter}}{\text{Standard Deviation of the Statistic}}
Remember, a statistic is a measure calculated from a single sample or many samples. Examples include the sample mean
\bar{x}
, the difference between two sample means
\bar{x_{1}} - \bar{x_{2}}
, or a sample proportion
\hat{p}
A parameter is a measure calculated from a single population or many populations. Examples include the population mean
\mu
, the difference between two population means
\mu_{1}-\mu_{2}
, or a population proportion
p
In the denominator of the equation, you have the standard deviation—or the approximated standard deviation—of the statistic used in the numerator. If you use the sample mean
\bar{x}
, in the numerator, you should use the standard deviation of
\bar{x}
or an approximation of it in the denominator.
The test statistics you are most likely to encounter in an introductory statistics class are:
The Z-test statistic for a single sample mean
The Z-test statistic for population proportions
The t-test statistic for a single sample mean
The t-test statistic for two sample means
Z-test for a Sample Mean
We use the Z-test statistic (or Z-statistic) for a sample mean in hypothesis tests involving a sample mean
\bar{x}
, calculated for a single sample.
You use this test statistic when:
Your sample size is greater than or equal to 30 (n
\geq
30)
The sampling distribution of the sample mean is assumed to be normal
The standard deviation of the population
\sigma
is known
The formula for this type of Z-test statistic is:
Z =\frac{\bar{x}-\mu_{0}}{\frac{\sigma}{\sqrt{n}}}
Z
is the symbol for the Z-test statistic
\bar{x}
is the sample mean
\mu_{0}
is the hypothesized value of the population mean according to the null hypothesis
\sigma
is the population standard deviation
n
is the sample size
\frac{\sigma}{\sqrt{n}}
is the standard error of
\bar{x}
. The standard error is just the standard deviation of the sampling distribution of the sample mean.
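The restaurant example can be worked through with this formula. A minimal sketch, assuming a known population standard deviation of σ = 20 minutes (a value not given in the article, chosen purely for illustration):

```python
import math

def z_statistic(x_bar, mu_0, sigma, n):
    """Z-test statistic for a single sample mean."""
    standard_error = sigma / math.sqrt(n)  # standard deviation of the sampling distribution
    return (x_bar - mu_0) / standard_error

# Restaurant example: H0 says mu = 45 minutes; the sample of n = 100 parties
# averaged 55 minutes. sigma = 20 is an assumed value for illustration only.
z = z_statistic(x_bar=55, mu_0=45, sigma=20, n=100)
print(z)  # 5.0 -- far beyond the usual critical values, so H0 would be rejected
```

Under this assumed σ, a ten-minute gap is five standard errors from the hypothesized mean, which is why the sample would be very unlikely if the null hypothesis were true.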
You may notice that a Z-test statistic is just a z-score for a particular value of a normally distributed statistic. There are many variations of the Z-test statistic. We can use these in hypothesis tests where the sample statistic being used in the test is approximately normally distributed. One such variation of the Z-test statistic is the Z-test for proportions.
We use the Z-test statistic for proportions in hypothesis tests where a sample proportion
\hat{p}
is being tested against the hypothesized value of the population proportion,
p_{0}
. We use the Z-test for proportions when your sample size is greater than or equal to 30 (n
\geq
30), and the distribution of the sample statistic is assumed to be normal. The formula for the Z-test statistic for population proportions is:
Z =\frac{\hat{p}-p_{0}}{\sqrt\frac{p_{0}(1-p_{0})}{n}}
Z is the symbol for the Z-test statistic for population proportions
\hat{p}
is the sample proportion
p_{0}
is the hypothesized value of the population proportion according to the null hypothesis
n
is the sample size
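The proportion version follows the same pattern. A minimal sketch with hypothetical numbers (p₀ = 0.5, p̂ = 0.56, n = 200 are invented for illustration):

```python
import math

def z_statistic_proportion(p_hat, p_0, n):
    """Z-test statistic for a sample proportion."""
    standard_error = math.sqrt(p_0 * (1 - p_0) / n)  # uses the hypothesized p_0
    return (p_hat - p_0) / standard_error

# Hypothetical numbers: H0 says p = 0.5; a sample of n = 200 gives p_hat = 0.56.
z = z_statistic_proportion(p_hat=0.56, p_0=0.5, n=200)
print(round(z, 3))  # about 1.697
```

Note that the standard error is built from the hypothesized proportion p₀, not the sample proportion, because the statistic is computed under the assumption that the null hypothesis is true.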
When your sample size is smaller than 30 (n<30)—or when you cannot assume that the distribution of your sample statistic is normally distributed—you’ll often use a t-test statistic rather than a Z-test.
T-test for a Single Sample Mean
We use the t-test statistic (or t-statistic) for a sample mean in hypothesis tests involving a sample mean calculated for a single sample drawn from a population. Unlike the Z-test for a single sample mean, you use the t-test when:
Your sample size is less than 30 (n<30)
The distribution of the sample statistic is not approximated by a normal distribution
The population standard deviation
\sigma
is unknown
A t-test statistic maps your statistics to a t-distribution as opposed to the normal distribution with a Z-test. A t-distribution is like a standard normal distribution, but it has thicker tails and changes depending on your sample size
n
. When
n
is large, the t-distribution is closer to the normal distribution, and as the sample size gets larger and larger, the t-distribution converges to the normal distribution. As
n
gets smaller, the t-distribution gets flatter with thicker tails.
The formula for the t-test statistic for a sample mean is:
t =\frac{\bar{x}-\mu_0}{\frac{s}{\sqrt{n}}}
t
is the symbol for the t-test statistic
\bar{x}
is the sample mean
\mu_0
is the value of the population mean according to the null hypothesis
s
is the sample standard deviation
\frac{s}{\sqrt{n}}
is an approximation of the standard error of
\bar{x}
. In a t-test, because you do not know the value of the population standard deviation, you need to approximate the standard error of
\bar{x}
using the sample standard deviation
s
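A small-sample version of the restaurant example can be sketched with this formula. The numbers n = 16 and s = 18 are hypothetical, chosen only to show the arithmetic:

```python
import math

def t_statistic(x_bar, mu_0, s, n):
    """t-test statistic for a single sample mean (population sigma unknown)."""
    standard_error = s / math.sqrt(n)  # approximated with the sample standard deviation
    return (x_bar - mu_0) / standard_error

# Hypothetical small-sample restaurant example: n = 16 parties,
# sample mean 55 minutes, sample standard deviation s = 18 minutes.
t = t_statistic(x_bar=55, mu_0=45, s=18, n=16)
print(round(t, 3))  # about 2.222
```

The resulting value would be compared against a t-distribution with n − 1 = 15 degrees of freedom, using a t-table or statistical software.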
T-test for Two Sample Means
We can also use t-test statistics in hypothesis tests where the values of two sample means (
\bar{x_1}
and
\bar{x_2}
) are being compared. You do this to test the null hypothesis that the two samples are drawn from the same underlying population. If the null hypothesis is true, then any difference between the sample means is due to random variations in the data. Rejecting the null hypothesis suggests that the samples were drawn from two distinct populations and that the difference in the sample means reflects actual differences in the characteristics of subjects in one population compared to the other.
Like the t-test for a single sample mean, you use the t-test for two sample means when:
Your sample sizes are less than 30 (n<30)
The distribution of the sample statistics are not approximated by a normal distribution
The population standard deviation
\sigma
is unknown
The formula for the t-test statistic for two sample means is:
t =\frac{(\bar{x_1}-\bar{x_2})-(\mu_1-\mu_2)}{\sqrt{\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}}}
t
is the symbol for the t-test statistic
\bar{x_1}
is the sample mean of sample 1
\bar{x_2}
is the sample mean of sample 2
\mu_1
is the mean of the population from which sample 1 was drawn
\mu_2
is the mean of the population from which sample 2 was drawn
s_1^2
is the variance of sample 1
s_2^2
is the variance of sample 2
n_{1}
is the sample size for sample 1
n_{2}
is the sample size for sample 2
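The two-sample formula can be sketched the same way. Under the usual null hypothesis that both samples come from the same population, μ₁ − μ₂ = 0; all the sample values below are hypothetical:

```python
import math

def t_statistic_two_sample(x1_bar, x2_bar, s1, s2, n1, n2, mu_diff=0.0):
    """t-test statistic for the difference of two sample means.

    mu_diff is the hypothesized difference mu_1 - mu_2, which is 0 under
    the usual null hypothesis that the samples share one population.
    """
    standard_error = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return ((x1_bar - x2_bar) - mu_diff) / standard_error

# Hypothetical numbers: two samples of 20 parties each, means 55 and 48,
# sample standard deviations 10 and 9.
t = t_statistic_two_sample(x1_bar=55, x2_bar=48, s1=10, s2=9, n1=20, n2=20)
print(round(t, 3))  # about 2.327
```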
T-tests are generally used in place of Z-tests when one or more of the following conditions hold:
The sample size is less than 30 (n<30)
The statistic you use in the hypothesis test is not approximated by a normal distribution
The population standard deviation
\sigma
is unknown
If you know the population standard deviation
\sigma
and you are confident that the statistic used in your hypothesis test is normally distributed, then you can use a Z-test.
As with all test statistics, you should only use a Z-test or a t-test when your data is from a randomly and independently drawn sample.
We use test statistics together with critical values, p-values, and significance levels to determine whether or not to reject a null hypothesis.
A critical value is a value of a test statistic that marks a cutoff point. If a test statistic is more extreme than the critical value—greater than the critical value in the right tail of a distribution or less than the critical value in the left tail of a distribution—the null hypothesis is rejected.
Critical values are determined by the significance level (or alpha level) of a hypothesis test. The significance level you use is up to you, but the most commonly used significance level is 0.05 (
\alpha
= 0.05).
A significance level of 0.05 means that if the probability of observing a sample statistic at least as extreme as the one you observed is less than 0.05 (or 5%), you should reject your null hypothesis. In a one-sided hypothesis test that uses a Z-test statistic, a significance level of 0.05 is associated with a critical value of 1.645 when you conduct the test in the right tail and a value of -1.645 when you conduct the test in the left tail.
A p-value is the probability associated with your test statistic’s value. Let’s say you calculate a Z-test statistic that maps to the standard normal distribution. You find that the test statistic is equal to 1.75. For this value of a Z-test statistic, the associated p-value is 0.04 or 4%—you can find p-values using tables or statistical software.
A p-value of 0.04 means that the probability of observing a sample statistic at least as extreme as the one you found from your sample data is 4%. If you choose a significance level of 0.05 for your test, you would reject the null hypothesis, since the p-value of 0.04 is less than the significance level of 0.05.
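The p-value for Z = 1.75 can be reproduced without tables. A minimal sketch using only the standard library, where the right-tail probability of the standard normal distribution is computed from the complementary error function:

```python
import math

def p_value_right_tail(z):
    """One-sided (right-tail) p-value for a Z-test statistic, i.e. the
    standard normal survival function P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

p = p_value_right_tail(1.75)
print(round(p, 3))  # 0.04
print(p < 0.05)     # True -> reject the null hypothesis at the 0.05 level
```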
It can be easy to confuse test statistics, critical values, significance levels, and p-values. Remember, these are all different measures involved in determining whether to reject or fail to reject a null hypothesis.
Critical values and significance levels provide cut-offs for your test. The difference between a critical value and a significance level is that the critical value is a point on the distribution, and the significance level is a probability represented by an area under the distribution.
You can compare the test statistic and the p-value against the critical value and the significance level.
If the test statistic is more extreme than the critical value, you reject the null hypothesis.
If the p-value is less than the significance level, you reject the null hypothesis.
If the test statistic is less extreme than the critical value, you fail to reject the null hypothesis.
If the p-value is greater than the significance level, you fail to reject the null hypothesis.
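The decision rules above can be sketched as two equivalent checks; the example values (Z = 1.75, critical value 1.645, p-value 0.04, α = 0.05) come from the one-sided test described earlier:

```python
def decide_by_critical_value(test_statistic, critical_value):
    """Right-tail test: reject H0 when the statistic exceeds the critical value."""
    return test_statistic > critical_value

def decide_by_p_value(p_value, alpha):
    """Reject H0 when the p-value is below the significance level."""
    return p_value < alpha

# The two rules agree: Z = 1.75 exceeds the 0.05 critical value of 1.645,
# and its p-value of about 0.04 is below alpha = 0.05.
print(decide_by_critical_value(1.75, 1.645))  # True
print(decide_by_p_value(0.04, 0.05))          # True
```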
|