Tunnel magnetoresistance - Wikipedia
Magnetic effect in insulators between ferromagnets
Phenomenological description
{\displaystyle \mathrm {TMR} :={\frac {R_{\mathrm {ap} }-R_{\mathrm {p} }}{R_{\mathrm {p} }}}}
{\displaystyle R_{\mathrm {ap} }}
{\displaystyle R_{\mathrm {p} }}
{\displaystyle {\mathcal {D}}}
{\displaystyle P={\frac {{\mathcal {D}}_{\uparrow }(E_{\mathrm {F} })-{\mathcal {D}}_{\downarrow }(E_{\mathrm {F} })}{{\mathcal {D}}_{\uparrow }(E_{\mathrm {F} })+{\mathcal {D}}_{\downarrow }(E_{\mathrm {F} })}}}
{\displaystyle \mathrm {TMR} ={\frac {2P_{1}P_{2}}{1-P_{1}P_{2}}}}
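Julliere's relations above (polarization from the spin-resolved densities of states at the Fermi level, and TMR from the two electrode polarizations) can be evaluated numerically. A minimal Python sketch; the function names are illustrative and not from any TMR library:

```python
def polarization(d_up, d_down):
    """Spin polarization P from spin-resolved densities of states
    D_up, D_down at the Fermi level (dimensionless)."""
    return (d_up - d_down) / (d_up + d_down)

def tmr(p1, p2):
    """Julliere-model tunnel magnetoresistance from the two
    electrode polarizations P1 and P2."""
    return 2 * p1 * p2 / (1 - p1 * p2)

# Example: two identical electrodes with P = 0.5
print(tmr(0.5, 0.5))  # 2*0.25 / (1 - 0.25) ≈ 0.667
```

For identical electrodes the ratio grows quickly with polarization, which is why half-metallic electrodes (P near 1) are attractive for large TMR.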
Symmetry-filtering in tunnel barriers
Spin-transfer torque in magnetic tunnel junctions (MTJs)
{\displaystyle \mathbf {T} =\mathrm {Tr} [{\hat {\mathbf {T} }}{\hat {\rho }}_{\mathrm {neq} }]}
{\displaystyle {\hat {\rho }}_{\mathrm {neq} }}
{\displaystyle {\hat {\mathbf {T} }}}
{\displaystyle {\hat {\mathbf {T} }}={\frac {d{\hat {\mathbf {S} }}}{dt}}=-{\frac {i}{\hbar }}\left[{\frac {\hbar }{2}}{\boldsymbol {\sigma }},{\hat {H}}\right]}
{\displaystyle {\hat {H}}={\hat {H}}_{0}-\Delta ({\boldsymbol {\sigma }}\cdot \mathbf {m} )/2}
{\displaystyle \mathbf {m} }
{\displaystyle \mathbf {p} ,\mathbf {q} }
{\displaystyle ({\boldsymbol {\sigma }}\cdot \mathbf {p} )({\boldsymbol {\sigma }}\cdot \mathbf {q} )=\mathbf {p} \cdot \mathbf {q} +i(\mathbf {p} \times \mathbf {q} )\cdot {\boldsymbol {\sigma }}}
{\displaystyle ({\boldsymbol {\sigma }}\cdot \mathbf {p} ){\boldsymbol {\sigma }}=\mathbf {p} +i{\boldsymbol {\sigma }}\times \mathbf {p} }
{\displaystyle {\boldsymbol {\sigma }}({\boldsymbol {\sigma }}\cdot \mathbf {q} )=\mathbf {q} +i\mathbf {q} \times {\boldsymbol {\sigma }}}
{\displaystyle {\hat {\mathbf {T} }}}
{\displaystyle \Delta ,\mathbf {m} }
{\displaystyle {\boldsymbol {\sigma }}=(\sigma _{x},\sigma _{y},\sigma _{z})}
{\displaystyle T_{\parallel }={\sqrt {T_{x}^{2}+T_{z}^{2}}}}
{\displaystyle T_{\perp }=T_{y}}
{\displaystyle T_{\perp }\equiv 0}
{\displaystyle T_{\parallel }}
{\displaystyle \theta }
Discrepancy between theory and experiment
|
Defining the Integral as a Limit
{\displaystyle {\displaystyle \int _{0}^{1}x^{2}\,dx}}
{\displaystyle x=1,}
{\displaystyle x}
{\displaystyle (y=0)}
{\displaystyle y=x^{2}.}
{\displaystyle 0}
{\displaystyle 1}
{\displaystyle 4}
{\displaystyle \Delta x}
{\displaystyle 1/4.}
{\displaystyle f(x)}
{\displaystyle [0,1/4],}
{\displaystyle f(0)=0.}
{\displaystyle f(0)\cdot \Delta x\,=\,0\cdot \left({\displaystyle {\frac {1}{4}}}\right)\,=\,0.}
{\displaystyle [1/4,1/2].}
{\displaystyle 1/4}
{\displaystyle {\displaystyle f\left({\frac {1}{4}}\right)\cdot \Delta x\,=\,{\frac {1}{16}}\cdot {\frac {1}{4}}\,=\,{\frac {1}{64}}}.}
{\displaystyle [1/2,3/4],}
{\displaystyle 1/2,}
{\displaystyle {\displaystyle f\left({\frac {1}{2}}\right)\cdot \Delta x\,=\,{\frac {1}{4}}\cdot {\frac {1}{4}}\,=\,{\frac {1}{16}}.}}
{\displaystyle [3/4,1].}
{\displaystyle 3/4}
{\displaystyle {\displaystyle f\left({\frac {3}{4}}\right)\cdot \Delta x\,=\,{\frac {9}{16}}\cdot {\frac {1}{4}}\,=\,{\frac {9}{64}}.}}
{\displaystyle (\Sigma )}
{\displaystyle {\begin{array}{rcl}S&=&{\displaystyle \sum _{i=1}^{4}f\left(x_{i}\right)\cdot \Delta x}\\\\&=&{\displaystyle 0+{\frac {1}{64}}+{\frac {1}{16}}+{\frac {9}{64}}}\\\\&=&{\displaystyle {\frac {14}{64}}}\\\\&=&{\displaystyle {\frac {7}{32}}.}\end{array}}}
{\displaystyle 1/4,\,1/2,\,3/4}
{\displaystyle 1}
{\displaystyle {\begin{array}{rcl}S&=&{\displaystyle \sum _{i=1}^{4}f\left(x_{i}\right)\cdot \Delta x}\\\\&=&{\displaystyle f\left({\frac {1}{4}}\right)\cdot \Delta x+{\displaystyle f\left({\frac {1}{2}}\right)\cdot \Delta x+}{\displaystyle f\left({\frac {3}{4}}\right)\cdot \Delta x+}{\displaystyle f\left(1\right)\cdot \Delta x}}\\\\&=&{\displaystyle {\frac {1}{16}}\cdot {\frac {1}{4}}+{\frac {1}{4}}\cdot {\frac {1}{4}}+{\frac {9}{16}}\cdot {\frac {1}{4}}+1\cdot {\frac {1}{4}}}\\\\&=&{\displaystyle {\frac {1}{64}}+{\frac {1}{16}}+{\frac {9}{64}}+{\frac {1}{4}}}\\\\&=&{\displaystyle {\frac {15}{32}}}.\end{array}}}
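The two sums above (left endpoints giving 7/32, right endpoints giving 15/32, with n = 4) can be reproduced exactly with Python's `fractions` module. A small sketch; the helper name is illustrative:

```python
from fractions import Fraction

def riemann_sum(f, a, b, n, rule="left"):
    """Riemann sum of f on [a, b] with n equal subintervals,
    sampling at the left or right endpoint of each."""
    dx = Fraction(b - a, n)
    offset = 0 if rule == "left" else 1
    return sum(f(a + (i + offset) * dx) * dx for i in range(n))

f = lambda x: x * x
print(riemann_sum(f, 0, 1, 4, "left"))   # 7/32
print(riemann_sum(f, 0, 1, 4, "right"))  # 15/32
```

Using exact rational arithmetic avoids floating-point noise, so the sums match the hand computation digit for digit.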
{\displaystyle f(x)=x^{3}-x}
{\displaystyle -1}
{\displaystyle 3}
{\displaystyle n=4}
{\displaystyle x}
{\displaystyle -1}
{\displaystyle 3}
{\displaystyle 3-(-1)=4.}
{\displaystyle \Delta x\,=\,{\displaystyle {\frac {b-a}{n}}\,=\,{\frac {3-(-1)}{4}}\,=\,{\frac {4}{4}}\,=\,1.}}
{\displaystyle [-1,0],\,[0,1],\,[1,2]}
{\displaystyle [2,3].}
{\displaystyle {\begin{array}{rcl}S&=&{\displaystyle \sum _{i=1}^{4}f(x_{i})\cdot \Delta x}\\\\&=&f(-1)\cdot 1+f(0)\cdot 1+f(1)\cdot 1+f(2)\cdot 1\\\\&=&0+0+0+6\\\\&=&6.\end{array}}}
{\displaystyle x}
{\displaystyle x}
{\displaystyle f(x)=x^{3}-x}
{\displaystyle -4}
{\displaystyle 4}
{\displaystyle n=4}
{\displaystyle x}
{\displaystyle -4}
{\displaystyle 4,}
{\displaystyle \Delta x\,=\,{\displaystyle {\frac {b-a}{n}}\,=\,{\frac {4-(-4)}{4}}\,=\,{\frac {8}{4}}\,=\,2.}}
{\displaystyle [-4,-2],\,[-2,0],\,[0,2]}
{\displaystyle [2,4].}
{\displaystyle -3,\,-1,\,1}
{\displaystyle 3}
{\displaystyle {\begin{array}{rcl}S&=&{\displaystyle \sum _{i=1}^{4}f(x_{i})\cdot \Delta x}\\\\&=&f(-3)\cdot 2+f(-1)\cdot 2+f(1)\cdot 2+f(3)\cdot 2\\\\&=&(-24)\cdot 2+0\cdot 2+0\cdot 2+24\cdot 2\\\\&=&0.\end{array}}}
{\displaystyle \Delta x,}
{\displaystyle f(x)}
{\displaystyle [a,b],}
{\displaystyle {\displaystyle \int _{a}^{b}f(x)\,dx},}
{\displaystyle {\displaystyle \int _{a}^{b}f(x)\,dx\,=\,\lim _{n\rightarrow \infty }\sum _{i=1}^{n}f(x_{i})\cdot \Delta x_{i}.}}
{\displaystyle \Delta x_{i}=\Delta x={\displaystyle {\frac {b-a}{n}},}}
{\displaystyle f(x)=x^{2}}
{\displaystyle [0,3]}
{\displaystyle a=0,\,b=3}
{\displaystyle n}
{\displaystyle \Delta x\,=\,{\displaystyle {\frac {b-a}{n}}\,=\,{\frac {3-0}{n}}\,=\,{\frac {3}{n}}}.}
For a given {\displaystyle n}
{\displaystyle 0,}
{\displaystyle \Delta x=3/n.}
{\displaystyle [0,3/n].}
{\displaystyle 3/n}
{\displaystyle \Delta x=3/n.}
{\displaystyle {\displaystyle \left[{\frac {3}{n}},{\frac {3}{n}}+{\frac {3}{n}}\right]\,=\,\left[1\cdot {\frac {3}{n}},2\cdot {\frac {3}{n}}\right].}}
{\displaystyle I_{1},}
{\displaystyle I_{1}={\displaystyle \left[0\cdot {\frac {3}{n}},1\cdot {\frac {3}{n}}\right]}.}
{\displaystyle I_{2}={\displaystyle \left[1\cdot {\frac {3}{n}},2\cdot {\frac {3}{n}}\right]}.}
{\displaystyle i=1,2,\ldots ,n,}
{\displaystyle I_{i}={\displaystyle \left[(i-1)\cdot {\frac {3}{n}},i\cdot {\frac {3}{n}}\right]}.}
{\displaystyle f\left({\displaystyle 1\cdot {\frac {3}{n}}}\right)}
{\displaystyle I_{1},}
{\displaystyle f\left({\displaystyle 2\cdot {\frac {3}{n}}}\right)}
{\displaystyle I_{2},}
{\displaystyle f\left({\displaystyle i\cdot {\frac {3}{n}}}\right)}
{\displaystyle I_{i}.}
{\displaystyle n,}
{\displaystyle {\displaystyle \sum _{i=1}^{n}f(x_{i})\cdot \Delta x_{i}\,=\,\sum _{i=1}^{n}f\left({\frac {3i}{n}}\right)\cdot \Delta x\,=\,\sum _{i=1}^{n}{\frac {9i^{2}}{n^{2}}}\cdot {\frac {3}{n}}\,=\,\sum _{i=1}^{n}{\frac {27i^{2}}{n^{3}}}.}}
{\displaystyle {\displaystyle \int _{a}^{b}f(x)\,dx\,=\,\lim _{n\rightarrow \infty }\sum _{i=1}^{n}f(x_{i})\cdot \Delta x_{i}\,=\,\lim _{n\rightarrow \infty }\sum _{i=1}^{n}{\frac {27i^{2}}{n^{3}}}.}}
For the natural numbers, the sum of the first {\displaystyle n} integers, the sum of the first {\displaystyle n} squares, and the sum of the first {\displaystyle n} cubes are given by the closed forms:
{\displaystyle {\displaystyle \sum _{i=1}^{n}i\,=\,{\frac {n(n+1)}{2}};\qquad \sum _{i=1}^{n}i^{2}\,=\,{\frac {n(n+1)(2n+1)}{6}};\qquad \sum _{i=1}^{n}i^{3}\,=\,{\frac {n^{2}(n+1)^{2}}{4}}.}}
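These three closed forms are easy to sanity-check numerically; a quick Python check (the choice n = 50 is arbitrary):

```python
# Check the three closed-form power-sum formulas for one value of n.
n = 50
assert sum(range(1, n + 1)) == n * (n + 1) // 2
assert sum(i**2 for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(i**3 for i in range(1, n + 1)) == n**2 * (n + 1)**2 // 4
print("all three formulas check out for n =", n)
```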
{\displaystyle ca_{1}+ca_{2}=c(a_{1}+a_{2})}
{\displaystyle (a_{1}+b_{1})+(a_{2}+b_{2})=(a_{1}+a_{2})+(b_{1}+b_{2})}
{\displaystyle {\displaystyle \sum _{i=1}^{n}ca_{i}\,=\,ca_{1}+ca_{2}+\cdots +ca_{n}\,=\,c(a_{1}+\cdots +a_{n})\,=\,c\sum _{i=1}^{n}a_{i},}\qquad \qquad (\dagger )}
{\displaystyle {\displaystyle \sum _{i=1}^{n}(a_{i}+b_{i})\,=\,a_{1}+b_{1}+a_{2}+b_{2}\cdots a_{n}+b_{n}\,=\,a_{1}+a_{2}+\cdots +a_{n}+b_{1}+b_{2}+\cdots +b_{n}\,=\,\sum _{i=1}^{n}a_{i}+\sum _{i=1}^{n}b_{i}.\qquad \qquad (\dagger \dagger )}}
{\displaystyle n\rightarrow \infty }
{\displaystyle {\displaystyle \sum _{i=1}^{n}{\frac {27i^{2}}{n^{3}}},}}
{\displaystyle 27}
{\displaystyle n^{3}}
{\displaystyle c}
{\displaystyle (\dagger ).}
{\displaystyle {\displaystyle \sum _{i=1}^{n}{\frac {27i^{2}}{n^{3}}}\,=\,{\frac {27}{n^{3}}}\sum _{i=1}^{n}i^{2}\,=\,{\frac {27}{n^{3}}}\cdot {\frac {n(n+1)(2n+1)}{6}}\,=\,{\frac {9n(n+1)(2n+1)}{2n^{3}}},}}
{\displaystyle n}
{\displaystyle 18n^{3}/2n^{3}}
{\displaystyle n,}
{\displaystyle n\rightarrow \infty }
{\displaystyle 9.}
{\displaystyle {\displaystyle \int _{0}^{3}x^{2}\,dx=9.}}
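As a numerical check of this limit, a short Python sketch computing the right-endpoint sums Σ 27i²/n³ from the derivation above for growing n (the helper name is illustrative):

```python
def right_sum_x2(n):
    """Right-endpoint Riemann sum of x^2 on [0, 3]:
    sum over i = 1..n of (3i/n)^2 * (3/n) = 27 i^2 / n^3."""
    return sum(27 * i**2 / n**3 for i in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, right_sum_x2(n))
```

The sums decrease toward 9 as n grows, since right endpoints overestimate an increasing function.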
{\displaystyle f(x_{i}).}
{\displaystyle {\displaystyle \left[0\cdot {\frac {3}{n}},1\cdot {\frac {3}{n}}\right],}}
{\displaystyle 0.}
{\displaystyle {\displaystyle \left[1\cdot {\frac {3}{n}},2\cdot {\frac {3}{n}}\right],}}
{\displaystyle 3/n.}
{\displaystyle I_{i}={\displaystyle \left[(i-1)\cdot {\frac {3}{n}},i\cdot {\frac {3}{n}}\right],}}
{\displaystyle 3(i-1)/n.}
{\displaystyle {\begin{array}{rcl}{\displaystyle \int _{0}^{3}x^{2}\,dx}&=&{\displaystyle \lim _{n\rightarrow \infty }\,\sum _{i=1}^{n}f(x_{i})\cdot \Delta x_{i}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\sum _{i=1}^{n}\left({\frac {3(i-1)}{n}}\right)^{2}\cdot {\frac {3}{n}}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\sum _{i=1}^{n}{\frac {9(i-1)^{2}}{n^{2}}}\cdot {\frac {3}{n}}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\sum _{i=1}^{n}{\frac {27(i-1)^{2}}{n^{3}}}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,{\frac {27}{n^{3}}}\sum _{i=1}^{n}(i^{2}-2i+1).}\end{array}}}
{\displaystyle {\begin{array}{rcl}{\displaystyle \lim _{n\rightarrow \infty }\,{\frac {27}{n^{3}}}\sum _{i=1}^{n}(i^{2}-2i+1)}&=&{\displaystyle \lim _{n\rightarrow \infty }\,{\frac {27}{n^{3}}}\left({\frac {n(n+1)(2n+1)}{6}}-n(n+1)+n\right)}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\left({\frac {9n(n+1)(2n+1)}{2n^{3}}}-{\frac {27n(n+1)}{n^{3}}}+{\frac {27}{n^{2}}}\right)}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\left({\frac {18n^{3}}{2n^{3}}}-{\frac {27n^{2}}{n^{3}}}+{\frac {27}{n^{2}}}\right)\qquad \qquad ({\textrm {for~large~}}n)}\\\\&=&9-0+0\\\\&=&9.\end{array}}}
{\displaystyle \Delta x}
{\displaystyle \int _{0}^{1}{\sqrt {x}}\,dx?}
{\displaystyle \Delta x=(b-a)/n=1/n,}
{\displaystyle x_{0}=0,}
{\displaystyle x_{i}\,=\,{\displaystyle {\frac {i^{2}}{n^{2}}}.}}
{\displaystyle x_{1}=1/n^{2}}
{\displaystyle x_{2}=4/n^{2}.}
{\displaystyle x_{0}=0,}
{\displaystyle {\displaystyle \Delta x_{i}\,=\,x_{i}-x_{i-1}\,=\,{\frac {i^{2}}{n^{2}}}-{\frac {(i-1)^{2}}{n^{2}}}\,=\,{\frac {2i-1}{n^{2}}}.}}
{\displaystyle \Delta x_{i}}
{\displaystyle f(x_{i})}
{\displaystyle f(x_{i})={\displaystyle {\sqrt {\frac {i^{2}}{n^{2}}}}={\frac {i}{n}}.}}
{\displaystyle {\begin{array}{rcl}{\displaystyle \int _{0}^{1}{\sqrt {x}}\,dx}&=&{\displaystyle \lim _{n\rightarrow \infty }\,\sum _{i=1}^{n}f(x_{i})\cdot \Delta x_{i}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\sum _{i=1}^{n}{\frac {i}{n}}\cdot {\frac {2i-1}{n^{2}}}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\sum _{i=1}^{n}{\frac {2i^{2}-i}{n^{3}}}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }{\frac {1}{n^{3}}}\left(2\cdot {\frac {n(n+1)(2n+1)}{6}}-{\frac {n(n+1)}{2}}\right)}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\left({\frac {2n(n+1)(2n+1)}{6n^{3}}}-{\frac {n(n+1)}{2n^{3}}}\right)}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\left({\frac {4n^{3}}{6n^{3}}}-{\frac {n^{2}}{2n^{3}}}\right)\qquad \qquad ({\textrm {for~large~}}n)}\\\\&=&{\displaystyle {\frac {2}{3}}-0}\\\\&=&{\displaystyle {\frac {2}{3}}.}\end{array}}}
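The same nonuniform partition x_i = i²/n² can be checked numerically; a short Python sketch of the sum Σ (i/n)·(2i−1)/n² for growing n:

```python
def sqrt_sum(n):
    """Riemann sum for the integral of sqrt(x) on [0, 1] using the
    nonuniform partition x_i = i^2/n^2, so that dx_i = (2i-1)/n^2
    and f(x_i) = i/n."""
    return sum((i / n) * (2 * i - 1) / n**2 for i in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, sqrt_sum(n))
```

The values approach 2/3 from above, matching the limit computed in closed form.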
{\displaystyle \Delta x_{i}}
|
Independent Researcher, Karlsfeld, Germany.
Abstract: In this paper, a very simple novel model is presented concerning the unified field theory ("theory of everything"). In the scope of this novel theory, it is assumed that matter, space and time are quantized. Space is assumed to be subdivided into cubic elementary cells (space quanta), with a Delta potential positioned at each of their eight corners. The Delta potentials are thus equidistantly arranged, forming a lattice similar to a crystal lattice in solid-state physics. The novel theory is analogous to the Kronig-Penney model well known in solid-state physics: a crystal lattice comprises Delta potentials arranged equidistantly to one another, so the lattice space can be considered as quantized by an array of equally spaced Delta potentials, or equivalently as divided into cubic elementary cells (space quanta). But instead of electrons, material quanta are inserted into the cubic elementary cells or space quanta. The material quanta are therefore not freely vibrating (unbound state), but vibrate in a bound state with discrete energy levels separated by an energy gap, owing to the presence of the array of Delta potentials. In the frame of this novel theory the Schrödinger equation for the Kronig-Penney model is not solved by differentiation; instead, the Schrödinger equation is integrated, yielding a formula whose discussion reveals the existence of an energy gap. This energy gap determines whether the material quantum occurs as a light quantum (photon) or as a mass quantum.
Keywords: String Theory, Loop Quantum Gravity Theory, Unified Field Theory
{|\psi \left(x=na\right)|}^{2}=\left(E-{E}_{kin}\right)/z{V}_{0}
{V}_{0}\delta \left(x=na\right)
{|\psi \left(x\right)|}^{2}
-\frac{{\hslash }^{2}}{2m}\Delta \psi +{V}_{0}\underset{n=-\infty }{\overset{\infty }{\sum }}\delta \left(x+na\right)\psi =E\psi
{\psi }^{*}
-\frac{{\hslash }^{2}}{2m}\int {\psi }^{*}\Delta \psi \text{d}x+{V}_{0}\int \underset{n=-\infty }{\overset{\infty }{\sum }}\delta \left(x+na\right){\psi }^{*}\psi \text{d}x=\int E{\psi }^{*}\psi \text{d}x
{\psi }^{*}\psi \left(a\right)={|\psi \left(a\right)|}^{2}
{\int }_{-\infty }^{\infty }{\psi }^{*}\psi \text{d}x=1
f\left(a\right)={\int }_{-\infty }^{\infty }\delta \left(x-a\right)f\left(x\right)\text{d}x
\psi =\psi \left(x\right)
{E}_{kin}+{V}_{0}\Sigma {|\psi \left(x=na\right)|}^{2}=E
\Sigma {|\psi \left(x=na\right)|}^{2}=\left(E-{E}_{kin}\right)/{V}_{0}
\Sigma {|\psi \left(na\right)|}^{2}=z
{|\psi \left(x=na\right)|}^{2}=\left(E-{E}_{kin}\right)/z{V}_{0}
{|\psi \left(x=na\right)|}^{2}
\delta \left(x=na\right)
{|\psi \left(x=na\right)|}^{2}
\delta \left(x=na\right)
{|\psi \left(na\right)|}^{2}
{|\psi \left(na,{E}_{kin}\right)|}^{2}=\left(E-{E}_{kin}\right)/z{V}_{0}=E/z{V}_{0}-{E}_{kin}/z{V}_{0}
m=-\text{1}/z{V}_{0}
b=E/z{V}_{0}
0\le {E}_{kin}\le E={E}_{0}
{|\psi \left(na\right)|}^{2}
{|\psi \left(na\right)|}^{2}
\delta \left(x=na\right)
{|\psi \left(na\right)|}^{2}
\delta \left(x=na\right)
{|\psi \left(x=na,{E}_{kin}\right)|}^{2}
{|\psi \left(x=na\right)|}^{2}
\int {\psi }^{*}\psi \text{d}x=1
{|\psi \left(0<x<na\right)|}^{2}
E<{E}_{kin}<0
{|\psi \left(x=na\right)|}^{2}
{|\psi \left(x=na\right)|}^{2}
{|\psi \left(0<x<na\right)|}^{2}
{|\psi \left(x=na\right)|}^{2}
{|\psi \left(x=na\right)|}^{2}
{|\psi \left(0<x<na\right)|}^{2}
{|\psi \left(x=na\right)|}^{2}
x\to {|\psi \left(x\right)|}^{2}
{|\psi \left(x=na\right)|}^{2}
{|\psi \left(0<x<na\right)|}^{2}
{|\psi \left(x=na\right)|}^{2}
{|\psi \left(0<x<na\right)|}^{2}
{|\psi \left(x=na\right)|}^{2}
\int {\psi }^{*}\psi \text{d}x=1
x\to {|\psi \left(x\right)|}^{2}
{|\psi \left(x\right)|}^{2}
{|\psi \left(x,{E}_{kin}\right)|}^{2}
{|\psi \left(x=na,{E}_{kin}\right)|}^{2}
\delta \left(x=na\right)
{|\psi \left(0<x<na,{E}_{kin}\right)|}^{2}
\delta \left(x=na\right)
{|\psi \left(x=na,{E}_{kin}\right)|}^{2}
{|\psi \left(0<x<na,{E}_{kin}\right)|}^{2}
{|\psi \left(x=na,{E}_{kin}\right)|}^{2}
{|\psi \left(0<x<na,{E}_{kin}\right)|}^{2}
{|\psi \left(x=na,{E}_{kin}\right)|}^{2}
{|\psi \left(0<x<na,{E}_{kin}\right)|}^{2}
{|\psi \left(0<x<na,{E}_{kin}\right)|}^{2}
{|\psi \left(x=na,{E}_{kin}\right)|}^{2}
x\to {|\psi \left(x\right)|}^{2}
{|\psi \left(x=na,{E}_{kin}\right)|}^{2}
{|\psi \left(0<x<na,{E}_{kin}\right)|}^{2}
{|\psi \left(x=na,{E}_{kin}\right)|}^{2}
{|\psi \left(0<x<na,{E}_{kin}\right)|}^{2}
{|\psi \left(x=na,{E}_{kin}\right)|}^{2}
{|\psi \left(0<x<na,{E}_{kin}\right)|}^{2}
{|\psi \left(x=a\right)|}^{2}
{|\psi \left(x=a\right)|}^{2}
{|\psi \left(x\right)|}^{2}
{f}_{b}=|{f}_{2}-{f}_{1}|
Cite this paper: Wochnowski, C. (2019) A Very Simple Model Concerning the Unified Field Theory Basing on the Kronig-Penney-Model. Journal of High Energy Physics, Gravitation and Cosmology, 5, 941-952. doi: 10.4236/jhepgc.2019.53050.
|
Hom functor — Wikipedia Republished // WIKI 2
Functor mapping hom objects to an underlying category
In mathematics, specifically in category theory, hom-sets (i.e. sets of morphisms between objects) give rise to important functors to the category of sets. These functors are called hom-functors and have numerous applications in category theory and other branches of mathematics.
Let C be a locally small category (i.e. a category for which hom-classes are actually sets and not proper classes).
For all objects A and B in C we define two functors to the category of sets as follows:
Hom(A, –) : C → Set
Hom(–, B) : C → Set[1]
This is a covariant functor given by:
Hom(A, –) maps each object X in C to the set of morphisms, Hom(A, X)
Hom(A, –) maps each morphism f : X → Y to the function
Hom(A, f) : Hom(A, X) → Hom(A, Y) given by
{\displaystyle g\mapsto f\circ g}
for each g in Hom(A, X).
This is a contravariant functor given by:
Hom(–, B) maps each object X in C to the set of morphisms, Hom(X, B)
Hom(–, B) maps each morphism h : X → Y to the function
Hom(h, B) : Hom(Y, B) → Hom(X, B) given by
{\displaystyle g\mapsto g\circ h}
for each g in Hom(Y, B).
The functor Hom(–, B) is also called the functor of points of the object B.
Note that fixing the first argument of Hom naturally gives rise to a covariant functor and fixing the second argument naturally gives a contravariant functor. This is an artifact of the way in which one must compose the morphisms.
The pair of functors Hom(A, –) and Hom(–, B) are related in a natural manner. For any pair of morphisms f : B → B′ and h : A′ → A the following diagram commutes:
Both paths send g : A → B to f ∘ g ∘ h : A′ → B′.
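The commuting square can be illustrated concretely in Set; a small Python sketch with finite functions encoded as dicts (the specific sets and maps are made up for illustration):

```python
# Objects are finite sets; morphisms are dicts mapping each element
# of the domain to an element of the codomain.
def compose(g, f):
    """Composite g after f of finite functions encoded as dicts."""
    return {x: g[f[x]] for x in f}

h = {0: "a", 1: "b"}    # h : A' -> A   with A' = {0, 1}, A = {"a", "b"}
g = {"a": 10, "b": 20}  # g : A  -> B   with B = {10, 20}
f = {10: "x", 20: "y"}  # f : B  -> B'  with B' = {"x", "y"}

# Hom(h, B) sends g to g . h, then Hom(A', f) sends that to f . (g . h).
path1 = compose(f, compose(g, h))
# Hom(A, f) sends g to f . g, then Hom(h, B') sends that to (f . g) . h.
path2 = compose(compose(f, g), h)

assert path1 == path2  # both paths send g to f . g . h
print(path1)
```

Associativity of composition is exactly what makes the two paths agree, which is the content of the commuting diagram.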
The commutativity of the above diagram implies that Hom(–, –) is a bifunctor from C × C to Set which is contravariant in the first argument and covariant in the second. Equivalently, we may say that Hom(–, –) is a bifunctor
Hom(–, –) : Cop × C → Set
where Cop is the opposite category to C. The notation HomC(–, –) is sometimes used for Hom(–, –) in order to emphasize the category forming the domain.
Main article: Yoneda lemma
Referring to the above commutative diagram, one observes that every morphism
h : A′ → A
gives rise to a natural transformation
Hom(h, –) : Hom(A, –) → Hom(A′, –)
f : B → B′
gives rise to a natural transformation
Hom(–, f) : Hom(–, B) → Hom(–, B′)
Yoneda's lemma implies that every natural transformation between Hom functors is of this form. In other words, the Hom functors give rise to a full and faithful embedding of the category C into the functor category SetCop (covariant or contravariant depending on which Hom functor is used).
Internal Hom functor
Some categories may possess a functor that behaves like a Hom functor, but takes values in the category C itself, rather than Set. Such a functor is referred to as the internal Hom functor, and is often written as
{\displaystyle \left[-\ -\right]:C^{\text{op}}\times C\to C}
to emphasize its product-like nature, or as
{\displaystyle \mathop {\Rightarrow } :C^{\text{op}}\times C\to C}
to emphasize its functorial nature, or sometimes merely in lower-case:
{\displaystyle \operatorname {hom} (-,-):C^{\text{op}}\times C\to C.}
For examples, see Category of relations.
Categories that possess an internal Hom functor are referred to as closed categories. One has that
{\displaystyle \operatorname {Hom} (I,\operatorname {hom} (-,-))\simeq \operatorname {Hom} (-,-)}
where I is the unit object of the closed category. For the case of a closed monoidal category, this extends to the notion of currying, namely, that
{\displaystyle \operatorname {Hom} (X,Y\Rightarrow Z)\simeq \operatorname {Hom} (X\otimes Y,Z)}
{\displaystyle \otimes }
is a bifunctor, the internal product functor defining a monoidal category. The isomorphism is natural in both X and Z. In other words, in a closed monoidal category, the internal Hom functor is an adjoint functor to the internal product functor. The object
{\displaystyle Y\Rightarrow Z}
is called the internal Hom. When
{\displaystyle \otimes }
is the Cartesian product
{\displaystyle \times }
, the object
{\displaystyle Y\Rightarrow Z}
is called the exponential object, and is often written as
{\displaystyle Z^{Y}}
Internal Homs, when chained together, form a language, called the internal language of the category. The most famous of these are simply typed lambda calculus, which is the internal language of Cartesian closed categories, and the linear type system, which is the internal language of closed symmetric monoidal categories.
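In a Cartesian closed setting, the adjunction Hom(X ⊗ Y, Z) ≅ Hom(X, Y ⇒ Z) with ⊗ the Cartesian product is ordinary currying. A minimal sketch using Python functions as a stand-in for the exponential object:

```python
def curry(f):
    """Hom(X x Y, Z) -> Hom(X, Z^Y): turn f(x, y) into f(x)(y)."""
    return lambda x: lambda y: f(x, y)

def uncurry(g):
    """Hom(X, Z^Y) -> Hom(X x Y, Z): the inverse direction."""
    return lambda x, y: g(x)(y)

add = lambda x, y: x + y
assert curry(add)(2)(3) == add(2, 3) == 5
assert uncurry(curry(add))(2, 3) == 5
```

That `curry` and `uncurry` are mutually inverse (up to extensional equality) is the naturality of the isomorphism in the adjunction.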
Note that a functor of the form
Hom(–, A) : Cop → Set
is a presheaf; likewise, Hom(A, –) is a copresheaf.
A functor F : C → Set that is naturally isomorphic to Hom(A, –) for some A in C is called a representable functor (or representable copresheaf); likewise, a contravariant functor equivalent to Hom(–, A) might be called corepresentable.
Note that Hom(–, –) : Cop × C → Set is a profunctor, and, specifically, it is the identity profunctor
{\displaystyle \operatorname {id} _{C}\colon C\nrightarrow C}
The internal hom functor preserves limits; that is,
{\displaystyle \operatorname {hom} (X,-)\colon C\to C}
sends limits to limits, while
{\displaystyle \operatorname {hom} (-,X)\colon C^{\text{op}}\to C}
sends limits in
{\displaystyle C^{\text{op}}}
, that is colimits in
{\displaystyle C}
, into limits. In a certain sense, this can be taken as the definition of a limit or colimit.
If A is an abelian category and A is an object of A, then HomA(A, –) is a covariant left-exact functor from A to the category Ab of abelian groups. It is exact if and only if A is projective.[2]
Let R be a ring and M a left R-module. The functor HomR(M, –) : R-Mod → Ab is right adjoint to the tensor product functor – ⊗Z M : Ab → R-Mod.
Ext functor
Representable functor
^ Also commonly denoted Cop → Set, where Cop denotes the opposite category, and this encodes the arrow-reversing behaviour of Hom(–, B).
^ Jacobson (2009), p. 149, Prop. 3.9.
Mac Lane, Saunders (September 1998). Categories for the Working Mathematician (Second ed.). Springer. ISBN 0-387-98403-8.
Goldblatt, Robert (2006) [1984]. Topoi, the Categorial Analysis of Logic (Revised ed.). Dover Publications. ISBN 978-0-486-45026-1.
Jacobson, Nathan (2009). Basic algebra. Vol. 2 (2nd ed.). Dover. ISBN 978-0-486-47187-7.
Hom functor in nLab
Internal Hom in nLab
|
diffalg(deprecated)/print_ranking - Maple Help
Home : Support : Online Help : diffalg(deprecated)/print_ranking
print_ranking
print a message describing the ranking of a differential polynomial ring.
print_ranking (R)
The print_ranking command prints a message describing the ranking defined on a differential polynomial ring R set up with the differential_ring command.
The ranking of a differential polynomial ring R is a total ordering over the set of all the derivatives of the differential indeterminates of R that is compatible with derivation (see ranking)
The command with(diffalg,print_ranking) allows the use of the abbreviated form of this command.
\mathrm{with}\left(\mathrm{diffalg}\right):
p≔u[x,y]+v[x,x];
q≔v[x]+v[y,y]
\textcolor[rgb]{0,0,1}{p}\textcolor[rgb]{0,0,1}{≔}{\textcolor[rgb]{0,0,1}{u}}_{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{v}}_{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{x}}
\textcolor[rgb]{0,0,1}{q}\textcolor[rgb]{0,0,1}{≔}{\textcolor[rgb]{0,0,1}{v}}_{\textcolor[rgb]{0,0,1}{x}}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{v}}_{\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}}
Q≔\mathrm{differential_ring}\left(\mathrm{derivations}=[x,y],\mathrm{ranking}=[\mathrm{grlexA}[u,v]]\right):
\mathrm{print_ranking}\left(Q\right)
The derivatives of [u, v] are ordered by grlexA:
\mathrm{leader}\left(p,Q\right),\mathrm{leader}\left(q,Q\right)
{\textcolor[rgb]{0,0,1}{u}}_{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{v}}_{\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}}
R≔\mathrm{differential_ring}\left(\mathrm{derivations}=[x,y],\mathrm{ranking}=[\mathrm{grlexB}[u,v]]\right)
\textcolor[rgb]{0,0,1}{R}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{PDE_ring}}
\mathrm{print_ranking}\left(R\right)
The derivatives of [u, v] are ordered by grlexB:
|tau| = |phi| and tau > phi w.r.t. [x, y] or
\mathrm{leader}\left(p,R\right),\mathrm{leader}\left(q,R\right)
{\textcolor[rgb]{0,0,1}{v}}_{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{x}}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{v}}_{\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{y}}
S≔\mathrm{differential_ring}\left(\mathrm{derivations}=[x,y],\mathrm{ranking}=[\mathrm{lex}[u,v]]\right)
\textcolor[rgb]{0,0,1}{S}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{PDE_ring}}
\mathrm{print_ranking}\left(S\right)
The derivatives of [u, v] are ordered by lex:
tau > phi for the lex. order [x, y] or
\mathrm{leader}\left(p,S\right),\mathrm{leader}\left(q,S\right)
{\textcolor[rgb]{0,0,1}{v}}_{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{x}}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{v}}_{\textcolor[rgb]{0,0,1}{x}}
T≔\mathrm{differential_ring}\left(\mathrm{derivations}=[x,y],\mathrm{indeterminates}={u,v},\mathrm{leaders_of}\left([p,q]\right)=[u[x,y],v[x]]\right)
\textcolor[rgb]{0,0,1}{T}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{PDE_ring}}
\mathrm{print_ranking}\left(T\right)
|
Question: a) Find the equation of the line passing through (3, -2) and (5, 6). ...
{\displaystyle (x_{1},y_{1})}
{\displaystyle (x_{2},y_{2})}
{\displaystyle {\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}}
{\displaystyle y-y_{1}=m(x-x_{1})}
{\displaystyle (x_{1},y_{1})}
{\displaystyle {\frac {-1}{m}}}
{\displaystyle {\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}}
{\displaystyle {\frac {6-(-2)}{5-3}}={\frac {8}{2}}=4}
{\displaystyle y-6=4(x-5)}
{\displaystyle y+2=4(x-3)}
{\displaystyle {\frac {-1}{4}}}
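The slope and perpendicular-slope computations above can be checked with a few lines of Python:

```python
x1, y1 = 3, -2
x2, y2 = 5, 6

m = (y2 - y1) / (x2 - x1)  # slope through the two points
perp = -1 / m              # slope of any perpendicular line

assert m == 4.0
assert perp == -0.25
# point-slope form through (3, -2): y + 2 = 4(x - 3)
print(m, perp)
```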
|
{\displaystyle a_{n}={\frac {\ln n}{n}}}
{\displaystyle f}
{\displaystyle g}
be differentiable functions on the open interval
{\displaystyle (a,\infty )}
for some value
{\displaystyle a,}
{\displaystyle g'(x)\neq 0}
{\displaystyle (a,\infty )}
{\displaystyle \lim _{x\rightarrow \infty }{\frac {f(x)}{g(x)}}}
returns either
{\displaystyle {\frac {0}{0}}}
{\displaystyle {\frac {\infty }{\infty }}.}
{\displaystyle \lim _{x\rightarrow \infty }{\frac {f(x)}{g(x)}}=\lim _{x\rightarrow \infty }{\frac {f'(x)}{g'(x)}}.}
{\displaystyle \lim _{n\rightarrow \infty }\ln n=\infty }
{\displaystyle \lim _{n\rightarrow \infty }n=\infty .}
Therefore, the limit has the form
{\displaystyle {\frac {\infty }{\infty }},}
which means that we can use L'Hopital's Rule to calculate this limit.
First, switch to the variable
{\displaystyle x}
so that we have differentiable functions and can take derivatives. Thus, using L'Hopital's Rule, we have
{\displaystyle {\begin{array}{rcl}\displaystyle {\lim _{n\rightarrow \infty }{\frac {\ln n}{n}}}&=&\displaystyle {\lim _{x\rightarrow \infty }{\frac {\ln x}{x}}}\\&&\\&{\overset {L'H}{=}}&\displaystyle {\lim _{x\rightarrow \infty }{\frac {{\big (}{\frac {1}{x}}{\big )}}{1}}}\\&&\\&=&\displaystyle {0.}\end{array}}}
The sequence converges. The limit of the sequence is
{\displaystyle 0.}
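The limit can also be checked numerically; a small Python sketch evaluating ln(n)/n at growing n:

```python
import math

# ln(n)/n for growing n: the terms decrease toward 0,
# consistent with the L'Hopital computation above.
terms = [math.log(n) / n for n in (10, 100, 1000, 10**6)]
print(terms)
assert all(a > b for a, b in zip(terms, terms[1:]))
assert terms[-1] < 1e-4
```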
|
Empirically Calibrated Ground‐Motion Prediction Equation for Oklahoma | Bulletin of the Seismological Society of America | GeoScienceWorld
Mark Novakovic;
Department of Earth Sciences, Western University, London, Ontario, Canada N6A 5B7; mnovako3@uwo.ca, gmatkinson@aol.com, kassatou@uwo.ca
Errata: Empirically Calibrated Ground‐Motion Prediction Equation for Oklahoma
Mark Novakovic, Gail M. Atkinson, Karen Assatourians; Empirically Calibrated Ground‐Motion Prediction Equation for Oklahoma. Bulletin of the Seismological Society of America 2018;; 108 (5A): 2444–2461. doi: https://doi.org/10.1785/0120170331
A region‐specific ground‐motion prediction equation (GMPE) is developed using a selected and compiled database of 7278 ground‐motion observations in Oklahoma, including 188 events of magnitude 3.5–5.8, recorded over the hypocentral distance range from 2 to 500 km; most events are considered to be induced by wastewater injection. A generalized inversion is used to solve for regional source and attenuation parameters and station site responses, within the context of an equivalent point‐source model, following the method of Atkinson et al. (2015), Yenier and Atkinson (2015b) and Hassani and Atkinson (2015). The resolved parameters include the regional geometric spreading and anelastic attenuation functions, source parameters for each event (e.g., moment magnitude and stress parameter for Brune point‐source model), and site‐response terms for each station relative to a reference site condition (B/C boundary). The parameters fully specify a regionally calibrated GMPE that can be used to describe median amplitudes from induced earthquakes in the central United States. The GMPE can be implemented to estimate magnitude, stress, and median ground motions in near‐real time, which is useful for ground‐motion‐based alerting systems and traffic‐light protocols. The derived GMPE has further applications for the evaluation of hazards from induced seismicity.
Overall, the ground motions for B/C site conditions for induced events in Oklahoma are of similar amplitude to those predicted by the GMPEs of Atkinson et al. (2015) and Yenier and Atkinson (2015b) at close distances, for events of M 4–5. For larger events, the Oklahoma motions are larger, especially at high frequencies. The Oklahoma motions follow a pronounced trilinear amplitude decay function at regional distances.
When Will US Air Travel Near Pre-COVID Level? | Metaculus
Following the outbreak of COVID-19 in the US in February 2020, a series of international travel restrictions and statewide stay-at-home orders were put in place. The impact on the aviation industry has been severe. According to Conde Nast Traveler:
On April 7, the total amount of U.S. fliers screened by the TSA fell below 100,000 for the first time in the agency’s history. That’s a 95 percent drop compared to the passenger numbers from the same day in 2019, when 2,091,056 people passed through the checkpoints. Experts say the majority of those screened were airline crew members or healthcare workers heading to COVID-19 hot spots.
Some states have begun reopening, but domestic airline executives have warned that their operations may not come back in full force after the pandemic.
These were the domestic passenger Departures Performed numbers for the year of 2019:
When will US domestic passenger air travel return to 80% of pre-COVID-19 volumes?
This question resolves as the first time when the total monthly US domestic passenger Departures Performed is at least 80% of that for the same month in 2019, according to US Air Carrier Traffic Statistics.
To pin down a specific day, we will linearly interpolate between the last day of the first month when the air passenger volume meets the threshold and the last day of the prior month. Specifically, let the difference at month k be \Delta_k \equiv y_k-0.8\times y_k^{2019}, let t_1 be the last day of the last month with \Delta_1 \lt 0, and let t_2 be the last day of the first month with \Delta_2 \geq 0.
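The excerpt cuts off before stating the final interpolation formula, but the setup suggests weighting the two month-end dates by the shortfall and surplus. A sketch of that reading (the weighting and rounding rule are my assumptions, not the official resolution criterion):

```python
from datetime import date, timedelta

def resolution_date(t1, t2, delta1, delta2):
    """Interpolate the crossing day between month-ends t1 (delta1 < 0) and t2 (delta2 >= 0)."""
    frac = -delta1 / (delta2 - delta1)  # fraction of the final month elapsed at the crossing
    return t1 + timedelta(days=round(frac * (t2 - t1).days))

print(resolution_date(date(2021, 5, 31), date(2021, 6, 30), -10.0, 10.0))  # 2021-06-15
```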
Visiting all 30 Major League Baseball Stadiums - with Python and SAS® Viya® - Operations Research with SAS
By Sertalp B. Cay on Operations Research with SAS June 13, 2018 Topics | Advanced Analytics Data Visualization
Oriole Park at Camden Yards by Ed Hughes
A cross-country trip is pretty much an all-American experience, and so is baseball. Traveling around the country to see all 30 Major League Baseball (MLB) stadiums is not a new idea; there's even a social network for so-called "Ballpark Chasers" where people communicate and share their journeys. Even though I'm not a baseball fan myself, I find the idea of traveling around the country to visit all 30 MLB stadiums pretty interesting.
Since we all lack time, the natural question that might pop into your mind is, "How fast can I visit all 30 stadiums and see an MLB game in each?" This question was first asked by Cleary et al. (2000) in a mathematical context. This is where math and baseball intersect. Finding the optimal trip is an expensive calculation. Discarding the schedule for a second and focusing only on the order in which to visit stadiums results in more than
2.65 \times 10^{32}
different permutations. When you add the game schedule and distances between stadiums, the problem gets much bigger and more difficult quickly. See the Traveling Salesman Problem if you are interested in difficult scheduling problems, which is the main source of the "Traveling Baseball Fan Problem."
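That count is simply the number of orderings of 30 stadiums, 30!; a one-line check:

```python
import math

# number of ways to order 30 stadiums
print(f"{math.factorial(30):.2e}")  # 2.65e+32
```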
The Optimal Trip
Before starting to talk about "The Optimal Trip" I should make some assumptions. The Optimal Trip is quite a subjective term, so I need to choose a measurable objective. My focus is to complete the schedule within the shortest time possible. Further, I assume that the superfan only uses land transportation (a car) between stadiums. Unlike some variations, I don't require the fan to return back to the origin, so the start and end cities will be different. Each stadium will be visited only once.
The Traveling Baseball Fan Problem (TBFP) has gained quite a bit of attention. There is a book by Ben Blatt and Eric Brewster about their 30 games in 30 days. They created an online visualization of such a tour for those interested; unfortunately the tool only shows schedules for the 2017 season. Their approach is a heuristic, so the resulting solution is not guaranteed to be the "shortest possible tour." Because the problem is huge, the true optimal solution can only be expected from a full optimization. Ben Blatt also wrote a mathematical optimization formulation for the shortest possible baseball road trip.
There are different ways to model this problem. The model I am going to use to optimize the TBFP is the network-based formulation presented in a SAS Global Forum 2014 paper by Chapman, Galati, and Pratt. Rob Pratt, one of the authors, wrote about the TBFP in this blog before.
Ground Rules and Challenge
Just to reiterate, ground rules for the optimal schedule:
Use ground transportation only.
Use driving distances between stadiums. The driving distances are obtained via OSRM.
Stay until each game ends (assume each game lasts 3 hours).
The main challenge is to gather data, model the problem, and visualize results---all within the Python environment. Moreover, my aim here is to show you that the mathematical formulation for the TBFP can easily be written with our new open-source Python package sasoptpy and can be solved using SAS Viya on the cloud. If you are interested, check our Github repository for the package and our SAS Global Forum 2018 paper (Erickson and Cay) to learn more about it!
To provide variety, let's solve the problem for different time periods (2, 3, and 6 months) with two different objectives. The first objective is to finish the schedule in the shortest time possible. The second objective is to finish the schedule with spending the least amount of money. For this, I will assume $130 accommodation rate per day and $0.25 travel cost per mile. This objective was the main motivation of Rick and Mike's 1980 tour, as mentioned in the paper by Cleary et al. (2000).
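Under these assumptions, the cost of any candidate trip is a linear function of its length in days and miles; a minimal sketch (the trip figures used below are the ones reported later in the post for Solution 11):

```python
def trip_cost(days, miles, per_day=130.0, per_mile=0.25):
    """Accommodation plus mileage cost under the stated assumptions."""
    return per_day * days + per_mile * miles

# Solution 11: 24 days 3 hours (24.125 days) and 22,528 miles
print(trip_cost(24.125, 22_528))  # 8768.25, in line with the ~$8,767 reported
```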
TBFP Model
I will use the network formulation from the aforementioned SAS Global Forum 2014 paper. For this model, I define directed arcs between pairs of games, eliminate the arcs that cannot be part of a feasible solution, and optimize the given objective.
The decision variable in this formulation is u, defined for each arc (g_1, g_2) as
u[g_1, g_2] = \begin{cases}1 & \text{if the fan attends games $g_1$ and $g_2$ and no game in between} \\0 & \text{otherwise}\end{cases}
Denote c[g_1,g_2] as the time between games g_1 and g_2 in days, including game duration, as follows:
c[g_1,g_2] = \begin{cases} \textrm{end}[g_2] - \textrm{end}[g_1] & \textrm{if } g_1 \not = \textrm{source and } g_2 \not = \textrm{sink} \\ \textrm{end} [g_2] - \textrm{start}[g_2] & \textrm{if } g_1 = \textrm{source and } g_2 \not = \textrm{sink} \\ 0 & \textrm{otherwise} \end{cases}
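The piecewise definition of c reads directly as code; here start and end are hypothetical dicts mapping each game to its start and end time in days:

```python
def arc_time(g1, g2, start, end):
    """Time in days charged to arc (g1, g2), per the piecewise definition of c."""
    if g1 != 'source' and g2 != 'sink':
        return end[g2] - end[g1]
    if g1 == 'source' and g2 != 'sink':
        return end[g2] - start[g2]  # only the first game's own duration counts
    return 0.0

start = {'A': 0.0, 'B': 2.0}    # hypothetical game start times (days)
end = {'A': 0.125, 'B': 2.125}  # 3-hour games
print(arc_time('A', 'B', start, end))       # 2.0
print(arc_time('source', 'A', start, end))  # 0.125
```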
Also denote l[g] as the location of the game g, \text{NODES} as the list of games including dummy nodes 'source' and 'sink', \text{ARCS} as the connections between games, and finally \text{STADIUMS} as the list of all stadiums. Now I can write the Network Formulation as follows:
\begin{array}{rlcll} \textrm{minimize:} & \displaystyle \sum_{(g_1, g_2) \in \text{ARCS}} c[g_1,g_2] \cdot u[g_1,g_2] \\ \textrm{subject to:} & \displaystyle \sum_{(g,g_2) \in \text{ARCS}} u[g,g_2] - \sum_{(g_1,g) \in \text{ARCS}} u[g_1,g] & = & \begin{cases} 1 & \text{if } g = \text{source,} \\ -1 & \text{if } g = \text{sink,} \\ 0 & \text{otherwise}\end{cases} & & \forall g \in \text{NODES} \\ & \displaystyle \sum_{(g_1,g_2) \in \text{ARCS}: g_2 \not = \text{sink and } l[g_2] = s} u[g_1, g_2] & = & 1 & & \forall s \in \text{STADIUMS} \end{array}
The solution of this optimization problem should produce a route starting at the source, finishing at the sink, and passing through all 30 ballparks. The objective here is to minimize the total schedule time. The first set of constraints ensures that inflow and outflow are equal for regular nodes. The second set of constraints ensures that the fan visits every stadium once.
For the second objective, I need to replace the objective function with the following:
\textrm{minimize:} \; \displaystyle 130 \cdot \sum_{(g_1, g_2) \in \text{ARCS}} c[g_1, g_2] \cdot u[g_1, g_2] + 0.25 \cdot \sum_{(g_1, g_2) \in \text{ARCS}: g_1 \not = \text{source} \text{ and } g_2 \not = \text{sink}} d[g_1, g_2] \cdot u[g_1, g_2]
where d is the distance between games in miles.
Modeling with sasoptpy
Now that I have my formulation ready, it is a breeze to write this problem using sasoptpy. Only part of the code is shown here for illustration purposes. See the Github repository for all of the code, including the code used for grabbing the season schedule from the MLB website, driving distances from OpenStreetMap, and exporting results.
I can write the Network Formulation to solve TBFP in Python as follows:
def tbfp(distance_data, driving_data, game_data, venue_data,
         start_date=datetime.date(2018, 3, 29),
         end_date=datetime.date(2018, 10, 31),
         obj_type=0):
    """Defines the optimization problem and solves it.

    Parameters
    ----------
    distance_data : pandas.DataFrame
        Distances between stadiums in miles.
    driving_data : pandas.DataFrame
        The driving times between stadiums in minutes.
    game_data : pandas.DataFrame
        The game schedule information for the current season.
    venue_data : pandas.DataFrame
        The information regarding each of the 30 MLB venues.
    start_date : datetime.date, optional
        The earliest start date for the schedule.
    end_date : datetime.date, optional
        The latest end date for the schedule.
    obj_type : integer, optional
        Objective type for the optimization problem,
        0: Minimize total schedule time, 1: Minimize total cost
    """
    # Define a CAS session
    cas_session = CAS(your_cas_server, port=your_cas_port)
    m = so.Model(name='tbfp', session=cas_session)
    # Define sets, parameters and pre-process data (omitted)
    use_arc = m.add_variables(ARCS, vartype=so.BIN, name='use_arc')
    # Define expressions for the objectives
    total_time = so.quick_sum(
        cost[g1, g2] * use_arc[g1, g2] for (g1, g2) in ARCS)
    total_distance = so.quick_sum(
        distance[location[g1], location[g2]] * use_arc[g1, g2]
        for (g1, g2) in ARCS if g1 != 'source' and g2 != 'sink')
    total_cost = total_time * 130 + total_distance * 0.25
    # Set objectives
    if obj_type == 0:
        m.set_objective(total_time, sense=so.MIN)
    elif obj_type == 1:
        m.set_objective(total_cost, sense=so.MIN)
    # Balance constraint
    m.add_constraints((
        so.quick_sum(use_arc[g, g2] for (gx, g2) in ARCS if gx == g) -
        so.quick_sum(use_arc[g1, g] for (g1, gx) in ARCS if gx == g)
        == (1 if g == 'source' else (-1 if g == 'sink' else 0))
        for g in NODES),
        name='balance')
    # Visit once constraint
    visit_once = so.ConstraintGroup((
        so.quick_sum(
            use_arc[g1, g2]
            for (g1, g2) in ARCS if g2 != 'sink' and location[g2] == s) == 1
        for s in STADIUMS), name='visit_once')
    m.include(visit_once)
    # Send the problem to SAS Viya solvers and solve the problem
    m.solve(milp={'concurrent': True})
    # Post-process results (omitted)
I ran this problem for the following twelve settings.
Setting   Dates           Objective (Min)
1         03/29 - 06/01   Time
2         03/29 - 06/01   Cost
10        07/01 - 10/01   Cost
11        03/29 - 10/01   Time
The last two settings (11 and 12) cover the entire 2018 MLB season from March 29th to October 1st. Therefore, these problems should give the optimal solutions for the best time and best cost objectives, respectively. My aim is to show how the problem size and solution time grow when the problem period is larger.
The ultimate benefit of working within the Python environment is the ability to use open-source packages for many tasks. I have used the Bokeh package for plots and Folium for generating the travel maps below. Bokeh is capable of web-ready interactive plots, which makes the visualization engaging for the user. Folium uses the Leaflet.js JavaScript library to generate interactive maps based on OpenStreetMap maps. For interaction between the Bokeh scatter plots and the Leaflet maps, I have used a custom JavaScript function provided by the Bokeh CustomJS class. You can see details of how the visualization part works in the Jupyter notebook.
The optimal solution I obtained from the 11th experiment gives the best schedule time for the 2018 season, which is just over 24 days (24 days and 3 hours). The solution starts with Diamondbacks @ Giants in AT&T Park, San Francisco on the 5th of June and ends with Royals @ Mariners in Safeco Field, Seattle on the 29th of June. This is the global best solution among the scheduled games this season. Maps and itineraries of selected solutions are shown below. Click on any of the plots and tables to see the Jupyter notebook. All times in these schedules are in EDT.
As a 22,528-mile trip, this schedule costs roughly $8,767. By changing the objective, it is possible to obtain a better cost; however, the schedule takes significantly longer. Solution 12 is only 11,914 miles. It's 10,614 miles shorter and $2,149 cheaper compared to Solution 11 but takes 4 days longer.
Among the schedules you can still try at the writing of this post, Solution 9 gives the shortest schedule time (24 days 3 hours) and Solution 10 gives the best cost ($6,899). The latter schedule starts with Tigers @ Angels in Angel Stadium, Anaheim on the 6th of August, and ends with Orioles @ Mariners in Safeco Field, Seattle on the 4th of September. Note that this solution, at a little over 29 days, is the longest schedule among all my solutions.
Here's a list of all solutions:
The objective and the time period of the formulation heavily affect the solution time. Minimizing the cost takes longer due to unique optimal solutions. Moreover, increasing the time period from 3 months to 6 months nearly quadruples the solution time.
Ultimately, the best trip is up to you. You can define another objective of your choice, whether it be minimizing the cost, minimizing the schedule time, avoiding the risky connections (minimum time between games minus the driving time), minimizing the total driving time, or even maximizing the landmarks you have visited along the way. Whatever objective you choose, you can use sasoptpy and use the powerful SAS Viya mixed integer linear optimization solver to generate a trip that is perfect for you! Working in Python allows you to integrate packages you are familiar with and makes everything smoother. Do not forget to check my Jupyter notebook and see the Python files.
You can further improve this model based on your desire. Here I list a few ideas:
If you would like to see an away and a home game for every team exactly once, then you can add the following constraint
\displaystyle \sum_{(g, g_1) \in \text{ARCS}: \text{away}[g]= t \text{ and } \text{away}[g_1] \not = t} u[g,g_1]+\sum_{(g_1, g) \in \text{ARCS}: \text{away}[g]= t \text{ and } \text{away}[g_1] \not = t} u[g_1,g] = 1 \qquad \forall t \in \text{TEAMS}
You can try to avoid risky connections you have in the schedule by replacing the objective with
\displaystyle \text{maximize: } \sum_{(g_1,g_2) \in \text{ARCS}} ( \text{start}[g_2] - \text{end}[g_1] - c[g_1,g_2])\cdot u[g_1, g_2]
To prevent schedules that are too long, you should add a limit to the total schedule length, for example, 25 days:
\displaystyle \sum_{(g, \text{sink}) \in \text{ARCS}} \text{end}[g]\cdot u[g, \text{sink}] - \sum_{(\text{source}, g) \in \text{ARCS}} \text{start}[g]\cdot u[\text{source}, g] \leq 25
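For a decoded schedule (an ordered list of attended games), the same cap reduces to a one-line check; start and end are hypothetical dicts of game times in days:

```python
def within_length_cap(games, start, end, max_days=25):
    """True if the whole trip, first start to last end, fits within max_days days."""
    return end[games[-1]] - start[games[0]] <= max_days

start = {'g1': 0.0, 'g2': 23.0}   # hypothetical start times (days)
end = {'g1': 0.125, 'g2': 23.125}
print(within_length_cap(['g1', 'g2'], start, end))  # True
```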
Share how you define the perfect schedule below if you have more ideas!
Tags baseball operations research optimization SAS Viya sports analytics
Great use of SAS Viya and Python Sertalp. I wish I had the time/funds to undertake this adventure. Maybe next year 🙂
I have featured this blog on https://developer.sas.com/home.html.
Sertalp B. Cay on October 9, 2018 10:38 am
Thanks Joe! The trip certainly needs quite a bit of time (read: valuable vacation days) and detailed planning. Even though the whole trip sounds intimidating, a sub-tour is doable.
Thanks for featuring the post 🙂
I love it and was able to duplicate the process for the 2019 season, however can I ask how would this be modified to ensure the stop and start locations were the same? So if I wanted to start in Southern California and end in Southern California. . . .in any of the ballparks. (Dodgers, Angels, Padres.)
Rob Pratt on May 17, 2019 1:23 pm
If you want to start and end only at one of those three ballparks, you can omit the arcs from the source node to all games at the 27 other ballparks and also from all games at those 27 to the sink node.
Translation Invariant Wavelet Denoising with Cycle Spinning - MATLAB & Simulink - MathWorks
1-D Cycle Spinning
Cycle spinning compensates for the lack of shift invariance in the critically-sampled wavelet transform by averaging over denoised cyclically-shifted versions of the signal or image. The appropriate inverse circulant shift operator is applied to the denoised signal/image and the results are averaged together to obtain the final denoised signal/image.
There are N unique cyclically-shifted versions of a signal of length N. For an M-by-N image, there are MN versions. This makes using all possible shifted versions computationally prohibitive. However, in practice, good results can be obtained by using a small subset of the possible circular shifts.
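The shift–denoise–unshift–average scheme is language-agnostic; a minimal pure-Python sketch (the denoise argument stands in for any shift-variant denoiser such as wdenoise):

```python
def roll(x, s):
    """Circularly shift list x right by s positions (like MATLAB's circshift)."""
    s %= len(x)
    return x[-s:] + x[:-s]

def cycle_spin(x, denoise, shifts=range(-7, 8)):
    """Average denoised, circularly shifted copies of x, unshifted back into place."""
    shifts = list(shifts)
    acc = [0.0] * len(x)
    for s in shifts:
        restored = roll(denoise(roll(x, s)), -s)  # undo the shift before averaging
        acc = [a + v for a, v in zip(acc, restored)]
    return [a / len(shifts) for a in acc]
```

With an identity "denoiser" the output equals the input, which is a quick sanity check that the shifts cancel.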
The following example shows how you use wdenoise and circshift to denoise a 1-D signal using cycle spinning. For denoising grayscale and RGB images, wdenoise2 supports cycle spinning.
This example shows how to denoise a 1-D signal using cycle spinning and the shift-variant orthogonal nonredundant wavelet transform. The example compares the results of the two denoising methods.
Create a noisy 1-D bumps signal with a signal-to-noise ratio of 6. The signal-to-noise ratio is defined as
\frac{||X||_2^2}{N\sigma^2}
where N is the length of the signal, ||X||_2^2 is the squared L2 norm, and \sigma^2 is the variance of the noise.
Denoise the signal using cycle spinning with 15 shifts, 7 to the left and 7 to the right, including the zero-shifted signal. Use wdenoise with default settings. By default, wdenoise uses Daubechies' least-asymmetric wavelet with four vanishing moments, sym4. Denoising is done down to the minimum of floor(log2(N)) and wmaxlev(N,'sym4'), where N is the number of samples in the data.
ydenoise = zeros(length(XN),15);
for nn = -7:7
    yshift = circshift(XN,[0 nn]);
    [yd,cyd] = wdenoise(yshift);
    ydenoise(:,nn+8) = circshift(yd,[0, -nn]);
end
ydenoise = mean(ydenoise,2);
Denoise the signal using wdenoise. Compare with the cycle spinning results.
xd = wdenoise(XN);
plot(ydenoise,'b','linewidth',2)
axis([1 1024 -10 10])
legend('Denoised Signal','Original Signal','Location','SouthEast')
title('Cycle Spinning Denoising')
plot(xd,'b','linewidth',2)
title('Standard Orthogonal Denoising')
absDiffDWT = norm(X-xd,2)
absDiffDWT = 12.4248
absDiffCycleSpin = norm(X-ydenoise',2)
absDiffCycleSpin = 10.6124
Cycle spinning with only 15 shifts has reduced the approximation error.
Bejan number
There are two different Bejan numbers (Be) used in the scientific domains of thermodynamics and fluid mechanics. Bejan numbers are named after Adrian Bejan.
In the field of thermodynamics the Bejan number is the ratio of heat transfer irreversibility to total irreversibility due to heat transfer and fluid friction:[1][2]
{\displaystyle \mathrm {Be} ={\frac {{\dot {S}}'_{\mathrm {gen} ,\,\Delta T}}{{\dot {S}}'_{\mathrm {gen} ,\,\Delta T}+{\dot {S}}'_{\mathrm {gen} ,\,\Delta p}}}}
{\displaystyle {\dot {S}}'_{\mathrm {gen} ,\,\Delta T}}
is the entropy generation contributed by heat transfer
{\displaystyle {\dot {S}}'_{\mathrm {gen} ,\,\Delta p}}
is the entropy generation contributed by fluid friction.
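As a formula this ratio is a two-line function; a sketch with made-up entropy generation rates:

```python
def bejan_thermo(sgen_dT, sgen_dp):
    """Be = S'_gen,dT / (S'_gen,dT + S'_gen,dp)."""
    return sgen_dT / (sgen_dT + sgen_dp)

print(bejan_thermo(3.0, 1.0))  # 0.75: heat transfer dominates the irreversibility
```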
Sciubba has also derived the relation between the Bejan number Be and the Brinkman number Br:
{\displaystyle \mathrm {Be} ={\frac {{\dot {S}}'_{\mathrm {gen} ,\,\Delta T}}{{\dot {S}}'_{\mathrm {gen} ,\,\Delta T}+{\dot {S}}'_{\mathrm {gen} ,\,\Delta p}}}={\frac {1}{1+Br}}}
Heat transfer and mass transfer
In the context of heat transfer, the Bejan number is the dimensionless pressure drop along a channel of length {\displaystyle L}:
{\displaystyle \mathrm {Be} ={\frac {\Delta p\,L^{2}}{\mu \alpha }}}
where {\displaystyle \mu } is the dynamic viscosity and {\displaystyle \alpha } is the thermal diffusivity.
The Be number plays in forced convection the same role that the Rayleigh number plays in natural convection.
In the context of mass transfer, the Bejan number is the dimensionless pressure drop along a channel of length {\displaystyle L}:
{\displaystyle \mathrm {Be} ={\frac {\Delta p\,L^{2}}{\mu D}}}
where {\displaystyle \mu } is the dynamic viscosity and {\displaystyle D} is the mass diffusivity.
For the case of the Reynolds analogy (Le = Pr = Sc = 1), it is clear that all three definitions of the Bejan number are the same.
Also, Awad and Lage[5] obtained a modified form of the Bejan number, originally proposed by Bhattacharjee and Grosshandler for momentum processes, by replacing the dynamic viscosity appearing in the original proposition with the equivalent product of the fluid density and the momentum diffusivity of the fluid. This modified form is not only more akin to the physics it represents but also has the advantage of being dependent on only one viscosity coefficient. Moreover, this simple modification allows for a much simpler extension of the Bejan number to other diffusion processes, such as a heat or a species transfer process, by simply replacing the diffusivity coefficient. Consequently, a general Bejan number representation for any process involving pressure-drop and diffusion becomes possible. It is shown that this general representation yields analogous results for any process satisfying the Reynolds analogy (i.e., when Pr = Sc = 1), in which case the momentum, energy, and species concentration representations of the Bejan number turn out to be the same.
Therefore, it would be more natural and broad to define Be in general, simply as:
{\displaystyle \mathrm {Be} ={\frac {\Delta p\,L^{2}}{\rho \delta ^{2}}}}
where {\displaystyle \rho } is the fluid density and {\displaystyle \delta } is the corresponding diffusivity of the process in consideration.
In addition, Awad[6] presented Hagen number versus Bejan number. Although their physical meaning is not the same, because the former represents the dimensionless pressure gradient while the latter represents the dimensionless pressure drop, it will be shown that the Hagen number coincides with the Bejan number in cases where the characteristic length (l) is equal to the flow length (L).
In the field of fluid mechanics the Bejan number is identical to the one defined in heat transfer problems, being the dimensionless pressure drop along the fluid path length {\displaystyle L} in both external flows and internal flows:[7]
{\displaystyle \mathrm {Be_{L}} ={\frac {\Delta p\,L^{2}}{\mu \nu }}}
where {\displaystyle \mu } is the dynamic viscosity and {\displaystyle \nu } is the momentum diffusivity (or kinematic viscosity).
A further expression of the Bejan number in Hagen–Poiseuille flow was introduced by Awad. This expression is
{\displaystyle \mathrm {Be} ={{32\mathrm {Re} L^{3}} \over {d^{3}}}}
where {\displaystyle \mathrm {Re} } is the Reynolds number, {\displaystyle L} is the flow length, and {\displaystyle d} is the pipe diameter.
The above expression shows that the Bejan number in the Hagen–Poiseuille flow is indeed a dimensionless group, not recognized previously.
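Awad's Hagen–Poiseuille expression in code, with illustrative (made-up) values:

```python
def bejan_pipe(Re, L, d):
    """Be = 32 * Re * L^3 / d^3 for Hagen-Poiseuille flow."""
    return 32.0 * Re * (L / d) ** 3

print(bejan_pipe(Re=1000.0, L=1.0, d=0.05))  # ~2.56e8
```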
The Bhattacharjee and Grosshandler formulation of the Bejan number is of particular importance in fluid dynamics in the case of fluid flow over a horizontal plane,[8] because it is directly related to the fluid dynamic drag D through the following expression of the drag force
{\displaystyle D=\Delta p\,A_{w}={\frac {1}{2}}C_{D}A_{f}{\frac {\nu \mu }{L^{2}}}Re^{2}}
which allows expressing the drag coefficient {\displaystyle C_{D}} in terms of the wetted area {\displaystyle A_{w}} and the frontal area {\displaystyle A_{f}}:
{\displaystyle C_{D}=2{\frac {A_{w}}{A_{f}}}{\frac {Be}{Re_{L}^{2}}}}
where {\displaystyle Re_{L}} is the Reynolds number related to the fluid path length L. This expression has been verified experimentally in a wind tunnel.[9]
This equation represents the drag coefficient in terms of the second law of thermodynamics:[10]
{\displaystyle C_{D}={\frac {2T_{0}{\dot {S}}'_{\mathrm {gen} }}{A_{f}\rho u^{3}}}={\frac {2{\dot {X}}'}{A_{f}\rho u^{3}}}}
where {\displaystyle {\dot {S}}'_{\mathrm {gen} }} is the entropy generation rate, {\displaystyle {\dot {X}}'} is the exergy dissipation rate, and {\displaystyle \rho } is the fluid density.
The above formulation allows expressing the Bejan number in terms of the second law of thermodynamics:[11][12]
{\displaystyle Be_{L}={\frac {1}{A_{w}\rho u}}{\frac {L^{2}}{\nu ^{2}}}\Delta {\dot {X}}'={\frac {1}{A_{w}\rho u}}{\frac {T_{0}L^{2}}{\nu ^{2}}}\Delta {\dot {S}}'}
This expression is a fundamental step toward a representation of fluid dynamic problems in terms of the second law of thermodynamics.[13]
^ Paoletti, S.; Rispoli, F.; Sciubba, E. (1989). "Calculation of exergetic losses in compact heat exchanger passages". ASME AES. 10 (2): 21–29.
^ Sciubba, E. (1996). A minimum entropy generation procedure for the discrete pseudo-optimization of finned-tube heat exchangers. Revue générale de thermique, 35(416), 517-525. http://www.academia.edu/download/43107839/A_minimum_entropy_generation_procedure_f20160226-12590-s0t7qc.pdf
^ Petrescu, S. (1994). "Comments on 'The optimal spacing of parallel plates cooled by forced convection'". Int. J. Heat Mass Transfer. 37 (8): 1283. doi:10.1016/0017-9310(94)90213-5.
^ Awad, M.M. (2012). "A new definition of Bejan number". Thermal Science. 16 (4): 1251–1253. doi:10.2298/TSCI12041251A.
^ Awad, M.M.; Lage, J. L. (2013). "Extending the Bejan number to a general form". Thermal Science. 17 (2): 631. doi:10.2298/TSCI130211032A.
^ Awad, M.M. (2013). "Hagen number versus Bejan number". Thermal Science. 17 (4): 1245–1250. doi:10.2298/TSCI1304245A.
^ Bhattacharjee, S.; Grosshandler, W. L. (1988). "The formation of wall jet near a high temperature wall under microgravity environment". ASME 1988 National Heat Transfer Conference. 96: 711–716. Bibcode:1988nht.....1..711B.
^ a b Liversage, P., and Trancossi, M. (2018). Analysis of triangular sharkskin profiles according to the second law, Modelling, Measurement and Control B. 87(3), 188-196. http://www.iieta.org/sites/default/files/Journals/MMC/MMC_B/87.03_11.pdf
^ Trancossi, M. and Sharma, S., 2018. Numerical and Experimental Second Law Analysis of a Low Thickness High Chamber Wing Profile (No. 2018-01-1955). SAE Technical Paper. https://www.sae.org/publications/technical-papers/content/2018-01-1955/
^ Herwig, H., and Schmandt, B., 2014. How to determine losses in a flow field: A paradigm shift towards the second law analysis.” Entropy 16.6 (2014): 2959-2989. DOI:10.3390/e16062959 https://www.mdpi.com/1099-4300/16/6/2959
^ Trancossi, M., and Pascoa J.. "Modeling fluid dynamics and aerodynamics by second law and Bejan number (part 1-theory)." INCAS Bulletin 11, no. 3 (2019): 169-180. http://bulletin.incas.ro/files/trancossi__pascoa__vol_11_iss_3__a_1.pdf
^ Trancossi, M., & Pascoa, J. (2019). Diffusive Bejan number and second law of thermodynamics toward a new dimensionless formulation of fluid dynamics laws. Thermal Science, (00), 340-340. http://www.doiserbia.nb.rs/ft.aspx?id=0354-98361900340T
^ Trancossi, M., Pascoa, J., & Cannistraro, G. (2020). Comments on “New insight into the definitions of the Bejan number”. International Communications in Heat and Mass Transfer, 104997. https://doi.org/10.1016/j.icheatmasstransfer.2020.104997
Revision as of 21:55, 20 September 2015 by MathAdmin (talk | contribs) (→Defining the Integral as a Limit)
Compute
{\displaystyle {\displaystyle \int _{0}^{1}x^{2}\,dx}}
by approximating the area of the region bounded by the vertical line {\displaystyle x=1,} the {\displaystyle x}-axis {\displaystyle (y=0)} and the curve {\displaystyle y=x^{2}.} Divide the interval from {\displaystyle 0} to {\displaystyle 1} into {\displaystyle 4} equal subintervals, so each rectangle has width {\displaystyle \Delta x} equal to {\displaystyle 1/4.}
Using left endpoints, the height of {\displaystyle f(x)} over the first subinterval {\displaystyle [0,1/4],} is {\displaystyle f(0)=0.} The first rectangle therefore contributes area
{\displaystyle f(0)\cdot \Delta x\,=\,0\cdot \left({\displaystyle {\frac {1}{4}}}\right)\,=\,0.}
The second subinterval is {\displaystyle [1/4,1/2].} Its left endpoint is {\displaystyle 1/4} and its rectangle contributes
{\displaystyle {\displaystyle f\left({\frac {1}{4}}\right)\cdot \Delta x\,=\,{\frac {1}{16}}\cdot {\frac {1}{4}}\,=\,{\frac {1}{64}}}.}
The third subinterval is {\displaystyle [1/2,3/4],} with left endpoint {\displaystyle 1/2,} contributing
{\displaystyle {\displaystyle f\left({\frac {1}{2}}\right)\cdot \Delta x\,=\,{\frac {1}{4}}\cdot {\frac {1}{4}}\,=\,{\frac {1}{16}}.}}
The fourth subinterval is {\displaystyle [3/4,1].} Its left endpoint is {\displaystyle 3/4} and it contributes
{\displaystyle {\displaystyle f\left({\frac {3}{4}}\right)\cdot \Delta x\,=\,{\frac {9}{16}}\cdot {\frac {1}{4}}\,=\,{\frac {9}{64}}.}}
Summing {\displaystyle (\Sigma )} the four areas gives
{\displaystyle {\begin{array}{rcl}S&=&{\displaystyle \sum _{i=1}^{4}f\left(x_{i}\right)\cdot \Delta x}\\\\&=&{\displaystyle 0+{\frac {1}{64}}+{\frac {1}{16}}+{\frac {9}{64}}}\\\\&=&{\displaystyle {\frac {14}{64}}}\\\\&=&{\displaystyle {\frac {7}{32}}.}\end{array}}}
Using the right endpoints {\displaystyle 1/4,\,1/2,\,3/4} and {\displaystyle 1} instead, we get
{\displaystyle {\begin{array}{rcl}S&=&{\displaystyle \sum _{i=1}^{4}f\left(x_{i}\right)\cdot \Delta x}\\\\&=&{\displaystyle f\left({\frac {1}{4}}\right)\cdot \Delta x+{\displaystyle f\left({\frac {1}{2}}\right)\cdot \Delta x+}{\displaystyle f\left({\frac {3}{4}}\right)\cdot \Delta x+}{\displaystyle f\left(1\right)\cdot \Delta x}}\\\\&=&{\displaystyle {\frac {1}{16}}\cdot {\frac {1}{4}}+{\frac {1}{4}}\cdot {\frac {1}{4}}+{\frac {9}{16}}\cdot {\frac {1}{4}}+1\cdot {\frac {1}{4}}}\\\\&=&{\displaystyle {\frac {1}{64}}+{\frac {1}{16}}+{\frac {9}{64}}+{\frac {1}{4}}}\\\\&=&{\displaystyle {\frac {15}{32}}}.\end{array}}}
Next, approximate the area determined by {\displaystyle f(x)=x^{3}-x} from {\displaystyle -1} to {\displaystyle 3} using {\displaystyle n=4} rectangles and left endpoints. The {\displaystyle x}-values run from {\displaystyle -1} to {\displaystyle 3,} an interval of length {\displaystyle 3-(-1)=4,} so
{\displaystyle \Delta x\,=\,{\displaystyle {\frac {b-a}{n}}\,=\,{\frac {3-(-1)}{4}}\,=\,{\frac {4}{4}}\,=\,1.}}
The subintervals are {\displaystyle [-1,0],\,[0,1],\,[1,2]} and {\displaystyle [2,3].} Since {\displaystyle f(-1)=(-1)^{3}-(-1)=0,} the Riemann sum is
{\displaystyle {\begin{array}{rcl}S&=&{\displaystyle \sum _{i=1}^{4}f(x_{i})\cdot \Delta x}\\\\&=&f(-1)\cdot 1+f(0)\cdot 1+f(1)\cdot 1+f(2)\cdot 1\\\\&=&0+0+0+6\\\\&=&6.\end{array}}}
{\displaystyle x}
{\displaystyle x}
{\displaystyle f(x)=x^{3}-x}
{\displaystyle -4}
{\displaystyle 4}
using Failed to parse (MathML with SVG or PNG fallback (recommended for modern browsers and accessibility tools): Invalid response ("Math extension cannot connect to Restbase.") from server "https://wikimedia.org/api/rest_v1/":): {\displaystyle n=4 } rectangles and midpoints.
{\displaystyle x}
{\displaystyle -4}
{\displaystyle 4,}
{\displaystyle \Delta x\,=\,{\displaystyle {\frac {b-a}{n}}\,=\,{\frac {4-(-4)}{4}}\,=\,{\frac {8}{4}}\,=\,2.}}
{\displaystyle [-4,-2],\,[-2,0],\,[0,2]}
{\displaystyle [2,4].}
{\displaystyle -3,\,-1,\,1}
{\displaystyle 3}
{\displaystyle {\begin{array}{rcl}S&=&{\displaystyle \sum _{i=1}^{4}f(x_{i})\cdot \Delta x}\\\\&=&f(-3)\cdot 2+f(-1)\cdot 2+f(1)\cdot 2+f(3)\cdot 2\\\\&=&(-24)\cdot 2+0\cdot 2+0\cdot 2+24\cdot 2\\\\&=&0.\end{array}}}
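The midpoint computation can be verified the same way (the helper name is ours). Because x³ − x is odd, its positive and negative contributions on [−4, 4] cancel exactly.

```python
from fractions import Fraction

# Midpoint Riemann sum of f over [a, b] with n equal subintervals
def midpoint_riemann_sum(f, a, b, n):
    dx = Fraction(b - a, n)
    # midpoint of the i-th subinterval is a + (2i - 1) * dx / 2
    return sum(f(a + (2 * i - 1) * dx / 2) * dx for i in range(1, n + 1))

S = midpoint_riemann_sum(lambda x: x**3 - x, -4, 4, 4)
print(S)   # 0
```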
{\displaystyle \Delta x,}
{\displaystyle f(x)}
{\displaystyle [a,b],}
{\displaystyle {\displaystyle \int _{a}^{b}f(x)\,dx},}
{\displaystyle {\displaystyle \int _{a}^{b}f(x)\,dx\,=\,\lim _{n\rightarrow \infty }\sum _{i=1}^{n}f(x_{i})\cdot \Delta x_{i}.}}
{\displaystyle \Delta x_{i}=\Delta x={\displaystyle {\frac {b-a}{n}},}}
{\displaystyle f(x)=x^{2}}
{\displaystyle [0,3]}
{\displaystyle a=0,\,b=3}
{\displaystyle n}
{\displaystyle \Delta x\,=\,{\displaystyle {\frac {b-a}{n}}\,=\,{\frac {3-0}{n}}\,=\,{\frac {3}{n}}}.}
For a given {\displaystyle n}
{\displaystyle 0,}
{\displaystyle \Delta x=3/n.}
{\displaystyle [0,3/n].}
{\displaystyle 3/n}
{\displaystyle \Delta x=3/n.}
{\displaystyle {\displaystyle \left[{\frac {3}{n}},{\frac {3}{n}}+{\frac {3}{n}}\right]\,=\,\left[1\cdot {\frac {3}{n}},2\cdot {\frac {3}{n}}\right].}}
{\displaystyle I_{1},}
{\displaystyle I_{1}={\displaystyle \left[0\cdot {\frac {3}{n}},1\cdot {\frac {3}{n}}\right]}.}
{\displaystyle I_{2}={\displaystyle \left[1\cdot {\frac {3}{n}},2\cdot {\frac {3}{n}}\right]}.}
{\displaystyle i=1,2,\ldots ,n,}
{\displaystyle I_{i}={\displaystyle \left[(i-1)\cdot {\frac {3}{n}},i\cdot {\frac {3}{n}}\right]}.}
{\displaystyle f\left({\displaystyle 1\cdot {\frac {3}{n}}}\right)}
{\displaystyle I_{1},}
{\displaystyle f\left({\displaystyle 2\cdot {\frac {3}{n}}}\right)}
{\displaystyle I_{2},}
{\displaystyle f\left({\displaystyle i\cdot {\frac {3}{n}}}\right)}
{\displaystyle I_{i}.}
{\displaystyle n,}
{\displaystyle {\displaystyle \sum _{i=1}^{n}f(x_{i})\cdot \Delta x_{i}\,=\,\sum _{i=1}^{n}f\left({\frac {3i}{n}}\right)\cdot \Delta x\,=\,\sum _{i=1}^{n}{\frac {9i^{2}}{n^{2}}}\cdot {\frac {3}{n}}\,=\,\sum _{i=1}^{n}{\frac {27i^{2}}{n^{3}}}.}}
{\displaystyle {\displaystyle \int _{a}^{b}f(x)\,dx\,=\,\lim _{n\rightarrow \infty }\sum _{i=1}^{n}f(x_{i})\cdot \Delta x_{i}\,=\,\lim _{n\rightarrow \infty }\sum _{i=1}^{n}{\frac {27i^{2}}{n^{3}}}.}}
{\displaystyle n} numbers, the sum of the first {\displaystyle n} squares, and the sum of the first {\displaystyle n} cubes:
{\displaystyle {\displaystyle \sum _{i=1}^{n}i\,=\,{\frac {n(n+1)}{2}};\qquad \sum _{i=1}^{n}i^{2}\,=\,{\frac {n(n+1)(2n+1)}{6}};\qquad \sum _{i=1}^{n}i^{3}\,=\,{\frac {n^{2}(n+1)^{2}}{4}}.}}
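These closed forms can be spot-checked numerically for any particular value; a quick sketch for one sample n:

```python
# Spot-check the three closed-form summation formulas for one value of n
n = 50
s1 = sum(range(1, n + 1))
s2 = sum(i * i for i in range(1, n + 1))
s3 = sum(i ** 3 for i in range(1, n + 1))

print(s1 == n * (n + 1) // 2)                  # True
print(s2 == n * (n + 1) * (2 * n + 1) // 6)    # True
print(s3 == n ** 2 * (n + 1) ** 2 // 4)        # True
```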
{\displaystyle ca_{1}+ca_{2}=c(a_{1}+a_{2})}
{\displaystyle (a_{1}+b_{1})+(a_{2}+b_{2})=(a_{1}+a_{2})+(b_{1}+b_{2})}
{\displaystyle {\displaystyle \sum _{i=1}^{n}ca_{i}\,=\,ca_{1}+ca_{2}+\cdots +ca_{n}\,=\,c(a_{1}+\cdots +a_{n})\,=\,c\sum _{i=1}^{n}a_{i},}\qquad \qquad (\dagger )}
{\displaystyle {\displaystyle \sum _{i=1}^{n}(a_{i}+b_{i})\,=\,a_{1}+b_{1}+a_{2}+b_{2}\cdots a_{n}+b_{n}\,=\,a_{1}+a_{2}+\cdots +a_{n}+b_{1}+b_{2}+\cdots +b_{n}\,=\,\sum _{i=1}^{n}a_{i}+\sum _{i=1}^{n}b_{i}.\qquad \qquad (\dagger \dagger )}}
{\displaystyle n\rightarrow \infty }
{\displaystyle {\displaystyle \sum _{i=1}^{n}{\frac {27i^{2}}{n^{3}}},}}
{\displaystyle 27}
{\displaystyle n^{3}}
{\displaystyle c}
{\displaystyle (\dagger ).}
{\displaystyle {\displaystyle \sum _{i=1}^{n}{\frac {27i^{2}}{n^{3}}}\,=\,{\frac {27}{n^{3}}}\sum _{i=1}^{n}i^{2}\,=\,{\frac {27}{n^{3}}}\cdot {\frac {n(n+1)(2n+1)}{6}}\,=\,{\frac {9n(n+1)(2n+1)}{2n^{3}}},}}
{\displaystyle n}
{\displaystyle 18n^{3}/2n^{3}}
{\displaystyle n,}
{\displaystyle n\rightarrow \infty }
{\displaystyle 9.}
{\displaystyle {\displaystyle \int _{0}^{3}x^{2}\,dx=9.}}
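The convergence to 9 is easy to observe numerically: the partial sums 27i²/n³ approach 9 as n grows (the function name below is our own).

```python
# Right-endpoint sums for x^2 on [0, 3]: each sum is 27*i^2/n^3 over i = 1..n
def riemann_sum_x2(n):
    return sum(27 * i * i for i in range(1, n + 1)) / n**3

for n in (10, 100, 1000):
    print(n, riemann_sum_x2(n))   # tends to 9 as n grows
```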
{\displaystyle f(x_{i}).}
{\displaystyle {\displaystyle \left[0\cdot {\frac {3}{n}},1\cdot {\frac {3}{n}}\right],}}
{\displaystyle 0.}
{\displaystyle {\displaystyle \left[1\cdot {\frac {3}{n}},2\cdot {\frac {3}{n}}\right],}}
{\displaystyle 3/n.}
{\displaystyle I_{i}={\displaystyle \left[(i-1)\cdot {\frac {3}{n}},i\cdot {\frac {3}{n}}\right],}}
{\displaystyle 3(i-1)/n.}
{\displaystyle {\begin{array}{rcl}{\displaystyle \int _{0}^{3}x^{2}\,dx}&=&{\displaystyle \lim _{n\rightarrow \infty }\,\sum _{i=1}^{n}f(x_{i})\cdot \Delta x_{i}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\sum _{i=1}^{n}\left({\frac {3(i-1)}{n}}\right)^{2}\cdot {\frac {3}{n}}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\sum _{i=1}^{n}{\frac {9(i-1)^{2}}{n^{2}}}\cdot {\frac {3}{n}}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\sum _{i=1}^{n}{\frac {27(i-1)^{2}}{n^{3}}}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,{\frac {27}{n^{3}}}\sum _{i=1}^{n}(i^{2}-2i+1).}\end{array}}}
{\displaystyle {\begin{array}{rcl}{\displaystyle \lim _{n\rightarrow \infty }\,{\frac {27}{n^{3}}}\sum _{i=1}^{n}(i^{2}-2i+1)}&=&{\displaystyle \lim _{n\rightarrow \infty }\,{\frac {27}{n^{3}}}\left({\frac {n(n+1)(2n+1)}{6}}-n(n+1)+n\right)}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\left({\frac {9n(n+1)(2n+1)}{2n^{3}}}-{\frac {27n(n+1)}{n^{3}}}+{\frac {27}{n^{2}}}\right)}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\left({\frac {18n^{3}}{2n^{3}}}-{\frac {27n^{2}}{n^{3}}}+{\frac {27}{n^{2}}}\right)\qquad \qquad ({\textrm {for~large~}}n)}\\\\&=&9-0+0\\\\&=&9.\end{array}}}
{\displaystyle \Delta x}
{\displaystyle \int _{0}^{1}{\sqrt {x}}\,dx?}
{\displaystyle \Delta x=(b-a)/n=1/n,}
{\displaystyle x_{0}=0,}
{\displaystyle x_{i}\,=\,{\displaystyle {\frac {i^{2}}{n^{2}}}.}}
{\displaystyle x_{1}=1/n^{2}}
{\displaystyle x_{2}=4/n^{2}.}
{\displaystyle x_{0}=0,}
{\displaystyle {\displaystyle \Delta x_{i}\,=\,x_{i}-x_{i-1}\,=\,{\frac {i^{2}}{n^{2}}}-{\frac {(i-1)^{2}}{n^{2}}}\,=\,{\frac {2i-1}{n^{2}}}.}}
{\displaystyle \Delta x_{i}}
{\displaystyle f(x_{i})}
{\displaystyle f(x_{i})={\displaystyle {\sqrt {\frac {i^{2}}{n^{2}}}}={\frac {i}{n}}.}}
{\displaystyle {\begin{array}{rcl}{\displaystyle \int _{0}^{1}{\sqrt {x}}\,dx}&=&{\displaystyle \lim _{n\rightarrow \infty }\,\sum _{i=1}^{n}f(x_{i})\cdot \Delta x_{i}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\sum _{i=1}^{n}{\frac {i}{n}}\cdot {\frac {2i-1}{n^{2}}}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\sum _{i=1}^{n}{\frac {2i^{2}-i}{n^{3}}}}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }{\frac {1}{n^{3}}}\left(2\cdot {\frac {n(n+1)(2n+1)}{6}}-{\frac {n(n+1)}{2}}\right)}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\left({\frac {2n(n+1)(2n+1)}{6n^{3}}}-{\frac {n(n+1)}{2n^{3}}}\right)}\\\\&=&{\displaystyle \lim _{n\rightarrow \infty }\,\left({\frac {4n^{3}}{6n^{3}}}-{\frac {n^{2}}{2n^{3}}}\right)\qquad \qquad ({\textrm {for~large~}}n)}\\\\&=&{\displaystyle {\frac {2}{3}}-0}\\\\&=&{\displaystyle {\frac {2}{3}}.}\end{array}}}
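The nonuniform-partition sum can also be checked numerically (the function name below is our own):

```python
# Nonuniform partition x_i = i^2/n^2 for sqrt(x) on [0, 1]:
# Delta x_i = (2i - 1)/n^2 and f(x_i) = i/n
def sqrt_partition_sum(n):
    return sum((i / n) * (2 * i - 1) / n**2 for i in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, sqrt_partition_sum(n))   # tends to 2/3 as n grows
```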
{\displaystyle \Delta x_{i}}
|
List_of_logic_symbols Knowpia
This article contains logic symbols. Without proper rendering support, you may see question marks, boxes, or other symbols instead of logic symbols.
Basic logic symbolsEdit
{\displaystyle \Rightarrow }
{\displaystyle \to }
{\displaystyle \supset }
{\displaystyle \implies }
material implication implies; if ... then propositional logic, Heyting algebra
{\displaystyle A\Rightarrow B}
is false when A is true and B is false but true otherwise.
{\displaystyle \rightarrow }
{\displaystyle \Rightarrow }
(the symbol may also indicate the domain and codomain of a function; see table of mathematical symbols).
{\displaystyle \supset }
{\displaystyle \Rightarrow }
{\displaystyle x=2\Rightarrow x^{2}=4}
{\displaystyle x^{2}=4\Rightarrow x=2}
{\displaystyle \Leftrightarrow }
{\displaystyle \equiv }
{\displaystyle \leftrightarrow }
{\displaystyle \iff }
\iff material equivalence if and only if; iff; means the same as propositional logic
{\displaystyle A\Leftrightarrow B}
{\displaystyle x+5=y+2\Leftrightarrow x+3=y}
! ¬
{\displaystyle \neg }
{\displaystyle \sim }
negation not propositional logic The statement
{\displaystyle \lnot A}
{\displaystyle \neg }
{\displaystyle \neg (\neg A)\Leftrightarrow A}
{\displaystyle x\neq y\Leftrightarrow \neg (x=y)}
{\displaystyle \mathbb {D} }
U+1D53B 𝔻 𝔻 \mathbb{D} Domain of discourse Domain of predicate Predicate (mathematical logic)
{\displaystyle \mathbb {D} :\mathbb {R} }
{\displaystyle \wedge }
{\displaystyle \cdot }
{\displaystyle \&}
logical conjunction and propositional logic, Boolean algebra The statement A ∧ B is true if A and B are both true; otherwise, it is false. n < 4 ∧ n > 2 ⇔ n = 3 when n is a natural number.
{\displaystyle \lor }
{\displaystyle \parallel }
logical (inclusive) disjunction or propositional logic, Boolean algebra The statement A ∨ B is true if A or B (or both) are true; if both are false, the statement is false. n ≥ 4 ∨ n ≤ 2 ⇔ n ≠ 3 when n is a natural number.
{\displaystyle \oplus }
{\displaystyle \veebar }
{\displaystyle \not \equiv }
exclusive disjunction xor; either ... or propositional logic, Boolean algebra The statement A ↮ B is true when either A or B, but not both, are true. A ⊻ B means the same. (¬A) ↮ A is always true, and A ↮ A always false, if vacuous truth is excluded.
{\displaystyle \top }
\top Tautology top, truth, full clause propositional logic, Boolean algebra, first-order logic The statement ⊤ is unconditionally true. ⊤(A) ⇒ A is always true.
{\displaystyle \bot }
\bot Contradiction bottom, falsum, falsity, empty clause propositional logic, Boolean algebra, first-order logic The statement ⊥ is unconditionally false. (The symbol ⊥ may also refer to perpendicular lines.) ⊥(A) ⇒ A is always false.
{\displaystyle \forall }
\forall universal quantification for all; for any; for each first-order logic ∀ x: P(x) or (x) P(x) means P(x) is true for all x.
{\displaystyle \forall n\in \mathbb {N} :n^{2}\geq n.}
U+2203 ∃ ∃
{\displaystyle \exists }
\exists existential quantification there exists first-order logic ∃ x: P(x) means there is at least one x such that P(x) is true.
{\displaystyle \exists n\in \mathbb {N} :}
U+2203 U+0021 ∃ ! ∃!
{\displaystyle \exists !}
\exists ! uniqueness quantification there exists exactly one first-order logic ∃! x: P(x) means there is exactly one x such that P(x) is true.
{\displaystyle \exists !n\in \mathbb {N} :n+5=2n.}
U+2254 (U+003A U+003D) ≔ (:=)
{\displaystyle :=}
{\displaystyle \equiv }
{\displaystyle :\Leftrightarrow }
{\displaystyle \cosh x:={\frac {e^{x}+e^{-x}}{2}}}
U+0028 U+0029 ( ) (
{\displaystyle (~)}
( ) precedence grouping parentheses; brackets everywhere Perform the operations inside the parentheses first. (8 ÷ 4) ÷ 2 = 2 ÷ 2 = 1, but 8 ÷ (4 ÷ 2) = 8 ÷ 2 = 4.
U+22A2 ⊢ ⊢
{\displaystyle \vdash }
\vdash turnstile proves propositional logic, first-order logic x ⊢ y means x proves (syntactically entails) y (A → B) ⊢ (¬B → ¬A)
{\displaystyle \vDash }
\vDash, \models double turnstile models propositional logic, first-order logic x ⊨ y means x models (semantically entails) y (A → B) ⊨ (¬B → ¬A)
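The propositional connectives above can be tabulated directly in code; a small sketch over all truth assignments (the helper name `implies` is our own):

```python
# Material implication: false only when the antecedent is true and the consequent false
def implies(a, b):
    return (not a) or b

for A in (False, True):
    for B in (False, True):
        print(A, B,
              "A=>B:", implies(A, B),
              "A<=>B:", A == B,
              "A and B:", A and B,
              "A or B:", A or B,
              "A xor B:", A != B)
```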
Advanced and rarely used logical symbolsEdit
U+0305 COMBINING OVERLINE used format for denoting Gödel numbers.
U+007C VERTICAL LINE Sheffer stroke, the sign for the NAND operator (negation of conjunction).
U+2193 DOWNWARDS ARROW Peirce Arrow, the sign for the NOR operator (negation of disjunction).
{\displaystyle \odot }
U+22A5 UP TACK
TOP RIGHT CORNER corner quotes, also called "Quine quotes"; for quasi-quotation, i.e. quoting specific context of unspecified ("variable") expressions;[3] also used for denoting Gödel number;[4] for example "⌜G⌝" denotes the Gödel number of G. (Typographical note: although the quotes appear as a "pair" in Unicode (231C and 231D), they are not symmetrical in some fonts, and in some fonts (for example Arial) they are only symmetrical in certain sizes. Alternatively, the quotes can be rendered as ⌈ and ⌉ (U+2308 and U+2309) or by using a negation symbol and a reversed negation symbol ⌐ ¬ in superscript mode.)
WHITE SQUARE modal operator for "it is necessary that" (in modal logic), or "it is provable that" (in provability logic), or "it is obligatory that" (in deontic logic), or "it is believed that" (in doxastic logic); also as empty clause (alternatives:
{\displaystyle \emptyset }
U+297D RIGHT FISH TAIL sometimes used for "relation", also used for denoting various ad hoc relations (for example, for denoting "witnessing" in the context of Rosser's trick). The fish hook is also used as strict implication by C. I. Lewis:
{\displaystyle p}
{\displaystyle q\equiv \Box (p\rightarrow q)}
, the corresponding LaTeX macro is \strictif. Added to Unicode 3.2.0.
Usage in various countriesEdit
Poland and GermanyEdit
As of 2014, in Poland the universal quantifier is sometimes written
{\displaystyle \wedge }
, and the existential quantifier as
{\displaystyle \vee }
. The same applies for Germany.
List of notation used in Principia Mathematica
Logic alphabet, a suggested set of logical symbols
Logic gate § Symbols
^ "Named character references". HTML 5.1 Nightly. W3C. Retrieved 9 September 2015.
^ Quine, W.V. (1981): Mathematical Logic, §6
^ Hintikka, Jaakko (1998), The Principles of Mathematics Revisited, Cambridge University Press, p. 113, ISBN 9780521624985 .
|
#Ashley Nettleman and Bill Goodwine, [http://controls.ame.nd.edu/~bill/papers/2015/icra2015.pdf Symmetries and Reduction for Multi-Agent Control], accepted at the 2015 IEEE International Conference on Robotics and Automation
Bill Goodwine, Nonlinear Stability of Approximately Symmetric Large-Scale Systems, Proceedings of the 2014 IFAC World Congress, Cape Town, South Africa
{\displaystyle {\dot {x}}=Ax+Bu}
{\displaystyle {\dot {x}}=f(x)+g(x)u}
|
#Jason Nightingale, Richard Hind and Bill Goodwine, "Geometric analysis of a class of constrained mechanical control systems in the nonzero velocity setting," Proceedings of the 17th International Federation of Automatic Control (IFAC) World Congress, Seoul, Korea, July 2008.
#Jason Nightingale, Richard Hind and Bill Goodwine, [http://controls.ame.nd.edu/~bill/papers/2008/icra08.pdf Intrinsic Vector-Valued Symmetric Form for Simple Mechanical Control Systems in the Nonzero Velocity Setting], Proceedings of the 2008 IEEE International Conference on Robotics and Automation, (43.4% acceptance rate) Pasadena, CA, May, 2008
{\displaystyle {\dot {x}}=Ax+Bu}
{\displaystyle {\dot {x}}=f(x)+g(x)u}
|
{\displaystyle \phi }
{\displaystyle \phi =\iint {\textbf {g}}\cdot d{\textbf {s}}=\iiint \nabla \cdot {\textbf {g}}\;dxdydz}
{\displaystyle \phi =\iint _{S}{\textbf {g}}\cdot \mathbf {\hat {n}} \;dS=\iiint _{V}\nabla \cdot {\textbf {g}}\;dV}
{\displaystyle dS}
{\displaystyle S}
{\displaystyle V}
{\displaystyle dV}
Furthermore, the result can be generalized to an arbitrary volume in
{\displaystyle M}
-dimensions bounded by a surface
{\displaystyle S}
{\displaystyle M-1}
{\displaystyle \int _{S}\mathbf {g} \cdot \mathbf {\hat {n}} \;dS=\int _{V}\mathbf {\nabla } \cdot \mathbf {g} \;dV.}
Here, the symbol
{\displaystyle \nabla }
{\displaystyle M}
-dimensional gradient, and
{\displaystyle \mathbf {\hat {n}} }
is the unit normal vector to the surface
{\displaystyle S}
Commonly called Gauss's theorem.
{\displaystyle D}
{\displaystyle R^{m}}
{\displaystyle \partial D}
{\displaystyle R^{m-1}}
{\displaystyle \int _{D}\mathbf {\nabla } \cdot \mathbf {Q} \;dV=\int _{\partial D}\mathbf {\hat {n}} \cdot \mathbf {Q} \;dS.}
{\displaystyle Q}
{\displaystyle \mathbf {\hat {n}} =({\bar {n}}_{1},{\bar {n}}_{2},...{\bar {n}}_{m}),}
{\displaystyle \partial D.}
{\displaystyle {\bar {n}}_{k}}
{\displaystyle \partial D}
{\displaystyle {\hat {x}}_{k}}
{\displaystyle \mathbf {x} =(x_{1},x_{2},...,x_{m})}
{\displaystyle \nabla \equiv \left({\frac {\partial }{\partial x_{1}}},{\frac {\partial }{\partial x_{2}}},...,{\frac {\partial }{\partial x_{m}}}\right)}
{\displaystyle m=3}
{\displaystyle D}
{\displaystyle \partial D}
{\displaystyle \int _{D}\mathbf {\nabla } \cdot \mathbf {Q} \;dV=\int _{D}{\frac {\partial Q_{1}}{\partial x_{1}}}\;dV+\int _{D}{\frac {\partial Q_{2}}{\partial x_{2}}}\;dV+\int _{D}{\frac {\partial Q_{3}}{\partial x_{3}}}\;dV.}
{\displaystyle x_{1}}
{\displaystyle \int _{D}{\frac {\partial Q_{1}}{\partial x_{1}}}dV=\int _{X_{2}^{-}(x_{1},x_{3})}^{X_{2}^{+}(x_{1},x_{3})}\int _{X_{3}^{-}(x_{1},x_{2})}^{X_{3}^{+}(x_{1},x_{2})}\int _{X_{1}^{-}(x_{2},x_{3})}^{X_{1}^{+}(x_{2},x_{3})}{\frac {\partial Q_{1}}{\partial x_{1}}}\;dx_{1}dx_{2}dx_{3}}
{\displaystyle =\int _{X_{2}^{-}(X_{1}^{-},x_{3})}^{X_{2}^{+}(X_{1}^{+},x_{3})}\int _{X_{3}^{-}(X_{1}^{-},x_{2})}^{X_{3}^{+}(X_{1}^{+},x_{2})}\left[Q_{1}(X_{1}^{+},x_{2},x_{3})-Q_{1}(X_{1}^{-},x_{2},x_{3})\right]\;dx_{2}dx_{3}.}
{\displaystyle X_{k}^{+}}
{\displaystyle X_{k}^{-}}
{\displaystyle \partial D}
{\displaystyle x_{k}}
{\displaystyle x_{k},}
{\displaystyle \int _{\partial D}\mathbf {\hat {n}} \cdot \mathbf {Q} \;dS=\int _{\partial D}\left[{\bar {n}}_{1}Q_{1}+{\bar {n}}_{2}Q_{2}+{\bar {n}}_{3}Q_{3}\right]\;dS}
{\displaystyle x_{1}}
{\displaystyle \int _{\partial D}{\bar {n}}_{1}Q_{1}\;dS=\int _{X_{2}^{-}(X_{1}^{-},x_{3})}^{X_{2}^{+}(X_{1}^{+},x_{3})}\int _{X_{3}^{-}(X_{1}^{-},x_{2})}^{X_{3}^{+}(X_{1}^{+},x_{2})}{\bar {n}}_{1}Q_{1}(x_{1},x_{2},x_{3})\;dS.}
{\displaystyle \partial D}
{\displaystyle x_{1}}
{\displaystyle Q_{1}=Q_{1}(X_{1}^{+},x_{2},x_{3})}
{\displaystyle {\bar {n}}_{1}dS=dx_{2}dx_{3}}
{\displaystyle x_{1}}
{\displaystyle Q_{1}=Q_{1}(X_{1}^{-},x_{2},x_{3})}
{\displaystyle {\bar {n}}_{1}dS=-dx_{2}dx_{3}}
{\displaystyle {\bar {n}}_{1}\equiv \pm {\hat {x}}_{1}\cdot \mathbf {\hat {n}} }
{\displaystyle Q_{1}}
{\displaystyle \partial Q_{1}/\partial x_{1}}
{\displaystyle \int _{\partial D}{\bar {n_{1}}}Q_{1}\;dS=\int _{X_{2}^{-}(X_{1}^{-},x_{3})}^{X_{2}^{+}(X_{1}^{+},x_{3})}\int _{X_{3}^{-}(X_{1}^{-},x_{2})}^{X_{3}^{+}(X_{1}^{+},x_{2})}\left[Q_{1}(X_{1}^{+},x_{2},x_{3})-Q_{1}(X_{1}^{-},x_{2},x_{3})\right]\;dx_{2}dx_{3}.}
{\displaystyle X_{1}^{\pm }=X_{1}^{\pm }(x_{2},x_{3}).}
{\displaystyle Q_{2}}
{\displaystyle Q_{3}}
{\displaystyle m>3}
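Both sides of the theorem can be checked numerically for a simple field on the unit cube. This is only a sketch: the field Q = (x², y², z²) and the grid resolution are our own illustrative choices, for which div Q = 2x + 2y + 2z and both integrals equal 3.

```python
# Check Gauss's theorem on the unit cube [0,1]^3 for Q = (x^2, y^2, z^2)
N = 20
h = 1.0 / N
mids = [(k + 0.5) * h for k in range(N)]   # midpoint grid in each direction

def Q(x, y, z):
    return (x * x, y * y, z * z)

def divQ(x, y, z):          # analytic divergence of Q
    return 2 * x + 2 * y + 2 * z

# Volume integral of div Q (midpoint rule)
vol = sum(divQ(x, y, z) for x in mids for y in mids for z in mids) * h**3

# Outward flux through the six faces (n-hat is +/- a coordinate direction)
flux = 0.0
for u in mids:
    for v in mids:
        flux += (Q(1, u, v)[0] - Q(0, u, v)[0]) * h * h   # x = 1 face minus x = 0 face
        flux += (Q(u, 1, v)[1] - Q(u, 0, v)[1]) * h * h   # y faces
        flux += (Q(u, v, 1)[2] - Q(u, v, 0)[2]) * h * h   # z faces

print(vol, flux)   # both equal 3 up to rounding
```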
|
Represent (1+2i)/(1-3i) in the polar form - Mathematics - TopperLearning.com
Represent (1+2i)/(1-3i) in the polar form
Asked by monami chatterjee | 26th Aug, 2010, 06:56: PM
z=\frac{1+2i}{1-3i}=\quad \frac{1+2i}{1-3i}\times \frac{1+3i}{1+3i}\quad =\quad \frac{-5+5i}{10}=\quad \frac{-1+i}{2}
\left|z\right|=\sqrt{{\left(\frac{-1}{2}\right)}^{2}+{\left(\frac{1}{2}\right)}^{2}}=\frac{1}{\sqrt{2}}\quad \quad \mathrm{and}\quad \quad \mathrm{tan\theta }=\frac{1/2}{-1/2}=-1
\mathrm{If}\quad \mathrm{tan\theta }\quad =1\quad \mathrm{then}\quad \quad \theta =\quad \frac{\pi }{4}
\mathrm{As}\quad \mathrm{the}\quad \mathrm{point}\quad \left(-1/2,\quad 1/2\right)\quad \mathrm{is}\quad \mathrm{in}\quad \mathrm{second}\quad \mathrm{quadrant},\quad \theta \quad =\quad \pi -\frac{\pi }{4}=\quad \frac{3\pi }{4}
\mathrm{Polar}\quad \mathrm{form}\quad \mathrm{of}\quad z\quad \mathrm{is}\quad \frac{1}{\sqrt{2}}\left(\mathrm{cos}\frac{3\pi }{4}\quad +i\quad \mathrm{sin}\frac{3\pi }{4}\right)
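Python's cmath module can confirm the modulus and argument found above (a quick check, not part of the original solution):

```python
import cmath
import math

z = (1 + 2j) / (1 - 3j)          # equals (-1 + i)/2
r, theta = cmath.polar(z)        # modulus and principal argument

print(math.isclose(r, 1 / math.sqrt(2)))       # True
print(math.isclose(theta, 3 * math.pi / 4))    # True
```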
|
A Hierarchy of Discrete Integrable Coupling System with Self-Consistent Sources
Yuqing Li, Huanhe Dong, Baoshu Yin, "A Hierarchy of Discrete Integrable Coupling System with Self-Consistent Sources", Journal of Applied Mathematics, vol. 2014, Article ID 416472, 8 pages, 2014. https://doi.org/10.1155/2014/416472
Yuqing Li,1 Huanhe Dong,1 and Baoshu Yin2,3
An integrable coupling system of a lattice soliton equation hierarchy is deduced. The Hamiltonian structure of the integrable coupling is constructed by using the discrete quadratic-form identity. The Liouville integrability of the integrable coupling is demonstrated. Finally, the discrete integrable coupling system with self-consistent sources is deduced.
Many physical problems may be modeled by soliton equation. The Hamiltonian structures of many systems have been obtained by the famous trace identity [1–6]. The study of integrable couplings of integrable systems has become the focus of common concern in recent years. It originates from the investigations on the symmetry problems and associated centerless Virasoro algebras [7]. Many integrable coupling systems have been constructed by using the methods of a direct method [8], perturbations [9], enlarging spectral problems [10, 11], creating new loop algebras [12, 13], and semidirect sums of Lie algebras [14, 15]. The Hamiltonian structures of the integrable couplings of lattice equations can be constructed by means of the discrete quadratic-form identity [16, 17].
Since Mel’Nikov proposed a new kind of integrable model, called soliton equations with self-consistent sources [18], in 1983, many soliton equations with self-consistent sources [19–23] have been presented in recent years. In applications, these systems are usually used to describe interactions between different solitary waves. In this paper, we deduce a hierarchy of discrete integrable coupling systems with self-consistent sources; such discrete systems are few compared with their continuous counterparts.
The paper will be organized as follows. We first get a hierarchy of integrable lattice soliton equation with self-consistent sources in Section 2. In Section 3, a hierarchy of discrete integrable coupling system is derived by making use of the discrete zero curvature representation. By means of the discrete quadratic-form identity we establish the Hamiltonian structures of the hierarchy. Further, the resulting Hamiltonian equations are all proved to be integrable in Liouville sense. Finally, we give the integrable coupling systems with self-consistent sources.
2. A Hierarchy of Integrable Lattice Soliton Equations with Self-Consistent Sources
We first briefly describe our notations. Assume is a lattice function; the shift operator and the inverse of are defined by
A system of discrete equations is said to have a discrete Lax pair if it is equivalent to the compatibility condition In [16], a Lie algebra is presented as where Set and ; it is easy to see that , , and construct three Lie algebra, and So is an Abelian ideal of the Lie algebra . The corresponding loop algebra is defined by
In [15], a new discrete matrix spectral problem has been proposed: by solving the stationary discrete zero curvature equation where and introducing the auxiliary spectral problems associated with the spectral problem (9) a hierarchy of integrable lattice soliton equations with a potential has been presented: where Equation (13) possesses the following Hamiltonian forms [15]: where
Next, we will construct a hierarchy of integrable lattice soliton equations (13) with self-consistent sources. For distinct real , consider the auxiliary linear problem Based on the results in [24], we show the following equation: where According to the approach proposed in [24–26], through a direct computation, we obtain the discrete integrable hierarchy with self-consistent sources as follows:
Taking in the above system, under , we can obtain the following equation with self-consistent sources:
3. A Hierarchy of Discrete Integrable Coupling System with Self-Consistent Sources
First, we will give out the integrable couplings of the hierarchy (13). Consider the discrete isospectral problem in which is the potential, and are real functions defined over , is a spectral parameter, , and is the eigenfunction vector.
We solve the stationary discrete zero curvature equation where Equation (23) gives Substituting the expansions into (25), we can get the recursion relation The initial values are taken as Note that the definition of the inverse operator of does not yield any arbitrary constant in computing and , . Thus, the recursion relation (27) uniquely determines and the first few quantities are given by Set so Take , , and let We introduce the auxiliary spectral problems associated with the spectral problem (22): The compatibility conditions of (22) and (34) are which give rise to the following hierarchy of integrable lattice equations:
So (35) is the discrete zero curvature representation of (36); the discrete spectral problems (22) and (34) constitute the Lax pairs of (36), and (36) are a hierarchy of Lax integrable nonlinear lattice equations. It is easy to verify that the first nonlinear lattice equation in (36), when , under , is
In (36) the first lattice equations constitute a hierarchy of integrable lattice soliton equations with a potential ; in the view of integrable coupling theory [7, 13, 17], (36) are integrable coupling systems of (13) or (15).
In what follows, we would like to establish the Hamiltonian structures for the integrable coupling systems (36).
Set , , and . We define a map Following [16], we introduce the matrix
It is easy to verify that meets . Under the definition of the quadratic-form function we have and . Set ; through a direct calculation, we get By the discrete quadratic-form identity [16] with being a constant to be determined, we have By the substitution of into (44) and comparing the coefficients of in (44), we get When in (46), a direct calculation shows that . So we have Set
Now we can rewrite those lattice equations in (36) as where is a local difference operator defined by where Obviously, the operator is a skew-symmetric operator; that is, . Moreover, we can prove that the operator satisfies the Jacobi identity
So we have the following facts.
Proposition 1. is a discrete Hamiltonian operator.
Set From the recursion relation (27) we can get the recursion operator in (53).
Therefore, we have So (49) are a family of Hamiltonian systems. The hierarchy of lattice equations (36) possesses Hamiltonian structures (54). Furthermore, a direct calculation shows that It is easy to verify that the operator is a skew-symmetric operator; that is, . So we have the following.
Proposition 2. defined by (48) forms an infinite set of conserved functionals of the hierarchy (36), and , , are in involution in pairs with respect to the Poisson bracket.
Proof. We can find that . Namely, , and then . Hence Similarly, we get This implies that Thus
In summary, we obtain the following theorem.
Theorem 3. The lattice equations in (36) or the discrete Hamiltonian equations in (49) are all discrete Liouville integrable Hamiltonian systems.
Now we search for the integrable coupling systems with self-consistent sources. For distinct real , consider the auxiliary linear problem Based on the results in [24], we show the following equation: where According to the approach proposed in [24–26], through a direct computation, we get the discrete integrable hierarchy with self-consistent sources as follows: When in the above system, under , we can obtain the following coupling equations with self-consistent sources:
This work was supported by the Global Change and Air-Sea Interaction (no. GASI-03-01-01-02), National Natural Science Foundation of China (no. 61304074), the Nature Science Foundation of Shandong Province of China (no. ZR2013AQ017), the Science and Technology Plan Project of Qingdao (no. 14-2-4-77-JCH), the Open Fund of the Key Laboratory of Ocean Circulation and Waves, the Chinese Academy of Science (no. KLOCAW1401), the Open Fund of the Key Laboratory of Data Analysis and Application, and the State Oceanic Administration (no. LDAA-2013-04).
G. Z. Tu, “A trace identity and its applications to the theory of discrete integrable systems,” Journal of Physics A: Mathematical and General, vol. 23, no. 17, pp. 3903–3922, 1990. View at: Publisher Site | Google Scholar | MathSciNet
Y. F. Zhang, “A generalized Boite-Pempinelli-Tu (BPT) hierarchy and its bi-Hamiltonian structure,” Physics Letters A, vol. 317, no. 3-4, pp. 280–286, 2003. View at: Publisher Site | Google Scholar | MathSciNet
X. R. Wang, Y. Fang, and H. H. Dong, “Component-trace identity for Hamiltonian structure of the integrable couplings of the Giachetti-Johnson (GJ) hierarchy and coupling integrable couplings,” Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 7, pp. 2680–2688, 2011. View at: Publisher Site | Google Scholar | MathSciNet
H.-H. Dong, “A new loop algebra and two new Liouville integrable hierarchy,” Modern Physics Letters B, vol. 21, no. 11, pp. 663–673, 2007. View at: Publisher Site | Google Scholar | MathSciNet
H. H. Dong, “A subalgebra of Lie algebra
{A}_{2}
and its associated two types of loop algebras, as well as Hamiltonian structures of integrable hierarchy,” Journal of Mathematical Physics, vol. 50, no. 5, Article ID 053519, pp. 2899–2905, 2009. View at: Publisher Site | Google Scholar | MathSciNet
W. X. Ma and B. Fuchssteiner, “Integrable theory of the perturbation equations,” Chaos, Solitons and Fractals, vol. 7, no. 8, pp. 1227–1250, 1996. View at: Publisher Site | Google Scholar | Zentralblatt MATH
W. X. Ma, “Integrable couplings of vector AKNS soliton equations,” Journal of Mathematical Physics, vol. 46, no. 3, Article ID 033507, 19 pages, 2005. View at: Publisher Site | Google Scholar | MathSciNet
W. X. Ma, “Enlarging spectral problems to construct integrable couplings of soliton equations,” Physics Letters A, vol. 316, no. 1-2, pp. 72–76, 2003. View at: Publisher Site | Google Scholar | MathSciNet
F. Guo, Y. Zhang, and Q. Yan, “New simple method for obtaining integrable hierarchies of soliton equations with multicomponent potential functions,” International Journal of Theoretical Physics, vol. 43, no. 4, pp. 1139–1146, 2004. View at: Publisher Site | Google Scholar | MathSciNet
Y. F. Zhang and X. X. Xu, “A trick loop algebra and a corresponding Liouville integrable hierarchy of evolution equations,” Chaos, Solitons and Fractals, vol. 21, no. 2, pp. 445–456, 2004. View at: Publisher Site | Google Scholar | MathSciNet
H. Dong, X. R. Wang, and W. Zhao, “A new 4-dimensional implicit vector-form loop algebra with arbitrary constants and the corresponding computing formula of constant γ in the variation identity,” Applied Mathematics and Computation, vol. 218, no. 22, pp. 10998–11008, 2012. View at: Publisher Site | Google Scholar | MathSciNet
Copyright © 2014 Yuqing Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
You have understood that the perceptron can be trained to produce correct outputs by tweaking the regular weights.
However, there are times when a minor adjustment is needed for the perceptron to be more accurate. This supporting role is played by the bias weight. It takes a default input value of 1 and some random weight value.
So now the weighted sum equation should look like:
weighted\ sum = x_1w_1 + x_2w_2 + ... + x_nw_n + 1w_b
How does this change the code so far? You only have to consider two small changes:
Add a 1 to the set of inputs (now there are 3 inputs instead of 2)
Add a bias weight to the list of weights (now there are 3 weights instead of 2)
We’ll automatically make these replacements in the code so you should be good to go!
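As a quick sketch of the two changes (the input values and bias weight below are illustrative, not from the lesson):

```python
# Weighted sum of a perceptron with a bias weight.
# The bias input is fixed at 1; its weight w_b is learned like any other weight.

def weighted_sum(inputs, weights):
    # inputs and weights must have the same length
    return sum(x * w for x, w in zip(inputs, weights))

inputs = [2, 3]
weights = [0.5, -0.25]

# Change 1: add a 1 to the set of inputs (3 inputs instead of 2)
inputs_with_bias = inputs + [1]
# Change 2: add a bias weight to the list of weights (3 weights instead of 2)
bias_weight = 0.1
weights_with_bias = weights + [bias_weight]

result = weighted_sum(inputs_with_bias, weights_with_bias)
# 2*0.5 + 3*(-0.25) + 1*0.1 = 0.35
```

The bias term shifts the weighted sum by a constant, which lets the perceptron adjust its decision threshold independently of the inputs.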
9. The Bias Weight
|
Mechanical Behavior of a Rephosphorized Steel for Car Body Applications: Effects of Temperature, Strain Rate, and Pretreatment | J. Eng. Mater. Technol. | ASME Digital Collection
Department of Materials and Manufacturing Technology, Gothenburg SE-41296, Sweden. E-mail: yu.cao@chalmers.se
Cao, Y., Ahlström, J., and Karlsson, B. (March 22, 2011). "Mechanical Behavior of a Rephosphorized Steel for Car Body Applications: Effects of Temperature, Strain Rate, and Pretreatment." ASME. J. Eng. Mater. Technol. April 2011; 133(2): 021019. https://doi.org/10.1115/1.4003491
Temperature and strain rate effects on the mechanical behavior of a commercial rephosphorized, interstitial free steel have been investigated by uniaxial tensile testing, covering temperatures ranging from −60°C to +100°C and strain rates from 1×10⁻⁴ s⁻¹ to 1×10² s⁻¹, encompassing most conditions experienced in automotive crash situations. The effect of prestraining to 3.5% with or without successive annealing at 180°C for 30 min has also been evaluated. These treatments were used to simulate pressing of the plates and the paint-bake cycle in the production of car bodies. Yield and ultimate tensile strengths, ductility including uniform and total elongation and area reduction, thermal softening effect at high strain rate, and strain rate sensitivity of stress were determined and discussed in all cases. It was found that the Voce equation [σ = σ_s − (σ_s − σ_0) exp(−ε/ε_0)] can be fitted to the experimental true stress-true plastic strain data with good precision. The parameter values in this equation were evaluated and discussed. Furthermore, temperature and strain rate effects were examined in terms of thermal and athermal components of the flow stresses. Finally, a thermal activation analysis was performed.
annealing, ductility, elongation, plastic flow, pressing, softening, steel, stress-strain relations, tensile testing, thermal analysis, yield strength, rephosphorized interstitial free steel, tensile properties, temperature effects, strain rate sensitivity, strain hardening, Voce equation, thermal and athermal components
Deformation, Steel, Stress, Temperature, Elongation, Mechanical behavior, Work hardening, Flow (Dynamics), Temperature effects
|
What Is the Kalman Filter? - MATLAB & Simulink - MathWorks China
Starting with initial values for the states ($x_{0|0}$), the initial state variance-covariance matrix ($P_{0|0}$), and initial values for all unknown parameters ($\theta_0$), the simple Kalman filter computes, in order: the forecasted states $\hat{x}_{t|t-1}$ and their covariance matrix $P_{t|t-1}$; the forecasted observations $\hat{y}_{t|t-1}$ and their covariance matrix $V_{t|t-1}$; and the filtered states $\hat{x}_{t|t}$ and their covariance matrix $P_{t|t}$.
The log-likelihood of the observations is

$$\ln p(y_T,\dots,y_1) = \sum_{t=1}^{T} \ln \phi\left(y_t;\, \hat{y}_{t|t-1},\, V_{t|t-1}\right),$$

where $\phi\left(y_t;\, \hat{y}_{t|t-1},\, V_{t|t-1}\right)$ is the multivariate normal density with mean $\hat{y}_{t|t-1}$ and covariance $V_{t|t-1}$, evaluated at $y_t$.
The forecasted states are $x_{t|t-1} = E(x_t \mid y_{t-1},\dots,y_1)$. The filter computes

$$\hat{x}_{t|t-1} = A_t \hat{x}_{t-1|t-1},$$

where $\hat{x}_{t-1|t-1}$ is the filtered state at period $t-1$, and

$$P_{t|t-1} = A_t P_{t-1|t-1} A_t' + B_t B_t',$$

where $P_{t-1|t-1}$ is the filtered state covariance at period $t-1$. The forecasted observations and their covariance are

$$\hat{y}_{t|t-1} = C_t \hat{x}_{t|t-1}, \qquad V_{t|t-1} = \mathrm{Var}(y_t \mid y_{t-1},\dots,y_1) = C_t P_{t|t-1} C_t' + D_t D_t'.$$
More generally, the $s$-step-ahead state forecasts $x_{t|t-s} = E(x_t \mid y_{t-s},\dots,y_1)$ follow from

$$\hat{x}_{t+s|t} = \left(\prod_{j=t+1}^{t+s} A_j\right) x_{t|t}, \qquad \hat{y}_{t+s|t} = C_{t+s}\, \hat{x}_{t+s|t}.$$
The filtered states are $x_{t|t} = E(x_t \mid y_t,\dots,y_1)$, computed as

$$\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t \hat{\varepsilon}_t,$$

where $\hat{x}_{t|t-1}$ is the forecasted state, $K_t$ is the Kalman gain, and $\hat{\varepsilon}_t = y_t - C_t \hat{x}_{t|t-1}$ is the estimated observation innovation.
In other words, the filtered states at period $t$ are the forecasted states at period $t$ plus an adjustment based on the trustworthiness of the observation. Trustworthy observations have very little corresponding observation innovation variance (for example, the maximum eigenvalue of $D_t D_t'$ is relatively small). Consequently, for a given estimated observation innovation, the adjustment term $K_t \hat{\varepsilon}_t$ is relatively large. The filtered state covariance is

$$P_{t|t} = P_{t|t-1} - K_t C_t P_{t|t-1},$$

where $P_{t|t-1}$ is the forecasted state covariance.
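The predict and update recursions above can be sketched in a few lines. This is a minimal NumPy illustration, not MathWorks code; the matrix names A, B, C, D follow the state-space form used on this page, and the scalar example system is invented:

```python
import numpy as np

def kalman_step(x, P, y, A, B, C, D):
    """One predict + update step of the simple Kalman filter.

    State equation:       x_t = A x_{t-1} + B u_t
    Observation equation: y_t = C x_t + D eps_t
    """
    # Predict: forecasted states, observations, and their covariances
    x_pred = A @ x                          # x_{t|t-1} = A x_{t-1|t-1}
    P_pred = A @ P @ A.T + B @ B.T          # P_{t|t-1}
    y_pred = C @ x_pred                     # y_{t|t-1}
    V = C @ P_pred @ C.T + D @ D.T          # V_{t|t-1}
    # Update: filtered states via the raw Kalman gain
    K = P_pred @ C.T @ np.linalg.inv(V)     # raw Kalman gain K_t
    innov = y - y_pred                      # estimated observation innovation
    x_filt = x_pred + K @ innov             # x_{t|t}
    P_filt = P_pred - K @ C @ P_pred        # P_{t|t}
    return x_filt, P_filt

# Tiny invented example: a scalar random walk observed with unit noise
A = np.array([[1.0]]); B = np.array([[0.5]])
C = np.array([[1.0]]); D = np.array([[1.0]])
x, P = np.array([0.0]), np.array([[1.0]])
for y in [1.2, 0.9, 1.1]:
    x, P = kalman_step(x, P, np.array([y]), A, B, C, D)
```

After a few observations near 1, the filtered state drifts toward 1 while the state covariance shrinks, matching the "trustworthiness" intuition above.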
The smoothed states are $x_{t|T} = E(x_t \mid y_T,\dots,y_1)$, computed as

$$\hat{x}_{t|T} = \hat{x}_{t|t-1} + P_{t|t-1} r_t,$$

where $\hat{x}_{t|t-1}$ and $P_{t|t-1}$ are the forecasted states and their covariance, and

$$r_t = \sum_{s=t}^{T} \left\{ \left[ \prod_{j=t}^{s-1} \left(A_t - K_t C_t\right) \right] C_s' V_{s|s-1}^{-1} \nu_s \right\},$$

with $V_{t|t-1} = C_t P_{t|t-1} C_t' + D_t D_t'$ and $\nu_t = y_t - \hat{y}_{t|t-1}$. The smoothed state disturbances are $u_{t|T} = E(u_t \mid y_T,\dots,y_1)$, computed as

$$\hat{u}_{t|T} = B_t' r_t,$$

with variance

$$U_{t|T} = I - B_t' N_t B_t.$$
The forecasted observations are $y_{t|t-1} = E(y_t \mid y_{t-1},\dots,y_1)$, computed as

$$\hat{y}_{t|t-1} = C_t \hat{x}_{t|t-1}, \qquad V_{t|t-1} = \mathrm{Var}(y_t \mid y_{t-1},\dots,y_1) = C_t P_{t|t-1} C_t' + D_t D_t',$$

where $\hat{x}_{t|t-1}$ and $P_{t|t-1}$ are the forecasted states and their covariance. For the $s$-step-ahead forecasts $x_{t|t-s} = E(x_t \mid y_{t-s},\dots,y_1)$, the corresponding observation forecast is $\hat{y}_{t+s|t} = C_{t+s}\, \hat{x}_{t+s|t}$.
The smoothed observation innovations are $\varepsilon_{t|T} = E(\varepsilon_t \mid y_T,\dots,y_1)$, computed as

$$\hat{\varepsilon}_{t|T} = D_t' V_{t|t-1}^{-1} \nu_t - D_t' K_t' r_{t+1},$$

where $r_t$ and $\nu_t$ are the variables in the formula to estimate the smoothed states, and $V_{t|t-1} = C_t P_{t|t-1} C_t' + D_t D_t'$. Their variance is

$$E_{t|T} = I - D_t' \left( V_{t|t-1}^{-1} - K_t' N_{t+1} K_t \right) D_t.$$
The raw Kalman gain is

$$K_t = P_{t|t-1} C_t' \left( C_t P_{t|t-1} C_t' + D_t D_t' \right)^{-1},$$

where $P_{t|t-1}$ is the forecasted state covariance.
The value of the raw Kalman gain determines how much weight to put on the observations. For a given estimated observation innovation, if the maximum eigenvalue of DtDt′ is relatively small, then the raw Kalman gain imparts a relatively large weight on the observations. If the maximum eigenvalue of DtDt′ is relatively large, then the raw Kalman gain imparts a relatively small weight on the observations. Consequently, the filtered states at period t are close to the corresponding state forecasts.
The adjusted Kalman gain $K_{adj,t} = A_t K_t$ relates the estimated observation innovation $\hat{\varepsilon}_t$ to the update of the state forecast $\hat{x}_{t+1|t-1}$:

$$\hat{x}_{t+1|t} = A_t \hat{x}_{t|t} = A_t \hat{x}_{t|t-1} + A_t K_t \hat{\varepsilon}_t = \hat{x}_{t+1|t-1} + K_{adj,t}\, \hat{\varepsilon}_t.$$
The smoother returns the smoothed states $\hat{x}_{t|T}$ and their covariance $P_{t|T}$, the smoothed state disturbances $\hat{u}_{t|T}$ and their variance $U_{t|T}$, and the smoothed observation innovations $\hat{\varepsilon}_{t|T}$ and their variance $E_{t|T}$.
$$\mu_0 = \begin{bmatrix} \mu_{d0} \\ \mu_{s0} \end{bmatrix} \qquad \text{and} \qquad \Sigma_0 = \begin{bmatrix} \Sigma_{d0} & 0 \\ 0 & \Sigma_{s0} \end{bmatrix},$$

where:
μd0 is an m-vector of zeros
μs0 is an n-vector of real numbers
Σd0 = κIm, where Im is the m-by-m identity matrix and κ is a positive real number.
Σs0 is an n-by-n positive definite matrix.
One way to analyze such a model is to set κ to a relatively large, positive real number, and then implement the standard Kalman filter (see ssm). This treatment is an approximation to an analysis that treats the diffuse states as if their initial state covariance approaches infinity.
The diffuse Kalman filter or exact-initial Kalman filter [60] treats the diffuse states by taking κ to ∞. The diffuse Kalman filter filters in two stages: the first stage initializes the model so that it can subsequently be filtered using the standard Kalman filter, which is the second stage. The initialization stage mirrors the standard Kalman filter. It sets all initial filtered states to zero, and then augments that vector of initial filtered states with the identity matrix, which composes an (m + n)-by-(m + n + 1) matrix. After a sufficient number of periods, the precision matrices become nonsingular. That is, the diffuse Kalman filter uses enough periods at the beginning of the series to initialize the model. You can consider this period as presample data.
|
Lemma 26.24.1 (01LC)—The Stacks project
Lemma 26.24.1. Let $f : X \to S$ be a morphism of schemes. If $f$ is quasi-compact and quasi-separated then $f_*$ transforms quasi-coherent $\mathcal{O}_ X$-modules into quasi-coherent $\mathcal{O}_ S$-modules.
Proof. The question is local on $S$ and hence we may assume that $S$ is affine. Because $X$ is quasi-compact we may write $X = \bigcup _{i = 1}^ n U_ i$ with each $U_ i$ open affine. Because $f$ is quasi-separated we may write $U_ i \cap U_ j = \bigcup _{k = 1}^{n_{ij}} U_{ijk}$ for some affine open $U_{ijk}$, see Lemma 26.21.6. Denote $f_ i : U_ i \to S$ and $f_{ijk} : U_{ijk} \to S$ the restrictions of $f$. For any open $V$ of $S$ and any sheaf $\mathcal{F}$ on $X$ we have
\begin{eqnarray*} f_*\mathcal{F}(V) & = & \mathcal{F}(f^{-1}V) \\ & = & \mathop{\mathrm{Ker}}\left( \bigoplus \nolimits _ i \mathcal{F}(f^{-1}V \cap U_ i) \to \bigoplus \nolimits _{i, j, k} \mathcal{F}(f^{-1}V \cap U_{ijk})\right) \\ & = & \mathop{\mathrm{Ker}}\left( \bigoplus \nolimits _ i f_{i, *}(\mathcal{F}|_{U_ i})(V) \to \bigoplus \nolimits _{i, j, k} f_{ijk, *}(\mathcal{F}|_{U_{ijk}})(V)\right) \\ & = & \mathop{\mathrm{Ker}}\left( \bigoplus \nolimits _ i f_{i, *}(\mathcal{F}|_{U_ i}) \to \bigoplus \nolimits _{i, j, k} f_{ijk, *}(\mathcal{F}|_{U_{ijk}})\right)(V) \end{eqnarray*}
In other words there is an exact sequence of sheaves
\[ 0 \to f_*\mathcal{F} \to \bigoplus f_{i, *}\mathcal{F}_ i \to \bigoplus f_{ijk, *}\mathcal{F}_{ijk} \]
where $\mathcal{F}_ i, \mathcal{F}_{ijk}$ denotes the restriction of $\mathcal{F}$ to the corresponding open. If $\mathcal{F}$ is a quasi-coherent $\mathcal{O}_ X$-module then $\mathcal{F}_ i$ is a quasi-coherent $\mathcal{O}_{U_ i}$-module and $\mathcal{F}_{ijk}$ is a quasi-coherent $\mathcal{O}_{U_{ijk}}$-module. Hence by Lemma 26.7.3 we see that the second and third term of the exact sequence are quasi-coherent $\mathcal{O}_ S$-modules. Thus we conclude that $f_*\mathcal{F}$ is a quasi-coherent $\mathcal{O}_ S$-module. $\square$
Typo: the bracket ")(V)" in the third line should be "(V))".

Thanks very much. This is a very confusing typo. Fixed here.
Comment #4729 by Simon on December 01, 2019 at 09:18
Additional s in the last paragraph: it should say "If $\mathcal{F}$ is a quasi-coherent $\mathcal{O}_X$-module".
Why does the lemma not hold for general $f$? I cannot figure out which equality fails if we replace the finite direct sum by a general infinite product of quasi-coherent sheaves.

@#6664. The product of quasi-coherent modules isn't usually quasi-coherent.
|
Gradient vector of scalar function - MATLAB gradient - MathWorks
Find Gradient of Function
Plot Gradient of Function
Gradient of Matrix Multiplication
Gradient of Multivariable Function
Compute gradient of symbolic matrix variables
Gradient vector of scalar function
g = gradient(f,v)
g = gradient(f)
gM = gradient(fM,vM)
g = gradient(f,v) finds the gradient vector of the scalar function f with respect to vector v in Cartesian coordinates. The input f is a function of symbolic scalar variables and the vector v specifies the scalar differentiation variables.
g = gradient(f) finds the gradient vector of the scalar function f with respect to a vector constructed from all symbolic scalar variables found in f. The order of variables in this vector is defined by symvar.
gM = gradient(fM,vM) finds the gradient vector of the scalar function fM with respect to vector vM in Cartesian coordinates. The input function fM is a function of symbolic matrix variables and the vector vM is a symbolic matrix variable of size 1-by-N or N-by-1.
The gradient of a scalar function f with respect to the vector v is the vector of the first partial derivatives of f with respect to each element of v.
Find the gradient vector of f(x,y,z) with respect to vector [x,y,z]. The gradient is a vector with these components.
f(x,y,z) = 2*y*z*sin(x) + 3*x*sin(z)*cos(y);
gradient(f,[x,y,z])
\left(\begin{array}{c}3 \mathrm{cos}\left(y\right) \mathrm{sin}\left(z\right)+2 y z \mathrm{cos}\left(x\right)\\ 2 z \mathrm{sin}\left(x\right)-3 x \mathrm{sin}\left(y\right) \mathrm{sin}\left(z\right)\\ 2 y \mathrm{sin}\left(x\right)+3 x \mathrm{cos}\left(y\right) \mathrm{cos}\left(z\right)\end{array}\right)
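The same gradient can be cross-checked in Python with SymPy (shown here only as an independent verification of the result above; SymPy is not part of the MATLAB workflow):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = 2*y*z*sp.sin(x) + 3*x*sp.sin(z)*sp.cos(y)

# Gradient = vector of first partial derivatives with respect to [x, y, z]
g = [sp.diff(f, v) for v in (x, y, z)]
# g[0] = 3*cos(y)*sin(z) + 2*y*z*cos(x)
# g[1] = 2*z*sin(x) - 3*x*sin(y)*sin(z)
# g[2] = 2*y*sin(x) + 3*x*cos(y)*cos(z)
```

The three components agree term by term with the MATLAB output displayed above.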
Find the gradient of a function f(x,y), and plot it as a quiver (velocity) plot.
Find the gradient vector of f(x,y) with respect to vector [x,y]. The gradient is vector g with these components.
f = -(sin(x) + sin(y))^2;
g = gradient(f,[x,y])
\left(\begin{array}{c}-2 \mathrm{cos}\left(x\right) \left(\mathrm{sin}\left(x\right)+\mathrm{sin}\left(y\right)\right)\\ -2 \mathrm{cos}\left(y\right) \left(\mathrm{sin}\left(x\right)+\mathrm{sin}\left(y\right)\right)\end{array}\right)
Now plot the vector field defined by these components. MATLAB® provides the quiver plotting function for this task. The function does not accept symbolic arguments. First, replace symbolic variables in expressions for components of g with numeric values. Then use quiver.
[X, Y] = meshgrid(-1:.1:1,-1:.1:1);
G1 = subs(g(1),[x y],{X,Y});
G2 = subs(g(2),[x y],{X,Y});
quiver(X,Y,G1,G2)
Use symbolic matrix variables to define a matrix multiplication that returns a scalar.

A = Y.'*X

$Y^{\mathrm{T}} X$

Find the gradient of the matrix multiplication with respect to X.

gX = gradient(A,X)

gX = Y

Find the gradient with respect to Y.

gY = gradient(A,Y)

gY = X
Find the gradient of the multivariable function

$$f(x) = \sin^2(x_{1,1}) + \sin^2(x_{1,2}) + \sin^2(x_{1,3}),$$

where $x = [x_{1,1}, x_{1,2}, x_{1,3}]$.

Use a symbolic matrix variable to express the function $f$ and its gradient in terms of the vector $x$.
f = sin(x)*sin(x).'
\mathrm{sin}\left(x\right) {\mathrm{sin}\left(x\right)}^{\mathrm{T}}
g = gradient(f,x)
2 \left(\mathrm{cos}\left(x\right)\odot {\mathrm{I}}_{3}\right) {\mathrm{sin}\left(x\right)}^{\mathrm{T}}
To show the gradient in terms of the elements of $x$, convert the result to a vector of symbolic scalar variables using symmatrix2sym.
g = symmatrix2sym(g)
\left(\begin{array}{c}2 \mathrm{cos}\left({x}_{1,1}\right) \mathrm{sin}\left({x}_{1,1}\right)\\ 2 \mathrm{cos}\left({x}_{1,2}\right) \mathrm{sin}\left({x}_{1,2}\right)\\ 2 \mathrm{cos}\left({x}_{1,3}\right) \mathrm{sin}\left({x}_{1,3}\right)\end{array}\right)
Alternatively, you can convert $f$ and $x$ to symbolic expressions of scalar variables and use them as inputs to the gradient function.
g = gradient(symmatrix2sym(f),symmatrix2sym(x))
\left(\begin{array}{c}2 \mathrm{cos}\left({x}_{1,1}\right) \mathrm{sin}\left({x}_{1,1}\right)\\ 2 \mathrm{cos}\left({x}_{1,2}\right) \mathrm{sin}\left({x}_{1,2}\right)\\ 2 \mathrm{cos}\left({x}_{1,3}\right) \mathrm{sin}\left({x}_{1,3}\right)\end{array}\right)
f — Scalar function of symbolic scalar variables
Scalar function, specified as a symbolic expression or symbolic function that is a function of symbolic scalar variables.
Data Types: sym | symfun
v — Vector with respect to which you find gradient vector
Vector with respect to which you find gradient vector, specified as a symbolic vector. By default, v is a vector constructed from all symbolic scalar variables found in f. The order of variables in this vector is defined by symvar.
If v is a scalar, gradient(f,v) = diff(f,v). If v is an empty symbolic object, such as sym([]), then gradient returns an empty symbolic object.
fM — Scalar function
symbolic expression of symbolic matrix variables
Scalar function, specified as a symbolic expression that is a function of symbolic matrix variables.
vM — Vector with respect to which you find gradient vector
Vector with respect to which you find gradient vector, specified as a symbolic matrix variable of size 1-by-N or N-by-1.
g — Gradient vector
Gradient vector, returned as a symbolic expression or symbolic function that is a function of symbolic scalar variables.
gM — Gradient vector
Gradient vector, returned as a symbolic expression that is a function of symbolic matrix variables.
The gradient vector of $f(x)$ with respect to the vector $x = (x_1, x_2, \dots, x_n)$ is the vector of the first partial derivatives of $f$:

$$\nabla_x f(x) = \left(\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_n}\right).$$
R2021b: Compute gradient of symbolic matrix variables
The gradient function accepts input arguments of type symmatrix. For examples, see Gradient of Matrix Multiplication and Gradient of Multivariable Function.
curl | divergence | diff | hessian | jacobian | laplacian | potential | quiver | vectorPotential
|
Does on-policy data collection fix errors in off-policy reinforcement learning? - ΑΙhub
By Aviral Kumar and Abhishek Gupta
Before diving deep into a description of this problem, let us quickly recap some of the main concepts in dynamic programming. Algorithms that apply dynamic programming in conjunction with function approximation are generally referred to as approximate dynamic programming (ADP) methods. ADP algorithms include some of the most popular, state-of-the-art RL methods such as variants of deep Q-networks (DQN) and soft actor-critic (SAC) algorithms. ADP methods based on Q-learning train action-value functions, $Q(s, a)$, via a Bellman backup. In practice, this corresponds to training a parametric function, $Q_\theta(s, a)$, by minimizing the mean squared difference to a backup estimate of the Q-function, defined as:

$$(\mathcal{B}^*\bar{Q})(s, a) = r(s, a) + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\left[\max_{a'} \bar{Q}(s', a')\right],$$
where $\bar{Q}$ denotes a previous instance of the original Q-function, $Q_\theta$, and is commonly referred to as a target network. This update is summarized in the equation below:

$$Q_\theta \leftarrow \arg\min_{Q_\theta}\ \mathbb{E}_{s, a \sim \mu}\left[\left(Q_\theta(s, a) - (\mathcal{B}^*\bar{Q})(s, a)\right)^2\right].$$
An analogous update is also used for actor-critic methods that maintain an explicitly parametrized policy, $\pi_\phi$, alongside a Q-function. Such an update typically replaces the $\max_{a'}$ with an expectation under the policy, $\mathbb{E}_{a' \sim \pi_\phi}$. We shall use the $\max_{a'}$ version for consistency throughout; however, the actor-critic version follows analogously. These ADP methods aim at learning the optimal value function, $Q^*$, by applying the Bellman backup iteratively until convergence.
A central factor that affects the performance of ADP algorithms is the choice of the training data distribution, $\mu$, as shown in the equation above. The choice of $\mu$ is an integral component of the backup, and it affects the solutions obtained via ADP methods, especially since function approximation is involved. Unlike tabular settings, function approximation causes the learned Q-function to depend on the choice of data distribution $\mu$, thereby affecting the dynamics of the learning process. We show that on-policy exploration induces distributions $\mu$ such that training Q-functions under $\mu$ may fail to correct systematic errors in the Q-function, even if Bellman error is minimized as much as possible – a phenomenon that we refer to as an absence of corrective feedback.
What is corrective feedback formally? How do we determine if it is present or absent in ADP methods? In order to build intuition, we first present a simple contextual bandit (one-step RL) example, where the Q-function is trained to match $Q^*$ via supervised updates, without bootstrapping. This enjoys corrective feedback, and we then contrast it with ADP methods, which do not. In this example, the goal is to learn the optimal value function $Q^*(s, a)$, which is equal to the reward $r(s, a)$. At iteration $k$, the algorithm minimizes the estimation error of the Q-function:

$$\mathcal{E}_k = \mathbb{E}_{s, a \sim \mu}\left[\left|Q_k(s, a) - Q^*(s, a)\right|\right].$$
Using an $\epsilon$-greedy or Boltzmann policy for exploration, denoted by $\pi_k$, gives rise to a hard negative mining phenomenon – the policy chooses precisely those actions that correspond to possibly over-estimated Q-values for each state and observes the corresponding reward, $r(s, a)$ or $Q^*(s, a)$, as a result. Then, minimizing the estimation error $\mathcal{E}_k$ on samples collected this way corrects errors in the Q-function, as $Q_k$ is pushed closer to match $Q^*$ for actions with incorrectly high Q-values, correcting precisely the Q-values which may cause sub-optimal performance. This constructive interaction between online data collection and error correction – where the induced online data distribution corrects errors in the value function – is what we refer to as corrective feedback.
In contrast, we will demonstrate that ADP methods that rely on previous Q-functions to generate targets for training the current Q-function may not benefit from corrective feedback. This difference between bandits and ADP happens because the target values are computed by applying a Bellman backup on the previous Q-function, $\bar{Q}$ (target value), rather than the optimal $Q^*$, so errors in $\bar{Q}$ at the next states can result in incorrect Q-value targets at the current state. No matter how often the current transition is observed, or how accurately Bellman errors are minimized, the error in the Q-value with respect to the optimal Q-function, $|Q_k - Q^*|$, at this state is not reduced. Furthermore, in order to obtain correct target values, we need to ensure that values at state-action pairs occurring at the tail ends of the data distribution $\mu$, which are primary causes of errors in Q-values at other states, are correct. However, as we will show via a simple didactic example, this correction process may be extremely slow and may not occur, mainly because of undesirable generalization effects of the function approximator.
Let’s consider a didactic example of a tree-structured deterministic MDP with 7 states and 2 actions, $a_1$ and $a_2$, at each state.
Figure 1: Run of an ADP algorithm with on-policy data collection. Boxed nodes and circled nodes denote groups of states aliased by function approximation — values of these nodes are affected due to parameter sharing and function approximation.
A run of an ADP algorithm that chooses the current on-policy state-action marginal as $\mu$ on this tree MDP is shown in Figure 1. Thus, the Bellman error at a state is minimized in proportion to the frequency of occurrence of that state in the policy state-action marginal. Since the leaf node states are the least frequent in this on-policy marginal distribution (due to the discounting), the Bellman backup is unable to correct errors in Q-values at such leaf nodes, due to their low frequency and aliasing with other states arising due to function approximation. Using incorrect Q-values at the leaf nodes to generate targets for other nodes in the tree just gives rise to incorrect values, even if Bellman error is fully minimized at those states. Thus, most of the Bellman updates do not actually bring Q-values at the states of the MDP closer to $Q^*$, since the primary cause of incorrect target values isn't corrected.
Increasing values of the error $\mathcal{E}_k$ imply that the algorithm is pushing Q-values farther away from $Q^*$, which means that corrective feedback is absent, if this happens over a number of iterations. On the other hand, decreasing values of $\mathcal{E}_k$ imply that the algorithm is continuously improving its estimate of $Q^*$, by moving it towards $Q^*$ with each iteration, indicating the presence of corrective feedback.
Observe in Figure 3 that ADP methods can suffer from prolonged periods where this global measure of error in the Q-function, $\mathcal{E}_k$, is increasing or fluctuating, and the corresponding returns degrade or stagnate, implying an absence of corrective feedback.
Convergence to suboptimal Q-functions. We find that on-policy sampling can cause ADP to converge to a suboptimal solution, even in the absence of sampling error. Figure 3(a) shows that the value error rapidly decreases initially, and eventually converges to a value significantly greater than 0, from which the learning process never recovers.
We will not go into the full details of our derivation in this article; however, we mention the optimization problem used to obtain a form for this optimal distribution and encourage readers interested in the theory to check out Section 4 in our paper. In this optimization problem, our goal is to minimize a measure of corrective feedback, given by the value error $\mathcal{E}_k$, with respect to the distribution $p_k$ used for Bellman error minimization, at every iteration $k$. This gives rise to the following problem:

$$\min_{p_k}\ \mathbb{E}\left[\left|Q_k - Q^*\right|\right] \quad \text{s.t.} \quad Q_k = \arg\min_{Q}\ \mathbb{E}_{p_k}\left[\left(Q - \mathcal{B}^* Q_{k-1}\right)^2\right].$$
We show in our paper that the solution of this optimization problem, which we refer to as the optimal distribution, $p_k^*$, down-weights state-action pairs whose target values carry large accumulated error. By simplifying this expression, we obtain a practically viable expression for weights, $w_k$, at any iteration $k$ that can be used to re-weight the data distribution:

$$w_k(s, a) \propto \exp\left(-\frac{\gamma\, [P^{\pi_{k-1}} \Delta_{k-1}](s, a)}{\tau}\right),$$

where $\Delta_k$ is the accumulated Bellman error over iterations, and it satisfies a convenient recursion making it amenable to practical implementations,

$$\Delta_k = \left|Q_k - \mathcal{B}^* Q_{k-1}\right| + \gamma\, P^{\pi_{k-1}} \Delta_{k-1},$$

and $\pi_{k-1}$ is the Boltzmann or greedy policy corresponding to the current Q-function.
What does this expression for $w_k$ intuitively correspond to? Observe that the term $[P^{\pi_{k-1}} \Delta_{k-1}](s, a)$ appearing in the exponent in the expression for $w_k$ corresponds to the accumulated Bellman error in the target values. Our choice of $w_k$, thus, basically down-weights transitions with highly incorrect target values. This technique falls into a broader class of abstention-based techniques that are common in supervised learning settings with noisy labels, where down-weighting datapoints (transitions here) with errorful labels (target values here) can boost generalization and correctness properties of the learned model.
Why does our choice of $\Delta_k$, i.e. the sum of accumulated Bellman errors, suffice? This is because this value accounts for how error is propagated in ADP methods. Bellman errors, $\left|Q_k - \mathcal{B}^* Q_{k-1}\right|$, are propagated under the current policy $\pi_{k-1}$, and then discounted when computing target values for updates in ADP. $\Delta_k$ captures exactly this, and therefore, using this estimate in our weights suffices.
Our practical algorithm, which we refer to as DisCor (Distribution Correction), is identical to conventional ADP methods like Q-learning, with the exception that it performs a weighted Bellman backup – it assigns a weight $w_k(s, a)$ to each transition and performs a Bellman backup weighted by these weights, as shown below:

$$Q_{k+1} \leftarrow \arg\min_{Q}\ \mathbb{E}_{s, a \sim \mu}\left[w_k(s, a)\left(Q(s, a) - \mathcal{B}^* Q_k(s, a)\right)^2\right].$$
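The weighted backup described above can be sketched in a tabular setting. This is an illustrative NumPy sketch, not the authors' released implementation; the MDP shapes, the full-batch update rule, and the hyperparameters gamma and tau are all invented for the example:

```python
import numpy as np

def discor_weighted_backup(Q, Q_target, r, P, Delta, gamma=0.9, tau=1.0):
    """One DisCor-style weighted Bellman backup on a tabular Q of shape [S, A].

    P:     transition tensor of shape [S, A, S]
    Delta: running estimate of accumulated Bellman error, shape [S, A]
    """
    # Target values via the Bellman backup on the previous Q-function
    target = r + gamma * (P @ Q_target.max(axis=1))        # shape [S, A]
    bellman_err = np.abs(Q - target)
    # Propagate accumulated error under the greedy policy of Q_target
    greedy = Q_target.argmax(axis=1)
    next_Delta = Delta[np.arange(Delta.shape[0]), greedy]  # Delta at next states
    propagated = P @ next_Delta                            # shape [S, A]
    # Down-weight transitions whose targets carry large accumulated error
    w = np.exp(-gamma * propagated / tau)
    w /= w.sum()
    # Weighted regression toward the targets (a full-batch step for simplicity)
    Q_new = Q + w * (target - Q)
    # Update the accumulated-error recursion
    Delta_new = bellman_err + gamma * propagated
    return Q_new, Delta_new

# Invented tiny MDP: 3 states, 2 actions, uniform transitions
S, A = 3, 2
rng = np.random.default_rng(0)
P = np.full((S, A, S), 1.0 / S)
r = rng.random((S, A))
Q, Delta = np.zeros((S, A)), np.zeros((S, A))
for _ in range(10):
    Q, Delta = discor_weighted_backup(Q, Q, r, P, Delta)
```

The key difference from an ordinary backup is the weight vector w: transitions with large propagated error contribute less to the regression, which is the down-weighting intuition from the text.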
|
Animation and Model of Automotive Piston - MATLAB & Simulink Example
Step 1: Describe Piston Model
Step 2: Calculate and Plot Piston Height
Step 3: Calculate and Plot Volume of Piston Cylinder
Step 4: Evaluate Piston Motion for Changing Angular Speed
Step 5: Create Animation of Moving Piston
This example shows how to model the motion of an automotive piston by using MATLAB® and Symbolic Math Toolbox™.
Define the motion of an automotive piston and create an animation to model the piston motion.
The following figure shows the model of an automotive piston. The moving parts of the piston consist of a connecting rod (red line), a piston crank (green line), and a piston cylinder head (gray rectangle).
Describe the properties of the piston by defining the parameters: the cylinder stroke length $S$, the piston bore diameter $B$, the length of the connecting rod $L$, the crank radius $a$, and the crank angle $\theta$.
Define the origin O of the coordinate system at the crankshaft location. Label the nearest distance between the piston head and the crankshaft location as bottom dead center (BDC). The height of BDC is $L - a$. Label the farthest distance between the piston head and the crankshaft location as top dead center (TDC). The height of TDC is $L + a$.
The following figure is a schematic of the crank and connecting rod.
The height of the piston relative to the origin is

$$H = a \cos\theta + \sqrt{L^2 - a^2 \sin^2\theta}.$$
Define the piston height as a symbolic function by using the syms function.
syms pistHeight(L,a,theta)
pistHeight(L,a,theta) = a*cos(theta) + sqrt(L^2-a^2*sin(theta)^2);
Assume that the connecting rod length is $L = 150\ \mathrm{mm}$ and the crank radius is $a = 50\ \mathrm{mm}$. Plot the piston height as a function of the crank angle for one revolution within the interval [0 2*pi].
fplot(pistHeight(150,50,theta),[0 2*pi])
xlabel('Crank angle (rad)')
ylabel('Height (mm)')
The piston head is highest when the piston is at TDC and the crank angle is 0 or 2*pi. The piston head is lowest when the piston is at BDC and the crank angle is pi.
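These extremes can be cross-checked numerically; the following is a Python translation of the symbolic expression, used here purely for illustration, with the L = 150 mm and a = 50 mm values assumed above:

```python
import math

def piston_height(L, a, theta):
    """H = a*cos(theta) + sqrt(L**2 - a**2 * sin(theta)**2), in mm."""
    return a * math.cos(theta) + math.sqrt(L**2 - a**2 * math.sin(theta)**2)

# TDC at theta = 0 or 2*pi gives H = L + a; BDC at theta = pi gives H = L - a.
print(piston_height(150, 50, 0))        # 200.0
print(piston_height(150, 50, math.pi))  # ~100.0
```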
You can also plot the piston height for various values of \mathit{a} and \theta. Create a surface plot of the piston height by using the fsurf function. Show the piston height within the intervals 30~\mathrm{mm}<\mathit{a}<60~\mathrm{mm} and 0<\theta <2\pi.
fsurf(pistHeight(150,a,theta),[30 60 0 2*pi])
xlabel('Crank radius (mm)')
ylabel('Crank angle (rad)')
zlabel('Height (mm)')
The length of the combustion chamber is equal to the difference between the TDC location and the piston height. The volume of the piston cylinder can be expressed as
\mathit{V}=\pi \,{\left(\frac{\mathit{B}}{2}\right)}^{2}\left(\mathit{L}+\mathit{a}-\mathit{H}\right).
Define the piston volume as a symbolic function and substitute the expression for \mathit{H} with pistHeight.
syms pistVol(L,a,theta,B)
pistVol(L,a,theta,B) = pi*(B/2)^2*(L+a-pistHeight)
pistVol(L, a, theta, B) =
\frac{\pi {B}^{2} \left(L+a-a \mathrm{cos}\left(\theta \right)-\sqrt{{L}^{2}-{a}^{2} {\mathrm{sin}\left(\theta \right)}^{2}}\right)}{4}
Next, define the values for the following parameters:
the connecting rod length \mathit{L}=150~\mathrm{mm}
the crank radius \mathit{a}=50~\mathrm{mm}
the bore diameter \mathit{B}=86~\mathrm{mm}
Plot the piston volume as a function of the crank angle for one revolution within the interval [0 2*pi].
fplot(pistVol(150,50,theta,86),[0 2*pi])
ylabel('Volume (mm^3)')
The piston volume is smallest when the piston is at TDC and the crank angle is 0 or 2*pi. The piston volume is largest when the piston is at BDC and the crank angle is pi.
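The volume extremes can be checked the same way. This Python sketch reuses the height expression with the B = 86 mm bore assumed above (a toy check only; the model has no clearance volume, so the TDC volume comes out as zero):

```python
import math

def piston_volume(L, a, theta, B):
    """V = pi * (B/2)**2 * (L + a - H), with H the piston height, in mm^3."""
    H = a * math.cos(theta) + math.sqrt(L**2 - a**2 * math.sin(theta)**2)
    return math.pi * (B / 2)**2 * (L + a - H)

# Smallest at TDC (theta = 0), largest at BDC (theta = pi), where the
# difference L + a - H equals the full stroke 2*a = 100 mm.
print(piston_volume(150, 50, 0, 86))        # 0.0
print(piston_volume(150, 50, math.pi, 86))  # ~580880 mm^3
```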
Assume the crank rotates at 30 rpm for the first 3 seconds, then steadily increases from 30 to 80 rpm for the next 4 seconds, and then remains at 80 rpm.
Define the angular speed as a function of time by using the piecewise function. Multiply the angular speed by 2\pi /60 to convert the rotational speed from rpm to rad/sec.
syms t0 t
rpmConv = 2*pi/60;
angVel(t0) = piecewise(t0<=3, 30, t0>3 & t0<=7, 30 + 50/4*(t0-3), t0>7, 80)*rpmConv
angVel(t0) =
\left\{\begin{array}{cl}\pi & \text{ if }{t}_{0}\le 3\\ \frac{\pi \left(\frac{25 {t}_{0}}{2}-\frac{15}{2}\right)}{30}& \text{ if }{t}_{0}\in \left(3,7\right]\\ \frac{8 \pi }{3}& \text{ if }7<{t}_{0}\end{array}
Calculate the crank angle by integrating the angular speed using the int function. Assume an initial crank angle of \theta =0, and compute the integral of the angular speed from 0 to t.
angPos(t) = int(angVel,t0,0,t);
Find the piston height as a function of time by substituting the expression angPos for the crank angle.
H(t) = pistHeight(150,50,angPos)
\begin{array}{l}\left\{\begin{array}{cl}200& \text{ if }t=0\\ 100& \text{ if }t=3\\ \sqrt{20625}+25& \text{ if }t=7\\ 50 \mathrm{cos}\left({\sigma }_{1}\right)+\sqrt{22500-2500 {\mathrm{sin}\left({\sigma }_{1}\right)}^{2}}& \text{ if }7<t\\ \sqrt{22500-2500 {\mathrm{sin}\left({\sigma }_{2}\right)}^{2}}-50 \mathrm{cos}\left({\sigma }_{2}\right)& \text{ if }t\in \left(3,7\right]\\ 50 \mathrm{cos}\left(\pi t\right)+\sqrt{22500-2500 {\mathrm{sin}\left(\pi t\right)}^{2}}& \text{ if }t<0\vee t\in \left(0,3\right]\end{array}\\ \\ \mathrm{where}\\ \\ \mathrm{ }{\sigma }_{1}=\frac{31 \pi }{3}+\frac{8 \pi \left(t-7\right)}{3}\\ \\ \mathrm{ }{\sigma }_{2}=\frac{\pi \left(5 t+9\right) \left(t-3\right)}{24}\end{array}
Plot the piston height as a function of time. Notice that the oscillation of the piston height becomes faster between 3 and 7 seconds.
fplot(H(t),[0 10])
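For readers without Symbolic Math Toolbox, the crank angle can also be obtained numerically. This Python sketch integrates the same piecewise angular speed with a trapezoidal rule (the step count n is an arbitrary choice, not part of the example):

```python
import math

def ang_vel(t):
    """Angular speed in rad/s: 30 rpm, ramping to 80 rpm over 3 < t <= 7 s."""
    rpm = 30 if t <= 3 else (30 + 50 / 4 * (t - 3) if t <= 7 else 80)
    return rpm * 2 * math.pi / 60

def ang_pos(t, n=100_000):
    """Crank angle: trapezoidal integration of ang_vel from 0 to t."""
    h = t / n
    s = 0.5 * (ang_vel(0) + ang_vel(t)) + sum(ang_vel(i * h) for i in range(1, n))
    return s * h

# After 3 s at 30 rpm the crank has turned 1.5 revolutions = 3*pi rad:
print(ang_pos(3))  # ~9.42
```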
Create an animation of the moving piston given a changing angular speed.
First, create a new figure. Plot the cylinder walls that have fixed locations. Set the x-axis and y-axis to be equal length.
figure
plot([-43 -43],[50 210],'k','LineWidth',3)
hold on
plot([43 43],[50 210],'k','LineWidth',3)
plot([-43 43],[210 210],'k','LineWidth',3)
axis equal
Next, create a stop-motion animation object of the piston head by using the fanimator function. By default, fanimator creates an animation object by generating 10 frames per unit time within the range of t from 0 to 10. Model the piston head as a rectangle with a thickness of 10 mm and variable height H(t). Plot the piston head by using the rectangle function.
fanimator(@rectangle,'Position',[-43 H(t) 86 10],'FaceColor',[0.8 0.8 0.8])
Add the animation objects of the connecting rod and the piston crank. Add a piece of text to count the elapsed time.
fanimator(@(t) plot([0 50*sin(angPos(t))],[H(t) 50*cos(angPos(t))],'r-','LineWidth',3))
fanimator(@(t) plot([0 50*sin(angPos(t))],[0 50*cos(angPos(t))],'g-','LineWidth',3))
fanimator(@(t) text(-25,225,"Timer: "+num2str(t,2)));
Use the command playAnimation to play the animation of the moving piston.
|
Orbit equation
In astrodynamics an orbit equation defines the path of orbiting body {\displaystyle m_{2}\,\!} around central body {\displaystyle m_{1}\,\!} relative to {\displaystyle m_{1}\,\!}, without specifying position as a function of time. Under standard assumptions, a body moving under the influence of a force, directed to a central body, with a magnitude inversely proportional to the square of the distance (such as gravity), has an orbit that is a conic section (i.e. circular orbit, elliptic orbit, parabolic trajectory, hyperbolic trajectory, or radial trajectory) with the central body located at one of the two foci, or the focus (Kepler's first law).
Central, inverse-square law force
Low-energy trajectories
Categorization of orbits
If the conic section intersects the central body, then the actual trajectory can only be the part above the surface, but for that part the orbit equation and many related formulas still apply, as long as it is a freefall (situation of weightlessness).
Consider a two-body system consisting of a central body of mass M and a much smaller, orbiting body of mass {\displaystyle m}, and suppose the two bodies interact via a central, inverse-square law force (such as gravitation). In polar coordinates, the orbit equation can be written as [1]
{\displaystyle r={\frac {\ell ^{2}}{m^{2}\mu }}{\frac {1}{1+e\cos \theta }}}
where {\displaystyle r} is the separation distance between the two bodies and {\displaystyle \theta } is the angle that {\displaystyle \mathbf {r} } makes with the axis of periapsis (also called the true anomaly). The parameter {\displaystyle \ell } is the angular momentum of the orbiting body about the central body, and is equal to {\displaystyle mr^{2}{\dot {\theta }}}, or the mass multiplied by the magnitude of the cross product of the relative position and velocity vectors of the two bodies. [note 1] The parameter {\displaystyle \mu } is the constant for which {\displaystyle \mu /r^{2}} equals the acceleration of the smaller body (for gravitation, {\displaystyle \mu } is {\displaystyle -GM}). For a given orbit, the larger {\displaystyle \mu }, the faster the orbiting body moves in it: twice as fast if the attraction is four times as strong. The parameter {\displaystyle e} is the eccentricity of the orbit, and is given by [1]
{\displaystyle e={\sqrt {1+{\frac {2E\ell ^{2}}{m^{3}\mu ^{2}}}}}}
where {\displaystyle E} is the energy of the orbit.
The above relation between {\displaystyle r} and {\displaystyle \theta } describes a conic section. [1] The value of {\displaystyle e} controls what kind of conic section the orbit is:
if {\displaystyle e<1}, the orbit is elliptic;
if {\displaystyle e=1}, the orbit is parabolic;
if {\displaystyle e>1}, the orbit is hyperbolic.
The minimum value of {\displaystyle r} in the equation is:
{\displaystyle r={{\ell ^{2}} \over {m^{2}\mu }}{{1} \over {1+e}}}
while, if {\displaystyle e<1}, the maximum value is:
{\displaystyle r={{\ell ^{2}} \over {m^{2}\mu }}{{1} \over {1-e}}}
If the maximum is less than the radius of the central body, then the conic section is an ellipse which is fully inside the central body and no part of it is a possible trajectory. If the maximum is more, but the minimum is less than the radius, part of the trajectory is possible:
if the energy is non-negative (parabolic or hyperbolic orbit): the motion is either away from the central body, or towards it.
if the energy is negative: the motion can be first away from the central body, up to {\displaystyle r={{\ell ^{2}} \over {m^{2}\mu }}{{1} \over {1-e}}}, after which the object falls back.
If {\displaystyle r} becomes such that the orbiting body enters an atmosphere, then the standard assumptions no longer apply, as in atmospheric reentry.
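As a small illustration of these formulas, here is a Python sketch in toy units (m = μ = ℓ = 1, with μ taken positive so the attractive case gives r > 0); the function and variable names are mine, not part of the article:

```python
import math

def eccentricity(E, ell, m, mu):
    """e = sqrt(1 + 2*E*ell**2/(m**3*mu**2)): e<1 elliptic, e=1 parabolic, e>1 hyperbolic."""
    return math.sqrt(1 + 2 * E * ell**2 / (m**3 * mu**2))

def r_extremes(ell, m, mu, e):
    """Periapsis p/(1+e) and, for e < 1, apoapsis p/(1-e), with p = ell**2/(m**2*mu)."""
    p = ell**2 / (m**2 * mu)
    return p / (1 + e), (p / (1 - e) if e < 1 else math.inf)

# Toy units: a bound orbit with E = -0.375 gives e = 0.5, so r stays
# between p/(1+e) = 2/3 and p/(1-e) = 2.
e = eccentricity(-0.375, 1, 1, 1)
print(e, r_extremes(1, 1, 1, e))
```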
If the central body is the Earth, and the energy is only slightly larger than the potential energy at the surface of the Earth, then the orbit is elliptic with eccentricity close to 1 and one end of the ellipse just beyond the center of the Earth, and the other end just above the surface. Only a small part of the ellipse is applicable.
If the horizontal speed is {\displaystyle v\,\!}, then the periapsis distance is {\displaystyle {\frac {v^{2}}{2g}}}. The energy at the surface of the Earth corresponds to that of an elliptic orbit with {\displaystyle a=R/2\,\!} (with {\displaystyle R\,\!} the radius of the Earth), which can not actually exist because it is an ellipse fully below the surface. The energy increases with increasing {\displaystyle a} at a rate {\displaystyle 2g\,\!}. The maximum height above the surface of the orbit is the length of the ellipse, minus {\displaystyle R\,\!}, minus the part "below" the center of the Earth, hence twice the increase of {\displaystyle a\,\!} minus the periapsis distance. At the top, the potential energy is {\displaystyle g} times this height, and the kinetic energy is {\displaystyle {\frac {v^{2}}{2}}}. This adds up to the energy increase just mentioned. The width of the ellipse is 19 minutes times {\displaystyle v\,\!}.
The part of the ellipse above the surface can be approximated by a part of a parabola, which is obtained in a model where gravity is assumed constant. This should be distinguished from the parabolic orbit in the sense of astrodynamics, where the velocity is the escape velocity.
See also trajectory.
Consider orbits which are at one point horizontal, near the surface of the Earth. For increasing speeds at this point the orbits are subsequently:
part of an ellipse with vertical major axis, with the center of the Earth as the far focus (throwing a stone, sub-orbital spaceflight, ballistic missile)
a circle just above the surface of the Earth (Low Earth orbit)
an ellipse with vertical major axis, with the center of the Earth as the near focus
Note that in the sequence above, {\displaystyle h}, {\displaystyle \epsilon } and {\displaystyle a} increase monotonically, but {\displaystyle e} first decreases from 1 to 0, then increases from 0 to infinity. The reversal is when the center of the Earth changes from being the far focus to being the near focus (the other focus starts near the surface and passes the center of the Earth). We have
{\displaystyle e=\left|{\frac {R}{a}}-1\right|}
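A one-line check of this formula (the Earth-radius value is illustrative):

```python
def ecc_from_semimajor(R, a):
    """e = |R/a - 1| for the orbits in the sequence above (horizontal at the surface)."""
    return abs(R / a - 1)

R = 6371.0  # km, illustrative Earth radius
print(ecc_from_semimajor(R, R / 2))  # 1.0  (the degenerate sub-surface ellipse)
print(ecc_from_semimajor(R, R))      # 0.0  (circular low Earth orbit)
print(ecc_from_semimajor(R, 2 * R))  # 0.5
```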
Extending this to orbits which are horizontal at another height, and orbits of which the extrapolation is horizontal below the surface of the Earth, we get a categorization of all orbits, except the radial trajectories, for which, by the way, the orbit equation can not be used. In this categorization ellipses are considered twice, so for ellipses with both sides above the surface one can restrict oneself to taking the side which is lower as the reference side, while for ellipses of which only one side is above the surface, taking that side.
↑ There is a related parameter, known as the specific relative angular momentum, {\displaystyle h}. It is related to {\displaystyle \ell } by {\displaystyle h=\ell /m}.
In gravitationally bound systems, the orbital speed of an astronomical body or object is the speed at which it orbits around either the barycenter or, if one object is much more massive than the other bodies in the system, its speed relative to the center of mass of the most massive body.
In astrodynamics, the orbital eccentricity of an astronomical object is a dimensionless parameter that determines the amount by which its orbit around another body deviates from a perfect circle. A value of 0 is a circular orbit, values between 0 and 1 form an elliptic orbit, 1 is a parabolic escape orbit, and greater than 1 is a hyperbola. The term derives its name from the parameters of conic sections, as every Kepler orbit is a conic section. It is normally used for the isolated two-body problem, but extensions exist for objects following a Klemperer rosette orbit through the galaxy.
1 2 3 Fetter, Alexander; Walecka, John (2003). Theoretical Mechanics of Particles and Continua. Dover Publications. pp. 13–22.
|
NOMINATIONS FOR JSA AWARD
Northridge 20 Years After
Seismological Research Letters January 01, 2014, Vol.85, 1-4. doi:https://doi.org/10.1785/0220130194
Preface to the Focus Section on the 20 April 2013 Magnitude 6.6 Lushan, China, Earthquake
Huajian Yao; Zhigang Peng
Focal Mechanisms of the 2013 Mw 6.6 Lushan, China Earthquake and High‐Resolution Aftershock Relocations
Libo Han; Xiangfang Zeng; Changsheng Jiang; Sidao Ni; Haijiang Zhang; Feng Long
Seismological Research Letters January 01, 2014, Vol.85, 8-14. doi:https://doi.org/10.1785/0220130083
Kinematic Rupture Model and Hypocenter Relocation of the 2013 Mw 6.6 Lushan Earthquake Constrained by Strong‐Motion and Teleseismic Data
Yong Zhang; Rongjiang Wang; Yun‐tai Chen; Lisheng Xu; Fang Du; Mingpei Jin; Hongwei Tu; Torsten Dahm
Seismological Research Letters January 01, 2014, Vol.85, 15-22. doi:https://doi.org/10.1785/0220130126
Near‐Source Vertical and Horizontal Strong Ground Motion from the 20 April 2013 Mw 6.8 Lushan Earthquake in China
Junju Xie; Xiaojun Li; Zengping Wen; Chunquan Wu
The 2013 Lushan Ms 7.0 Earthquake: Varied Seismogenic Structure from the 2008 Wenchuan Earthquake
Chen Lichun; Wang Hu; Ran Yongkang; Lei Shengxue; Li Xi; Wu Fuyao; Ma Xingquan; Liu Chenglong; Han Fei
The 2013 Lushan Earthquake in China Tests Hazard Assessments
Mian Liu; Gang Luo; Hui Wang
Stress, Distance, Magnitude, and Clustering Influences on the Success or Failure of an Aftershock Forecast: The 2013 M 6.6 Lushan Earthquake and Other Examples
Tom Parsons; Margaret Segou
Coulomb Stress Change and Evolution Induced by the 2008 Wenchuan Earthquake and its Delayed Triggering of the 2013 Mw 6.6 Lushan Earthquake
Yanzhao Wang; Fan Wang; Min Wang; Zheng‐Kang Shen; Yongge Wan
Ke Jia; Shiyong Zhou; Jiancang Zhuang; Changsheng Jiang
K. M. Scharer; J. B. Salisbury; J R. Arrowsmith; T. K. Rockwell
Model Update January 2013: Upper Mantle Heterogeneity beneath North America from Travel‐Time Tomography with Global and USArray Transportable Array Data
Scott Burdick; Robert D. van der Hilst; Frank L. Vernon; Vladik Martynov; Trilby Cox; Jennifer Eakins; Gulsum H. Karasu; Jonathan Tylell; Luciana Astiz; Gary L. Pavlis
Estimating Subsurface Shear Velocity with Radial to Vertical Ratio of Local P Waves
Sidao Ni; Zhiwei Li; Paul Somerville
Characterization of the 2011 Mineral, Virginia, Earthquake Effects and Epicenter from Website Traffic Analysis
Rémy Bossu; Sandrine Lefebvre; Yves Cansi; Gilles Mazet‐Roux
A Duration Magnitude Scale for the Irpinia Seismic Network, Southern Italy
Simona Colombelli; Antonio Emolo; Aldo Zollo
Seismological Research Letters January 01, 2014, Vol.85, 98-107. doi:https://doi.org/10.1785/0220130055
MOZART: A Seismological Investigation of the East African Rift in Central Mozambique
J. F. B. D. Fonseca; J. Chamussa; A. Domingues; G. Helffrich; E. Antunes; G. van Aswegen; L. V. Pinto; S. Custódio; V. J. Manhiça
Seismic‐Hazard Assessment in the Kachchh Region of Gujarat (India) through Deterministic Modeling Using a Semi‐Empirical Approach
Max Wyss; Zhongliang Wu
L.‐F. Zhao; X.‐B. Xie; W.‐M. Wang; Z.‐X. Yao
Taxonomy of κ: A Review of Definitions and Estimation Approaches Targeted to Applications
Olga‐Joan Ktenidou; Fabrice Cotton; Norman A. Abrahamson; John G. Anderson
J. R. Evans; R. M. Allen; A. I. Chung; E. S. Cochran; R. Guy; M. Hellweg; J. F. Lawrence
Robert Thériault; France St‐Laurent; Friedemann T. Freund; John S. Derr
Comment on “A Unified Seismic Catalog for the Iranian Plateau (1900–2011)” by Mohammad P. Shahvar, Mehdi Zare, and Silvia Castellaro
Noorbakhsh Mirzaei; Elham Shabani; Seyed Hasan Mousavi Bafrouei
Reply to “Comment on ‘A Unified Seismic Catalog for the Iranian Plateau (1900–2011)’ by Mohammad P. Shahvar, Mehdi Zaré, and Silvia Castellaro” by Noorbakhsh Mirzaei, Elham Shabani, and S...
Mohammad P. Shahvar; Mehdi Zaré; Silvia Castellaro
Comment on “Seismic Hazard Analysis for the UK: Sensitivity to Spatial Seismicity Modelling and Ground Motion Prediction Equations” by K. Goda, W. P. Aspinall, and C. A. Taylor
Reply to “Comment on ‘Seismic Hazard Analysis for the U.K.: Sensitivity to Spatial Seismicity Modelling and Ground Motion Prediction Equations’ by Katsuichiro Goda, Willy P. Aspinall, and...
Katsuichiro Goda; Willy P. Aspinall; Colin A. Taylor
The Waveform Browser, an Interactive and Intuitive Tool for Exploring Seismic Event Data
Maximiliano J. Bezada
Instrument Corrections by Time‐Domain Deconvolution
J. F. Anderson; J. M. Lees
Eastern Section‐SSA 2013 Meeting Report
Seismological Research Letters January 01, 2014, Vol.85, 5. doi:https://doi.org/10.1785/0220130205
JESUIT SEISMOLOGICAL ASSOCIATION AWARD FOR CONTRIBUTIONS TO OBSERVATIONAL SEISMOLOGY
|
# Kevin Leyden and Bill Goodwine, [http://controls.ame.nd.edu/~bill/papers/2016/icra_16.pdf Using Fractional-Order Differential Equations for Health Monitoring of a System of Cooperating Robots]. Submitted to the 2016 IEEE International Conference on Robotics and Automation.
# Bill Goodwine, Towards General Results in Bifurcations in Optimal Solutions for Symmetric Distributed Robotic Formation Control. Submitted to the 2015 IEEE International Symposium on System Integration.
# Baoyang Deng, Michael O'Connor and Bill Goodwine, Bifurcations and Symmetry in Two Optimal Formation Control Problems for Mobile Robotic Systems. Accepted for publication pending revisions in Robotica.
{\displaystyle {\dot {x}}=Ax+Bu}
{\displaystyle {\dot {x}}=f(x)+g(x)u}
|
Set Object Poses
SetObjectPoses
is useful for setting a scene to the exact state of a previous initialization without relying on the Object Position Randomization’s random seed. Use a previous metadata dump of SimObjects to get position/rotation data for each SimObject. Sets up a scene according to the provided objects and poses. Objects which are specified twice will be copied.
All moveable and pickupable objects will be removed from the scene if they are not specified.
action='SetObjectPoses',
objectPoses=[
    {
        "objectName": "Alarm_Clock_19",
        "position": {...},
        "rotation": {...}
    }
]
Set Object Poses Parameters
objectPoses
List of object names and their poses. This information can be dumped from the metadata of a prior episode. Each pose must contain keys for
objectName: The name of the Sim Object reported by the object metadata's
position: Global coordinate position of the object, as reported in the object's
rotation: Local coordinate rotation of the object, as reported in the object's
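A hedged sketch of the dump-and-restore workflow described above (the helper name poses_from_metadata and the sample values are mine, not part of the API; the metadata field names follow the parameter descriptions on this page):

```python
# Hypothetical helper: turn an object-metadata dump from a prior episode into
# the objectPoses payload for SetObjectPoses. Field names follow the parameter
# descriptions above; the sample values are illustrative, not real scene data.
def poses_from_metadata(objects):
    return [
        {"objectName": o["name"], "position": o["position"], "rotation": o["rotation"]}
        for o in objects
        if o.get("moveable") or o.get("pickupable")
    ]

dump = [{"name": "Alarm_Clock_19",
         "position": {"x": -1.0, "y": 0.9, "z": 0.5},
         "rotation": {"x": 0.0, "y": 90.0, "z": 0.0},
         "pickupable": True}]
payload = poses_from_metadata(dump)
# controller.step(action="SetObjectPoses", objectPoses=payload)
```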
Set Mass Properties
SetMassProperties
changes the mass properties of any
action="SetMassProperties",
objectId="Apple|+1.25|+0.25|-0.75",
mass=22.5,
drag=15.9,
angularDrag=5.6
Set Mass Properties Parameters
The unique identifier of a sim object, found in the object metadata.
The new object's mass, in kilograms. Must be greater than 0.
The new drag coefficient of the object, which determines its resistance when it is in motion. Higher values slow the object down more. Must be greater than 0.
The new angular drag coefficient of the object, which determines its resistance when it is in rotational motion. Higher values make it harder to rotate the object. Must be greater than 0.
Object Type Temperature Decay Time
SetRoomTempDecayTimeForType
changes all objects specified by objectType in the current scene to have a new Room Temp Decay Time specified by TimeUntilRoomTemp. This can be done to adjust the amount of time it takes for specific object types to return to room temperature. By default, all objects begin at Room Temperature. Objects placed on/in Hot or Cold sources (stove burner, microwave, fridge etc.) will have their ObjectTemperature value changed to Hot or Cold. If the object is removed from a Hot or Cold source, they will return to room temperature over time. The default time it takes for an object to decay to Room Temp is 10 seconds.
action="SetRoomTempDecayTimeForType",
objectType="Bread",
TimeUntilRoomTemp=20.0
The object type to change the decay timer of. See a full list of Object Types on the Object Types page.
TimeUntilRoomTemp
The amount of time it will take for an object to decay from Hot/Cold to Room Temp.
Global Temperature Decay Time
SetGlobalRoomTempDecayTime
changes all objects in the current scene to have a new Room Temp Decay Time specified by TimeUntilRoomTemp. By default, all objects begin at Room Temperature. Objects placed on/in Hot or Cold sources (stove burner, microwave, fridge etc.) will have their ObjectTemperature value changed to Hot or Cold. If the object is removed from a Hot or Cold source, they will return to room temperature over time. The default time it takes for an object to decay to Room Temp is 10 seconds.
action="SetGlobalRoomTempDecayTime",
TimeUntilRoomTemp=20.0
Disable Temperature Decay
SetDecayTemperatureBool
disables the decay over time of the ObjectTemperature of objects. If set to False, objects will not decay to room temperature and will remain Hot/Cold even if removed from the Hot/Cold source.
action='SetDecayTemperatureBool',
allowDecayTemperature=False
allowDecayTemperature
Set to allow object Temperatures to decay over time.
RemoveFromScene
completely destroys and removes an object from the scene. This action cannot be undone (without resetting the scene).
action="RemoveFromScene",
objectId="Mug|+0.25|-0.27|+1.05"
Remove from Scene Parameters
The unique identifier of the object in the scene.
DisableObject
disables an object from being visible in the scene. Unlike the RemoveFromScene action, this does not permanently destroy an object in the scene, as it can be toggled back on later in the episode using EnableObject.
action="DisableObject",
objectId="DiningTable|+1.0|+1.0|+1.0"
Disable Object Parameters
The string id of the sim object to disable. Note that this may cause unintended interactions with other objects after an object is disabled. For example, a table with a Plate and Apple resting on top of it will cause both the Plate and Apple to fall to the floor if the table object supporting them is disabled.
EnableObject
activates an object if it has been previously disabled by DisableObject.
action="EnableObject",
objectId="DiningTable|+1.0|+1.0|+1.0"
Enable Object Parameters
The string id of the sim object to reactivate. Note that this may cause unintended interactions with other objects after an object is reactivated. Enabled objects will return to their original location, which may cause clipping or weird collision with other objects if another sim object or the agent has moved into the area where the enabled object is.
|
DeBank 2.0 - ETNA Network || Game
DeBank is ETNA Network's decentralized lending and borrowing protocol. It allows people to take out loans using a range of assets as collateral including whitelisted NFTs and other nontangible tokens.
In addition to this, the protocol gives out loans at zero interest rate to borrowers who use ETNA Token or ETNA NFTs as collateral.
DeBank - My Loans UI
The following is a detailed description of the protocol
1. List of assets that can be Lent to or borrowed from the protocol:
Only stable coins can be lent to or borrowed from the protocol.
i. On BSC, only BUSD and USDT can be lent or borrowed
ii. On Polygon, only USDC and USDT can be lent or borrowed.
2. List of assets that can be used as collateral:
i. On BSC, BNB, ETNA token and ETNA NFTs are assets that can be used as collateral. Any other ERC20 token on BSC can be integrated at a later time.
ii. On Polygon, Matic, ETNA token and ETNA NFTs are assets that can be used as collateral. Any other ERC20 token on Polygon can be integrated at a later time.
Lending and Borrowing Interest Rates
1. Interest rate of Stable coin lent to or borrowed from the protocol:
The interest rate algorithm is as follows:
This gives the lending rate as:
r_c=r_{c(min)}+\frac{P_B}{0.95}\times(r_{c(max)}-r_{c(min)})
And the borrowing rate as:
r_b=r_{b(min)}+\frac{P_B}{0.95}\times(r_{b(max)}-r_{b(min)})
The above rates are variable, so when they change as P_B changes, the rates on active loans and active deposits (assets lent to the platform) also change, except for borrowers who took a loan at a fixed rate. In that case, the borrower's rate is at a premium and is given by:
r_{b(fix)}=r_f+r_{b(min)}+\frac{P_B}{0.95}\times(r_{b(max)}-r_{b(min)})
i.e., such a borrower's rate is increased by r_f, and it stays fixed even if the rates in the protocol change due to changes in P_B.
P_B is the ratio of the amount borrowed to the total deposited for any given borrowable asset. It is given by:
P_B= \frac{Q_{BORROWED}}{Q_{TOTAL}}
If P_B goes above 0.95 (say 0.951), borrowing access for that particular asset stops until the percentage drops, either by some borrowers paying back all or part of their loans or by more lenders lending to the protocol.
For example, if the total amount of BUSD lent to the protocol at a given time is Q_{TOTAL}=480,000 and the total amount already borrowed is Q_{BORROWED}=250,000, then:
P_B=\frac{250,000}{480,000} = 0.5208
With the following parameters set as follows:
r_{c(min)}=10\%
r_{c(max)}=20\%
r_{b(min)}=13\%
r_{b(max)}=30\%
r_{f}=5\%
Then lenders get paid:
r_c=10+\frac{0.5208}{0.95}\times(20-10)=15.48\%
And borrowers get charged:
r_b=13+\frac{0.5208}{0.95}\times(30-13)=22.32\%
Both the lender's and the borrower's rates vary as P_B changes.
Borrowing Fixed Rate:
If the borrower instead takes the loan at a fixed rate, such a borrower will be charged a fixed interest rate of:
r_{b(fix)}=5\%+22.32\% = 27.32\%
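The worked example above can be reproduced with a short script (the function names and parameter defaults are taken from the example, not from the protocol's actual code):

```python
def lending_rate(P_B, r_min=0.10, r_max=0.20):
    """r_c = r_c(min) + P_B/0.95 * (r_c(max) - r_c(min))."""
    return r_min + P_B / 0.95 * (r_max - r_min)

def borrowing_rate(P_B, r_min=0.13, r_max=0.30, fixed=False, r_f=0.05):
    """Variable borrowing rate, plus the fixed-rate premium r_f when fixed=True."""
    r = r_min + P_B / 0.95 * (r_max - r_min)
    return r + r_f if fixed else r

P_B = 250_000 / 480_000  # utilisation from the worked example above
print(round(lending_rate(P_B) * 100, 2))                # ~15.48
print(round(borrowing_rate(P_B) * 100, 2))              # ~22.32
print(round(borrowing_rate(P_B, fixed=True) * 100, 2))  # ~27.32
```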
Exceptions with ETNA assets used as collateral:
There is an exception when a borrower's loan is collateralized with ETNA tokens or ETNA NFTs. In both cases, the loan is charged zero interest.
Collateral Borrowing Power
Each collateral asset has a borrowing power defined by a borrowing factor B_F. For each collateral asset, the maximum amount that can be borrowed, A_{borrow}, is:
A_{borrow}=Col_{value}\times B_F
where Col_{value} is the value of the collateral or collaterals used.
For example, consider a collateral asset (say ETNA Token) with B_F=0.25. If a borrower uses ETNA worth $10,000 as collateral, the maximum amount the borrower can take out as a loan is:
A_{borrow}=10,000 \times 0.25 = \$2,500
Which can be taken out in BUSD, USDC or USDT.
With this collateral value, the user can take out a maximum of 2,500 but can choose to borrow any amount between 0 and 2,500 BUSD/USDC; the remaining amount not taken will remain as accessible credit that can be drawn at a later time.
If the value of the collateral used by a borrower depreciates, the loan taken by the borrower is flagged for liquidation once the collateral depreciates to a certain value.
The value at which liquidation flagging occurs is determined by a liquidation factor L_F. The value the collateral depreciates to at which the loan is flagged for liquidation, Col_{value(L)}, also depends on the amount borrowed and is given by:
Col_{value(L)}=A_{borrowed}\times (1+L_F)
If L_F=0.2 and the user takes out the maximum amount of 2,500 BUSD, then the loan is flagged for liquidation when the collateral value drops to:
Col_{value(L)}=2,500\times (1+0.2)=\$3,000
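A small sketch of the borrowing-limit and liquidation-threshold formulas (illustrative only; the function names are mine):

```python
def borrow_limit(collateral_value, B_F):
    """Maximum loan for one collateral: A_borrow = Col_value * B_F."""
    return collateral_value * B_F

def liquidation_threshold(amount_borrowed, L_F):
    """Collateral value at which the loan is flagged: A_borrowed * (1 + L_F)."""
    return amount_borrowed * (1 + L_F)

limit = borrow_limit(10_000, 0.25)        # the $2,500 ETNA example above
print(liquidation_threshold(limit, 0.2))  # ~3000.0
```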
The borrower can prevent liquidation by paying back some part or the whole loan or by depositing more collateral to increase his borrowable limit.
When liquidation occurs, the total amount by which the loan is delinquent is liquidated, plus a liquidation fee; this is given by:
A_{liquidation}=A_{borrowed}\times (1+L_{fee})
where L_{fee} is the liquidation fee factor. If L_{fee}=0.1, then for the loan in the previous section, the amount of the borrower's collateral that is liquidated is:
A_{liquidation}=2500\times (1+0.1) =\$2,750
Since the collateral in this example is ETNA, then ETNA amount that worth $2,750 is liquidated.
Again, the user can prevent this by paying all or part of the loan back or by depositing more collateral to increase his or her borrowable limit.
The protocol has a function for appointing liquidator account roles. It allows the appointment of up to 100 accounts as liquidators. When an account is appointed as a liquidator, it can see loans that are flagged for liquidation, which are displayed in the liquidation dashboard.
Accounts appointed are only accounts from users of the vault contract. Top users of the vault are appointed to be liquidators.
Liquidator payment:
The liquidation fee is given by:
Liquidation_{fee}=A_{liquidation}-A_{borrowed} =A_{borrowed}\times L_{fee}
As in the above example where L_{fee}=0.1, the total fee charged for liquidation is:
Liquidation_{fee}=2,500\times 0.1 =\$250
The payment for liquidators is a percentage of this fee, defined by a payment parameter P_{liquidator}. This payment is given by:
Liquidator_{payment}=Liquidation_{fee}\times P_{liquidator}
If P_{liquidator}=0.4, the liquidator's payment is:
Liquidator_{payment}=250\times 0.4 =\$100
Admin payment:
The remaining portion of the liquidation fee is paid as an admin fee and it is given by:
Admin_{payment}=Liquidation_{fee}\times (1-P_{liquidator})
For this example, we have
Admin_{payment}=\$150
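The fee split can be sketched the same way (again, the names are mine; the numbers reproduce the worked example):

```python
def liquidation_split(amount_borrowed, L_fee=0.1, P_liquidator=0.4):
    """Total liquidation fee and its split between liquidator and admin."""
    fee = amount_borrowed * L_fee
    return fee, fee * P_liquidator, fee * (1 - P_liquidator)

fee, liquidator_pay, admin_pay = liquidation_split(2_500)
print(fee, liquidator_pay, admin_pay)  # ~250, ~100, ~150
```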
Amount Borrowed vs Amount owed
For example, in this scenario, the user deposited the following:
10,000 ETNA, worth $2,000
10 BNB, worth $6,000
The ETNA borrowing factor is 0.2 and the BNB borrowing factor is 0.3.
This means the user's allowed borrowing amount is: 2,000 × 0.2 + 6,000 × 0.3 = $2,200.
Suppose the user borrows all of the $2,200.
If the user does not pay anything back, then with an effective APR of 7.5% (assumed constant for calculation purposes), after a year the user will owe $2,365.
Liquidation then occurs when the collateral value drops to amount owed × (1 + 0.25) = 2,365 × 1.25 = $2,956.25.
At this time, assume the collateral's sum total is now $2,940 due to a market crash. Liquidation should always take the most from BNB and then the rest from ETNA; so if any portion of collateral is to be returned in the case of multiple collaterals, it should be the ETNA that is returned, since liquidation will always take the most possible from the other collaterals used, and whatever remains to be taken is taken from ETNA.
Take the most from other collateral first
Note: All the dollar values are calculated in terms of the collateral used and at the market rate at the time the liquidation occurred.
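The arithmetic in this scenario can be checked with a few lines (using the constant APR assumed above):

```python
def allowed_borrow(collaterals):
    """Total borrowing limit: sum of collateral value times borrowing factor."""
    return sum(value * B_F for value, B_F in collaterals)

limit = allowed_borrow([(2_000, 0.2), (6_000, 0.3)])  # ETNA + BNB deposits
owed = limit * 1.075         # one year at the assumed constant 7.5% APR
flag_at = owed * 1.25        # flagged when collateral falls to owed * (1 + 0.25)
print(limit, owed, flag_at)  # ~2200, ~2365, ~2956.25
```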
|
Compare how distance and velocity are related with these two scenarios:
A ferry crosses the bay so that its distance (in miles) from the dock at time t is d(t) = 1-\operatorname{cos }t. Find the velocity, d^\prime(t), at times t = 1, \pi, and 5 hours. Explain what concepts of calculus you applied in order to solve this problem.
Note that d^\prime(t) = v(t). Describe what velocity looks like on a distance graph.
When a cat chases a mouse, the cat's velocity, measured in feet per second, is v(t) = 3t. Sketch a graph and find the distance the cat ran in the first 5 seconds. Explain what concepts of calculus you applied in order to solve this problem.
Describe what distance (or displacement) looks like on a velocity graph.
Both (a) and (b) involve distance and velocity. However, each required a different method or approach. Describe the relationship between distance and velocity, mentioning the derivative and area under a curve.
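A small numeric sketch of both parts (function names are mine): part (a) uses the derivative d′(t) = sin t of the ferry's distance, and part (b) approximates the area under v(t) = 3t with a midpoint Riemann sum.

```python
import math

# (a) Velocity is the derivative of distance: d(t) = 1 - cos t gives d'(t) = sin t.
def ferry_velocity(t):
    return math.sin(t)

# (b) Distance is the area under the velocity curve: integrate v(t) = 3t on [0, T]
# with a midpoint Riemann sum (exact here because v is linear).
def cat_distance(T, n=1000):
    h = T / n
    return sum(3 * (i + 0.5) * h * h for i in range(n))
```

`ferry_velocity(1)` gives sin 1 ≈ 0.841 miles per hour, and `cat_distance(5)` recovers the exact area ∫₀⁵ 3t dt = 37.5 feet.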
|
First Contact With Carbon-Based Aliens | Metaculus
First Contact With Carbon-Based Aliens
As of question writing, all known life is carbon-based, in the sense that it needs to contain carbon atoms to survive.
But life could take many forms:
Wikipedia has a handy list of hypothetical types of biochemistry, notably silicon biochemistry.
Life could be based on non-organic chemistry (e.g. inorganic chemistry, or nuclear chemistry in the degenerate crust on the surface of a neutron star*).
Life could also not be chemistry based at all. It could be electrical (e.g. Ems) or mechanical (e.g. clockwork).
Life could operate on vastly different time / space scales from us (e.g. a cloud of interstellar stuff somehow consistently implementing a sentient computation).
These examples are not necessarily mutually exclusive, and I obviously make no claim regarding their respective feasibility/likelihood. They are rather meant to suggest the vastness of design-space.
If we encounter a phenomenon that is widely considered by the scientific community to be an alien life-form, will all simple life-forms we discover be carbon-based?
Life-form details:
The life-form has to have originated independently from earth life. That is: earth life can be a consequence of the alien life-form, they can share a cause, but earth life cannot have caused the alien life.
The life-forms that count for this question are ones on the complexity level of our single-celled organisms or lower (as determined by a poll of xeno-biologists if there is any ambiguity). If there are none, then the simplest life-forms we have found are taken for resolution.
The life-form has to need less than 1% of its atoms to be carbon atoms in order to keep being alive. It can incidentally contain carbon atoms, as long as they could theoretically be absent and the life-form still be alive.
The scientific community has to have reached a consensus as judged by Metaculus admins.
This resolves positive if any life-form we encounter satisfies points 1, 2, and 3.
This resolves negative if all the life-forms we encounter that satisfy points 1 and 2 do not satisfy point 3.
This resolves ambiguous if no life-form that satisfies point 1. is found before 2500, or if before then we have conclusive evidence that none exists in the observable universe.
This resolves 50 years after we first discover an alien life-form that satisfies condition 1., to give time for consensus forming.
* My thanks to @(Uncle Jeff) for this example.
† Note that in this sense humans are only "based" on hydrogen (60%), oxygen (25%), carbon (10%) and nitrogen (1.5%).
Physical Sciences – Chemistry
|
Quadratic formula - Wikipedia @ WordDisk
Derivations of the formula
In elementary algebra, the quadratic formula is a formula that provides the solution(s) to a quadratic equation. There are other ways of solving a quadratic equation instead of using the quadratic formula, such as factoring (direct factoring, grouping, AC method), completing the square, graphing and others.
Formula that provides the solutions to a quadratic equation
Not to be confused with quadratic function or quadratic equation.
The quadratic function y = (1/2)x² − (5/2)x + 2, with roots x = 1 and x = 4.
Given a general quadratic equation of the form
{\displaystyle ax^{2}+bx+c=0}
whose discriminant
{\displaystyle b^{2}-4ac}
is positive (with x representing an unknown, a, b and c representing constants with a ≠ 0), the quadratic formula is:
{\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}\ \ }
where the plus–minus symbol "±" indicates that the quadratic equation has two solutions.[1] Written separately, they become:
{\displaystyle x_{1}={\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}\quad {\text{and}}\quad x_{2}={\frac {-b-{\sqrt {b^{2}-4ac}}}{2a}}}
Each of these two solutions is also called a root (or zero) of the quadratic equation. Geometrically, these roots represent the x-values at which any parabola, explicitly given as y = ax² + bx + c, crosses the x-axis.[2]
As well as being a formula that yields the zeros of any parabola, the quadratic formula can also be used to identify the axis of symmetry of the parabola,[3] and the number of real zeros the quadratic equation contains.[4]
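As a quick sketch, the formula translates directly into code; using cmath so the negative-discriminant case yields the complex-conjugate roots rather than an error:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of ax^2 + bx + c = 0 via the quadratic formula; requires a != 0."""
    if a == 0:
        raise ValueError("a must be nonzero")
    sqrt_disc = cmath.sqrt(b * b - 4 * a * c)  # complex if discriminant < 0
    return (-b + sqrt_disc) / (2 * a), (-b - sqrt_disc) / (2 * a)
```

For the parabola y = (1/2)x² − (5/2)x + 2 shown above, `quadratic_roots(0.5, -2.5, 2)` returns the roots 4 and 1.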
This article uses material from the Wikipedia article Quadratic formula, and is written by contributors. Text is available under a CC BY-SA 4.0 International License; additional terms may apply. Images, videos and audio are available under their respective licenses.
|
Flatness (mathematics) - Wikipedia @ WordDisk
Flatness (mathematics)
In mathematics, the flatness (symbol: ⏥) of a surface is the degree to which it approximates a mathematical plane. The term is often generalized for higher-dimensional manifolds to describe the degree to which they approximate the Euclidean space of the same dimensionality. (See curvature.)[1]
Degree to which a surface approximates a mathematical plane
Flatness in homological algebra and algebraic geometry means, of an object
{\displaystyle A}
in an abelian category, that
{\displaystyle -\otimes A}
is an exact functor. See flat module or, for more generality, flat morphism.[2]
This article uses material from the Wikipedia article Flatness (mathematics), and is written by contributors. Text is available under a CC BY-SA 4.0 International License; additional terms may apply. Images, videos and audio are available under their respective licenses.
|
Oscillations and chaos in the dynamics of the BCM learning rule | BMC Neuroscience | Full Text
Oscillations and chaos in the dynamics of the BCM learning rule
Lawrence C Udeigwe1,
G Bard Ermentrout2 &
Paul W Munro1
The BCM learning rule originally arose from experiments intended for measuring the selectivity of neurons in the primary visual cortex, and its dependence on input stimuli. This learning rule incorporates a dynamic LTP threshold, which depends on the time-averaged postsynaptic activity. Although the BCM learning rule has been well studied and some experimental evidence of neuronal adherence has been found in other areas of the brain, including the hippocampus, there is still much to be known about the dynamic behavior of this learning rule.
The dynamics of a BCM cell can be described as follows:
{\tau }_{w}\frac{dw}{dt}=\nu {x}_{j}^{\left(i\right)}\left(\nu -\theta \right)
{\tau }_{\theta }\frac{d\theta }{dt}={\nu }^{2}-\theta
{x}^{\left(i\right)}=\left({x}_{1}^{\left(i\right)},...,{x}_{n}^{\left(i\right)}\right)
is an input stimulus pattern, and
w=\left({w}_{1},...,{w}_{n}\right)
is the vector of synaptic weights. The postsynaptic activity is computed as
\nu =w.{x}^{\left(i\right)}
\theta
is a "sliding" threshold for the postsynaptic activity, and {\tau }_{w} and {\tau }_{\theta } are time constants.
In this work, a mean-field version of the BCM learning rule is studied, and it is shown that if the synaptic weights and the postsynaptic activity threshold share similar time scales, then it is possible to obtain complex dynamics. It is also shown that there exist periodic orbits for certain parametric regions of stimulus orientation and time-scale factor, as evidenced by a Hopf Bifurcation (see Figure 1). Consequently, it is discovered that the synaptic weights exhibit an oscillatory behavior in this region. A preliminary study of two BCM cells coupled by lateral inhibition yields a torus bifurcation, which tends to lead to chaos.
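A minimal forward-Euler sketch of the BCM equations above for a single input pattern; the step size and the specific time constants used below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# One Euler step of the BCM dynamics:
#   tau_w dw/dt = nu * x * (nu - theta),   tau_th dtheta/dt = nu^2 - theta,
# where nu = w . x is the postsynaptic activity.
def bcm_step(w, theta, x, tau_w=1.0, tau_th=10.0, dt=0.01):
    nu = float(np.dot(w, x))
    w = w + dt / tau_w * nu * x * (nu - theta)
    theta = theta + dt / tau_th * (nu ** 2 - theta)
    return w, theta
```

With a fast threshold (small tau_th) and one pattern, the activity settles at the selective fixed point nu = theta = 1; making the two time scales comparable is what produces the oscillatory regimes studied in the paper.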
(A) w1-vs-a, where a parameterizes the input stimulus. The bold curve shows the periodic orbits as a varies. (B) The green curve is θ-vs-time, the black curve is w1-vs-time, and the red curve is w2-vs-time, each exhibiting a stable oscillation when α = 0.128 and τw (C) Two-parameter curve of Hopf bifurcations, τw-vs-a. The weights exhibit a winner-take-all behavior in the region above the curve, and an oscillatory behavior in the region below the curve. In each subfigure, τ θ .
School of Information Sciences, University of Pittsburgh, Pittsburgh, PA, 15260, USA
Lawrence C Udeigwe & Paul W Munro
Lawrence C Udeigwe
Correspondence to Lawrence C Udeigwe.
Udeigwe, L.C., Ermentrout, G.B. & Munro, P.W. Oscillations and chaos in the dynamics of the BCM learning rule. BMC Neurosci 14, P285 (2013). https://doi.org/10.1186/1471-2202-14-S1-P285
|
Bob is hanging a swing from a pole high off the ground so that it can swing a total angle of
120º
. Since there is a bush
5
feet in front of the swing and a shed
5
feet behind the swing, Bob wants to ensure that no one will get hurt when they are swinging. What is the maximum length of chain that Bob can use for the swing?
Draw a diagram of this situation.
What is the maximum length of chain that Bob can use? State what tools you used to solve this problem.
\cos 30^\circ = \frac{ 5 }{x}
x = \frac{ 5 }{\cos 30^\circ}
x\approx5.77
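The trigonometry can be checked in one line (a sketch; the 60° angle from vertical is half of the 120° total swing, and sin 60° = cos 30°):

```python
import math

# At full swing the chain is 60 degrees from vertical, so its horizontal reach
# is x sin 60 = x cos 30; setting that equal to the 5 ft clearance gives x.
x = 5 / math.cos(math.radians(30))
print(round(x, 2))  # prints 5.77
```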
|
(Redirected from Stress-energy-momentum pseudotensor)
{\displaystyle t_{LL}^{\mu \nu }\,}
{\displaystyle t_{LL}^{\mu \nu }=t_{LL}^{\nu \mu }\,}
{\displaystyle T^{\mu \nu }\,}
{\displaystyle t_{LL}^{\mu \nu }=-{\frac {c^{4}}{8\pi G}}G^{\mu \nu }+{\frac {c^{4}}{16\pi G(-g)}}\left((-g)\left(g^{\mu \nu }g^{\alpha \beta }-g^{\mu \alpha }g^{\nu \beta }\right)\right)_{,\alpha \beta }}
{\displaystyle -g}
{\textstyle {}_{,\alpha \beta }={\frac {\partial ^{2}}{\partial x^{\alpha }\partial x^{\beta }}}\,}
{\displaystyle G^{\mu \nu }\,}
{\displaystyle t_{LL}^{\mu \nu }}
{\displaystyle G^{\mu \nu }\,}
{\displaystyle t_{LL}^{\mu \nu }}
{\displaystyle T^{\mu \nu }\,}
{\displaystyle \left(\left(-g\right)\left(T^{\mu \nu }+t_{LL}^{\mu \nu }\right)\right)_{,\mu }=0}
{\displaystyle G^{\mu \nu }\,}
{\displaystyle T^{\mu \nu }\,}
{\displaystyle G^{\mu \nu }\,}
{\displaystyle t_{LL}^{\mu \nu }=0}
{\displaystyle \Lambda \,}
{\displaystyle \Lambda \,}
{\displaystyle t_{LL}^{\mu \nu }=-{\frac {c^{4}}{8\pi G}}\left(G^{\mu \nu }+\Lambda g^{\mu \nu }\right)+{\frac {c^{4}}{16\pi G(-g)}}\left(\left(-g\right)\left(g^{\mu \nu }g^{\alpha \beta }-g^{\mu \alpha }g^{\nu \beta }\right)\right)_{,\alpha \beta }}
{\displaystyle {\begin{aligned}(-g)\left(t_{LL}^{\mu \nu }+{\frac {c^{4}\Lambda g^{\mu \nu }}{8\pi G}}\right)={\frac {c^{4}}{16\pi G}}{\bigg [}&\left({\sqrt {-g}}g^{\mu \nu }\right)_{,\alpha }\left({\sqrt {-g}}g^{\alpha \beta }\right)_{,\beta }-\left({\sqrt {-g}}g^{\mu \alpha }\right)_{,\alpha }\left({\sqrt {-g}}g^{\nu \beta }\right)_{,\beta }+{}\\&{\frac {1}{8}}\left(2g^{\mu \alpha }g^{\nu \beta }-g^{\mu \nu }g^{\alpha \beta }\right)\left(2g_{\sigma \rho }g_{\lambda \omega }-g_{\rho \lambda }g_{\sigma \omega }\right)\left({\sqrt {-g}}g^{\sigma \omega }\right)_{,\alpha }\left({\sqrt {-g}}g^{\rho \lambda }\right)_{,\beta }-{}\\&\left(g^{\mu \alpha }g_{\beta \sigma }\left({\sqrt {-g}}g^{\nu \sigma }\right)_{,\rho }\left({\sqrt {-g}}g^{\beta \rho }\right)_{,\alpha }+g^{\nu \alpha }g_{\beta \sigma }\left({\sqrt {-g}}g^{\mu \sigma }\right)_{,\rho }\left({\sqrt {-g}}g^{\beta \rho }\right)_{,\alpha }\right)+{}\\&\left.{\frac {1}{2}}g^{\mu \nu }g_{\alpha \beta }\left({\sqrt {-g}}g^{\alpha \sigma }\right)_{,\rho }\left({\sqrt {-g}}g^{\rho \beta }\right)_{,\sigma }+g_{\alpha \beta }g^{\sigma \rho }\left({\sqrt {-g}}g^{\mu \alpha }\right)_{,\sigma }\left({\sqrt {-g}}g^{\nu \beta }\right)_{,\rho }\right]\end{aligned}}}
{\displaystyle {\begin{aligned}t_{LL}^{\mu \nu }+{\frac {c^{4}\Lambda g^{\mu \nu }}{8\pi G}}={\frac {c^{4}}{16\pi G}}{\Big [}&\left(2\Gamma _{\alpha \beta }^{\sigma }\Gamma _{\sigma \rho }^{\rho }-\Gamma _{\alpha \rho }^{\sigma }\Gamma _{\beta \sigma }^{\rho }-\Gamma _{\alpha \sigma }^{\sigma }\Gamma _{\beta \rho }^{\rho }\right)\left(g^{\mu \alpha }g^{\nu \beta }-g^{\mu \nu }g^{\alpha \beta }\right)+{}\\&\left(\Gamma _{\alpha \rho }^{\nu }\Gamma _{\beta \sigma }^{\rho }+\Gamma _{\beta \sigma }^{\nu }\Gamma _{\alpha \rho }^{\rho }-\Gamma _{\sigma \rho }^{\nu }\Gamma _{\alpha \beta }^{\rho }-\Gamma _{\alpha \beta }^{\nu }\Gamma _{\sigma \rho }^{\rho }\right)g^{\mu \alpha }g^{\beta \sigma }+\\&\left(\Gamma _{\alpha \rho }^{\mu }\Gamma _{\beta \sigma }^{\rho }+\Gamma _{\beta \sigma }^{\mu }\Gamma _{\alpha \rho }^{\rho }-\Gamma _{\sigma \rho }^{\mu }\Gamma _{\alpha \beta }^{\rho }-\Gamma _{\alpha \beta }^{\mu }\Gamma _{\sigma \rho }^{\rho }\right)g^{\nu \alpha }g^{\beta \sigma }+\\&\left.\left(\Gamma _{\alpha \sigma }^{\mu }\Gamma _{\beta \rho }^{\nu }-\Gamma _{\alpha \beta }^{\mu }\Gamma _{\sigma \rho }^{\nu }\right)g^{\alpha \beta }g^{\sigma \rho }\right]\end{aligned}}}
{\displaystyle {t_{\mu }}^{\nu }={\frac {c^{4}}{16\pi G{\sqrt {-g}}}}\left(\left(g^{\alpha \beta }{\sqrt {-g}}\right)_{,\mu }\left(\Gamma _{\alpha \beta }^{\nu }-\delta _{\beta }^{\nu }\Gamma _{\alpha \sigma }^{\sigma }\right)-\delta _{\mu }^{\nu }g^{\alpha \beta }\left(\Gamma _{\alpha \beta }^{\sigma }\Gamma _{\sigma \rho }^{\rho }-\Gamma _{\alpha \sigma }^{\rho }\Gamma _{\beta \rho }^{\sigma }\right){\sqrt {-g}}\right)}
{\displaystyle \left(\left({T_{\mu }}^{\nu }+{t_{\mu }}^{\nu }\right){\sqrt {-g}}\right)_{,\nu }=0.}
|
Allan variance - MATLAB allanvar - MathWorks 日本
Determine Allan Variance of Single Axis Gyroscope
Determine Allan Deviation at Specific Values of
\tau
[avar,tau] = allanvar(Omega)
[avar,tau] = allanvar(Omega,m)
[avar,tau] = allanvar(Omega,ptStr)
[avar,tau] = allanvar(___,fs)
Allan variance is used to measure the frequency stability of oscillation for a sequence of data in the time domain. It can also be used to determine the intrinsic noise in a system as a function of the averaging time. The averaging time series τ can be specified as τ = m/fs. Here fs is the sampling frequency of data, and m is a list of ascending averaging factors (such as 1, 2, 4, 8, …).
[avar,tau] = allanvar(Omega) returns the Allan variance avar as a function of averaging time tau. The default averaging time tau is an octave sequence given as (1, 2, ..., 2^⌊log2((N−1)/2)⌋), where N is the number of samples in Omega. If Omega is specified as a matrix, allanvar operates over the columns of Omega.
[avar,tau] = allanvar(Omega,m) returns the Allan variance avar for specific values of tau defined by m. Since the default frequency fs is assumed to be 1, the output tau is exactly the same as m.
[avar,tau] = allanvar(Omega,ptStr) sets averaging factor m to the specified point specification, ptStr. Since the default frequency fs is 1, the output tau is exactly equal to the specified m. ptStr can be specified as 'octave' or 'decade'.
[avar,tau] = allanvar(___,fs) also allows you to provide the sampling frequency fs of the input data omega in Hz. This input parameter can be used with any of the previous syntaxes.
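For readers outside MATLAB, the quantity itself is straightforward to compute; below is a Python sketch of the overlapping Allan variance built from the integrated signal (a textbook formula that may differ in implementation detail from allanvar):

```python
import numpy as np

def allan_variance(omega, m_list, fs=1.0):
    """Overlapping Allan variance of omega at the averaging factors in m_list.

    Uses the integrated signal theta; requires every m < (N - 1) / 2.
    """
    omega = np.asarray(omega, dtype=float)
    N = omega.size
    theta = np.cumsum(omega) / fs  # integrate the rate signal
    avar, tau = [], []
    for m in m_list:
        t = m / fs
        # second difference of theta at lag m, over all overlapping windows
        d = theta[2 * m:] - 2.0 * theta[m:N - m] + theta[:N - 2 * m]
        avar.append(np.sum(d ** 2) / (2.0 * t ** 2 * (N - 2 * m)))
        tau.append(t)
    return np.array(avar), np.array(tau)
```

For white (angle-random-walk) noise of variance σ², the expected Allan variance at averaging factor m is σ²/m, which is the familiar −1/2 slope of the Allan deviation on a loglog plot.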
Load gyroscope data from a MAT file, including the sample rate of the data in Hz. Calculate the Allan variance.
load('LoggedSingleAxisGyroscope','omega','Fs')
[avar,tau] = allanvar(omega,'octave',Fs);
Plot the Allan variance on a loglog plot.
loglog(tau,avar)
ylabel('\sigma^2(\tau)')
title('Allan Variance')
\tau
Generate sample gyroscope noise, including angle random walk and rate random walk.
Fs = 100;            % sampling frequency in Hz (value assumed for this sketch)
numSamples = 100000; % number of samples (value assumed for this sketch)
nStd = 1e-3;
kStd = 1e-7;
nNoise = nStd.*randn(numSamples,1);
kNoise = kStd.*cumsum(randn(numSamples,1));
omega = nNoise+kNoise;
Calculate the Allan deviation at specific values of the averaging factor m (with \tau =m/{f}_{s}). The Allan deviation is the square root of the Allan variance.
m = 2.^(9:18);
[avar,tau] = allanvar(omega,m,Fs);
adev = sqrt(avar);
Plot the Allan deviation on a loglog plot.
loglog(tau,adev)
Omega — Input data
N-by-1 vector | N-by-M matrix
Input data specified as an N-by-1 vector or an N-by-M matrix. N is the number of samples, and M is the number of sample sets. If specified as a matrix, allanvar operates over the columns of Omega.
m — Averaging factor
Averaging factor, specified as a scalar or vector with ascending integer values less than (N-1)/2, where N is the number of samples in Omega.
ptStr — Point specification of m
'octave' (default) | 'decade'
Point specification of m, specified as 'octave' or 'decade'. Based on the value of ptStr, m is specified as follows:
If ptStr is specified as 'octave', m is:
\left[{2}^{0},{2}^{1},\dots ,{2}^{\lfloor {\mathrm{log}}_{2}\left(\frac{N-1}{2}\right)\rfloor }\right]
If ptStr is specified as 'decade', m is:
\left[{10}^{0},{10}^{1},\dots ,{10}^{\lfloor {\mathrm{log}}_{10}\left(\frac{N-1}{2}\right)\rfloor }\right]
N is the number of samples in Omega.
fs — Basic frequency of input data in Hz
Basic frequency of the input data, Omega, in Hz, specified as a positive scalar.
avar — Allan variance of input data
Allan variance of input data at tau, returned as a vector or matrix.
tau — Averaging time of Allan variance
Averaging time of Allan variance, returned as a vector, or a matrix.
gyroparams | imuSensor
|
Lemma 10.134.2 (00S1)—The Stacks project
Lemma 10.134.2. Suppose given a diagram (10.134.1.1). Let $\alpha : P \to S$ and $\alpha ' : P' \to S'$ be presentations.
There exists a morphism of presentations from $\alpha $ to $\alpha '$.
Any two morphisms of presentations induce homotopic morphisms of complexes $\mathop{N\! L}\nolimits (\alpha ) \to \mathop{N\! L}\nolimits (\alpha ')$.
The construction is compatible with compositions of morphisms of presentations (see proof for exact statement).
If $R \to R'$ and $S \to S'$ are isomorphisms, then for any map $\varphi $ of presentations from $\alpha $ to $\alpha '$ the induced map $\mathop{N\! L}\nolimits (\alpha ) \to \mathop{N\! L}\nolimits (\alpha ')$ is a homotopy equivalence and a quasi-isomorphism.
In particular, comparing $\alpha $ to the canonical presentation (10.134.0.1) we conclude there is a quasi-isomorphism $\mathop{N\! L}\nolimits (\alpha ) \to \mathop{N\! L}\nolimits _{S/R}$ well defined up to homotopy and compatible with all functorialities (up to homotopy).
Proof. Since $P$ is a polynomial algebra over $R$ we can write $P = R[x_ a, a \in A]$ for some set $A$. As $\alpha '$ is surjective, we can choose for every $a \in A$ an element $f_ a \in P'$ such that $\alpha '(f_ a) = \phi (\alpha (x_ a))$. Let $\varphi : P = R[x_ a, a \in A] \to P'$ be the unique $R$-algebra map such that $\varphi (x_ a) = f_ a$. This gives the morphism in (1).
Let $\varphi $ and $\varphi '$ be morphisms of presentations from $\alpha $ to $\alpha '$. Let $I = \mathop{\mathrm{Ker}}(\alpha )$ and $I' = \mathop{\mathrm{Ker}}(\alpha ')$. We have to construct the diagonal map $h$ in the diagram
\[ \xymatrix{ I/I^2 \ar[r]^-{\text{d}} \ar@<1ex>[d]^{\varphi '_1} \ar@<-1ex>[d]_{\varphi _1} & \Omega _{P/R} \otimes _ P S \ar@<1ex>[d]^{\varphi '_0} \ar@<-1ex>[d]_{\varphi _0} \ar[ld]_ h \\ I'/(I')^2 \ar[r]^-{\text{d}} & \Omega _{P'/R'} \otimes _{P'} S' } \]
where the vertical maps are induced by $\varphi $, $\varphi '$ such that
\[ \varphi _1 - \varphi '_1 = h \circ \text{d} \quad \text{and}\quad \varphi _0 - \varphi '_0 = \text{d} \circ h \]
Consider the map $\varphi - \varphi ' : P \to P'$. Since both $\varphi $ and $\varphi '$ are compatible with $\alpha $ and $\alpha '$ we obtain $\varphi - \varphi ' : P \to I'$. This implies that $\varphi , \varphi ' : P \to P'$ induce the same $P$-module structure on $I'/(I')^2$, since $\varphi (p)i' - \varphi '(p)i' = (\varphi - \varphi ')(p)i' \in (I')^2$. Also $\varphi - \varphi '$ is $R$-linear and
\[ (\varphi - \varphi ')(fg) = \varphi (f)(\varphi - \varphi ')(g) + (\varphi - \varphi ')(f)\varphi '(g) \]
Hence the induced map $D : P \to I'/(I')^2$ is a $R$-derivation. Thus we obtain a canonical map $h : \Omega _{P/R} \otimes _ P S \to I'/(I')^2$ such that $D = h \circ \text{d}$. A calculation (omitted) shows that $h$ is the desired homotopy.
\[ \xymatrix{ S \ar[r]_{\phi } & S' \ar[r]_{\phi '} & S'' \\ R \ar[r] \ar[u] & R' \ar[u] \ar[r] & R'' \ar[u] } \]
$\alpha : P \to S$,
$\alpha ' : P' \to S'$, and
$\alpha '' : P'' \to S''$
are presentations. Suppose that
$\varphi : P \to P'$ is a morphism of presentations from $\alpha $ to $\alpha '$ and
$\varphi ' : P' \to P''$ is a morphism of presentations from $\alpha '$ to $\alpha ''$.
Then it is immediate that $\varphi ' \circ \varphi : P \to P''$ is a morphism of presentations from $\alpha $ to $\alpha ''$ and that the induced map $\mathop{N\! L}\nolimits (\alpha ) \to \mathop{N\! L}\nolimits (\alpha '')$ of naive cotangent complexes is the composition of the maps $\mathop{N\! L}\nolimits (\alpha ) \to \mathop{N\! L}\nolimits (\alpha ')$ and $\mathop{N\! L}\nolimits (\alpha ') \to \mathop{N\! L}\nolimits (\alpha '')$ induced by $\varphi $ and $\varphi '$.
In the simple case of complexes with 2 terms a quasi-isomorphism is just a map that induces an isomorphism on both the cokernel and the kernel of the maps between the terms. Note that homotopic maps of 2 term complexes (as explained above) define the same maps on kernel and cokernel. Hence if $\varphi $ is a map from a presentation $\alpha $ of $S$ over $R$ to itself, then the induced map $\mathop{N\! L}\nolimits (\alpha ) \to \mathop{N\! L}\nolimits (\alpha )$ is a quasi-isomorphism being homotopic to the identity by part (2). To prove (4) in full generality, consider a morphism $\varphi '$ from $\alpha '$ to $\alpha $ which exists by (1). The compositions $\mathop{N\! L}\nolimits (\alpha ) \to \mathop{N\! L}\nolimits (\alpha ') \to \mathop{N\! L}\nolimits (\alpha )$ and $\mathop{N\! L}\nolimits (\alpha ') \to \mathop{N\! L}\nolimits (\alpha ) \to \mathop{N\! L}\nolimits (\alpha ')$ are homotopic to the identity maps by (3), hence these maps are homotopy equivalences by definition. It follows formally that both maps $\mathop{N\! L}\nolimits (\alpha ) \to \mathop{N\! L}\nolimits (\alpha ')$ and $\mathop{N\! L}\nolimits (\alpha ') \to \mathop{N\! L}\nolimits (\alpha )$ are quasi-isomorphisms. Some details omitted. $\square$
In the first part of the lemma, "there exist" should be "there exists."
Comment #1717 by Yogesh More on December 03, 2015 at 15:56
Trivial remark but might be worth adding, after the line "we conclude that \phi-\phi':P \to I'": \phi, \phi':P \to P' induce the same P-module structure on I'/(I')^2, since \phi(p)i'-\phi'(p)i'=(\phi-\phi')(p)i' \in (I')^2.
I suggest adding this (obvious) remark because you are implicitly using it in the equation showing that (\phi-\phi')(fg) satisfies the Leibniz rule (which I feel is the key to the result), and also in using the universal property for \Omega_{P/R}, i.e. to get the map \Omega_{P/R} \to I'/(I')^2, you are considering I'/(I')^2 as a P-module.
Comment #2789 by Dario Weißmann on August 27, 2017 at 21:10
Typo in the proof of (2): "Since both \varphi and \varphi are compatible with..." should read "Since both \varphi and \varphi' are compatible with..."
In the proof of (3) the last line should read: "...and NL(\alpha') \to NL(\alpha'') induced by \varphi and \varphi'."
@#2789, #2791. Thanks, fixed here.
|
Title: One 1D heat equation with several boundary conditions
Objectives: Specifically what is to be retained by the learner.
Setup of heat equation
Solution of heat equation with homogeneous/nonhomogeneous Dirichlet, Neumann and mixed boundary conditions.
Activities: Content directed at the learner to achieve learning objectives.
Solve specific homework problems.
Assessment: Determine lesson effectiveness in achieving the learning Objectives.
1.1 The derivation of the Heat Equation
2.1 How does heat flow?
3 Homogeneous Boundary conditions for fixed end temperatures, Dirichlet
4 Lesson on Heat equation in 1D with Nonhomogeneous Dirichlet Boundary Conditions
4.1 Solve for steady state part of the solution v(x)
7 Mixed: Fixed Temp and Convection
8 Heat 1d : Insulated and convective BCs
The derivation of the Heat Equation
Heat Flow: Fourier's Law
What is heat? Heat is a form of energy whose intensity we commonly measure as a temperature in degrees Celsius [1].
For a gas this measure is the average kinetic energy
{\displaystyle m|v|^{2}/2}
of the molecules in the gas.
For a solid heat is associated with vibrational energy of the crystalline structure.
Heat is measured by us in units of degrees; these are related to calories by the definition that one calorie is the amount of heat required to increase the temperature of one gram of water (at one atmosphere of pressure) by one degree Celsius.
↑ a historical form of energy measurement is the calorie which is the amount of heat required to increase the temperature of one gram of water(at one atmosphere of pressure) by one degree Celsius.
How does heat flow?
{\displaystyle \Omega }
represent a region of space in
{\displaystyle \mathbf {R} ^{3}}
{\displaystyle u(x,y,z,t)}
be the temperature at a point in
{\displaystyle \Omega }
{\displaystyle t}
. From observation you know that heat flows from a high temperature region to a lower temperature region. Mathematically we represent this as
{\displaystyle -\nabla u}
which is a vector pointing in the direction of decreasing temperatures. The negative gradient of
{\displaystyle u}
is a vector that points in the direction that the temperature is decreasing the most. Heat is flowing in that direction, at least locally.
The heat flux vector
{\displaystyle v=-\nabla u}
describes a vector field in
{\displaystyle \Omega }
for the scalar field
{\displaystyle u(x,y,z,t)}
{\displaystyle S}
as a surface in
{\displaystyle \Omega }
. As shown below.
{\displaystyle {\mbox{heat flux across S in the direction normal to the surface S}}=-\kappa (\nabla {u}\cdot n)A}
Insulated Rod with ends held at fixed temperatures.
The temperature in a bar extending from
{\displaystyle x=a}
{\displaystyle x=b}
{\displaystyle t}
{\displaystyle u(x,t)}
. The flow of heat in the bar leads to the development of the General Heat Equation in one spatial dimension. At this time I have not entered the material for this from my notes. It is found in a variety of standard BVP and DE textbooks. The derivation leads to the following PDE with boundary conditions and initial condition.
The general form for the heat equation in one spatial dimension is:
{\displaystyle u_{xx}+g(x)={\frac {1}{k}}u_{t}}
{\displaystyle \displaystyle \alpha _{11}u(a,t)+\alpha _{12}u_{x}(a,t)=\gamma _{1}(t)}
{\displaystyle \displaystyle \alpha _{21}u(b,t)+\alpha _{22}u_{x}(b,t)=\gamma _{2}(t)}
{\displaystyle u(x,0)=f(x)}
{\displaystyle a\leq x\leq b}
Dirichlet Boundary Conditions: If
{\displaystyle \alpha _{12}=\alpha _{22}=0}
the problem is said to have Dirichlet boundary conditions.
{\displaystyle \displaystyle u(0,t)=3,u(L,t)=10}
{\displaystyle a=0,b=L}
are Dirichlet BCs. For this introductory work
{\displaystyle g(x)=0}
{\displaystyle \gamma _{1}(t),\gamma _{2}(t)}
This will give us the simpler problem
{\displaystyle \displaystyle u_{xx}={\frac {1}{k}}u_{t}}
{\displaystyle \displaystyle \alpha _{11}u(a,t)+\alpha _{12}u_{x}(a,t)=\gamma _{1}=constant}
{\displaystyle \displaystyle \alpha _{21}u(b,t)+\alpha _{22}u_{x}(b,t)=\gamma _{2}=constant}
{\displaystyle \displaystyle u(x,0)=f(x)}
{\displaystyle \displaystyle a\leq x\leq b}
We first solve the problem with the BCs set to a fixed temperature of
{\displaystyle \displaystyle u(a,t)=0,u(b,t)=0}
. The case is the result of setting
{\displaystyle \displaystyle \alpha _{11}=\alpha _{21}=1,\alpha _{12}=\alpha _{22}=0}
{\displaystyle \displaystyle \gamma _{1}=\gamma _{2}=0}
Homogeneous Boundary conditions for fixed end temperatures, Dirichlet
The rod is insulated along its length and contains no sources or sinks, so
{\displaystyle \displaystyle g(x)=0}
{\displaystyle \displaystyle u_{xx}={\frac {1}{k}}u_{t}}
The general form for the accompanying boundary conditions at either end of the rod is:
{\displaystyle \displaystyle u(a,t)=0{\mbox{ and }}u(b,t)=0}
{\displaystyle a=0,b=L}
{\displaystyle u(0,t)=0,u(L,t)=0}
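This homogeneous Dirichlet case has the classical separation-of-variables solution u(x,t) = Σ bₙ sin(nπx/L) e^(−k(nπ/L)²t); the following numerical sketch assembles a truncated series (the function name, trapezoid quadrature, and truncation level are my choices):

```python
import numpy as np

def heat_dirichlet(f, L=1.0, k=1.0, nterms=50, nx=201):
    """Truncated Fourier sine series solution of u_xx = (1/k) u_t with
    u(0,t) = u(L,t) = 0 and u(x,0) = f(x); returns the grid x and u(x, t)."""
    x = np.linspace(0.0, L, nx)
    fx = f(x)
    dx = x[1] - x[0]
    modes = []
    for n in range(1, nterms + 1):
        phi = np.sin(n * np.pi * x / L)
        g = fx * phi
        bn = (2.0 / L) * float(np.sum(0.5 * (g[1:] + g[:-1])) * dx)  # trapezoid rule
        modes.append((bn, phi, k * (n * np.pi / L) ** 2))
    def u(t):
        # each mode decays exponentially at its own rate k (n pi / L)^2
        return sum(bn * phi * np.exp(-lam * t) for bn, phi, lam in modes)
    return x, u
```

With f(x) = sin(πx) only the first mode survives, so u(x,t) = sin(πx) e^(−π²t), which makes a convenient correctness check.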
A quick knowledge check:
1 Which of the following represents a Dirichlet BC?
{\displaystyle \displaystyle u(a,t)=20}
{\displaystyle \displaystyle u_{x}(0,t)=0}
{\displaystyle \displaystyle u(L,t)=u_{x}(L,t)}
2 The BC
{\displaystyle \displaystyle u_{x}(a,t)=\gamma }
is homogeneous if
{\displaystyle \displaystyle \gamma =}
Lesson on Heat equation in 1D with Nonhomogeneous Dirichlet Boundary Conditions
Solve a nonhomogeneous Dirichlet BC problems.
{\displaystyle \displaystyle \gamma _{1}(a,t)\neq 0,\gamma _{2}(b,t)\neq 0}
{\displaystyle \alpha _{12}=\alpha _{22}=0}
Setup the PDE and BCs
Suppose the solution consists of a transitory component
{\displaystyle w(x,t)}
and a steady state component
{\displaystyle v(x)}
{\displaystyle u(x,t)}
we will develop two problems. One will be assigned the nonhomogeneous BCs,
{\displaystyle v_{xx}=0}
with nonzero end conditions
{\displaystyle v(a),v(b)}
and the second problem will be assigned homogeneous BCs and the IC,
{\displaystyle w_{xx}={\frac {1}{k}}w_{t}}
with zero end conditions
{\displaystyle w(a,t)=0,w(b,t)=0}
{\displaystyle w(x,0)=f(x)-v(x)}
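For Dirichlet data the steady-state problem v_xx = 0 with v(a) = Ta, v(b) = Tb is just a straight line; a tiny sketch of this decomposition step (names are mine):

```python
# Steady-state part: v_xx = 0 with v(a) = Ta, v(b) = Tb is linear in x.
def steady_state(a, b, Ta, Tb):
    return lambda x: Ta + (Tb - Ta) * (x - a) / (b - a)

# The transient part w = u - v then satisfies the homogeneous problem
# w_xx = (1/k) w_t, w(a,t) = w(b,t) = 0, with initial data w(x,0) = f(x) - v(x).
```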
Solve for steady state part of the solution
{\displaystyle v(x)}
Video OK, click on play: setup for the steady state v(x) part of the solution.
Just click on the play button and the video will work; currently the thumbnails for the uploads are messed up.
Video OK, click on play.
Neumann
The Neumann end condition is a first-derivative condition on the heat flow across an end of the bar. The homogeneous case has no heat flow across a boundary:
{\displaystyle \displaystyle \alpha _{11}=\alpha _{21}=0}
{\displaystyle \alpha _{12}\neq 0,\alpha _{22}\neq 0}
for all "t" and so
{\displaystyle u_{x}(a,t)=0}
{\displaystyle u_{x}(b,t)=0}
at the left and right boundaries when
{\displaystyle \gamma _{1}(t)=\gamma _{2}(t)=0}
Insulated Rod with Homogeneous Neumann
These are conditions where no heat flows across the ends of the rod. Thus no energy may enter or leave the rod.
Click on the play arrow; the video works. Only the thumbnails' JPEG images are messed up.
Mixed: Fixed Temp and Convection
{\displaystyle \displaystyle u_{xx}={\frac {1}{k}}u_{t}}
{\displaystyle \displaystyle u(0,t)=T_{0}}
{\displaystyle \displaystyle -\kappa u_{x}(b,t)=h(u(L,t)-\mathrm {B} )}
{\displaystyle \displaystyle u(x,0)=f(x)}
{\displaystyle \displaystyle a\leq x\leq b}
Heat Equation 1D mixed boundary conditions.
Lecture on setup of Heat equation for an insulated bar with one end held at a fixed temperature and the convective cooling applied to the second.
Lecture on solving for the steady state
{\displaystyle v(x)}
of Heat equation for an insulated bar with one end held at a fixed temperature and the convective cooling applied to the second.
Heat 1d: Insulated and convective BCs
A bar of length L is insulated along its length. One end is open to the air, which is at
{\displaystyle \mathrm {B} }
degrees and the other end is insulated so that there is no flow across the boundary.
The heat equation is then:
{\displaystyle \displaystyle u_{xx}={\frac {1}{k}}u_{t}}
with the boundary conditions
{\displaystyle u_{x}(0,t)=0}
at the insulated end and
{\displaystyle -\kappa u_{x}(L,t)=h(u(L,t)-\mathrm {B} )}
at the convective end.
Heat Equation 1D mixed boundary conditions: insulated and convective BCs.
The initial temperature distribution is given by the function:
{\displaystyle u(x,0)=f(x)=-x^{2}+10x+23}
Just click on the play button and the video works. The first video sets up the problem and starts the solution process.
The next video continues with the solution of the transient part of the problem.
This last video completes the transient solution and writes the series solution u(x,t).
|
Implicit solver for discrete-time algebraic Riccati equations - MATLAB idare - MathWorks United Kingdom
{A}^{T}XA-{E}^{T}XE-\left({A}^{T}XB+S\right){\left({B}^{T}XB+R\right)}^{-1}{\left({A}^{T}XB+S\right)}^{T}+Q=0
A=\left[\begin{array}{cc}-0.9& -3\\ 0.7& 0.1\end{array}\right],\quad B=\left[\begin{array}{c}1\\ 1\end{array}\right],\quad Q=\left[\begin{array}{cc}1& 0\\ 0& 3\end{array}\right],\quad R=0.1.
A=\left[\begin{array}{cc}-0.9& -3\\ 0.7& 0.1\end{array}\right]\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}B=\left[\begin{array}{c}1\\ 1\end{array}\right]\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}Q=\left[\begin{array}{cc}1& 0\\ 0& 3\end{array}\right]\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}\phantom{\rule{0.2777777777777778em}{0ex}}R=0.1.
a_{i,j}={\overline{a}}_{j,i}
K=\left(B^{T}XB+R\right)^{-1}\left(B^{T}XA+S^{T}\right).
L=\mathrm{eig}\left(A-BK,\,E\right).
\left[\begin{array}{cc}Q&S\\ S^{T}&R\end{array}\right]\ge 0
\left(A-BR^{-1}S^{T},\;Q-SR^{-1}S^{T}\right)
M-zN=\left[\begin{array}{ccc}A&0&B\\ -Q&E^{T}&-S\\ S^{T}&0&R\end{array}\right]-z\left[\begin{array}{ccc}E&0&0\\ 0&A^{T}&0\\ 0&-B^{T}&0\end{array}\right]
\begin{array}{l}X=D_{x}\,V\,U^{-1}\,D_{x}\,E^{-1},\\ K=-D_{r}\,W\,U^{-1}\,D_{x},\end{array}
\begin{array}{l}D_{x}=\mathrm{diag}\left(S_{x}\right),\\ D_{r}=\mathrm{diag}\left(S_{r}\right).\end{array}
|
Image Classification Index 2021-06-14 | Metaculus
Image Classification Index 2021-06-14
We take the average (arithmetic mean) of −ln(error) of the state-of-the-art performance across all benchmarks in the index.
The following benchmarks are included in the Image Classification Performance Index:
Image classification on: ImageNet (in top-1 accuracy), STL-10, CIFAR-100, SVHN, MiniImagenet 5-way (1-shot), Tiered ImageNet 5-way (1-shot), CUB 200 5-way 1-shot, Stanford Cars, CUB200, FGVC Aircraft
Historical data on the Image Classification Performance Index may be found here. As of writing this question, the index is at 114.88 for December 2020.
What will the value of the herein defined Image Classification Performance Index be on 2021-06-14?
−ln(error) for that benchmark exceeds 10.
In case error is not natively reported, it is constructed by taking 1 − accuracy/100.
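A minimal sketch of the index construction; the benchmark accuracies below are hypothetical, and the published index additionally applies a normalization not described in this excerpt.

```python
import math

# Hypothetical state-of-the-art top-1 accuracies (percent); not real data.
sota_accuracy = {"ImageNet": 90.2, "CIFAR-100": 96.1, "SVHN": 99.0}

# error = 1 - accuracy/100 when error is not natively reported
scores = {name: -math.log(1 - acc / 100) for name, acc in sota_accuracy.items()}

# The raw index averages -ln(error) across all benchmarks
index = sum(scores.values()) / len(scores)
```

Note that −ln(error) grows as error shrinks, so improvements on nearly-solved benchmarks move the index strongly.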
|
Doubling_time Knowpia
The doubling time is the time it takes for a population to double in size/value. It is applied to population growth, inflation, resource extraction, consumption of goods, compound interest, the volume of malignant tumours, and many other things that tend to grow over time. When the relative growth rate (not the absolute growth rate) is constant, the quantity undergoes exponential growth and has a constant doubling time or period, which can be calculated directly from the growth rate.
This time can be calculated by dividing the natural logarithm of 2 by the exponent of growth, or approximated by dividing 70 by the percentage growth rate[1] (more roughly but roundly, dividing 72; see the rule of 72 for details and derivations of this formula).
The doubling time is a characteristic unit (a natural unit of scale) for the exponential growth equation, and its converse for exponential decay is the half-life.
For example, given Canada's net population growth of 0.9% in the year 2006, dividing 70 by 0.9 gives an approximate doubling time of 78 years. Thus if the growth rate remains constant, Canada's population would double from its 2006 figure of 33 million to 66 million by 2084.
The notion of doubling time dates to interest on loans in Babylonian mathematics. Clay tablets from circa 2000 BCE include the exercise "Given an interest rate of 1/60 per month (no compounding), compute the doubling time." This yields an annual interest rate of 12/60 = 20%, and hence a doubling time of 100% growth/20% growth per year = 5 years.[2][3] Further, repaying double the initial amount of a loan, after a fixed time, was common commercial practice of the period: a common Assyrian loan of 1900 BCE consisted of loaning 2 minas of gold, getting back 4 in five years,[2] and an Egyptian proverb of the time was "If wealth is placed where it bears interest, it comes back to you redoubled."[2][4]
Examining the doubling time can give a more intuitive sense of the long-term impact of growth than simply viewing the percentage growth rate.
For a constant growth rate of r % within time t, the formula for the doubling time Td is given by
{\displaystyle T_{d}=t{\frac {\ln(2)}{\ln(1+{\frac {r}{100}})}}\approx t{\frac {70}{r}}}
Some doubling times calculated with this formula are shown in this table.
Simple doubling time formula:
{\displaystyle N(t)=N_{0}2^{t/T_{d}}}
N(t) = the number of objects at time t
Td = doubling period (time it takes for object to double in number)
N0 = initial number of objects
Doubling times Td given constant r% growth
For example, with an annual growth rate of 4.8% the doubling time is 14.78 years, and a doubling time of 10 years corresponds to a growth rate between 7% and 7.5% (actually about 7.18%).
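These formulas can be sketched directly; the numbers below reproduce the figures quoted in the text.

```python
import math

def doubling_time(r_percent, t=1.0):
    """Exact doubling time for r% growth per period t: t * ln(2)/ln(1 + r/100)."""
    return t * math.log(2) / math.log(1 + r_percent / 100)

def rule_of_70(r_percent, t=1.0):
    """Common approximation: divide 70 by the percentage growth rate."""
    return t * 70 / r_percent

def growth_rate_for_doubling(td, t=1.0):
    """Percentage growth per period t that doubles a quantity in time td."""
    return (2 ** (t / td) - 1) * 100
```

For example, doubling_time(4.8) ≈ 14.78 years and growth_rate_for_doubling(10) ≈ 7.18%, matching the text, while rule_of_70(0.9) ≈ 78 years reproduces the Canada example.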
When applied to the constant growth in consumption of a resource, the total amount consumed in one doubling period equals the total amount consumed in all previous periods. This enabled U.S. President Jimmy Carter to note in a speech in 1977 that in each of the previous two decades the world had used more oil than in all of previous history (The roughly exponential growth in world oil consumption between 1950 and 1970 had a doubling period of under a decade).
Given two measurements of a growing quantity, q1 at time t1 and q2 at time t2, and assuming a constant growth rate, the doubling time can be calculated as
{\displaystyle T_{d}=(t_{2}-t_{1})\cdot {\frac {\ln(2)}{\ln({\frac {q_{2}}{q_{1}}})}}.}
Where is it useful?Edit
In practice eventually other constraints become important, exponential growth stops and the doubling time changes or becomes inapplicable. Limited food supply or other resources at high population densities will reduce growth, or needing a wheel-barrow full of notes to buy a loaf of bread will reduce the acceptance of paper money. While using doubling times is convenient and simple, we should not apply the idea without considering factors which may affect future growth. In the 1950s Canada's population growth rate was over 3% per year, so extrapolating the current growth rate of 0.9% for many decades (implied by the doubling time) is unjustified unless we have examined the underlying causes of the growth and determined they will not be changing significantly over that period.
The equivalent concept to doubling time for a material undergoing a constant negative relative growth rate or exponential decay is the half-life.
The equivalent concept in base-e is e-folding.
Graphs comparing doubling times and half-lives of exponential growth (bold lines) and decay (faint lines), and their 70/t and 72/t approximations.
Cell culture doubling timeEdit
Cell doubling time can be calculated in the following way using growth rate (amount of doubling in one unit of time)
{\displaystyle N(t)=N_{0}e^{rt}}
{\displaystyle r={\frac {\ln \left(N(t)/N_{0}\right)}{t}}}
{\displaystyle N(t)}
= the number of cells at time t
{\displaystyle N_{0}}
= the number of cells at time 0
{\displaystyle r}
{\displaystyle t}
= time (usually in hours)
{\displaystyle {\text{doubling time}}={\frac {\ln(2)}{\text{growth rate}}}}
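A minimal sketch of this calculation, with hypothetical cell counts:

```python
import math

def growth_rate(n0, nt, t):
    """Growth rate r = ln(N(t)/N0) / t from counts n0 at time 0 and nt at time t."""
    return math.log(nt / n0) / t

def doubling_time(rate):
    """Doubling time = ln(2) / growth rate."""
    return math.log(2) / rate

# Hypothetical culture: 1e5 cells growing to 4e5 cells over 48 hours
r = growth_rate(1e5, 4e5, 48.0)
td = doubling_time(r)   # two doublings in 48 h, so 24 h per doubling
```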
Known doubling times for some cell types:
Mesenchymal Stem Cell Mouse 21–23 hours[5]
Cardiac/heart stem cell Human 29 ± 10 hours[6]
^ Donella Meadows, Thinking in Systems: A Primer, Chelsea Green Publishing, 2008, page 33 (box "Hint on reinforcing feedback loops and doubling time").
^ a b c Why the “Miracle of Compound Interest” leads to Financial Crises, by Michael Hudson
^ Have we caught your interest? by John H. Webb
^ Miriam Lichtheim, Ancient Egyptian Literature, II:135.
^ "Life Technologies" (PDF).
^ "Human cardiac stem cells". {{cite journal}}: Cite journal requires |journal= (help)
Reference 6 is controversial; see: https://www.statnews.com/2018/10/14/harvard-brigham-retractions-stem-cell/ and https://www.nytimes.com/2018/10/15/health/piero-anversa-fraud-retractions.html
|
Optimal Kickstarter · New Things Under the Sun
The way the website works is that every project has a calculator that shows how much patron money you can purchase for the project. For example: maybe there's an open source software program that you think would be really useful. It is currently funded at $88,209. The site tells you that you can increase the funding to this project by $595 for the price of $1, $1333 for $5, $1888 for $10, and so on (you need a calculator because the price of patron money is not constant).
Why so complicated? Public goods are projects that are, by their nature, enjoyable by many users at once and costly to exclude people from using. Think open source software, art, research, national defense, herd immunity, etc. These goods cannot be efficiently provided by the private market. Since the good can be simultaneously enjoyed by many at once, efficient provision would make it free. But if the good is given away for free, it is impossible to cover the fixed costs necessary to create it. And if you try to charge a price greater than zero (to cover fixed costs), it’s doubly inefficient since it’s costly to prevent people from accessing the good. You have to waste more effort wrapping the project in IP, DRM, etc.
Such goods can be crowd-funded, but under standard economic models, they’ll be drastically underfunded. This is because individuals only consider the private benefits they derive from contributing, not the benefits that accrue to other users. Suppose, for example, for $1800 programmers can optimize the program mentioned above and make it run 50% faster. Let’s say the value of that is $10 per person. If there are 180 users, the total value of that optimization justifies the cost.
\textrm{Total Funding}=(\sqrt{c_1}+\sqrt{c_2}+...+\sqrt{c_N})^2
\textrm{Total Funding}=(\sqrt{c_1})^2=c_1
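The two funding formulas above can be sketched directly (contribution amounts here are made up): many small contributions are amplified, while a lone contributor simply funds their own amount.

```python
import math

def quadratic_funding(contributions):
    """Total funding = (sum of square roots of contributions)^2."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# Four people giving $1 each are topped up to $16...
crowd = quadratic_funding([1.0, 1.0, 1.0, 1.0])   # (4 * sqrt(1))^2 = 16
# ...but a single $9 contribution stays $9, the second formula above.
solo = quadratic_funding([9.0])                    # (sqrt(9))^2 = 9
```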
A few platforms do use the Buterin, Hitzig, and Weyl algorithm to fund public goods. Real world implementations do have some complications not present in the simple model.
Buterin, Vitalik, Zoë Hitzig, E. Glen Weyl. 2019. A Flexible Design for Funding Public Goods. Management Science 65(11): 5171-5187. https://doi.org/10.1287/mnsc.2019.3337
Denote total funding for a project by F. The value of the public good to person i is
V_i(F)
, and the total value of the project is the sum of
V_i(F)
across all individuals i. Assume
V_i(F)
is concave, continuous, etc. The optimal level of funding F for the public good satisfies:
\sum_iV_i'(F)=1
An individual using optimal kickstarter considers only the value they personally receive, and likewise cares only about the cost of their own money. They are maximizing
V_i(F)-c_i
, which at the optimum satisfies:
V_i'(F) \frac{\partial F}{\partial c_i}=1
\frac{\partial F}{\partial c_i}=\frac{\sum_j \sqrt{c_j}}{\sqrt{c_i}}
V_i'(F)=\frac{\sqrt{c_i}}{\sum_j \sqrt{c_j}}
\sum_iV_i'(F)=1
|
Franchise P/E Definition
What Is Franchise P/E?
Franchise P/E (price-to-earnings) is the present value of new business opportunities available to a business. When added together, a firm's tangible P/E (sometimes called base P/E) and franchise P/E equal its intrinsic P/E. Franchise P/E is a function of the excess return on those new investments (the franchise factor) relative to the size of the opportunity (the growth factor).
Understanding Franchise P/E
Franchise P/E is mainly determined by the differences between the return on the new business opportunity and the cost of equity. Companies with high franchise P/E ratios are those that are able to continually capitalize on core strengths. Their franchise value measures their capacity to expand over time through investments that provide above-market returns. Companies that increase their asset turnover or widen their profit margin will increase their franchise P/E and their observed P/E ratio.
A firm’s equity value or market value is the sum of its tangible value and franchise value. Breaking down the P/E ratio results in two major components, the tangible P/E (the base P/E of a firm with constant earnings), and the franchise factor, which captures the returns associated with new investments. Franchise factor contributes to the P/E ratio in the same way that franchise value contributes to share value.
Franchise P/E is a firm's potential growth factor based on future business opportunities.
Franchise P/E plus tangible (static) P/E is a firm's intrinsic P/E value.
High franchise P/E values indicate a high degree of potential growth.
Calculating Franchise P/E
The formula for franchise P/E is:
Franchise P/E Formula
\begin{aligned}&\text{Franchise } \frac{P}{E}\\&\quad=\text{(observed) intrinsic }\frac{P}{E}\\&\qquad-\text{tangible }\frac{P}{E}=\text{franchise factor}\\&\qquad\times\text{growth factor}\\&\textbf{where:}\\&\text{Intrinsic }P/E=\text{Tangible }P/E\\&\quad+\text{franchise }P/E\\&\text{Tangible }P/E=\text{Firm's static value}\\&\text{Franchise }P/E=\text{Firm's growth value}\\&\text{Franchise factor (FF)}=\text{Incorporates the}\\&\quad\text{required return on new investments}\\&\text{Growth factor (G)=Factors in the present}\\&\quad\text{value of the excess return from new}\\&\quad\text{investments}\end{aligned}
Franchise Factor Formula
\textit{franchise factor}=\frac{1}{r}-\frac{1}{\textit{ROE}}
Growth Factor (G)
\begin{aligned}G&=\textit{growth factor}=\frac{g}{r-g}\\g&=\textit{ROE}\times b=\textit{ROE}\,(1-d)\\d&=\frac{D_1}{E_1}=1-\frac{g}{\textit{ROE}}\end{aligned}
These can further be modified:
Intrinsic leading P/E = P0 / E1 = (1 - b) / (r - g) = (1 / r) + [1 / r - 1 / ROE]*g / (r - g)
Intrinsic trailing P/E = P0 / E0 = (1 / r) + [1 / r - 1 / ROE + (1 - g / ROE)]*g / (r - g)
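As a quick numerical sketch (all inputs hypothetical: cost of equity r = 10%, ROE = 15%, growth g = 5%), the franchise decomposition can be cross-checked against the standard Gordon-growth leading P/E:

```python
r, roe, g = 0.10, 0.15, 0.05   # hypothetical cost of equity, return, growth

franchise_factor = 1 / r - 1 / roe      # FF = 1/r - 1/ROE
growth_factor = g / (r - g)             # G = g/(r - g)

tangible_pe = 1 / r                     # static, no-growth P/E
franchise_pe = franchise_factor * growth_factor
intrinsic_pe = tangible_pe + franchise_pe

# Cross-check: the Gordon-growth leading P/E (1 - b)/(r - g), with
# retention ratio b = g/ROE, should equal tangible + franchise P/E.
b = g / roe
leading_pe = (1 - b) / (r - g)
```

With these inputs the franchise P/E is 10/3 ≈ 3.33 on top of a tangible P/E of 10, and both decompositions give an intrinsic leading P/E of about 13.33.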
Using Franchise P/E
Using the franchise factor the impact on a company's price-earnings ratio (P/E ratio) per unit growth in new investment can be calculated. For example, a franchise factor of 3 would indicate that the P/E ratio of a company would increase by three units for every unit of growth in the company's book value. The franchise factor can be calculated as the product of annual investment returns in excess of market returns and the duration of the returns.
A higher asset turnover ratio increases the franchise P/E ratio, one of the components of the intrinsic P/E value. This is according to Du Pont analysis, which breaks up return on equity into three basic components: net profit margin, asset turnover, and the equity multiplier.
\begin{aligned}\textit{DuPont Analysis}&=\textit{Net Profit Margin}\\&\quad\times\textit{Asset Turnover}\\&\quad\times\textit{Equity Multiplier}\end{aligned}
Thus we can use the DuPont equation:
ROE (↑) = NI/E = NI/revenue * revenue/A (↑) * A/E
g (↑) = ROE (↑) * (1-d)
Intrinsic P/E = (1/r) + (((1/r) - (1/ROE(↑)))* g(↑)/(r-g(↑)))
= (1/r) + (((1/r) - (1/ROE)(↓))* (g/(r-g))(↑))
= intrinsic P/E (↑)
And when firms pay out more dividends, a firm's intrinsic P/E value decreases:
g (↓) = ROE * (1-d(↑))
Intrinsic P/E = (1/r) + (((1/r) - (1/ROE))* g(↓)/(r-g(↓)))
= (1/r) + (((1/r) - (1/ROE))* (g/(r-g))(↓))
= intrinsic P/E (↓)
|
Elk Multitask
For a while now I have been thinking about the problem of running multiple basic applications on very low-end hardware, in a safe and efficient manner. This idea probably first came to me when I was looking at the PineTime a while back. Lup Yuen and I were discussing the possibility of running multiple apps on the smart watch at the same time using something like WASM, although the porting process is non-trivial 1.
Possibly from this line of enquiry, I have been looking to build a Linux PDA for a while now. I want a simple open source personal device with limited capability. I did begin to explore the possibility of using the Pine Cube; unfortunately it did not work out as a platform due to the lack of touchscreen support - and the form factor was entirely wrong anyway. I spoke about multiple options at the end of that article, but all of them are quite hardware intensive.
I recently got pointed at the m5paper device, a $100 e-ink portable device with battery, touch display and WiFi. One caveat though: It has an ESP32. This would not be a Linux device on such low-end hardware.
Thinking back to this idea of running some form of byte code, I again thought that it could be possible to run some form of high-level language that interacts with systems both on the Linux desktop and on this low-end device.
The obvious choice for me was Java - it was literally designed with portability in mind.
First I checked out NanoVM. It's very much a contender, requiring less than 8kB of flash memory, at least 512 bytes of storage for Java byte code and at least 768 bytes of RAM to run. That's a little on the high side, but manageable. There are of course some downsides, which are why I did not use it: there is no native port to the ESP32, meaning I would need to port it entirely myself; it's not at all clear the generated code is actually so portable anyway; and it relies heavily on custom C functions, with a decent chunk of the Java core library needing to be rewritten for the platform.
Next I considered uJ, a JVM for microcontrollers. This implementation appears to take up about 80kB of code space and also requires several kilobytes of RAM. There is currently no ESP32 implementation (people asked for help in the comments but got no reply) and it looks like it would be a pain to actually implement.
I also considered microej, but this appears to not be open source and therefore against my goal, despite appearing to have a pretty decent JVM implementation for the ESP32 (or at least some form of development board based on it).
Enter Hacker News. I saw a post named: 'Elk: A low footprint JavaScript engine for embedded systems' - interesting! I wasn't expecting so much, but I checked out the GitHub repository anyway. Apparently the compiled code takes about 20kB of flash, and as few as 100 bytes for the VM! Okay, this is getting very interesting.
Elk is surprisingly cool and simple. The abstract hooked me:
The feature set was also super interesting:
Cross platform. Works anywhere from 8-bit microcontrollers to 64-bit servers
Zero dependencies. Builds cleanly by ISO C or ISO C++ compilers
Easy to embed: just copy elk.c and elk.h to your source tree
Very small and simple embedding API
Can call native C/C++ functions from JavaScript and vice versa
Does not use malloc. Operates with a given memory buffer only
Small footprint: about 20KB on flash/disk, about 100 bytes RAM for core VM
No bytecode. Interprets JS code directly
It's incredibly simple and seems to be able to run on basically anything - it looks as if it should run happily on either desktop or microcontroller, blissfully unaware of which it is operating on!
So what benefits would there be to using such a system? Firstly, anyone writing applications for the device would not need a compiler. All they need to do is run the code and they're on their way. This really lowers the barrier to entry for third-party applications.
What disadvantages are there? Of course, the cost of shipping neither a binary nor bytecode is a serious speed disadvantage. As the readme page suggests:
let a = 0;       // 97 milliseconds on a 16MHz 8-bit Atmega328P (Arduino Uno and alike)
while (a < 100)  // 16 milliseconds on a 48MHz SAMD21
  a++;           // 5 milliseconds on a 133MHz Raspberry RP2040
                 // 2 milliseconds on a 240MHz ESP32
Okay, it's pretty slow. We can likely do some patches to speed it up, but we won't be talking orders of magnitude here. Ideally we want to outsource as much of the heavy lifting as possible to C-based helper functions and avoid spending too long doing any real logic in the JS.
So it passes the basic sniff test, what now? Well ideally we need it to be able to run multiple JS programs at once. We need some form of multi-tasking. Of course, we don't have multiple cores available on most of the microcontrollers we will want to run on, so something smarter will be required!
One method to implement this could be cooperative multitasking, where each JS program yields (voluntarily gives up compute time to allow another application to run). This can be achieved either by each program manually calling a yield() function, or by a yield being performed on behalf of the program when it accesses some external function. This of course relies on every application behaving well (no bugs) and every application being considerate of others. Given we want to lower the barrier to entry, it would make more sense to choose another method.
Another method to consider is preemptive multitasking, where each application is "interrupted" manually and time is given to another. This means that even if one application is misbehaving or consuming excessive resources, it is possible for another application to get computation time.
A better architecture still would be to have some prioritization process. For an input-focussed embedded device, the most important application will likely be the one currently on the display with other "background" applications likely only needing occasional computation. It would be relatively easy to implement such a prioritization system.
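As a language-neutral sketch of the scheduling idea (not Elk code), Python generators can stand in for interpreters that run one slice at a time; each yield plays the role of one evaluation step, and finished programs are dropped from the rotation.

```python
def program(name, steps):
    """A toy 'JS program' that needs `steps` slices of compute."""
    for i in range(steps):
        yield f"{name}:{i}"   # one unit of work per slice

def round_robin(tasks):
    """Give each runnable task one slice per pass until all have finished."""
    trace = []
    while tasks:
        runnable = []
        for t in tasks:
            try:
                trace.append(next(t))  # one slice of this program
                runnable.append(t)
            except StopIteration:
                pass                   # program finished; drop it
        tasks = runnable
    return trace

trace = round_robin([program("js1", 2), program("js2", 3)])
```

The resulting trace interleaves the two programs and lets the longer one finish alone, which is exactly the behaviour the real test program below exhibits. A prioritized variant would simply hand the foreground task several slices per pass.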
Currently, Elk doesn't support multitasking of any kind. There is an evaluation function that takes the entire JS code, processes it and then only returns once execution ends. This is obviously bad for sharing processing time, but also bad for preventing a badly behaving JS code from running indefinitely.
Okay, so how do we make some form of basic multitasking?
The way this normally works is that you run js_eval(), and this simply checks the length of the code and then runs js_eval_nogc(). This function is the meat grinder:
static jsval_t js_eval_nogc(struct js *js, const char *buf, jsoff_t len) {
  jsval_t res = mkval(T_UNDEF, 0);
  js->tok = TOK_ERR;
  js->code = buf;
  js->clen = len;
  js->pos = 0;
  while (js->tok != TOK_EOF && !is_err(res)) {
    js->pos = skiptonext(js->code, js->clen, js->pos);
    if (js->pos >= js->clen) break;
    res = js_stmt(js, TOK_SEMICOLON);
  }
  return res;
}
It initialises the engine and then runs the code until EOF (end of file), an error occurs or the code simply completes execution. It then returns the result (if there is one) and all is good. Seems simple enough.
The problem is, with this implementation we would need to implement some form of threading in order to have multiple JS programs executed at the same time. A thread is not so heavy for most things, but when you're talking about a few kilobytes of RAM, a thread is a significantly heavy object.
We can of course split up the function js_eval_nogc() and get it to operate how we want! In elk.h I added two additional functions:
js_init() - Initialises the JS engine.
js_run() - Runs the JS code one section at a time.
In the header this looks like this:
jsval_t js_eval(struct js *, const char *, size_t); // Execute JS code
void js_init(struct js *, const char *);            // Initialise JS code
jsval_t js_run(struct js *);                        // Run JS code
jsval_t js_glob(struct js *);                       // Return global object
For the implementation of js_init() we are simply "resetting" the JS engine like so:
void js_init(struct js *js, const char *buf) {
  js->tok = TOK_ERR;
  js->code = buf;
  js->clen = strlen(buf);
  js->pos = 0;
}
And then for execution function js_run() we are simply removing the loop:
jsval_t js_run(struct js *js) {
  jsval_t res = mkval(T_UNDEF, 0);
  if (js->tok != TOK_EOF && !is_err(res)) {
    js->pos = skiptonext(js->code, js->clen, js->pos);
    if (js->pos >= js->clen) return res;
    res = js_stmt(js, TOK_SEMICOLON);
  }
  return res;
}
This is not an efficient implementation as it calls mkval() on each call (which is simply a waste). We could also likely get rid of a lot of the checking.
And now for the test program, built on top of the test examples on their GitHub page:
#include "../elk/elk.c"

char mem1[256];
char mem2[256];

char* src1 =
  "let a = 1;"
  "let b = 2;"
  "a = sum(a, b);"
  "b = sum(b, a);"
  "sum(a, b);";

char* src2 =
  "let a = 1;"
  "while(a < 128){"
  "a = sum(a, a);"
  "}"
  "a;";

/*
 * sum()
 *
 * Sum two integers together and return the result.
 *
 * @param a The first integer.
 * @param b The second integer.
 * @return The summed integers.
 */
int sum(int a, int b){
  return a + b;
}

/*
 * main()
 *
 * A test function for checking elk concurrency.
 *
 * @param argc The number of command line arguments.
 * @param argv The command line arguments.
 * @return The exit status of the program.
 */
int main(int argc, char** argv){
  /* Setup JS environments in the given memory */
  struct js* js1 = js_create(mem1, sizeof(mem1));
  struct js* js2 = js_create(mem2, sizeof(mem2));

  /* Give external C functions */
  js_set(js1, js_glob(js1), "sum", js_import(js1, (uintptr_t)sum, "iii"));
  js_set(js2, js_glob(js2), "sum", js_import(js2, (uintptr_t)sum, "iii"));

  /* Variables to return results */
  jsval_t r1, r2;

  /* Initialise the code */
  js_init(js1, src1);
  js_init(js2, src2);

  /* Run main loop, giving each engine one slice per pass */
  bool running = true;
  while(running){
    running = false;
    if(js1->tok != TOK_EOF && js1->pos < js1->clen){
      printf("js1: %i\n", js1->pos);
      r1 = js_run(js1);
      running = true;
    }
    if(js2->tok != TOK_EOF && js2->pos < js2->clen){
      printf("js2: %i\n", js2->pos);
      r2 = js_run(js2);
      running = true;
    }
  }

  printf("result1: %s\n", js_str(js1, r1));
  printf("result2: %s\n", js_str(js2, r2));
  return 0;
}
The implementation is pretty simple, it just runs both JS engines until both of them are complete. The sum() function ran in the JS is a native C function, which proves that we can leverage C functions for greater speed, despite just operating in JS.
The programs being run are pretty simple on purpose. For the first engine, we run the following JS:
let a = 1;
let b = 2;
a = sum(a, b);
b = sum(b, a);
sum(a, b);
This program simply does the following:
a=1
b=2
a=a+b=1+2=3
b=b+a=2+3=5
a+b=5+3=8
For the second program, we wanted to do something different (to prove the executions do not interfere with one another) and run for slightly longer:
let a = 1;
while(a < 128){
  a = sum(a, a);
}
a;
a=1
a=a+a=1+1=2
Repeat the previous step until a ≥ 128, at which point a = 128.
So now for the big question: Does it work?
$ ./elktest
js1: 0
...
js1: 10
...
result1: 8
result2: 128
Yes. We can clearly see that it swaps back and forth between the two different JS 'programs', finishing the execution of the first program and finally displaying the results once the second program has also finished running.
I am very sure it is not perfect - I'm pretty sure that it will not evenly distribute time across the two different programs, but for an initial test, it does in fact seem to be plausible to 'multitask' on a single core in limited memory - cool!
Going forwards, I believe this could very much be a viable way forward. It seems as though each application could in theory run in limited memory and not run the risk of crashing neighbouring applications. Also, given the ability to bind C functions, each application could feasibly leverage the speed of C functions and run just surface-level logic in JS.
I will very much be considering using this in a PDA implementation. In theory this could allow for easy syncing between a desktop and microcontroller implementation, running near identical code. It would also allow for the rapid implementation of new apps and programs on the PDA device.
If the end of this month goes well, I think I will look towards treating myself to such a device as the m5paper (subject to availability). Let's call it a celebration and early Christmas present if that helps.
Unfortunately nothing ever came from our discussions, but I still believe it could be something worth exploring.↩
|
2 Lesson on Vectors
2.1 Definition of a scalar:
2.2 Definition of a real vector:
2.3 Dimension of a vector:
2.4 Vector Operations:
3 Lesson on Scalar and Vector Fields
3.1 Lesson on Operations on Scalar and Vector Fields
3.2 Lesson on Solving Boundary Value Problems with Nonhomogeneous BCs
Lesson PlanEdit
Requirements of student preparation: The student needs to have worked with vectors. If not, the student should obtain suitable instruction in vector calculus.
Subject Area: A review of vectors, vector operations, the gradient, scalar fields, vector fields, curl, and divergence.
Objectives: The learner needs to understand the conceptual and procedural knowledge associated with each of the following
Definition of vectors in
{\displaystyle {\mathcal {R}}^{n}}
{\displaystyle n=1,2,3}
{\displaystyle \nabla }
, divergence
{\displaystyle \nabla \circ }
, curl
{\displaystyle \nabla \times }
and covariant derivatives on fields
Composite operators such as
{\displaystyle \nabla \circ \nabla \mathbf {v} }
Activities: These structures are to help you understand and aid long-term retention of the material.
Lesson on Vectors, their associated properties and operations that use vectors.
Lesson on Scalar and Vector fields
Lesson on Operations on scalar and vector fields
Assessment: These items are to determine the effectiveness of the learning activities in achieving the lesson objectives.
Challenging extended problems.
Student survey/feedback
Lesson on VectorsEdit
We will be using only real numbers in this course. The set of all real numbers will be represented by
{\displaystyle {\mathcal {R}}}
Definition of a scalar:Edit
A scalar is a single real number,
{\displaystyle \displaystyle a\in {\mathcal {R}}}
{\displaystyle 3}
Definition of a real vector:Edit
A real vector,
{\displaystyle \displaystyle v}
is an ordered set of two or more real numbers.
{\displaystyle \displaystyle v=(1,2)}
{\displaystyle \displaystyle w=(5,0,50,-1.25)}
are both vectors. We will use the notation of
{\displaystyle \displaystyle v_{i}}
where the lower index
{\displaystyle \displaystyle i=1..n}
represents the individual elements of a vector in the appropriate order.
Ex: The vector
{\displaystyle \displaystyle v=(3,-7)}
has two elements, the first element is designated
{\displaystyle \displaystyle v_{1}=3}
{\displaystyle \displaystyle v_{2}=-7}
Dimension of a vector:Edit
The dimension of a vector is the number of elements in the vector.
Ex: Dimension of
{\displaystyle \displaystyle v=(-1.25,0,-2,-2)}
{\displaystyle n=4}
Vector Operations:Edit
To refresh your memory, for vectors of the same dimension the following are valid operations:
{\displaystyle v=(a,b)}
{\displaystyle u=(c,d)}
for each of the following statements.
{\displaystyle \mathbf {v+u} =(a,b)+(c,d)=(a+c,b+d)}
{\displaystyle v=(2,5)}
{\displaystyle u=(6,1)}
{\displaystyle v+u=(8,6)}
Multiplication by a scalar,
{\displaystyle \displaystyle k}
{\displaystyle k(v)=k(a,b)=(ka,kb)}
{\displaystyle k=2}
{\displaystyle k(-2,4)=2(-2,4)=(-4,8)}
{\displaystyle \mathbf {u} \times \mathbf {v} =\mathbf {w} }
{\displaystyle \mathbf {u=(2,3,4){\mbox{ and }}v=(-1,4,-3)} }
{\displaystyle \mathbf {u} \times \mathbf {v} =\left|{\begin{array}{ccc}\mathbf {i} &\mathbf {j} &\mathbf {k} \\2&3&4\\-1&4&-3\end{array}}\right|=\mathbf {i} (3(-3)-4(4))-\mathbf {j} (2(-3)-4(-1))+\mathbf {k} (2(4)-3(-1))}
{\displaystyle \mathbf {u} \times \mathbf {v} =-25\mathbf {i} +2\mathbf {j} +11\mathbf {k} }
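As an illustrative aside (not part of the original lesson), the three operations above can be checked in plain Python:

```python
def vec_add(u, v):
    # Component-wise addition; vectors must have the same dimension.
    return tuple(a + b for a, b in zip(u, v))

def vec_scale(k, v):
    # Multiply every component by the scalar k.
    return tuple(k * a for a in v)

def cross(u, v):
    # The cross product is defined only for 3-dimensional vectors.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

print(vec_add((2, 5), (6, 1)))        # (8, 6)
print(vec_scale(2, (-2, 4)))          # (-4, 8)
print(cross((2, 3, 4), (-1, 4, -3)))  # (-25, 2, 11)
```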
Lesson on Scalar and Vector Fields
Lesson on Operations on Scalar and Vector Fields
Lesson on Solving Boundary Value Problems with Nonhomogeneous BCs
|
Peculiar_velocity Knowpia
Local objects are commonly examined as to their vectors of position angle and radial velocity. These can be combined through vector addition to state the object's motion relative to the Sun. Velocities for local objects are sometimes reported with respect to the local standard of rest (LSR) – the average local motion of material in the galaxy – instead of the Sun's rest frame. Translating between the LSR and heliocentric rest frames requires the calculation of the Sun's peculiar velocity in the LSR.[1]
{\displaystyle 1+z_{pec}={\sqrt {\frac {1+v/c}{1-v/c}}}}
{\displaystyle z\approx v/c}
for low velocities (small redshifts). This combines with the redshift from the Hubble flow and the redshift from our own motion
{\displaystyle z_{\odot }}
to give the observed redshift[3]
{\displaystyle 1+z_{obs}=(1+z_{pec})(1+z_{H})(1+z_{\odot }).}
(There may also be a gravitational redshift to consider.[3])
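A numerical sketch of the relations above (the function names are illustrative, not from the article):

```python
import math

C = 299792.458  # speed of light in km/s

def z_peculiar(v):
    # Relativistic Doppler formula: 1 + z_pec = sqrt((1 + v/c) / (1 - v/c)).
    return math.sqrt((1 + v / C) / (1 - v / C)) - 1

def z_observed(z_pec, z_hubble, z_sun):
    # Redshifts combine multiplicatively: 1 + z_obs = (1+z_pec)(1+z_H)(1+z_sun).
    return (1 + z_pec) * (1 + z_hubble) * (1 + z_sun) - 1

z = z_peculiar(300.0)               # an example peculiar velocity of 300 km/s
print(z)                            # ≈ 0.0010, close to v/c
print(abs(z - 300.0 / C) < 1e-6)    # True: the low-velocity approximation z ≈ v/c holds
```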
{\displaystyle v_{r}=H_{0}d+v_{pec}}
with contributions from both the Hubble flow and peculiar velocity terms, where
{\displaystyle H_{0}}
is the Hubble constant and {\displaystyle d} is the distance to the object.
Redshift-space distortions can cause the spatial distributions of cosmological objects to appear elongated or flattened out, depending on the cause of the peculiar velocities.[4] Elongation, sometimes referred to as the "Fingers of God" effect, is caused by random thermal motion of objects; however, correlated peculiar velocities from gravitational infall are the cause of a flattening effect.[5] The main consequence is that, in determining the distance of a single galaxy, a possible error must be assumed. This error becomes smaller as distance increases. For example, in surveys of type Ia supernovae, peculiar velocities have a significant influence on measurements out to redshifts around 0.5, leading to errors of several percent when calculating cosmological parameters.[3][6]
Peculiar velocities can also contain useful information about the universe. The connection between correlated peculiar velocities and mass distribution has been suggested as a tool for determining constraints for cosmological parameters using peculiar velocity surveys.[7][8]
^ Schönrich, R.; Binney, J. (2010). "Local kinematics and the local standard of rest". Monthly Notices of the Royal Astronomical Society. 403 (4): 1829–1833. arXiv:0912.3693. Bibcode:2010MNRAS.403.1829S. doi:10.1111/j.1365-2966.2010.16253.x.
^ Girardi, M.; Biviano, A.; Giuricin, G.; Mardirossian, F.; Mezzetti, M. (1993). "Velocity dispersions in galaxy clusters". The Astrophysical Journal. 404: 38–50. Bibcode:1993ApJ...404...38G. doi:10.1086/172256.
^ a b c Davis, T. M.; Hui, L.; Frieman, J. A.; Haugbølle, T.; Kessler, R.; Sinclair, B.; Sollerman, J.; Bassett, B.; Marriner, J.; Mörtsell, E.; Nichol, R. C.; Richmond, M. W.; Sako, M.; Schneider, D. P.; Smith, M. (2011). "The Effect of Peculiar Velocities on Supernova Cosmology". The Astrophysical Journal. 741 (1): 67. arXiv:1012.2912. Bibcode:2011ApJ...741...67D. doi:10.1088/0004-637X/741/1/67.
^ Kaiser, N. (1987). "Clustering in real space and in redshift space". Monthly Notices of the Royal Astronomical Society. 227 (1): 1–21. Bibcode:1987MNRAS.227....1K. doi:10.1093/mnras/227.1.1.
^ Percival, W. J.; Samushia, L.; Ross, A. J.; Shapiro, C.; Raccanelli, A. (2011). "Redshift-space distortions". Philosophical Transactions of the Royal Society A. 369 (1957): 5058–5067. Bibcode:2011RSPTA.369.5058P. doi:10.1098/rsta.2011.0370. PMID 22084293.
^ Sugiura, N.; Sugiyama, N.; Sasaki, M. (1999). "Anisotropies in Luminosity Distance". Progress of Theoretical Physics. 101 (4): 903–922. Bibcode:1999PThPh.101..903S. doi:10.1143/ptp.101.903.
^ Odderskov, I.; Hannestad, S. (1 January 2017). "Measuring the velocity field from type Ia supernovae in an LSST-like sky survey". Journal of Cosmology and Astroparticle Physics. 2017 (1): 60. arXiv:1608.04446. Bibcode:2017JCAP...01..060O. doi:10.1088/1475-7516/2017/01/060. S2CID 119255726.
^ Weinberg, D. H.; Mortonson, M. J.; Eisenstein, D. J.; Hirata, C.; Riess, A. G.; Rozo, E. (2013). "Observational probes of cosmic acceleration". Physics Reports. 530 (2): 87–255. arXiv:1201.2434. Bibcode:2013PhR...530...87W. doi:10.1016/j.physrep.2013.05.001. S2CID 119305962.
|
Hunter's Bow - Ring of Brodgar
Skill(s) Required Archery
Object(s) Required Leather x1, Tree Bough x2, String x8
Slot(s) Occupied 5L and 5R
Ammunition Stone Arrow, Bone Arrow, Metal Arrow
Craft > Clothes & Equipment > Weapons > Hunter's Bow
A Hunter's Bow is a medium-ranged weapon. It fires Stone, Bone, or Metal Arrows, which affect the damage dealt. A player aims the bow at a location; when the player releases the arrow, it flies along that trajectory.
The Ranger's Bow is a significant upgrade to the Hunter's bow.
When crafting, the qualities of a bow are softcapped by
{\displaystyle {\sqrt[{3}]{Dexterity*Carpentry*Marksmanship}}}
Damage scales with Marksmanship skill and bow quality using the following formula:
{\displaystyle Damage=BowBaseDamage\cdot {\sqrt {{\sqrt {Marksmanship\cdot q_{Bow}}}/10}}+ArrowBaseDamage\cdot {\sqrt {q_{Arrow}/10}}}
Stone Arrows have a base damage of 10. Bone Arrows have a base damage of 15. Metal Arrows have a base damage of 20. The arrows have 10%, 15%, and 20% armor penetration respectively.
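A minimal sketch of the damage formula in Python, assuming the formula as written above; the bow base damage of 40 used in the example call is a made-up illustrative value, not a game stat:

```python
import math

def bow_damage(bow_base, marksmanship, q_bow, arrow_base, q_arrow):
    # Damage = BowBase * sqrt(sqrt(Marksmanship * qBow) / 10)
    #        + ArrowBase * sqrt(qArrow / 10)
    return (bow_base * math.sqrt(math.sqrt(marksmanship * q_bow) / 10)
            + arrow_base * math.sqrt(q_arrow / 10))

# Hypothetical example: a quality-10 bow (assumed base damage 40) firing a
# quality-10 Stone Arrow (base damage 10) at 100 Marksmanship.
print(bow_damage(bow_base=40, marksmanship=100, q_bow=10,
                 arrow_base=10, q_arrow=10))  # ≈ 81.1
```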
A hunter's bow used as a wall decoration.
Game Development (manual)
Colorful Troll (2016-03-10) >"Holding an object aloft no longer blocks arrows from hitting you."
Colorful Troll (2016-03-10) >"you should also now be able to fire through open gates."
Colorful Troll (2016-03-10) >"You can no longer fight or aim a bow while carrying things aloft."
Retrieved from "https://ringofbrodgar.com/w/index.php?title=Hunter%27s_Bow&oldid=94439"
|
Rational Expressions | Brilliant Math & Science Wiki
Ashley Toh, Lavisha Parab, Gene Keun Chung, and
A rational expression is an expression of the form
\frac{A}{B},
where
A
and
B
are polynomials, and
B \neq 0
Here are a few examples of rational expressions where the denominator is simply
1
2x, 2x^2, 2x^2 +1.
The following are a few examples of rational expressions where the denominator is a constant:
\frac{2x}{3}, \frac{2x^2}{5}, \frac{2x^2 +1}{4}.
Also, the following are a few examples of rational expressions where the denominator contains variables:
\frac{1}{x}, \frac{x+1}{x}, \frac{x+1}{x-3}.
Simplifying Rational Expressions using the Laws of Exponents
Simplifying Rational Expressions by Factoring
Let A, B, and C be real numbers or variable expressions, where
B \neq 0
and
C \neq 0.
Then:
\frac{AC}{BC} = \frac{A}{B}
: You can divide out the top and bottom by a common factor C. This is also known as "canceling" C.
\frac{A}{B} = \frac{A \times C}{B \times C}
: You can multiply the top and bottom by a common factor C.
\frac{ 15xy^2 }{ 12y }.
\frac{ 15xy^2 }{ 12y } = \frac{ 3 \cdot 5 xy^2 }{ 4 \cdot 3 y } = \frac{ 5 }{ 4 }xy^{2-1} = \frac{ 5 }{ 4 }xy. \ _\square
\left( \frac{a^5b^{-3}}{a^3b^8} \right)^2 .
\left( \frac{a^5b^{-3}}{a^3b^8} \right)^2 = \left(a^{5-3}b^{-3-8}\right)^2 = \left(a^2b^{-11}\right)^2 = a^4b^{-22} = \frac{a^4}{b^{22}}. \ _\square
For more examples applying the laws of exponents, see Simplifying Expressions with Exponents.
What is the value of
\frac {x^2 - 9}{x + 3}
when x = 10?
Factorizing the numerator of the expression gives
\frac {x^2 - 9}{x + 3} = \frac{(x-3)(x+3)}{x+3}.
Canceling out the common factor x + 3 gives
\frac{(x-3)(x+3)}{x+3} = x - 3.
Substituting x = 10 gives
x - 3 = 10 - 3 = 7.
_\square
\frac{3x^3 - 6x^2}{9x^2}.
Dividing both the numerator and denominator by a common factor of
3x^2
\frac{3x^3 - 6x^2}{9x^2} = \frac{x - 2}{3}. \ _\square
\frac{x^2 - x - 2}{x^2 - 2x}.
The expression can be factored as
\frac{x^2 - x - 2}{x^2 - 2x} =\frac{(x-2)(x+1)}{x(x-2)}.
Canceling the common factor x - 2 gives
\frac{(x-2)(x+1)}{x(x-2)} = \frac{x+1}{x}.\ _\square
\frac{6x^2 - x - 2}{10x^2 + 3x - 1} .
\begin{aligned} \frac{6x^2 - x - 2}{10x^2 + 3x - 1} &=\frac{ (2x + 1)(3x - 2) }{ (2x + 1)(5x - 1) } \\ &= \frac{ 3x - 2 }{ 5x - 1 }.\ _\square \end{aligned}
\frac{x^2 - y^2}{x^3 - y^3}.
\frac{x^2 - y^2}{x^3 - y^3} =\frac{ (x - y)(x + y) }{ (x - y)(x^2 + xy + y^2) }.
Canceling the common factor x - y gives
\frac{ (x - y)(x + y) }{ (x - y)(x^2 + xy + y^2) } = \frac{x+y}{x^2 + xy + y^2}. \ _\square
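As a sanity check (not a proof), the cancellations above can be verified numerically by evaluating both sides at sample points away from the excluded values:

```python
def close(a, b, tol=1e-9):
    # Two floating-point values agree within a small tolerance.
    return abs(a - b) < tol

# (x^2 - x - 2)/(x^2 - 2x) should equal (x + 1)/x for x not in {0, 2}.
for x in [1, 3, 10, -4, 0.5]:
    lhs = (x**2 - x - 2) / (x**2 - 2 * x)
    rhs = (x + 1) / x
    assert close(lhs, rhs)

# (x^2 - 9)/(x + 3) should equal x - 3; at x = 10 both give 7.
assert close((10**2 - 9) / (10 + 3), 10 - 3)
print("all checks passed")
```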
Next, see the Simplifying Rational Expressions page to learn how to multiply, divide, add, and subtract rational expressions.
Cite as: Rational Expressions. Brilliant.org. Retrieved from https://brilliant.org/wiki/simplify-fractions/
|
Base (mathematics) - Simple English Wikipedia, the free encyclopedia
A base is usually a whole number bigger than 1, although non-integer bases are also mathematically possible. The base of a number may be written next to the number: for instance,
{\displaystyle 23_{8}}
means 23 in base 8 (which is equal to 19 in base 10).
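The conversion can be checked in Python, whose built-in `int(s, base)` parses a numeral string in a given base:

```python
# Parse "23" as a base-8 numeral: 2*8 + 3 = 19.
print(int("23", 8))  # 19

def to_base(n, b):
    """Render a non-negative integer n as a numeral string in base b (2 <= b <= 10)."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(str(n % b))  # least-significant digit first
        n //= b
    return "".join(reversed(digits))

print(to_base(19, 8))  # "23"
```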
Retrieved from "https://simple.wikipedia.org/w/index.php?title=Base_(mathematics)&oldid=8100679"
|
Calculating Masses of Products - Course Hero
General Chemistry/Stoichiometry of Chemical Reactions/Calculating Masses of Products
One of the major applications of the mole-mass conversion is calculating the product amounts from a balanced chemical equation. A balanced equation provides only the number of moles of reactants and the number of moles of products. The ratio between the number of moles of any two substances in a chemical reaction is called the mole ratio. In a laboratory, the mass of a substance is measured, not the number of moles, so converting from moles to mass is of great importance.
Consider the calculation of the amount of product from a reaction for which the masses are given. There are four steps to follow.
1. Write the balanced chemical equation for the reaction.
2. Find the number of moles of the reactants.
3. Calculate the number of moles of the products using the mole ratio from the balanced chemical equation.
4. Convert the number of moles of the products into mass.
Calculating Product Mass When Reactant Mass is Given
What mass of sodium sulfate (Na2SO4) is produced when 6.8 g of sodium hydroxide (NaOH) reacts with sulfuric acid (H2SO4)?
2\rm{NaOH}+{\rm H}_2{\rm{SO}}_4\rightarrow{\rm{Na}}_2{\rm{SO}}_4+2{\rm H}_2\rm O
Convert the given mass to moles.
First, determine the molar mass of NaOH using the atomic mass of each element from the periodic table.
\begin{aligned}{\text{Molar mass of }{\rm NaOH}}& = (1)(22.99{\rm{\; g/mol}})+(1)(16.00{\rm{\; g/mol}})+(1)(1.01{\rm{\; g/mol}})\\&= 40.00{\rm{\; g/mol}}\end{aligned}
Then, divide the given mass by the molar mass to convert to moles.
\begin{aligned}{{\text{Moles of }{\rm NaOH}}}&=\frac{{\text{Mass of }{\rm NaOH}}}{{\text{Molar mass of }{\rm NaOH}}}\\&= \frac{6.8{\rm{\; g}}}{40.00{\rm{\; g/mol}}}\\& = 0.17{\rm{\; mol\; NaOH}}\end{aligned}
Use the coefficients of the balanced equation to determine the number of moles of the products that form.
\begin{aligned}{{\text{Moles of }{\rm Na_{2}SO}}}_4&=(0.17{\rm{\; mol\; NaOH}})\!\left( {\frac{1{\rm{\; mol\; Na}_2{\rm{SO}}_4}}{2{\rm{\; mol\; NaOH}}}} \right)\\& = 0.085{\rm{\; mol\; Na}}_2{\rm{SO}}_4\end{aligned}
Determine the molar mass of Na2SO4 using the atomic mass of each element from the periodic table.
\begin{aligned}{{\text{Molar mass of }{\rm Na_{2}SO_{4}}}}& = (2)(22.99{\rm{\; g/mol}})+(1)(32.06{\rm{\; g/mol}})+(4)(16.00{\rm{\; g/mol}})\\&= 142.04{\rm{\; g/mol}}\end{aligned}
The mass of the product is the number of moles multiplied by the molar mass.
\begin{aligned}\text{Mass of }{\rm Na_{2}SO_{4}} &= ( 0.085{\rm{\; mol}})( 142.04{\rm{\; g/mol}}) \\&= 12{\rm{\; g\; Na}}_{2}{\rm{SO}}_{4}\end{aligned}
With the appropriate significant figures, the mass of Na2SO4 is 12 g.
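The four-step calculation above translates directly into a short script (a sketch using the same molar masses as the worked example):

```python
# Mass-to-mass stoichiometry: 2 NaOH + H2SO4 -> Na2SO4 + 2 H2O
M_NAOH = 40.00     # g/mol, from 22.99 + 16.00 + 1.01
M_NA2SO4 = 142.04  # g/mol, from 2(22.99) + 32.06 + 4(16.00)

mass_naoh = 6.8                       # g, given
mol_naoh = mass_naoh / M_NAOH         # step 2: grams -> moles
mol_na2so4 = mol_naoh * (1 / 2)       # step 3: mole ratio 1 Na2SO4 : 2 NaOH
mass_na2so4 = mol_na2so4 * M_NA2SO4   # step 4: moles -> grams

print(round(mass_na2so4, 1))  # 12.1, reported as 12 g to two significant figures
```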
Calculating Reactant Mass Required to Produce a Given Product Mass
Calculate the mass of sodium hydroxide (NaOH) required to produce 20.5 g of magnesium hydroxide, Mg(OH)2, when it reacts with magnesium chloride, MgCl2.
{\rm{MgCl}}_2+2\rm{NaOH}\rightarrow\rm{Mg}(\rm{OH})_2+2\rm{NaCl}
Calculate the molar mass of Mg(OH)2 using the number of each type of atom in the formula and the atomic mass of each element from the periodic table.
\begin{aligned}\text{Molar mass of }{\rm Mg(OH)_{2}}&=(1)(24.31\;\rm{g/mol})+(2)(16.00\;\rm{g/mol})+(2)(1.01\;\rm{g/mol})\\&=58.33\rm{\;g/mol}\end{aligned}
Use the molar mass to convert the mass of Mg(OH)2 to number of moles of Mg(OH)2.
\begin{aligned}\text{Moles of }{\rm Mg(OH)_{2}}&=\frac{20.5\rm{\; g\; Mg(OH)}_2}{58.33\rm{\; g/mol}}\\&=0.3514\rm{\; mol\; Mg(OH)}_2\end{aligned}
An extra digit is included in the answer because this is an intermediate calculation.
Use the coefficients of the balanced equation to convert moles of Mg(OH)2 to moles of NaOH.
\begin{aligned}\text{Moles of }{\rm NaOH}&=\left(0.3514\rm{\; mol\; Mg(OH)}_2\right)\!\left(\frac{2\;\rm {mol\; NaOH}}{1\rm{\; mol\; Mg(OH)}_2}\right)\\&=0.7028\rm{\; mol\; NaOH}\end{aligned}
The molar mass of NaOH is needed in order to find the mass. Use the periodic table to determine the atomic mass of each atom. Then, multiply the corresponding molar mass of each atom by the number of atoms shown in the formula for sodium hydroxide.
\begin{aligned}\text{Molar mass of }{\rm NaOH}&=(1)(22.99\;\rm{g/mol})+(1)(16.00\;\rm{g/mol})+(1)(1.01\;\rm{g/mol})\\&=40.00\;\rm{g/mol}\end{aligned}
Use the molar mass of NaOH to calculate the mass of NaOH, expressed to three significant figures.
\begin{aligned}\text{Mass of }{\rm NaOH}&=(\text{Moles of }{\rm NaOH})(\text{Molar mass of }{\rm NaOH})\\&=(0.7028\rm{\; mol})(40.00\rm{\; g/mol})\\&=28.1\rm{\; g}\end{aligned}
|
Modeling the effects of anomalous diffusion on synaptic plasticity | BMC Neuroscience | Full Text
Modeling the effects of anomalous diffusion on synaptic plasticity
Toma Marinov1 &
The diffusion of cytosolic intracellular signals in spiny dendrites is anomalous due to spine trapping [1]. During anomalous diffusion the mean square displacement (MSD) of diffusing molecules follows a power law, MSD ~ tα, with α called the anomalous exponent. We have shown that α depends on the density and structure of spines and could be a general property of all spiny dendrites [2]. Anomalous diffusion affects the spatial spread and temporal concentration profiles of cytosolic molecules, thus potentially affecting the specificity and reliability of synaptic plasticity. Here we study the effect of anomalous diffusion on the spatial and the temporal distribution of signals involved in the expression of long term depression (LTD) in Purkinje cells (PCs). LTD depends on the PKC-MAPK positive feedback cascade. Increased [Ca2+] activates PKC, which in turn activates MAPK. Activated MAPK and [Ca2+] result in the production of arachidonic acid, which then activates PKC. The activated PKC either further activates MAPK or phosphorylates AMPARs, which are then removed from the synapse [3].
We use the fractional diffusion formulation of anomalous diffusion. In such a framework the diffusion-reaction equation for a given reactant is:
\frac{{\partial }^{\alpha }{C}_{{R}_{i}}}{\partial {t}^{\alpha }}=\gamma {\nabla }^{2}{C}_{{R}_{i}}+f\left({C}_{{R}_{i}},{C}_{{R}_{j}}\right)
where α depends on the spine density along the dendrite, γ(t) is the generalized transport coefficient, CRi(t) is the concentration of the reactant Ri and f(CRi, CRj) defines the reaction terms of the specific biochemical reaction. Solving a system of coupled fractional diffusion-reaction equations for [Ca2+], PKC and MAPK is computationally expensive. To address this problem we recently developed a Fractional Integration Toolbox (FIT) [4].
We have solved a simplified LTD model. In this model [Ca2+] does not undergo anomalous diffusion [1]. However, since PKC and MAPK are large proteins, they are susceptible to molecular trapping by spines resulting in anomalous diffusion. Our results show that in spiny dendrites (α < 1) the diffusion of either PKC or MAPK is slower than in the case of diffusion in spineless dendrites (α = 1) (Figure 1A). Under anomalous diffusion there is a longer activation of the PKC-MAPK positive feedback loop. Once activated, PKC and MAPK stay activated longer (Figure 1B), implying a lower [Ca2+] activation threshold. Thus, anomalous diffusion affects not only the spatial spread of molecules produced during LTD but also the activation threshold of the synaptic plasticity process.
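As a minimal illustration (synthetic data, not the authors' simulation), the anomalous exponent α can be recovered as the slope of MSD versus time on log-log axes:

```python
import math

def fit_exponent(times, msd):
    # Least-squares slope of log(MSD) vs log(t), i.e. the power-law exponent alpha.
    xs = [math.log(t) for t in times]
    ys = [math.log(m) for m in msd]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

times = [1, 2, 4, 8, 16]
normal = [t ** 1.0 for t in times]     # free diffusion: MSD ~ t
anomalous = [t ** 0.4 for t in times]  # spine-trapped subdiffusion: MSD ~ t^0.4

print(fit_exponent(times, normal))     # ≈ 1.0
print(fit_exponent(times, anomalous))  # ≈ 0.4
```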
Diffusion of PKC along a dendrite with no spines (α = 1) or high spine density (α = 0.4). (A) Spatial profile of PKC at t = 1 sec after release at x = 0; the anomalously diffusing PKC remains longer and with higher amplitude than the normally diffusing molecule. (B) The logarithmic transformation of the amplitude decay at x = 0 from the simulations in A shows that the anomalously diffusing PKC stays activated longer than the normally diffusing PKC.
Santamaria F, Wils S, de Schutter E, Augustine GJ: Anomalous diffusion in Purkinje cell dendrites caused by spines. Neuron. 2006, 52 (4):
Santamaria F, Wils S, de Schutter E, Augustine GJ: The diffusional properties of dendrites depend on the density of dendritic spines. European Journal of Neuroscience. 2011, 34 (4):
Ogasawara H, Doi T, Kawato M: Systems biology perspectives on cerebellar LTD. Neurosignals. 2008, 16 (4):
Marinov T, Santamaria F: Fractional Integration Toolbox. Fractional Calculus and Applied Analysis. accepted
Toma Marinov & Fidel Santamaria
Correspondence to Toma Marinov.
Marinov, T., Santamaria, F. Modeling the effects of anomalous diffusion on synaptic plasticity. BMC Neurosci 14, P343 (2013). https://doi.org/10.1186/1471-2202-14-S1-P343
|
Durable response in unresectable HCC
Drug-induced liver injury?
Does anticoagulation affect the fecal immunochemical test?
Management of upper gastrointestinal bleeding
Sonja Kierschke, Dieter Schilling
Gastrointestinal bleeding is the most common emergency in gastroenterology and is associated with a high mortality of 6–15%. In about 90% of cases the bleeding originates above the ligament of Treitz. About 9% of bleeds are found in the colorectum, with the smallest share in the small intestine between the ligament of Treitz and the ileocecal valve.
Hypnotherapy for irritable bowel syndrome
Microbiome modulation
There are now numerous scientific studies on oral probiotics in a wide range of diseases, as well as on their mechanisms of action. It is now known that the efficacy of probiotic bacteria is both strain-specific and disease-specific.
Interdisciplinary visceral medicine: diseases of systemic significance
Under the motto "Viszeralmedizin 2019 — Interdisziplinär im Mittelpunkt der Medizin", gastroenterologists and visceral surgeons discussed at their joint annual congress, held October 2–5 in Wiesbaden, how patients with diseases of the stomach, intestine, liver, or pancreas can best be treated. The interdisciplinary collaboration with a wide range of specialties reflects...
Cure now possible within a few weeks
Isao Tanihata, Kazuyuki Ogata
Lithium-11 is a showcase of neutron halos and has been providing new insight into nuclear structure research. Recently, an isoscalar resonance at low excitation energy ($E_\mathrm{x} \sim 1$ MeV) has been confirmed by the development of a low-energy high-intensity beam of 11Li at TRIUMF. By inelastic scattering measurements $(\mathrm{p}, \mathrm{p}^{\prime})$ ...
Photonuclear reactions: Achievements and perspectives
Norbert Pietralla, Johann Isaak, Volker Werner
Probing the structure of an atomic nucleus by the electromagnetic interaction can be the cleanest and most direct way to obtain information on how the constituting nucleons are organizing themselves within the nucleus. Precise characterization of photonuclear reactions has contributed significantly to the establishment of modern nuclear physics. A brief overview on the Nuclear Resonance Fluorescence...
Pygmy resonances and symmetry energy
I present a brief summary of the first three decades of studies of pygmy resonances in nuclei and their relation to the symmetry energy of nuclear matter. I discuss the first experiments and theories dedicated to study the electromagnetic response in halo nuclei and how a low energy peak was initially identified as a candidate for the pygmy resonance. This is followed by the description of a collective...
V. Yu. Ponomarev, D. H. Jakubassa-Amundsen, A. Richter, J. Wambach
To complete earlier studies of the properties of the electric pygmy dipole resonance (PDR) obtained in various nuclear reactions, the excitation of the 1- states in 140Ce by $(e,e')$ scattering for momentum transfers $q = 0.1\mbox{--}1.2$ fm-1 is calculated within the plane-wave and distorted-wave Born approximations. The excited states of the nucleus...
A. Repko, V. O. Nesterenko, J. Kvasil, P. -G. Reinhard
We analyze the relation between isoscalar toroidal modes and so-called pygmy dipole resonance (PDR), which both appear in the same region of low-energy dipole excitations. To this end, we use a theoretical description within the fully self-consistent Skyrme quasiparticle random-phase approximation (QRPA). Test cases are spherical nuclei 40, 48Ca, 58, 72Ni, 90, 100Zr, and 100, 120, 132Sn which cover...
Dissolution of shell structures and the polarizability of dripline nuclei
Horst Lenske, Nadia Tsoneva
Nuclear matter and finite nuclei at extreme isospin are studied within the microscopical Giessen energy density functional (GEDF). The structure of the GEDF is discussed. Quasiparticle wave equations and the residual interactions are derived by variational methods. Applications to nuclear ground and excited states by HFB and QRPA methods and extensions to multi-phonon theory are indicated. Pairing...
The electric dipole response of neutron-rich nuclei is discussed from an experimental perspective using selected examples. After introducing the main experimental method, which is relativistic Coulomb excitation in conjunction with invariant-mass spectroscopy, the response of neutron-rich nuclei is discussed separately for light halo nuclei and heavier neutron-rich nuclei. Finally, the perspective...
Low-lying dipole and quadrupole states
E. G. Lanza, L. Pellegri, M. V. Andrés, F. Catara, more
We briefly review the main properties of the low-lying dipole states known as Pygmy Dipole Resonance trying to select the main one which could define this new excitation mode. A good candidate seems to be the isoscalar-isovector mixing. This mixing, more effective at the nuclear surface, has been proved by both theoretical and experimental investigations. On the other hand, the study of the low-lying...
Characterization of vorticity in pygmy resonances and soft-dipole modes with two-nucleon transfer reactions
R. A. Broglia, F. Barranco, G. Potel, E. Vigezzi
The properties of the two-quasiparticle-like soft E1-modes and Pygmy Dipole Resonances (PDR) have been and are systematically studied with the help of inelastic and electromagnetic experiments which essentially probe the particle-hole components of these vibrations. It is shown that further insight in their characterisation can be achieved with the help of two-nucleon transfer reactions, in particular...
First principles electromagnetic responses in medium-mass nuclei
Johannes Simonis, Sonia Bacca, Gaute Hagen
We review the recent progress made in the computation of electromagnetic response functions in light and medium-mass nuclei using coupled-cluster theory. We show how a many-body formulation of the Lorentz integral transform method allows to calculate the photoabsorption cross sections of 16, 22O and 40Ca. Then, we discuss electromagnetic sum rules, with particular emphasis on the electric dipole...
S. Péru, I. Deloncle, S. Hilaire, S. Goriely, more
The success encountered in the systematic studies of the electric and magnetic strength functions for almost all even-even nuclei proves the quality of our HFB+QRPA calculations which use the Gogny interaction within the whole nuclear chart. In this paper, we study the dipole electromagnetic strength distribution in 158-166Dy. The scalar or the vector nature, in the isospin as well as in the spin...
A. Bracco, F. Camera, F. C. L. Crespi, B. Million, more
A review of selected experimental works on the gamma-decay from the Giant and Pygmy Dipole Resonances is presented. The common feature of these experiments is that gamma-decay originates from dipole states populated using reactions induced by heavy ions. The focus is the investigation of dipole modes built on the ground and excited states. The major developments made during the years regarding the...
|
Existential quantification — Wikipedia Republished // WIKI 2
Logical quantification stating that a statement holds for at least one object
"∃" redirects here. Not to be confused with Ǝ or ヨ.
"∄" redirects here. For the Ukrainian nightclub of that name, see K41 (nightclub).
In predicate logic, an existential quantification is a type of quantifier, a logical constant which is interpreted as "there exists", "there is at least one", or "for some". It is usually denoted by the logical operator symbol ∃, which, when used together with a predicate variable, is called an existential quantifier ("∃x" or "∃(x)"). Existential quantification is distinct from universal quantification ("for all"), which asserts that the property or relation holds for all members of the domain.[1][2] Some sources use the term existentialization to refer to existential quantification.[3]
Consider a formula that states that some natural number multiplied by itself is 25.
0·0 = 25, or 1·1 = 25, or 2·2 = 25, or 3·3 = 25, ...
This would seem to be a logical disjunction because of the repeated use of "or". However, the ellipsis makes this impossible to read as a disjunction in formal logic. Instead, the statement could be rephrased more formally as
For some natural number n, n·n = 25.
This statement is more precise than the original one, since the phrase "and so on" does not necessarily include all natural numbers and exclude everything else. And since the domain was not stated explicitly, the phrase could not be interpreted formally. In the quantified statement, however, the natural numbers are mentioned explicitly.
This particular example is true, because 5 is a natural number, and when we substitute 5 for n, we produce "5·5 = 25", which is true. It does not matter that "n·n = 25" is only true for a single natural number, 5; even the existence of a single solution is enough to prove this existential quantification as being true. In contrast, "For some even number n, n·n = 25" is false, because there are no even solutions.
The domain of discourse, which specifies the values the variable n is allowed to take, is therefore critical to a statement's trueness or falseness. Logical conjunctions are used to restrict the domain of discourse to fulfill a given predicate. For example:
For some positive odd number n, n·n = 25
is logically equivalent to
For some natural number n, n is odd and n·n = 25.
Here, "and" is the logical conjunction.
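Over a finite (truncated) domain, existential quantification behaves exactly like Python's built-in `any`, which makes the statements above easy to check; the bound of 100 below is an arbitrary truncation of the natural numbers:

```python
# "For some natural number n, n*n = 25" — true, witnessed by n = 5.
print(any(n * n == 25 for n in range(100)))  # True

# "For some even number n, n*n = 25" — false: no even witness exists.
print(any(n * n == 25 for n in range(0, 100, 2)))  # False

# Restricting the domain with a conjunction, as in the text:
print(any(n % 2 == 1 and n > 0 and n * n == 25 for n in range(100)))  # True
```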
In symbolic logic, "∃" (a rotated letter "E", in a sans-serif font) is used to indicate existential quantification.[4] Thus, if P(a, b, c) is the predicate "a·b = c", and
{\displaystyle \mathbb {N} }
is the set of natural numbers, then
{\displaystyle \exists {n}{\in }\mathbb {N} \,P(n,n,25)}
{\displaystyle \exists {n}{\in }\mathbb {N} \,{\big (}Q(n)\;\!\;\!{\wedge }\;\!\;\!P(n,n,25){\big )}}
is the formal statement of "For some natural number n, n is even and n·n = 25", where Q(n) is the predicate "n is even".
In mathematics, the proof of a "some" statement may be achieved either by a constructive proof, which exhibits an object satisfying the "some" statement, or by a nonconstructive proof, which shows that there must be such an object but without exhibiting one.
A quantified propositional function is a statement; thus, like statements, quantified functions can be negated. The
{\displaystyle \lnot \ }
symbol is used to denote negation.
For example, if P(x) is the predicate "x is greater than 0 and less than 1", then, for a domain of discourse X of all natural numbers, the existential quantification "There exists a natural number x which is greater than 0 and less than 1" can be symbolically stated as:
{\displaystyle \exists {x}{\in }\mathbf {X} \,P(x)}
This can be demonstrated to be false. Truthfully, it must be said, "It is not the case that there is a natural number x that is greater than 0 and less than 1", or, symbolically:
{\displaystyle \lnot \ \exists {x}{\in }\mathbf {X} \,P(x)}
If there is no element of the domain of discourse for which the statement is true, then it must be false for all of those elements. That is, the negation of
{\displaystyle \exists {x}{\in }\mathbf {X} \,P(x)}
is logically equivalent to "For any natural number x, x is not greater than 0 and less than 1", or:
{\displaystyle \forall {x}{\in }\mathbf {X} \,\lnot P(x)}
Generally, then, the negation of a propositional function's existential quantification is a universal quantification of that propositional function's negation; symbolically,
{\displaystyle \lnot \ \exists {x}{\in }\mathbf {X} \,P(x)\equiv \ \forall {x}{\in }\mathbf {X} \,\lnot P(x)}
(This is a generalization of De Morgan's laws to predicate logic.)
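This quantifier form of De Morgan's laws can be spot-checked on a finite domain, where ¬∃x P(x) and ∀x ¬P(x) must agree:

```python
domain = range(10)
P = lambda x: 0 < x < 1  # no natural number satisfies this predicate

# not (exists x: P(x))  ==  forall x: not P(x)
assert (not any(P(x) for x in domain)) == all(not P(x) for x in domain)

# The law holds for any predicate on the finite domain, satisfiable or not:
for Q in (lambda x: x % 2 == 0, lambda x: x > 100, lambda x: True):
    assert (not any(Q(x) for x in domain)) == all(not Q(x) for x in domain)
print("De Morgan check passed")
```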
A common error is stating "all persons are not married" (i.e., "there exists no person who is married"), when "not all persons are married" (i.e., "there exists a person who is not married") is intended:
{\displaystyle \lnot \ \exists {x}{\in }\mathbf {X} \,P(x)\equiv \ \forall {x}{\in }\mathbf {X} \,\lnot P(x)\not \equiv \ \lnot \ \forall {x}{\in }\mathbf {X} \,P(x)\equiv \ \exists {x}{\in }\mathbf {X} \,\lnot P(x)}
Negation is also expressible through a statement of "for no", as opposed to "for some":
{\displaystyle \nexists {x}{\in }\mathbf {X} \,P(x)\equiv \lnot \ \exists {x}{\in }\mathbf {X} \,P(x)}
Unlike the universal quantifier, the existential quantifier distributes over logical disjunctions:
{\displaystyle \exists {x}{\in }\mathbf {X} \,{\big (}P(x)\lor Q(x){\big )}\equiv \ (\exists {x}{\in }\mathbf {X} \,P(x)\lor \exists {x}{\in }\mathbf {X} \,Q(x))}
A rule of inference is a rule justifying a logical step from hypothesis to conclusion. There are several rules of inference which utilize the existential quantifier.
Existential introduction (∃I) concludes that, if the propositional function is known to be true for a particular element of the domain of discourse, then it must be true that there exists an element for which the proposition function is true. Symbolically,
{\displaystyle P(a)\to \ \exists {x}{\in }\mathbf {X} \,P(x)}
Existential instantiation, when conducted in a Fitch style deduction, proceeds by entering a new sub-derivation while substituting an existentially quantified variable for a subject—which does not appear within any active sub-derivation. If a conclusion can be reached within this sub-derivation in which the substituted subject does not appear, then one can exit that sub-derivation with that conclusion. The reasoning behind existential elimination (∃E) is as follows: If it is given that there exists an element for which the proposition function is true, and if a conclusion can be reached by giving that element an arbitrary name, that conclusion is necessarily true, as long as it does not contain the name. Symbolically, for an arbitrary c and for a proposition Q in which c does not appear:
{\displaystyle \exists {x}{\in }\mathbf {X} \,P(x)\to \ ((P(c)\to \ Q)\to \ Q)}
{\displaystyle P(c)\to \ Q}
must be true for all values of c over the same domain X; else, the logic does not follow: If c is not arbitrary, and is instead a specific element of the domain of discourse, then stating P(c) might unjustifiably give more information about that object.
The statement
{\displaystyle \exists {x}{\in }\emptyset \,P(x)}
is always false, regardless of P(x). This is because
{\displaystyle \emptyset }
denotes the empty set, and no x of any description – let alone an x fulfilling a given predicate P(x) – exists in the empty set. See also Vacuous truth for more information.
Main article: Universal quantification § As adjoint
In category theory and the theory of elementary topoi, the existential quantifier can be understood as the left adjoint of a functor between power sets, the inverse image functor of a function between sets; likewise, the universal quantifier is the right adjoint.[5]
In Unicode and HTML, the symbols are encoded as U+2203 ∃ THERE EXISTS (HTML entities &#8707; and &exist;) and U+2204 ∄ THERE DOES NOT EXIST (&#8708;, &nexist;, &nexists;).
In TeX, the symbol is produced with "\exists".
Existential clause
Existence theorem
Lindström quantifier
List of logic symbols – for the unicode symbol ∃
Quantifier variance
Uniqueness quantification
^ "Predicates and Quantifiers". www.csm.ornl.gov. Retrieved 2020-09-04.
^ "1.2 Quantifiers". www.whitman.edu. Retrieved 2020-09-04.
^ Allen, Colin; Hand, Michael (2001). Logic Primer. MIT Press. ISBN 0262303965.
^ This symbol is also known as the existential operator. It is sometimes represented with V.
^ Saunders Mac Lane, Ieke Moerdijk (1992): Sheaves in Geometry and Logic. Springer-Verlag. ISBN 0-387-97710-4. See p. 58.
|
Part 4 in the series of notes on regression analysis derives the OLS formula through the maximum likelihood approach. Maximum likelihood involves finding the parameter values that maximise the probability of the observed data, under an assumed distributional form for the data.
Bernoulli example
Take for example a dataset consisting of results from a series of coin flips. The coin may be biased and we want to find an estimator for the probability of the coin landing heads. A fair assumption is that observations are drawn from n independent coin flips that come from a Bernoulli(p) distribution. This means that the probability mass function of a single observation x_{i} is:

f(x_{i};p) = p^{x_{i}}(1-p)^{1-x_{i}}

where x_{i} is a single observation and takes the value of 0 or 1. The likelihood function is simply the joint distribution expressed as a function of its parameters:
\begin{aligned} L(p) &= P(X_{1}=x_{1}, X_{2}=x_{2},...,X_{n}=x_{n}; p) \\ &= \prod_{i=1}^{n} f(x_{i};p)~~(\text{by independence})\\ &= p^{\sum x_{i}} (1-p)^{n - \sum x_{i}} \end{aligned}
Now we want to find the value p that maximises the likelihood, L(p). A simpler alternative is to maximise the log likelihood.1 The maximum likelihood estimate can then be calculated by finding the value p that maximises the log likelihood:
\begin{aligned} ln L(p) &= (\sum x_{i}) ln~p + (n - \sum x_{i}) ln (1-p) \\ \frac{\partial ln L(p)}{\partial p} &= \frac{\sum x_{i}}{p} - \frac{n - \sum x_{i}}{1 -p} \\ 0 &= (1-p)\sum x_{i} - p(n - \sum x_{i}) \\ \hat{p} &= \frac{\sum x_{i}}{n} \end{aligned}
Not surprisingly, the estimated probability that the biased coin will land heads is simply the proportion of heads across all observations.
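As a quick numerical check (a sketch, not part of the original notes; the flip data below is made up), the MLE is just the sample mean, and it also maximises the log likelihood over a grid:

```python
import math

# Hypothetical coin-flip data: 1 = heads, 0 = tails.
flips = [1, 0, 1, 1, 0, 1, 0, 1]
n = len(flips)
p_hat = sum(flips) / n  # closed-form MLE: the sample proportion of heads

def log_likelihood(p):
    # ln L(p) = (sum x_i) ln p + (n - sum x_i) ln (1 - p)
    s = sum(flips)
    return s * math.log(p) + (n - s) * math.log(1 - p)

# The grid maximiser should agree with the closed form.
grid = [i / 1000 for i in range(1, 1000)]
p_grid = max(grid, key=log_likelihood)
```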
Similarly, one can derive the formula for the OLS estimator through the maximum likelihood approach. Recall that linearity implies the following specification for the regression model:

y_{i} = \mathbf{x}_{i}'\beta + u_{i}

In the maximum likelihood approach, we need to assume that the error terms conditional on \mathbf{x}_{i} are normally distributed with unknown variance, i.e.

u_{i}\vert\mathbf{x}_{i} \sim N(0,\sigma^{2})

The PDF of a single observation is given by:
f(y_{i},\mathbf{x}_{i};\beta, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} exp(\frac{-(y_{i} -\mathbf{x}'_{i}\beta)^2}{2\sigma^2})
The likelihood or the joint PDF is:
L(\beta, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} exp(\frac{-(y_{i} -\mathbf{x}'_{i}\beta)^2}{2\sigma^2})
The log likelihood can be written as:
\begin{aligned} ln~L(\beta, \sigma^2) &= ln \prod_{i=1}^{n} f(y_{i},\mathbf{x}_{i};\beta, \sigma^2) \\ &= \sum_{i=1}^{n} ln~f(y_{i},\mathbf{x}_{i};\beta, \sigma^2) \\ &= \sum_{i=1}^{n} ln~\Big( \frac{1}{\sqrt{2\pi\sigma^2}} exp(\frac{-(y_{i} -\mathbf{x}'_{i}\beta)^2}{2\sigma^2}) \Big) \\ &= -\frac{n}{2} ln~2\pi -\frac{n}{2} ln~\sigma^2 -\frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_{i} -\mathbf{x}'_{i}\beta)^2 \end{aligned}
Take the derivative with respect to \beta and \sigma^{2} to derive the maximum likelihood estimators:

\begin{aligned} \hat{\beta}_{ML} &= (\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{x}_{i}')^{-1}(\sum_{i=1}^{n}\mathbf{x}_{i}y_{i}) \\ \hat{\sigma}^{2}_{ML} &= \frac{\sum_{i=1}^{n}(y_{i} -\mathbf{x}'_{i}\hat{\beta}_{ML})^2}{n} \end{aligned}
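As a sketch (using synthetic data, not from the notes), the ML estimators above can be computed directly; note that the variance estimator divides by n rather than n − k:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Design matrix with an intercept and one regressor; true beta is assumed.
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([2.0, -1.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# beta_hat = (X'X)^{-1} X'y  -- identical to the OLS estimator
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / n  # ML divides by n, not n - k
```

With enough observations, beta_hat should land close to the assumed true values and sigma2_hat close to 0.25.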
While the maximum likelihood estimator can only be derived under very strong assumptions of the functional form which the error term takes, it is nonetheless a very popular method in statistics and has widespread applications. For example, binary choice models such as probit and logit assume that the dependent variable takes the value of 0 or 1 and could be modelled using the following functional form:
P(y_{i}=1 | \mathbf{x}_{i}) = f(\mathbf{x}_{i}'\beta)
where f is the CDF for the normal distribution in the case of the probit model or the logistic CDF for the logistic regression.2
Unlike the case of the linear regression presented above, for most other problems, there may be no explicit solution for the maximisation problem and the solution has to be derived using numerical optimisation.
Since the ln function is monotonic, the parameter value that maximises the log likelihood will also maximise the likelihood. ↩
This corresponds to a latent variable model where the error terms are iid drawn from a normal or logistic distribution. ↩
|
Theoretical physics - Wikipedia
The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations.[a] For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether.[1] Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation.[2]
A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results.[3][4] A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms.[b]
{\displaystyle \mathrm {Ric} =k\,g}
The equations for an Einstein manifold, used in general relativity to describe the curvature of spacetime
Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ (semi-) empirical formulas and heuristics to agree with experimental results, often without deep physical understanding.[c] "Modelers" (also called "model-builders") often appear much like phenomenologists, but try to model speculative theories that have certain desirable features (rather than on experimental data), or apply the techniques of mathematical modeling to physics problems.[d] Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether.[e] Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled;[f] e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics.
Physical theories become accepted if they are able to make correct predictions and no (or few) incorrect ones. The theory should have, at least as a secondary objective, a certain economy and elegance (compare to mathematical beauty), a notion sometimes called "Occam's razor" after the medieval English philosopher William of Occam (or Ockham), in which the simpler of two theories that describe the same matter just as adequately is preferred (but conceptual simplicity may mean mathematical complexity).[10] They are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method.
Further information: History of physics
Among the great conceptual achievements of the 19th and 20th centuries were the consolidation of the idea of energy (as well as its global conservation) by the inclusion of heat, electricity and magnetism, and then light. The laws of thermodynamics, and most importantly the introduction of the singular concept of entropy began to provide a macroscopic explanation for the properties of matter. Statistical mechanics (followed by statistical physics and Quantum statistical mechanics) emerged as an offshoot of thermodynamics late in the 19th century. Another important event in the 19th century was the discovery of electromagnetic theory, unifying the previously separate phenomena of electricity, magnetism and light.
The pillars of modern physics, and perhaps the most revolutionary theories in the history of physics, have been relativity theory and quantum mechanics. Newtonian mechanics was subsumed under special relativity and Newton's gravity was given a kinematic explanation by general relativity. Quantum mechanics led to an understanding of blackbody radiation (which indeed, was an original motivation for the theory) and of anomalies in the specific heats of solids — and finally to an understanding of the internal structures of atoms and molecules. Quantum mechanics soon gave way to the formulation of quantum field theory (QFT), begun in the late 1920s. In the aftermath of World War II, further progress brought much renewed interest in QFT, which had stagnated since the early efforts. The same period also saw fresh attacks on the problems of superconductivity and phase transitions, as well as the first applications of QFT in the area of theoretical condensed matter. The 1960s and 70s saw the formulation of the Standard Model of particle physics using QFT and progress in condensed matter physics (theoretical foundations of superconductivity and critical phenomena, among others), in parallel with the applications of relativity to problems in astronomy and cosmology.
Proposed theories of physics are usually relatively new theories that deal with the study of physics, including scientific approaches, means for determining the validity of models, and new types of reasoning used to arrive at the theory. However, some proposed theories have been around for decades and have eluded methods of discovery and testing. Proposed theories can include fringe theories in the process of becoming established (and, sometimes, gaining wider acceptance). Proposed theories usually have not been tested. In addition to the theories like those listed below, there are also different interpretations of quantum mechanics, which may or may not be considered different theories since it is debatable whether they yield different predictions for physical experiments, even in principle.
Einstein–Rosen Bridge (Wormhole)
^ There is some debate as to whether or not theoretical physics uses mathematics to build intuition and illustrativeness to extract physical insight (especially when normal experience fails), rather than as a tool in formalizing theories. This links to the question of it using mathematics in a less formally rigorous, and more intuitive or heuristic way than, say, mathematical physics.
^ Sometimes the word "theory" can be used ambiguously in this sense, not to describe scientific theories, but research (sub)fields and programmes. Examples: relativity theory, quantum field theory, string theory.
^ van Dongen, Jeroen (2009). "On the role of the Michelson-Morley experiment: Einstein in Chicago". Archive for History of Exact Sciences. 63 (6): 655–663. arXiv:0908.1545. doi:10.1007/s00407-009-0050-5.
^ Theorems and Theories Archived 2014-08-19 at the Wayback Machine, Sam Nelson.
^ Mark C. Chu-Carroll, March 13, 2007:Theorems, Lemmas, and Corollaries. Good Math, Bad Math blog.
^ Singiresu S. Rao (2007). Vibration of Continuous Systems (illustrated ed.). John Wiley & Sons. pp. 5, 12. ISBN 978-0471771715.
^ Eli Maor (2007). The Pythagorean Theorem: A 4,000-year History (illustrated ed.). Princeton University Press. pp. 18–20. ISBN 978-0691125268.
^ Simplicity in the Philosophy of Science (retrieved 19 Aug 2014), Internet Encyclopedia of Philosophy.
^ See 'Correspondence of Isaac Newton, vol.2, 1676–1687' ed. H W Turnbull, Cambridge University Press 1960; at page 297, document #235, letter from Hooke to Newton dated 24 November 1679.
^ Penrose, R (2004). The Road to Reality. Jonathan Cape. p. 471.
Physical Sciences. Encyclopædia Britannica (Macropaedia). Vol. 25 (15th ed.). 1994.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Theoretical_physics&oldid=1086084893"
|
Imaging Questions & Answers | Hamamatsu Photonics
I captured an image of the same sample with two cameras, but the output data from Camera A is overall a higher number than Camera B. Does that mean that Camera B is detecting less light?
Why turn to InGaAs for NIR detection?
What is InGaAs “standard wavelength” or “extended wavelength”?
How can we suppress the dark current of InGaAs image sensors?
There is no light to my camera, but I still have signal. What does this mean?
Not necessarily. There are many parameters that go into designing a camera that affect the output number given the same input photons. Each model of camera, even if it uses the same sensor, may have the parameters adjusted differently, resulting in a different output count per detected photon. This ratio of detected photons (photons converted to electrons in each sensor pixel) to the output count is the conversion factor (CF) for the camera, in units of electrons/count and determined by the manufacturer. The conversion factor can also be approximated by the following equation.
CF\cong \frac{Full~Well~Capacity}{\left(2^{Bit~Depth}-1\right)-Digital~Offset}
The way to compare two cameras is to calculate back to the number of sample photons that the output represents in each case.
The sensor detects photons (P), which are collected as photoelectrons (e-). The number of detected photons depends on the quantum efficiency (QE, in %), which is wavelength dependent, and on the pixel area, which determines how much of the sample emission is covered with each pixel. The photoelectrons are then converted to a voltage in the readout circuit of the sensor. Gain (G), a multiplication factor, may be added before (EM gain) or after (analog gain) the voltage conversion. This voltage goes into a digitizer which outputs a value represented by a whole number, ranging from the digital offset to the maximum value of the digitizer in units of counts. The equation to calculate the input photons from the output counts is derived from going backwards through the process.
Photons=\frac{\left(Counts-Digital Offset\right)×CF}{G×\left(QE\left(\lambda \right)/100\right)}
If the pixel sizes in the cameras being evaluated are different, then the number of photons per unit area should be calculated and compared using the pixel dimensions, in photons/µm².
As an example of comparing two outputs, let’s use one camera that can output the data in either 16-bit or 12-bit format. The conversion factor would be the only parameter that would change between the two modes.
Given a camera with the following specifications:
30,000 e- full well capacity
100 count digital offset
82% QE at 550 nm
No gain, G = 1
10,000 input photons at 550 nm
The conversion factors for 16 and 12 bits are:
C{F}_{16}\cong \frac{30000}{65535-100}=0.46{e}^{-}/count
C{F}_{12}\cong \frac{30000}{4095-100}=7.51{e}^{-}/count
Rewriting equation 2 to solve for counts, we get:
Counts=\frac{Photons × G × \left(QE\left(\lambda \right)/100\right)}{CF}+Digital Offset
Count{s}_{16}=\frac{10,000×1×0.82}{0.46}+100=\mathbf{17,926}
Count{s}_{12}=\frac{10,000×1×0.82}{7.51}+100=\mathbf{1,192}
We can see that the output in the 16-bit mode is a higher number than the 12-bit mode, but the input number of photons is the same. The 16-bit mode is not detecting more photons than the 12-bit mode.
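The worked example above can be scripted. This is a sketch that follows the equations as given; note the text rounds CF to two decimals before dividing, so exact arithmetic gives counts slightly different from the quoted 17,926 and 1,192:

```python
def conversion_factor(full_well_e, bit_depth, digital_offset):
    # CF ~ full well capacity / ((2**bits - 1) - digital offset), in e-/count
    return full_well_e / ((2 ** bit_depth - 1) - digital_offset)

def photons_to_counts(photons, cf, gain, qe_percent, digital_offset):
    # Counts = Photons * G * (QE/100) / CF + Digital Offset
    return photons * gain * (qe_percent / 100) / cf + digital_offset

cf16 = conversion_factor(30_000, 16, 100)   # ~0.46 e-/count
cf12 = conversion_factor(30_000, 12, 100)   # ~7.51 e-/count
counts16 = photons_to_counts(10_000, cf16, 1, 82, 100)
counts12 = photons_to_counts(10_000, cf12, 1, 82, 100)
```

Either way, the same 10,000 input photons produce a far larger count value in 16-bit mode than in 12-bit mode.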
When comparing image data between two cameras, or even the same camera with different camera settings, it is important to look at the data and think in photons.
InGaAs is an alloy which belongs to the InGaAsP quaternary system that consists of indium arsenide (InAs), gallium arsenide (GaAs), indium phosphide (InP), and gallium phosphide (GaP). These binary materials and their alloys are all III-V compound semiconductors.
The energy bandgap of InGaAs alloys depends on the ratio of indium and gallium content. At room temperature (300 K), the dependency of the energy bandgap on the indium content x (0~1) can be calculated using the formula: Eg(x) = 1.425 eV - 1.501 eV·x + 0.436 eV·x². The corresponding cutoff wavelength that can be detected is in the range of 870 nm ~ 3.4 µm.
[Table: indium content x vs. energy gap Eg (eV) and corresponding cutoff wavelength (nm)]
The most used substrate for InGaAs is InP. The InGaAs alloy having x=0.530 has the same lattice constant as InP, which is called "standard InGaAs." This combination brings high quality thin films and results in the cutoff wavelength of 1.7µm.
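As a sketch of the formula above (the 1239.84 eV·nm conversion constant is a standard value, not from the text): plugging in x = 0.53 for standard lattice-matched InGaAs gives a bandgap of about 0.75 eV, i.e. a cutoff near 1.65 µm, consistent with the quoted 1.7 µm:

```python
def bandgap_ev(x):
    # Eg(x) = 1.425 eV - 1.501 eV * x + 0.436 eV * x^2  (300 K, from the text)
    return 1.425 - 1.501 * x + 0.436 * x ** 2

def cutoff_nm(eg_ev):
    # lambda_cutoff (nm) = 1239.84 / Eg (eV), the standard hc/e conversion
    return 1239.84 / eg_ev

eg_std = bandgap_ev(0.53)      # "standard" InGaAs, lattice-matched to InP
lam_std = cutoff_nm(eg_std)    # cutoff wavelength in nm
```

The endpoints also check out: x = 0 recovers the GaAs gap of 1.425 eV and x = 1 the InAs gap of 0.36 eV.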
However, many applications require longer wavelengths. Hamamatsu offers both linear and area InGaAs image sensors with cutoff wavelengths up to 2.6µm, which are called “extended wavelength.” Due to the mismatch of the lattice constant of InGaAs and InP, the quality of the thin films is reduced. However, Hamamatsu put in a lot of effort to guarantee top-quality extended InGaAs.
The dark current of Hamamatsu InGaAs image sensors is successfully minimized by operating the photodiode array at zero bias condition. Moreover, one-stage TEC (thermoelectric cooler) or multiple-stage TEC can be added into the sensor package to stabilize the sensor temperature and reduce the dark current efficiently.
Rewording this question into camera terms, we can say, “The input to the camera sensor is blocked from detecting any photons, but the image data on my computer has non-zero values.”
This is an important feature of a scientific digital camera used for quantitative image measurements. To understand why this is the case we need to understand, in very high level terms, the conversion of photons to image data. The sensor detects the photons which are collected as photoelectrons and then passed along as a voltage in the readout circuit of the sensor. This voltage goes into a digitizer, which outputs a value represented by a whole number ranging from 0 to the maximum value of the digitizer. This whole number is referred to as counts, gray values, or gray levels.
The readout of the sensor pixel is an imperfect process and noise is introduced into the signal as it is converted to a voltage reading. This noise is a small fluctuating voltage around the nominal signal. If that signal is 0, then the voltage fluctuates into negative values. Since the digitizer in the camera does not contain values less than zero, these negative voltages would be clipped and data would be lost. To avoid the loss of data, the camera designer will set the zero voltage to be a number higher than zero that will accommodate the noise fluctuation, for example 100 counts on the digitizer. In this case, fluctuations below 0 in voltage would be represented by output counts less than 100 counts.
This non-zero output value for the zero photon input is called the digital offset. The camera manual or camera manufacturer can provide the digital offset number for your camera model. You will need to subtract this digital offset number from each intensity value to determine the true output signal from your camera.
Shelley Brankner is an Applications Engineer specializing in scientific cameras and x-ray imaging products. For customers that need high-level synchronization between their camera and peripheral devices, she can provide the expertise on timing and modes of operation in these imaging products. She has a passion for asking the question, “How does that work?” and a desire for sharing the answer with others. When she isn’t knocking down the technical questions that cross her path, she can be found knocking down pins at a bowling alley.
|
I was reading an article on obesity and the misconceptions held by experts in the field. I found it quite interesting and thought I would share some ideas I currently have after doing the keto diet for 10 months.
I’ve wanted to assume that the experts I interview can be trusted to understand their subjects. Put simply, to get it right. But watching researchers in the field of obesity almost blindly follow a failed paradigm has led me to cross a line that few journalists ever do, to publicly embrace and promote a minority opinion that many in the obesity field think is quackery.
The term 'expert' simply means that somebody is knowledgeable and generally accepted as an authoritative voice in a field. It doesn't mean that they are correct on everything - even everything in their own field. The scientific method dictates that they must accept the possibility that they are wrong. It's even dangerous for experts to claim that a field is completely known or even fully explored.
When it comes to obesity, or more generally biology and the human body, the verdict is very much still out. The human body is an exceptionally complex machine that has evolved over many millions of years and Science only really came about relatively recently. It stands to reason that nature still has a thing or two to teach us.
The equation they propose is pretty simple (quoting a HackerNews comment by amelius):

ES=CI-CO

where ES is energy stored, CI is calories in and CO is calories out. The comment then goes on to point out that CO is at least a little more complicated, being:

CO={CO}_{P}+{CO}_{E}

where {CO}_{P} is physically burned calories out and {CO}_{E} is excess burned calories out. Both {CO}_{P} and {CO}_{E} will be largely different from person to person, even varying based on time, age, stress, etc.

Another comment by AndrewDucker points out that CI is not even the same for each person either. Your digestive system, health, microbiome, etc., all play a part in how you process those calories.

Instead of constants, we can think of these values as functions tailored to an individual, i. We can then rewrite the original equation as:

ES\left(i\right)=CI\left(i\right)-\left({CO}_{P}\left(i\right)+{CO}_{E}\left(i\right)\right)
But again, we are still making an assumption here - that all calories are the same. If ketosis has taught me one thing, it's that this is entirely untrue. Your body runs primarily on sugar, but your three main macronutrients are protein, carbohydrates and fats. Protein tends to be used for muscle growth and repair, unless you consume too much, in which case it is converted to carbohydrates. Carbohydrates are essentially your sugars; they can easily be stored as fat if you have an excess.

Fats, on the other hand, can be burned as is - or broken down, absorbed, recombined and then stored. Fats tend to have about double the calorific value per gram of carbohydrates, but the way in which the body processes them is entirely different. You must also consider that there are different types of fats (animal fats, trans fats, etc) - all of which are processed slightly differently and have different health benefits/negatives.
And don't even get me started on Fructose! Not even all carbohydrates are burned the same!
And there you have it. Obesity is at least in large part due to a large increase in carbohydrates, not calories. There is a fundamental misunderstanding of weight gain that has propagated all the way up to the WHO, who then disseminate it as fact.
In the U.S., 12% of Americans lived with obesity 60 years ago; more than 40% do today.
Exceptional. Looking around in New Zealand, an astonishing number of people are obese. I'm not a slim bean myself, but the ease with which I can find clothing is actually concerning; it used to be much harder to find anything 2XL and above.
That all said, burning fat happens when alternative fuel is not available. If you run at a calorie deficit, you will burn fat. If you require a certain amount of energy to run and that's not available through your food, your body will locate another source. Calorie reduction will allow you to lose weight.
Your body will first locate a quick fuel source that I won't go into here. It will then seek to use carbohydrates, essentially sourced from sugar in your blood. Next, as long as there is body fat available, it will mostly try to use this. Weight loss should be done with at least some amount of exercise to prevent the body from also using protein (lean muscle) as a fuel source, which includes your heart.
If you simply remove carbohydrates (easy-to-burn fuel) from your diet, you will definitely burn fat. If you reduce your fat intake, the difference will be taken from your body's fat stores. This is essentially the fundamental concept of the keto diet and is why it has been so unbelievably successful in not only reducing weight but also reducing the effects of type 2 diabetes for some people, even putting them in remission.
How to start a diet? Small, incremental, sustainable changes. One day at a time, one meal at a time, one bite at a time. It's as simple as that I promise.
I am currently working on a book as a step-by-step guide to following a keto diet; the intention is to provide a small but impactful piece of information daily, at useful intervals. These are still early days, but I currently aim to have a free sample of the programme available by Christmas.
|
Volume Knowpia
In calculus, a branch of mathematics, the volume of a region D in R3 is given by a triple integral of the constant function
{\displaystyle f(x,y,z)=1}
over the region and is usually written as:
{\displaystyle \iiint \limits _{D}1\,dx\,dy\,dz.}
In cylindrical coordinates, the volume integral is
{\displaystyle \iiint \limits _{D}r\,dr\,d\theta \,dz,}
In spherical coordinates (using the convention for angles with
{\displaystyle \theta }
{\displaystyle \varphi }
measured from the polar axis; see more on conventions), the volume integral is
{\displaystyle \iiint \limits _{D}\rho ^{2}\sin \varphi \,d\rho \,d\theta \,d\varphi .}
Cube
{\displaystyle a^{3}}
Cuboid
{\displaystyle abc}
Prism
{\displaystyle Bh}
Pyramid
{\displaystyle {\frac {1}{3}}Bh}
Parallelepiped
{\displaystyle abc{\sqrt {K}}}
where
{\displaystyle {\begin{aligned}K=1&+2\cos(\alpha )\cos(\beta )\cos(\gamma )\\&-\cos ^{2}(\alpha )-\cos ^{2}(\beta )-\cos ^{2}(\gamma )\end{aligned}}}
Regular tetrahedron
{\displaystyle {{\sqrt {2}} \over 12}a^{3}\,}
Sphere
{\displaystyle {\frac {4}{3}}\pi r^{3}}
Spherical shell
{\displaystyle {\frac {4}{3}}\pi (R^{3}-r^{3})}
Ellipsoid
{\displaystyle {\frac {4}{3}}\pi abc}
Circular cylinder
{\displaystyle \pi r^{2}h}
Circular cone
{\displaystyle {\frac {1}{3}}\pi r^{2}h}
Solid torus
{\displaystyle 2\pi ^{2}Rr^{2}}
Solid of revolution
{\displaystyle \pi \cdot \int _{a}^{b}f(x)^{2}\mathrm {d} x}
Any solid for which the area
{\displaystyle A(x)}
of its cross sections is known:
{\displaystyle \int _{a}^{b}A(x)\mathrm {d} x}
For the solid of revolution above:
{\displaystyle A(x)=\pi f(x)^{2}}
Ratios of volumes of a cone, sphere and cylinder of the same radius and height

For a cone, sphere and cylinder of radius r and height h = 2r, each volume can be written as a multiple of {\displaystyle {\frac {2}{3}}\pi r^{3}}, giving the ratio 1 : 2 : 3:
{\displaystyle {\frac {1}{3}}\pi r^{2}h={\frac {1}{3}}\pi r^{2}\left(2r\right)=\left({\frac {2}{3}}\pi r^{3}\right)\times 1,}
{\displaystyle {\frac {4}{3}}\pi r^{3}=\left({\frac {2}{3}}\pi r^{3}\right)\times 2,}
{\displaystyle \pi r^{2}h=\pi r^{2}(2r)=\left({\frac {2}{3}}\pi r^{3}\right)\times 3.}
Formula derivations

Sphere
The sphere consists of layers of infinitesimally thin circular disks stacked along an axis. The surface area of a circular disk of radius r is
{\displaystyle \pi r^{2}}
The radius of the circular disks, defined such that the x-axis cuts perpendicularly through them, is
{\displaystyle y={\sqrt {r^{2}-x^{2}}}}
or
{\displaystyle z={\sqrt {r^{2}-x^{2}}}}
depending on the axes used. The volume of the sphere is therefore
{\displaystyle \int _{-r}^{r}\pi y^{2}\,dx=\int _{-r}^{r}\pi \left(r^{2}-x^{2}\right)\,dx.}
Evaluating the two terms gives
{\displaystyle \int _{-r}^{r}\pi r^{2}\,dx-\int _{-r}^{r}\pi x^{2}\,dx=\pi \left(r^{3}+r^{3}\right)-{\frac {\pi }{3}}\left(r^{3}+r^{3}\right)=2\pi r^{3}-{\frac {2\pi r^{3}}{3}}.}
Combining yields
{\displaystyle V={\frac {4}{3}}\pi r^{3}.}
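The disk-summation argument can be checked numerically. This is a sketch using a midpoint-rule approximation (the step count is arbitrary):

```python
import math

def sphere_volume_disks(r, n=100_000):
    # Sum the areas pi * (r^2 - x^2) of thin disks over [-r, r] (midpoint rule).
    dx = 2 * r / n
    total = 0.0
    for i in range(n):
        x = -r + (i + 0.5) * dx
        total += math.pi * (r * r - x * x) * dx
    return total

v_num = sphere_volume_disks(1.0)
v_exact = 4 / 3 * math.pi  # (4/3) pi r^3 with r = 1
```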
This formula can be derived more quickly using the formula for the sphere's surface area, which is
{\displaystyle 4\pi r^{2}}
. The volume of the sphere consists of layers of infinitesimally thin spherical shells, and the sphere volume is equal to
{\displaystyle \int _{0}^{r}4\pi r^{2}\,dr={\frac {4}{3}}\pi r^{3}.}
Cone

The volume of a cone of base radius r and height h can likewise be computed by integrating the areas of its circular cross sections. The cross section at height x above the base is a disk of radius
{\displaystyle r{\frac {h-x}{h}}.}
so its area is
{\displaystyle \pi \left(r{\frac {h-x}{h}}\right)^{2}=\pi r^{2}{\frac {(h-x)^{2}}{h^{2}}}.}
The volume is then
{\displaystyle \int _{0}^{h}\pi r^{2}{\frac {(h-x)^{2}}{h^{2}}}dx,}
which evaluates to
{\displaystyle {\frac {\pi r^{2}}{h^{2}}}\int _{0}^{h}(h-x)^{2}dx}
{\displaystyle {\frac {\pi r^{2}}{h^{2}}}\left({\frac {h^{3}}{3}}\right)={\frac {1}{3}}\pi r^{2}h.}
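The same numerical check works for the cone (again a sketch; r = 2 and h = 3 are arbitrary test values):

```python
import math

def cone_volume_disks(r, h, n=100_000):
    # Integrate the cross-section areas pi * (r*(h - x)/h)**2 over [0, h].
    dx = h / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx  # midpoint rule
        total += math.pi * (r * (h - x) / h) ** 2 * dx
    return total

v_num = cone_volume_disks(2.0, 3.0)
v_exact = math.pi * 2.0 ** 2 * 3.0 / 3  # (1/3) pi r^2 h
```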
On a Riemannian manifold, volumes are computed using the volume form
{\displaystyle \omega ={\sqrt {|g|}}\,dx^{1}\wedge \dots \wedge dx^{n},}
where the
{\displaystyle dx^{i}}
are 1-forms that form a positively oriented basis for the cotangent bundle of the manifold, and
{\displaystyle g}
is the determinant of the matrix representation of the metric tensor on the manifold in terms of the same basis.
|
Congruence_relation Knowpia
The prototypical example of a congruence relation is congruence modulo n on the set of integers. For a given positive integer n, two integers a and b are called congruent modulo n, written
{\displaystyle a\equiv b{\pmod {n}}}
if a − b is divisible by n (or equivalently if a and b have the same remainder when divided by n). For example, 37 and 57 are congruent modulo 10:
{\displaystyle 37\equiv 57{\pmod {10}}}
since 37 − 57 = −20 is a multiple of 10, or equivalently since both 37 and 57 have a remainder of 7 when divided by 10.
Congruence modulo n (for a fixed n) is compatible with both addition and multiplication on the integers. That is, if
{\displaystyle a_{1}\equiv a_{2}{\pmod {n}}}
and
{\displaystyle b_{1}\equiv b_{2}{\pmod {n}}}
then
{\displaystyle a_{1}+b_{1}\equiv a_{2}+b_{2}{\pmod {n}}}
and
{\displaystyle a_{1}b_{1}\equiv a_{2}b_{2}{\pmod {n}}}
The corresponding addition and multiplication of equivalence classes is known as modular arithmetic. From the point of view of abstract algebra, congruence modulo n is a congruence relation on the ring of integers, and arithmetic modulo n occurs on the corresponding quotient ring.
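A minimal sketch of the compatibility property, checking it on a couple of hypothetical representatives mod 10:

```python
n = 10
# a1 ≡ a2 (mod 10) and b1 ≡ b2 (mod 10)
a1, a2 = 7, 27
b1, b2 = 4, 14
assert (a1 - a2) % n == 0 and (b1 - b2) % n == 0

# Compatibility: sums and products of congruent pairs stay congruent.
sum_compatible = (a1 + b1) % n == (a2 + b2) % n
prod_compatible = (a1 * b1) % n == (a2 * b2) % n
```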
Example: Groups

For example, a group is an algebraic object consisting of a set together with a single binary operation, satisfying certain axioms. If G is a group with operation
{\displaystyle \ast }
, a congruence relation on G is an equivalence relation
{\displaystyle \equiv }
on the elements of G satisfying
{\displaystyle g_{1}\equiv g_{2}\quad {\text{and}}\quad h_{1}\equiv h_{2}\implies g_{1}\ast h_{1}\equiv g_{2}\ast h_{2}}
for all
{\displaystyle g_{1},g_{2},h_{1},h_{2}\in G}
. For a congruence on a group, the equivalence class containing the identity element is always a normal subgroup, and the other equivalence classes are the other cosets of this subgroup. Together, these equivalence classes are the elements of a quotient group.
Example: RingsEdit
A congruence relation on a ring is similarly an equivalence relation satisfying {\displaystyle r_{1}+s_{1}\equiv r_{2}+s_{2}} and {\displaystyle r_{1}s_{1}\equiv r_{2}s_{2}} whenever {\displaystyle r_{1}\equiv r_{2}} and {\displaystyle s_{1}\equiv s_{2}}. For a congruence on a ring, the equivalence class containing 0 is always a two-sided ideal, and the two operations on the set of equivalence classes define the corresponding quotient ring.
The general notion of a congruence relation can be formally defined in the context of universal algebra, a field which studies ideas common to all algebraic structures. In this setting, a relation {\displaystyle R} on a given algebraic structure is called compatible if, for each {\displaystyle n} and each {\displaystyle n}-ary operation {\displaystyle \mu } defined on the structure: whenever {\displaystyle a_{1}\mathrel {R} a'_{1}} and ... and {\displaystyle a_{n}\mathrel {R} a'_{n}}, then {\displaystyle \mu (a_{1},\ldots ,a_{n})\mathrel {R} \mu (a'_{1},\ldots ,a'_{n})}.
Relation with homomorphismsEdit
If {\displaystyle f:A\,\rightarrow B} is a homomorphism between two algebraic structures (such as a homomorphism of groups, or a linear map between vector spaces), then the relation {\displaystyle R} defined by {\displaystyle a_{1}\,R\,a_{2}} if and only if {\displaystyle f(a_{1})=f(a_{2})} is a congruence relation on {\displaystyle A}. By the first isomorphism theorem, the image of A under {\displaystyle f} is a substructure of B isomorphic to the quotient of A by this congruence.
On the other hand, the congruence relation {\displaystyle R} induces a unique homomorphism {\displaystyle f:A\rightarrow A/R} given by {\displaystyle f(x)=\{y\mid x\,R\,y\}}.
Congruences of groups, and normal subgroups and idealsEdit
Given any element a of G, a ~ a (reflexivity);
Given any elements a and b of G, if a ~ b, then b ~ a (symmetry);
Given any elements a, b, and c of G, if a ~ b and b ~ c, then a ~ c (transitivity);
Given any elements a, a' , b, and b' of G, if a ~ a' and b ~ b' , then a * b ~ a' * b' ;
Given any elements a and a' of G, if a ~ a' , then a^{-1} ~ a'^{-1} (this can actually be proven from the other four,[note 1][citation needed] so is strictly redundant).
Conditions 1, 2, and 3 say that ~ is an equivalence relation.
A congruence ~ is determined entirely by the set {a ∈ G : a ~ e} of those elements of G that are congruent to the identity element, and this set is a normal subgroup. Specifically, a ~ b if and only if b^{-1} * a ~ e. So instead of talking about congruences on groups, people usually speak in terms of normal subgroups of them; in fact, every congruence corresponds uniquely to some normal subgroup of G.
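This correspondence is easy to see in a small example (our own sketch, not from the article): take the additive group Z₁₂ with the congruence a ~ b iff 4 divides a − b. The class of the identity is {0, 4, 8}, it is a subgroup, and a ~ b holds exactly when (a − b) mod 12 lies in that class.

```python
G = list(range(12))                      # Z_12 under addition mod 12

def related(a, b):
    """The congruence: a ~ b iff 4 divides a - b."""
    return (a - b) % 4 == 0

# Equivalence class of the identity element 0
N = [g for g in G if related(g, 0)]

# N is a subgroup: closed under the operation and under inverses
closed = all((a + b) % 12 in N for a in N for b in N)
has_inverses = all((-a) % 12 in N for a in N)

# a ~ b exactly when (a - b) mod 12 lies in N
matches = all(related(a, b) == ((a - b) % 12 in N) for a in G for b in G)
```

Here `N` comes out as `[0, 4, 8]`, and all three checks succeed.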
Ideals of rings and the general caseEdit
Universal algebraEdit
In a group a congruence is determined if we know a single congruence class, in particular if we know the normal subgroup which is the class containing the identity. Similarly, in a ring a congruence is determined if we know the ideal which is the congruence class containing the zero. In semigroups there is no such fortunate occurrence, and we are therefore faced with the necessity of studying congruences as such. More than anything else, it is this necessity that gives semigroup theory its characteristic flavour. Semigroups are in fact the first and simplest type of algebra to which the methods of universal algebra must be applied…[5]
^ Since a'^{-1} = a'^{-1} * a * a^{-1} ~ a'^{-1} * a' * a^{-1} = a^{-1}
^ Hungerford, Thomas W. Algebra. Springer-Verlag, 1974, p. 27
^ Hungerford, 1974, p. 26
^ Henk Barendregt (1990). "Functional Programming and Lambda Calculus". In Jan van Leeuwen (ed.). Formal Models and Semantics. Handbook of Theoretical Computer Science. Vol. B. Elsevier. pp. 321–364. ISBN 0-444-88074-7. Here: Def.3.1.1, p.338.
^ a b Clifford Bergman, Universal Algebra: Fundamentals and Selected Topics, Taylor & Francis (2011), Sect. 1.5 and Exercise 1(a) in Exercise Set 1.26 (Bergman uses the expression having the substitution property for being compatible)
^ J. M. Howie (1975) An Introduction to Semigroup Theory, page v, Academic Press
Horn and Johnson, Matrix Analysis, Cambridge University Press, 1985. ISBN 0-521-38632-2. (Section 4.5 discusses congruency of matrices.)
Rosen, Kenneth H (2012). Discrete Mathematics and Its Applications. McGraw-Hill Education. ISBN 978-0077418939.
|
Predictions of energy efficient Berger-Levy model neurons with constraints | BMC Neuroscience | Full Text
Predictions of energy efficient Berger-Levy model neurons with constraints
Siavash Ghavami1,
Farshad Lahouti1 &
Lars Schwabe2
Information theory has been extensively applied to neuroscience problems. The mutual information between input and output has been postulated as an objective, which neuronal systems may optimize. However, only recently has energy efficiency been addressed within an information-theoretic framework [1]. Here, the key idea is to consider capacity per unit cost (measured in bits per joule, bpj) as the objective. We are interested in how biologically plausible constraints affect predictions made by this new theory for bpj-maximizing model neurons.
More specifically, in our contribution, in line with [1] and [2], a neuron is modeled as a memory-less constant communication channel with a Gamma conditional probability distribution function (PDF) [1]. In this setting, the channel input and output are the excitatory postsynaptic potential intensity, \lambda, and the inter-spike interval (ISI), t, with PDFs {f}_{\Lambda }\left(\lambda \right) and {f}_{T}\left(t\right), respectively. We then formulate two new constraints: First, we impose a lower bound {t}_{\mathsf{\text{min}}} on the duration t of ISIs. The rationale for this is to account for a maximal firing rate. Second, we consider a peak energy expenditure constraint per ISI, as compared to only bounding the expected energy expenditure. This translates into an upper bound {t}_{\mathsf{\text{max}}} on the ISI duration. We then derive the {f}_{T}\left(t\right) (corresponding to valid {f}_{\Lambda }\left(\lambda \right)) of a bpj-maximizing neuron for the original unconstrained setting from [1] and in the presence of the above two constraints for different expected ISIs. (Details omitted here for brevity.) Figure 1 shows three {f}_{T}\left(t\right)'s obtained in the unconstrained (dashed curves) and constrained settings (solid curves) for {t}_{\mathsf{\text{min}}}=1 and {t}_{\mathsf{\text{max}}}=5.
While the constrained and unconstrained solutions have the same mean, the shapes of their {f}_{T}\left(t\right) differ. For comparison with experimental data, we computed the coefficient of variation (CV) as a function of the mean ISI as an "observable" (Figure 2), which is easier to measure experimentally than the full distribution {f}_{T}\left(t\right). Interestingly, the CV is predicted i) to be lower in the constrained setting, and ii) to increase and then decrease with the mean ISI, while it only decreases in the unconstrained setting. Thus, we demonstrated that constraints can affect predictions based on bpj-maximization and should be explicitly taken into account. Ongoing work makes these predictions more quantitative via simulating biophysically realistic model neurons.
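As background for why the CV is a convenient observable here: a Gamma distribution with shape k and scale θ has mean kθ and CV 1/√k, so the CV is scale-free. A quick Monte-Carlo check of this textbook fact (illustrative only; it does not reproduce the constrained solutions of the abstract):

```python
import math
import random

def gamma_cv(shape, scale, n=200000, seed=1):
    """Monte-Carlo estimate of the coefficient of variation of Gamma(shape, scale)."""
    rng = random.Random(seed)
    xs = [rng.gammavariate(shape, scale) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return math.sqrt(var) / mean

# Theory: CV = 1 / sqrt(shape), independent of the scale parameter.
```

For shape 4 the estimate lands near 0.5 regardless of scale, matching 1/√4.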
Berger T, Levy WB: A Mathematical Theory of Energy Efficient Neural Computation and Communication. IEEE Trans on Information Theory. 2010, 56 (2): 852-874.
Xing J, Berger T, Sejnowski TJ: A Berger-Levy energy efficient neuron model with unequal synaptic weights. Proc of IEEE Int Symp on Information Theory. 2012, 2964-2968.
This research has been supported in part by the DAAD (German-Arabic/Iranian Higher Education Dialogue).
School of Electrical and Computer Engineering, University of Tehran, Tehran, 14395-515, Iran
Siavash Ghavami & Farshad Lahouti
Faculty of Computer Science and Electrical Engineering, Universität Rostock, 18059 Rostock, Germany
Correspondence to Siavash Ghavami, Farshad Lahouti or Lars Schwabe.
Ghavami, S., Lahouti, F. & Schwabe, L. Predictions of energy efficient Berger-Levy model neurons with constraints. BMC Neurosci 14, P349 (2013). https://doi.org/10.1186/1471-2202-14-S1-P349
|
An object moves along the x-axis with an initial position of x(0) = 2. The velocity of the object when t > 0 is given by the equation
v(t) = 5\cos\left(\frac{\pi}{2}t\right) + 4t
What is the acceleration of the object when t = 4?
What is the total distance the object covers during the interval 0 ≤ t ≤ 4?
What is the position of the object when t = 4?
This is v′(4).
This is x(4) where:
x(t) = x(0) + \int_0^t v(s)\,ds
This is x(4) – x(0).
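The three answers can be worked out in closed form: a(t) = v′(t), x(t) = x(0) + ∫₀ᵗ v(s) ds, and since v(t) stays positive on [0, 4] the total distance equals x(4) − x(0). A numerical sketch of this reasoning (our own, for checking):

```python
import math

def v(t):
    """Velocity: v(t) = 5 cos(pi t / 2) + 4 t."""
    return 5 * math.cos(math.pi * t / 2) + 4 * t

def a(t):
    """Acceleration: a(t) = v'(t) = -(5 pi / 2) sin(pi t / 2) + 4."""
    return -(5 * math.pi / 2) * math.sin(math.pi * t / 2) + 4

def x(t):
    """Position: x(t) = 2 + (10 / pi) sin(pi t / 2) + 2 t**2, using x(0) = 2."""
    return 2 + (10 / math.pi) * math.sin(math.pi * t / 2) + 2 * t ** 2

# v(t) > 0 everywhere on [0, 4] (checked on a fine grid),
# so total distance = x(4) - x(0).
v_positive = all(v(k / 100) > 0 for k in range(0, 401))
distance = x(4) - x(0)
```

Since sin(2π) = 0, these give a(4) = 4, x(4) = 34, and a total distance of 32.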
Check your answers using Desmos:
|
{∫}_{0}^{3}{∫}_{0}^{2 x}\left({x}^{2}+{y}^{2}\right) \mathit{ⅆ}y \mathit{ⅆ}x=\frac{189}{2}
Reversing the order of integration gives the same value:
{∫}_{0}^{6}{∫}_{y/2}^{3}\left({x}^{2}+{y}^{2}\right) \mathit{ⅆ}x \mathit{ⅆ}y=\frac{189}{2}
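The value 189/2 can be confirmed outside Maple as well; a sketch in exact rational arithmetic for both orders of integration (the antiderivatives in the comments are worked by hand):

```python
from fractions import Fraction as F

# Order dy dx: the inner integral over y in [0, 2x] of (x^2 + y^2) is
#   x^2 * 2x + (2x)^3 / 3 = (14/3) x^3,
# and integrating (14/3) x^3 over x in [0, 3] gives (14/3) * 3^4 / 4.
dy_dx = F(14, 3) * F(3 ** 4, 4)

# Order dx dy: the inner integral over x in [y/2, 3] of (x^2 + y^2) is
#   9 + 3 y^2 - (13/24) y^3,
# and integrating over y in [0, 6] gives 54 + 216 - (13/24) * 6^4 / 4.
dx_dy = F(9 * 6) + F(3 * 6 ** 3, 3) - F(13, 24) * F(6 ** 4, 4)
```

Both orders evaluate to the fraction 189/2.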
The task template evaluates {∬}_{R}\mathrm{Ψ}\left(x,y\right) \mathrm{dA} over the region R (template fields \mathrm{Ψ}, G, b, g, a) in the order \mathrm{dy} \mathrm{dx}; the reversed order {∫}_{0}^{6}{∫}_{y/2}^{3}\left({x}^{2}+{y}^{2}\right) \mathit{ⅆ}x \mathit{ⅆ}y likewise returns \frac{189}{2}.
Use the visualization task template in Table 5.4.1(b) to obtain the value of the integral with the reversed order of integration and a graph of its region of integration.
(The template again evaluates {∬}_{R}\mathrm{Ψ}\left(x,y\right) \mathrm{dA} over the region R, with fields \mathrm{Ψ}, G, b, g, a.)
Table 5.4.1(b) Integration in the order
\mathrm{dx} \mathrm{dy}
\mathrm{int}\left({x}^{2}+{y}^{2},\left[y=0..2 x,x=0..3\right]\right)
\frac{189}{2}
\mathrm{plots}:-\mathrm{inequal}\left(\left\{x≥0,x≤3,y≥0,y≤2 x\right\},x=0..3,y=0..6\right)
q≔\mathrm{Int}\left({x}^{2}+{y}^{2},\left[x=y/2..3,y=0..6\right]\right)
{∫}_{0}^{6}{∫}_{y/2}^{3}\left({x}^{2}+{y}^{2}\right) \mathit{ⅆ}x \mathit{ⅆ}y
\mathrm{value}\left(q\right)
\frac{189}{2}
|
Notes on Regression - Projection
This is one of my favourite ways of establishing the traditional OLS formula. I remember being totally amazed when I first found out how to derive the OLS formula in a class on linear algebra. Understanding regression through the perspective of projections also shows the connection between the least squares method and linear algebra. It also gives a nice way of visualising the geometry of the OLS technique.
This set of notes is largely inspired by a section in Gilbert Strang's course on linear algebra.1 I will use the same terminology as in the previous post.
Recall the standard regression model and observe the similarities with the commonly used expression in linear algebra written below:
\begin{aligned} \mathbf{y} &= \mathbf{X}\mathbf{\beta} \\ b &= Ax \end{aligned}
Thus, the OLS regression can be motivated as a means of finding the projection of \mathbf{y} on the space spanned by \mathbf{X}.2 Or to put it another way, we want to find the vector \beta such that \mathbf{X}\beta is the closest to \mathbf{y}; this requires the residual (\mathbf{y} - \mathbf{X}\beta) to be orthogonal to Span (\mathbf{X}), i.e. it is in the left nullspace of \mathbf{X}. By the definition of nullspace:
\begin{aligned} \mathbf{X}'(\mathbf{y} -\mathbf{X}\hat{\beta}) &= 0 \\ \mathbf{X}'\mathbf{y} &= \mathbf{X}'\mathbf{X}\hat{\beta} \\ \hat{\beta} &= (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y} \end{aligned}
The fitted values are \mathbf{X}\hat{\beta} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y} = P_{x}\mathbf{y}, where P_{x} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}' is also known as the orthogonal projection matrix. The matrix is n~\times~n in dimension. As given by its name, for any vector b \in R^{n}, P_{x}b \in Span(X).
The vector \mathbf{y} - \mathbf{X}\hat{\beta} is simply the vector of residuals and can be written in the following form:
\begin{aligned} \hat{u} &= \mathbf{y} - \mathbf{X}\hat{\beta} \\ &= \mathbf{y} - P_{x}\mathbf{y} \\ &= (I_{n} - P_{x})\mathbf{y} \\ &= M_{x}\mathbf{y} \end{aligned}
where M_{x} is the projection onto the space orthogonal to Span(X).
The projection matrices have the following four properties: P_{x} + M_{x} = I_{n}, Symmetry (A'=A), Idempotency (AA=A), and Orthogonality (P_{x}M_{x} = 0).
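These four properties are easy to verify numerically with a random design matrix (a sketch, assuming numpy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
X = rng.standard_normal((n, k))

# Orthogonal projection onto Span(X), and onto its orthogonal complement
P = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - P

ok_sum = np.allclose(P + M, np.eye(n))                     # P + M = I_n
ok_sym = np.allclose(P, P.T) and np.allclose(M, M.T)       # symmetry
ok_idem = np.allclose(P @ P, P) and np.allclose(M @ M, M)  # idempotency
ok_orth = np.allclose(P @ M, np.zeros((n, n)))             # orthogonality
```

All four checks pass up to floating-point tolerance.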
The idea of seeing fitted values and residuals in terms of projections and orthogonal spaces has further applications in econometrics. See for example the derivation of the partitioned regression formula.
As a fun exercise one can try to derive the OLS formula for a weighted regression \mathbf{W}\mathbf{X}\beta = \mathbf{W}\mathbf{y}, where \mathbf{W} is an n \times n matrix of weights, using the same idea.
My two favourite sources covering the basics of linear algebra are Hefferon's linear algebra, a free textbook, and Gilbert Strang's course mentioned above. Hefferon provides a very clear treatment on the more theoretical aspects of the subject, while the latter highlights the many possibilities and applications that one can do with it. ↩
The span of the vectors in \mathbf{X} (column space) is the set of all vectors in R^{n} that can be written as linear combinations of the columns of \mathbf{X}. ↩
|
Commutative property - Wikiversity
The commutative property is one of the three basic laws of regular algebras that greatly simplify equations and the derivation of their solutions. In essence, it assumes that one can commute, or interchange the positions of two variables
{\displaystyle x}
{\displaystyle y}
without changing the value of the result, (such as a product for multiplication, or a sum for addition), under an operation "
{\displaystyle *}
", that is:
{\displaystyle x*y=y*x.}
Thus, commutativity can also be considered as a kind of symmetry caused by mirroring.
The associativity and distributivity laws are the other two simplifying properties of algebraic variables for operations such as multiplication and addition.
1. Through duality, the three regular properties of an algebra are induced also in its corresponding geometric structure or 'space'. Thus, our graphical representations of two-dimensional and three-dimensional surfaces are commutative, and geometry was implicitly thought of as being commutative.
2. By extending through duality the concept of geometric structure to the dual 'space' of a noncommutative algebra, the term "noncommutative geometry" was established in mathematics in 1979, together with an applied area of physical mathematics of the same name; the latter has been developed in a manner consistent with the Standard Model (SUSY) of modern physics.
3. Interestingly, certain operators of quantum mechanical observables, such as position and momentum do not commute, and therefore quantum mechanics is a noncommutative theory. In general, this noncommutation of certain pairs of quantum operators gives rise to the uncertainty relations or the Heisenberg Principle of Quantum Mechanics. On the other hand, in classical mechanics and Einstein's relativity theories the position and momentum operators commute, and position and momentum can be both determined simultaneously and precisely for any massive object.
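Position and momentum have no finite-dimensional matrix representation, but the Pauli spin matrices give a standard finite-dimensional illustration of noncommuting quantum observables (our own sketch, not from the article):

```python
import numpy as np

# Pauli matrices: a standard pair of quantum observables that do not commute.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# The commutator [sx, sy] = sx·sy - sy·sx is nonzero: order matters.
commutator = sx @ sy - sy @ sx

noncommuting = not np.allclose(commutator, np.zeros((2, 2)))
identity_check = np.allclose(commutator, 2j * sz)  # known identity [sx, sy] = 2i·sz
```

The nonzero commutator is exactly the algebraic fact behind the uncertainty relations mentioned above.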
4. In Category Theory a special form of commutativity occurs as a generalization of the commutativity property defining Abelian groups. More general categories that are dissimilar to the category of Abelian groups are called non-Abelian, or nonabelian.
Retrieved from "https://en.wikiversity.org/w/index.php?title=Commutative_property&oldid=919149"
|
Gridap-jl/community - Gitter
Gridap.jl/community
@Kevin-Mattheus-Moerman perhaps it is related to your problem. Try to build the jacobian with automatic differentiation to see if you get an improvement. I.e. Build the FEOperator only from the residual: FEOperator(res,U,V) instead of FEOperator(res,jac,U,V)
What is the typo?
@fverdugo using FEOperator(res,U,V) helped to remove the hiccup. What is the typo for dS, I currently have:
function dS(∇du,∇u)
    Cinv = inv(C(F(∇u)))
    _dE = dE(∇du,∇u)
    λ*(Cinv⊙_dE)*Cinv + 2*(μ-λ*log(J(F(∇u))))*Cinv⋅_dE⋅(Cinv')
end
@bhaveshshrimali
@Kevin-Mattheus-Moerman Looking at it quickly, I think it should be
λ*J(F(∇u))*(Cinv⊙_dE)*Cinv + 2*(μ-λ*log(J(F(∇u))))*Cinv⋅_dE⋅(Cinv')
with the extra J coming from the derivative ∂J/∂E (assuming I didn't mess up any signs). Does this help?
Simply recording the animation for those values doesn't have that weird looking step 7 in my case:
Marie726pro
@marie-prog0627
Hi everyone, new user here, loving the library so far. I am trying to implement an external force in linear elasticity, but I have no idea how to configure the nodal external force vector in Gridap.
I would be grateful if you would send me how to code. Sorry to ask something so elementary.
@barche
Hi all, is there an overview of the different operators that are available, and what their effect is in terms of index notation? Specifically, I'd like to be sure about the effects (and when to use)
\cdot
\odot
and *.
hi @barche unfortunately this is only partially documented. Now the easiest way to see the available operators is going to src/TensorValues/Operations.jl and see the source code
In the mean time, I had an idea for a JuliaCon proposal comparing my experience between Gridap and our Coolfluid C++ code ( https://www.hindawi.com/journals/sp/2015/797325/ ). I'd also like to compare performance, is there a way to time the linear system assembly?
@barche the current way of benchmarking the assembly loop is by using the low level assembly routines: I.e. instead of writing op=AffineFEOperator(a,l,U,V) use:
du = get_trial_fe_basis(U)
dv = get_fe_basis(V)
uhd = zero(U)
data = collect_cell_matrix_and_vector(U,V,a,l,uhd)
Tm = SparseMatrixCSC{Float64,Int32}
Tv = Vector{Float64}
assem = SparseMatrixAssembler(Tm,Tv,U,V)
A, b = assemble_matrix_and_vector(assem,data) # This is the assembly loop + allocation and compression of the matrix
assemble_matrix_and_vector!(A,b,assem,data) # This is the in-place assembly loop on a previously allocated matrix/vector.
Hi! Applications for this year's Google Summer of Code #GSOC are still open! If you want to participate within the Gridap project, see our project ideas page https://github.com/gridap/GSoC/blob/main/2022/ideas-list.md
@sbrisard:matrix.org
thanks for the excellent work put in Gridap.jl. I'm setting up a numerical homogenization simulation with periodic boundary conditions, in two dimensions. The model was generated with GMSH, here is an extract of the *.geo file (I know I can use the Julia interface to GMSH as well)
bottom() = Line In BoundingBox{-0.00075, -0.00075, -0.00075, 5.00075, 0.00075, 0.00075};
top() = Line In BoundingBox{-0.00075, 4.99925, -0.00075, 5.00075, 5.00075, 0.00075};
Periodic Line{top()} = {bottom()} Translate{0, 5.0, 0};
left() = Line In BoundingBox{-0.00075, -0.00075, -0.00075, 0.00075, 5.00075, 0.00075};
right() = Line In BoundingBox{4.99925, -0.00075, -0.00075, 5.00075, 5.00075, 0.00075};
Periodic Line{right()} = {left()} Translate{5.0, 0, 0};
The model is then retrieved from within Julia with
model = GmshDiscreteModel(joinpath("..", "validation", "f=0.3", "N=25", "h=0.075", "00001.msh"))
and the FE spaces are defined as follows (as indicated by Tutorial 12, periodic BCs are automatically accounted for)
reffe = ReferenceFE(lagrangian, Float64, order)
V = TestFESpace(model, reffe; conformity=:H1, constraint=:zeromean)
I then solve a standard conductivity problem. The solution, u₁ does not however seem to be periodic
u₁(Point(0., 1.)) = 0.4216567688688445
u₁(Point(5., 1.)) = -0.49267721257058655
What might have I done wrong? How can I check that periodic boundary conditions are indeed enforced by Gridap upon reading the periodic GMSH mesh?
Hi @sbrisard:matrix.org, regarding periodic BCs I have found and fixed several bugs last week. Try to use the latest version in GridapGmsh#master and also a recent version of gmsh itself. (I found that old version of gmsh had a bug in periodic BCs in 3D, but this not the problem in your case)
By the way. I've tried to import a quadratic mesh (T6) with curved boundaries. This is not implemented (yet) in Gridap?
It is not implemented in GridapGmsh, but Gridap should work also with high order meshes
It should be relatively easy to add it to GridapGmsh though
Great! Even with curved boundaries ? That used to be not supported by FEniCS (I believe FEniCS-X does support this feature, now) nor FreeFem++ as far as I am aware
yes, curved boundaries are supported in Gridap
but you need to feed gridap with a curved mesh and this is the part not yet implemented
That would be great. Thank you for this work. I'm really excited with Gridap
@claussm:tu-dresden.de
Hello, I am new to FEM simulation with julia. To get started, I would like to build a model that connects multiple boxes via contact conditions (tied and frictionless contact). Can I use gridap for this? Unfortunately I have not found any example.
carlodev
@carlodev
Hello, I want to study the case of the turbulent channel. I have found something strange in creating geometry with periodic boundary conditions on multiprocessors.
Taking the Tutorial 16 as an example, I just replace the model generation line with
model = CartesianDiscreteModel(domain,mesh_partition,isperiodic = (true, false))
to create a geometry with periodic boundary conditions in X direction, single-process
model = CartesianDiscreteModel(parts,domain,mesh_partition, isperiodic = (true, false))
to create a geometry with periodic boundary conditions in X direction, multi-process
However, the resulting geometry is different in the two cases. The single-process case is apparently the right one. In the multi-process case, the geometry is both 'closed' and with an extra cell in the periodic direction.
I have noticed that you have this kind of problem when you have more than one part in the periodic direction (in this case with a partition of partition = (1,2) and launching the script on 2 cores). I was wondering what I am doing incorrectly
I converted a fun evening project into a little blog post.
https://jonasisensee.de/posts/2022-04-29-amazeing-fem/
Very nice post @JonasIsensee Thanks for sharing!
herveta
@herveta
Hi. I want to use a simpler mesh generator than gmsh. I found DistMesh2D, which has interesting features. But I have trouble figuring out how to plug the computed mesh into Gridap. Anyone has used this library ? (distmesh2D https://github.com/jstarczewski/DistMesh2D.jl) Or anyone tried to input a mesh from "delaunay" like computations ? Thank you !
Once you have a vector of nodal coordinates and cell connectivity it's pretty easy to build a mesh in gridap. You will also need to identify boundaries if you want to impose boundary conditions
Hello everyone, as a test I tried to solve the transient Poisson equation on parallel distributed. I merely copied the code of tutorial 17 into the first example of tutorial 16. Then I create the model on multi-cores with the line https://github.com/carlodev/Channel_flow/blob/9878fe7d1943a663bc1bdbab9b4fdd9de04bc317/Channel_Multicore_PETSc/TutorialTest/D1_transient.jl#L13
MethodError: no method matching HomogeneousTrialFESpace(::GridapDistributed.DistributedSingleFieldFESpace{MPIData{Gridap.FESpaces.UnconstrainedFESpace{Vector{Float64}, Gridap.FESpaces.NodeToDofGlue{Int32}}, 2}, PRange{MPIData{IndexSet, 2}, Exchanger{MPIData{Vector{Int32}, 2}, MPIData{Table{Int32}, 2}}, Nothing}, PVector{Float64, MPIData{Vector{Float64}, 2}, PRange{MPIData{IndexSet, 2},
Exchanger{MPIData{Vector{Int32}, 2}, MPIData{Table{Int32}, 2}}, Nothing}}})
Is that a bug or am I doing something incorrectly? If I don’t split the model on multi cores it works
@oriolcg
Can you try linking with GridapDistributed#master?
@johntfoster
Is there a way to use CairoMakie with GridapMakie in a GridapDistributed run to write plot files? (in contrast to using VTK files)
How would you compute the surface curvature of a mesh in gridap? (for surface tension effects)
Z J Wegert
@zjwegert
I don’t know the answer to this but I imagine there is a standard way using FEM? E.g., https://arxiv.org/pdf/1703.05745.pdf. Though, it could be already inbuilt in Gridap
What would be possible reasons for LUSolver and PardisoSolver to give different results?
LUSolver gives seemingly correct result while PardisoSolver returns complete garbage
@Dies-Das
Hi! I hope this is the right place to ask.
I would like to implement TraceFEM for surfaces, I need the tangential projection to do that. How does one compute the outer product in Gridap/Gridapembedded?
Hi @Dies-Das , I think @eneiva_gitlab is working along these lines
Hey @carlodev, you should post your Gridap SUPG videos here !
I implemented the supg method using gridap, and I wanted to share with you this satisfying visualization of vortex shedding on a cylinder at Re 1000
rshankar1069
@rshankar1069
Hi everyone! I have a quick query regarding the implementation of Gridap. Is there a way to solve a transient PDE problem involving spatially varying coefficients using Gridap?
Hi @rshankar1069, yes you can solve PDEs with space and time-dependent coefficients. Take a look at this tutorial: https://gridap.github.io/Tutorials/dev/pages/t017_poisson_transient/#Tutorial-17:-Transient-Poisson-equation-1
is there any way to speed up operator assembly?
With Pardiso and 32 threads, I can get solutions within seconds but assembly somehow takes minutes.
Perhaps it is worth understanding why assembly is so slow before going parallel. The linear solve should be by far the bottleneck
|
Week 1. Foundations of Algorithms | Algorithms and Data Structures
Week 1. Foundations of Algorithms
2 Week 1. Foundations of Algorithms
Reading 1 Goodrich, Tamassia, & Goldwasser: Chapter (1), 2, 4.1, and 4.4.
Chapter 1 and most of Chapter 2 are cursory background material, but useful reading because they will help you connect the theoretical and high-level contents of the module to your concrete experience with Java programming.
Algorithm Theory works with Language-Independent Models. We do not want to be tied up in the details of particular programming languages. This is important. We choose a simpler language to be understandable by a wider audience, across different language traditions (C, Java, Ada, Python).
The first thing to do in this module is to learn the basics of this language. We shall try to map the concepts to Java jargon as we go along.
2.1.1 Description of Algorithms
Consider the sorting of a bridge hand. Thirteen cards to be sorted in increasing order.
We sort first by suit, so that
\[ ♠>♡>♢>♣ \]
and then within each suit so that
A>K>Q>Kn>10>9>8>7>6>5>4>3>2.
Step 1. Specification of the Concrete Problem
Input. A hand of 13 cards.
Output. A hand of 13 cards sorted in increasing order.
Step 2. Specification of the Abstract Problem
Input. An array of n objects, {A}_{1},{A}_{2},\dots ,{A}_{n}.
Output. An array of n objects sorted in increasing order.
The objects can be of any type (class), as long as we have a binary relation \le , so that for any two objects x,y we can determine whether x\le y. In algorithm theory, we tend to think of the objects as numbers, so that we have an intuitive grasp on what \le means, but there is no loss of generality. In Object Oriented Programming, the notion of less than or equal is encapsulated in the class, and we may have to rewrite x\le y as x.isSmallerThanOrEqual\left(y\right). In Functional Programming, the function implementing \le may be passed as an argument alongside the array A using a lambda expression, which later versions of Java also support, and in this case it may be rewritten as isSmallerThanOrEqual\left(x,y\right).
Step 3. Pseudo-Code One way to solve the sorting problem is the following.
Input: Array {A}_{i}, i=1,\dots ,n.
Output: The same array {A}_{i}, i=1,\dots ,n, sorted in place so that {A}_{1}\le {A}_{2}\le \dots \le {A}_{n}.

for i:=2,3,\dots ,n
    j:=i
    while \left(j\ge 2\right) and \left({A}_{j}<{A}_{j-1}\right)
        swap {A}_{j} and {A}_{j-1}
        j:=j-1
Observe how we combine well-known programming constructs (for, while), mathematical notation (\ge ,{A}_{i},{A}_{j}), and natural language (swap). This hybrid language is called pseudo-code. There is no standard for how to write it. As long as it is legible and unambiguous to human readers, it is ok.
I have in this case used := as the assignment operator, just to illustrate the variation you will encounter. Java and C use the equality sign = for assignment, and for equality they use a double equality sign ==.
Some authors would prefer pseudo-code closer to their favourite programming language, for instance:
1 for i = 2 to n
2     j = i
3     while (j >= 2) and (A[j] < A[j-1])
4         swap A[j] and A[j-1]
5         j = j-1
This is a matter of taste, more community taste than personal taste though, but you may have to deal with more than one community.
The ‘swap’ line deserves some comment, as we use this in both the example styles, in spite of its being far from known programming languages, where it would have to be rewritten as

1 t = {A}_{j}
2 {A}_{j} = {A}_{j-1}
3 {A}_{j-1} = t
The swap statement is a simple and easily understandable instruction in natural language, and most human reader would find the programming construct much harder to read. This is why we resort to natural language here. In the choice of style, you should always strive to maximise the reader’s comprehension.
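The pseudo-code above translates almost line for line into a real language. A Python rendering (our own sketch, with 0-based indices, so the outer loop starts at i = 1 instead of i = 2):

```python
def insertion_sort(A):
    """Sort the list A in place, mirroring the pseudo-code (0-based indices)."""
    n = len(A)
    for i in range(1, n):                    # pseudo-code: i = 2, 3, ..., n
        j = i
        while j >= 1 and A[j] < A[j - 1]:    # pseudo-code: j >= 2
            A[j], A[j - 1] = A[j - 1], A[j]  # the 'swap' line
            j -= 1
    return A
```

Note that Python's tuple assignment expresses the swap in one line, without the temporary variable t.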
Empirical versus Theoretical Analysis In the first year, you have had to test your programs, to see if they work as intended. Testing is, of course, important both in its informal forms and in more structured and analytical forms, where we can talk about empirical analysis.
Empirical analysis is limited, however, by the number of examples you have time to test. Each test you make will only validate one single case, and there may be an infinite number of cases with different properties.
Algorithm theory, and therefore this module, focuses on theoretical analysis, searching for proofs that are valid for any data set. Note that theoretical analysis will also help in the design of good and complete test sets for empirical analysis.
Correctness of Insertion Sort Let’s see what it does.
{A}_{i} is the card we are looking at. Cards to the left have already been looked at, and cards to the right not yet. In the first iteration, i=2, so there is only one card to the left. Obviously an array of a single card is always sorted. The inner loop (while) tries to insert the i-th card in the correct position among the i-1 previous cards.
Some programming languages allow assertions, establishing claims relevant to the analysis. Let’s write them into the pseudo-code as follows.
for i := 2, 3, ..., n
    assert A[1] <= A[2] <= ... <= A[i-1]
    j := i
    while (j >= 2) and (A[j] < A[j-1])
        swap A[j] with A[j-1]
        j := j - 1
    assert A[1] <= A[2] <= ... <= A[i]
assert A[1] <= A[2] <= ... <= A[n]
When the programming language supports assertions, they are tested in debug mode, to identify places where critical assumptions are broken. In theoretical analysis, they are claims used to structure the proof. The first two assertions are examples of loop invariants, i.e. properties which invariably hold in every iteration of the loop.
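The loop invariants described above can be written directly in executable form. The sketch below is one possible rendering in Python (the helper `is_sorted` is introduced here for illustration; Python's `assert` plays the role of the debug-mode checks, and the indices are 0-based rather than the 1-based pseudo-code):

```python
def is_sorted(prefix):
    """Return True if the slice is in non-decreasing order."""
    return all(prefix[k] <= prefix[k + 1] for k in range(len(prefix) - 1))

def insertion_sort(A):
    """Sort A in place, checking the loop invariants from the text."""
    n = len(A)
    for i in range(1, n):              # pseudo-code: i = 2 .. n (1-based)
        assert is_sorted(A[:i])        # invariant: first i elements sorted
        j = i
        while j >= 1 and A[j] < A[j - 1]:
            A[j], A[j - 1] = A[j - 1], A[j]   # swap A[j] with A[j-1]
            j -= 1
        assert is_sorted(A[:i + 1])    # invariant re-established
    assert is_sorted(A)                # postcondition: whole array sorted
    return A
```

Note that Python disables `assert` when run with the `-O` flag, which mirrors the distinction between debug and release builds mentioned above.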
The purpose of the while loop is to re-establish the loop invariant when the index i increases.
The first iteration of the while loop compares A[i] with A[i-1], since j = i. If A[i] is larger, it belongs where it is and the while loop is not entered. If it is smaller, the two cards are swapped, and the loop continues to compare it with the next card.
When the while loop terminates, the first i cards are in sorted order, and in the next iteration of the for loop, we can again say that the cards to the left of i are sorted.
Number of instructions. What is an instruction?
Formal computing models
Worst case, best case, and average case
These projects should be solved as part of the weekly assignment. When you phrase your answers, you should always write with fellow students in mind. Write so that they would understand, and be convinced by your argument. If you are unsure what is a comprehensible argument, discuss it with fellow students. Always use drawings and sketches when they are more comprehensible than prose.
Problem 2.1 (Sorting Cards)
Consider the problem of sorting a hand of cards. How would you do it naturally?
If you have a deck of cards, shuffle and draw at least ten cards at random. If you do not, you can write down ten (or more) numbers at random.
Look at the cards/numbers. How would you intuitively start to sort them?
Is your approach systematic or not?
Can you write down pseudo-code describing how you do it?
If you cannot, why is that? Can you make changes so that it is possible?
Is your approach similar to insertion sort? What are the main differences?
(You should do this exercise together with other students, and discuss and compare approaches. Do not spend more than about 30 minutes on the problem. There is no ideal answer.)
Problem 2.2 (Letter Frequencies) Simple substitution ciphers work by replacing each letter in the alphabet with another. To encrypt a text, the same substitution is applied throughout the text. Such ciphers are easily broken by using frequency analysis.
Consider a program which takes a text as input and outputs frequency tables for each letter in the text. I.e. for each letter in the alphabet, output the number of occurrences of this letter in the text. (See also P-2.22 in the textbook.)
You are going to describe (not implement) such a program. Think through the following questions first:
How do you model the text (input)?
How do you model the frequency tables (output)?
How do you parse the text?
Describe your program in the form of an algorithm, with pseudo-code and precise definitions of input and output.
How do you know that the algorithm produces the correct answer?
How many operations does the algorithm require when the text is n characters long and there are k letters in the alphabet?
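The problem asks for a description rather than an implementation, but a small sketch can make the operation count concrete. The sketch below is one possible modelling choice, not the only acceptable answer (the function name `letter_frequencies` and the dictionary representation are illustrative assumptions): the text is a string, the frequency table a dictionary with k entries, and the parse is a single pass over the n characters, so the cost grows linearly in n.

```python
def letter_frequencies(text, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Count occurrences of each alphabet letter in text (case-insensitive)."""
    counts = {letter: 0 for letter in alphabet}   # k entries, one per letter
    for ch in text.lower():                       # one pass over n characters
        if ch in counts:
            counts[ch] += 1                       # non-letters are ignored
    return counts

freq = letter_frequencies("Attack at dawn")
```

Each character costs a constant number of operations (a dictionary lookup and possibly an increment), so the total is proportional to n, plus k operations to initialise the table.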
Problem 2.3 (Change Making) Design and describe a program (algorithm) which takes two amounts (numbers) as input. One is the amount charged and the other the amount given. The program should return the number of each kind of bill and coin to give back as change for the difference between the amount given and the amount charged. The values assigned to the bills and coins available can be based on the monetary system of any current or former government. Try to design your program so that it returns as few bills and coins as possible.
You can for instance use the denominations available for Norwegian kroner:
1,5,10,20,50,100,200,500,1000
Does your algorithm always produce the fewest number of bills and coins? Would it always give the fewest coins and bills if you changed the denominations?
Consider the similar problem of selecting stamps to make up a given amount of postage. Suppose the stamps have values 1, 24, 33, and 44 pence. Test the algorithm to find the required stamps for 67p postage.
(Cf. Goodrich, Tamassia, & Goldwasser P-2.26)
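A natural first attempt is the greedy strategy: repeatedly give the largest denomination that still fits. The sketch below (the function name is hypothetical, and a 1-unit denomination is assumed so change is always possible) implements it; the stamp denominations from the problem show that greedy is not always optimal, since for 67p it uses one 44p stamp and then 23 penny stamps, while 33 + 33 + 1 needs only three stamps.

```python
def greedy_change(amount, denominations):
    """Give change using the largest denomination first.

    Returns a dict mapping denomination -> count. Assumes a 1-unit
    denomination exists so the amount can always be made up exactly.
    """
    result = {}
    for d in sorted(denominations, reverse=True):
        count, amount = divmod(amount, d)   # take as many of d as fit
        if count:
            result[d] = count
    return result

# Norwegian kroner: greedy happens to be optimal for this denomination set.
nok = greedy_change(163, [1, 5, 10, 20, 50, 100, 200, 500, 1000])
# Stamps: greedy is NOT optimal for 67p (uses 24 stamps; 33+33+1 uses 3).
stamps = greedy_change(67, [1, 24, 33, 44])
```

Whether greedy is optimal depends on the denomination set, which is exactly the point of the second question in the problem.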
Problem 2.4 (Selection Sort) Selection Sort is a sorting algorithm, similar to insertion sort. It can be described as follows:
Input: Array A[] of size n.
Output: The same array A[] in sorted order.
1 for i := 1 to n-1
2 for j := i+1 to n
3 if A[i] > A[j]
4 swap A[i] with A[j]
Rewrite the pseudo-code in Java. Pay particular attention to the indices: is the first element A[0] or A[1]?
How many swap operations must be made in the worst case? What about the best case?
Can you prove that the algorithm is correct, i.e. produces a sorted array? Compare it to the proof for insertion sort.
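A direct transliteration of the pseudo-code (shown here in Python, not the Java rewrite the problem asks for) can help you check your reasoning about indices and swap counts; the 0-based loop bounds below correspond to the 1-based `i := 1 to n-1` and `j := i+1 to n` above, and a counter is added purely to observe the number of swaps.

```python
def selection_sort(A):
    """Sort A in place as in the pseudo-code; return the number of swaps."""
    n = len(A)
    swaps = 0
    for i in range(n - 1):            # pseudo-code: i = 1 .. n-1 (1-based)
        for j in range(i + 1, n):     # pseudo-code: j = i+1 .. n
            if A[i] > A[j]:
                A[i], A[j] = A[j], A[i]   # swap A[i] with A[j]
                swaps += 1
    return swaps

data = [4, 3, 2, 1]
swaps = selection_sort(data)
```

Running this on a reverse-sorted array of length 4 performs 6 swaps, i.e. one per comparison, which suggests what the worst case looks like in general; on an already-sorted array it performs none.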
2.3.1 Proof Techniques
2.3.2 Review Material
TODO video on Induction and loop invariants
These videos were made for the old module in Discrete Mathematics 2013-2015, but they are also relevant here.
|
return an interpolated polynomial structure with a newly added point
AddPoint(p, pts)
AddPoint(p, pts, bcs)
numeric, list(numeric, numeric), list(numeric, numeric,numeric); the new data point (node) to be added
list(numeric, numeric); new boundary conditions for an interpolating polynomial created using the cubic spline method
The AddPoint command takes the point(s) to be added and recomputes the interpolated polynomial from p with the new point(s) and returns the adjusted POLYINTERP structure.
This command is convenient because it saves you from having to reenter all previous options and data along with the new point into the PolynomialInterpolation command or the CubicSpline command to create a new POLYINTERP structure.
If the POLYINTERP structure was created using the CubicSpline command and the boundary conditions are not natural, then new boundary conditions bcs at the end points must be specified.
\mathrm{with}\left(\mathrm{Student}[\mathrm{NumericalAnalysis}]\right):
\mathrm{xy}≔[[1.0,0.7651977],[1.3,0.6200860],[1.6,0.4554022],[1.9,0.2818186]]
\mathrm{xy}≔[[1.0,0.7651977],[1.3,0.6200860],[1.6,0.4554022],[1.9,0.2818186]]
\mathrm{p2}≔\mathrm{PolynomialInterpolation}\left(\mathrm{xy},\mathrm{method}=\mathrm{neville},\mathrm{extrapolate}=[1.5]\right):
\mathrm{NevilleTable}\left(\mathrm{p2},1.5\right)
[ 0.7651977   0            0            0
  0.6200860   0.5233448671 0            0
  0.4554022   0.5102968002 0.5124714781 0
  0.2818186   0.5132634002 0.5112856669 0.5118126939 ]
Add another node.
\mathrm{p2a}≔\mathrm{AddPoint}\left(\mathrm{p2},[2.2,0.1103623]\right):
The Neville Table now has another row.
\mathrm{NevilleTable}\left(\mathrm{p2a},1.5\right)
[ 0.7651977   0            0            0            0
  0.6200860   0.5233448671 0            0            0
  0.4554022   0.5102968002 0.5124714781 0            0
  0.2818186   0.5132634002 0.5112856669 0.5118126939 0
  0.1103623   0.5104270002 0.5137361336 0.5118302149 0.5118199942 ]
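The same table can be reproduced outside Maple. The sketch below is a standard implementation of Neville's scheme (plain Python, not Maple code), evaluated at x = 1.5 with the five nodes above; the entries of its lower-triangular table match the Neville table printed by Maple, including the bottom-right value 0.5118199942.

```python
def neville(xs, ys, x):
    """Neville's scheme: return the table Q, where Q[i][j] interpolates
    through nodes i-j .. i and Q[i][i] uses nodes 0 .. i."""
    n = len(xs)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][0] = ys[i]                      # first column: the data values
    for i in range(1, n):
        for j in range(1, i + 1):
            Q[i][j] = ((x - xs[i - j]) * Q[i][j - 1]
                       - (x - xs[i]) * Q[i - 1][j - 1]) / (xs[i] - xs[i - j])
    return Q

xs = [1.0, 1.3, 1.6, 1.9, 2.2]
ys = [0.7651977, 0.6200860, 0.4554022, 0.2818186, 0.1103623]
Q = neville(xs, ys, 1.5)
```

Adding a node only appends one row to the table, which is why AddPoint can reuse the previous computation.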
Student[NumericalAnalysis][DividedDifferenceTable]
|
Gibson, Joel1
1 School of Mathematics and Statistics University of Sydney NSW 2006, Australia
The product monomial crystal was defined by Kamnitzer, Tingley, Webster, Weekes, and Yacobi for any semisimple simply-laced Lie algebra 𝔤, and depends on a collection of parameters \mathbf{R}. We show that a family of truncations of this crystal are Demazure crystals, and give a Demazure-type formula for the character of each truncation, and the crystal itself. This character formula shows that the product monomial crystal is the crystal of a generalised Demazure module, as defined by Lakshmibai, Littelmann and Magyar. In type A, we show the product monomial crystal is the crystal of a generalised Schur module associated to a column-convex diagram depending on \mathbf{R}.
Keywords: Monomial crystal, generalised Schur module, Demazure crystal.
Gibson, Joel. A Demazure Character Formula for the Product Monomial Crystal. Algebraic Combinatorics, Volume 4 (2021) no. 2, pp. 301-327. doi : 10.5802/alco.156. https://alco.centre-mersenne.org/articles/10.5802/alco.156/
[1] Braden, Tom; Licata, Anthony; Proudfoot, Nicholas; Webster, Ben Quantizations of Conical Symplectic Resolutions II: Category 𝒪 and Symplectic Duality (2014) (https://arxiv.org/abs/1407.0964)
[2] Braverman, Alexander; Finkelberg, Michael Pursuing the double affine Grassmannian. I. Transversal slices via instantons on {A}_{k}-singularities, Duke Math. J., Volume 152 (2010) no. 2, pp. 175-206 | Article | MR: 2656088 | Zbl: 1200.14083
[3] Bump, Daniel; Schilling, Anne Crystal bases. Representations and combinatorics, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2017, xii+279 pages | Article | MR: 3642318 | Zbl: 1440.17001
[4] Demazure, Michel Désingularisation des variétés de Schubert généralisées, Ann. Sci. École Norm. Sup. (4), Volume 7 (1974) no. 1, pp. 53-88 | Article | Numdam | MR: 354697 | Zbl: 0312.14009
[5] Hernandez, David; Nakajima, Hiraku Level 0 monomial crystals, Nagoya Math. J., Volume 184 (2006), pp. 85-153 | Article | MR: 2285232 | Zbl: 1201.17008
[6] Joseph, Anthony A decomposition theorem for Demazure crystals, J. Algebra, Volume 265 (2003) no. 2, pp. 562-578 | Article | MR: 1987017 | Zbl: 1100.17009
[7] Kamnitzer, Joel; Tingley, Peter; Webster, Ben; Weekes, Alex; Yacobi, Oded Highest weights for truncated shifted Yangians and product monomial crystals, J. Comb. Algebra, Volume 3 (2019) no. 3, pp. 237-303 | Article | MR: 4011667 | Zbl: 1448.16023
[8] Kamnitzer, Joel; Tingley, Peter; Webster, Ben; Weekes, Alex; Yacobi, Oded On category 𝒪 for affine Grassmannian slices and categorified tensor products, Proc. Lond. Math. Soc. (3), Volume 119 (2019) no. 5, pp. 1179-1233 | Article | MR: 3968721 | Zbl: 1451.14143
[9] Kamnitzer, Joel; Webster, Ben; Weekes, Alex; Yacobi, Oded Yangians and quantizations of slices in the affine Grassmannian, Algebra Number Theory, Volume 8 (2014) no. 4, pp. 857-893 | Article | MR: 3248988 | Zbl: 1325.14068
[10] Kashiwara, Masaki Crystalizing the q-analogue of universal enveloping algebras, Comm. Math. Phys., Volume 133 (1990) no. 2, pp. 249-260 | Article | MR: 1090425 | Zbl: 0724.17009
[11] Kashiwara, Masaki On crystal bases of the q-analogue of universal enveloping algebras, Duke Math. J., Volume 63 (1991) no. 2, pp. 465-516 | Article | MR: 1115118 | Zbl: 0739.17005
[12] Kashiwara, Masaki The crystal base and Littelmann’s refined Demazure character formula, Duke Math. J., Volume 71 (1993) no. 3, pp. 839-858 | Article | MR: 1240605 | Zbl: 0794.17008
[13] Kashiwara, Masaki Crystal bases of modified quantized enveloping algebra, Duke Math. J., Volume 73 (1994) no. 2, pp. 383-413 | Article | MR: 1262212 | Zbl: 0794.17009
[14] Kashiwara, Masaki Realizations of Crystals (2002) (https://arxiv.org/abs/math/0202268)
[15] Khovanov, Mikhail; Lauda, Aaron D. A diagrammatic approach to categorification of quantum groups. I, Represent. Theory, Volume 13 (2009), pp. 309-347 | Article | MR: 2525917 | Zbl: 1188.81117
[16] Khovanov, Mikhail; Lauda, Aaron D. A diagrammatic approach to categorification of quantum groups. II, Trans. Amer. Math. Soc., Volume 363 (2011) no. 5, pp. 2685-2700 | Article | MR: 2763732 | Zbl: 1214.81113
[17] Lakshmibai, Venkatramani; Littelmann, Peter; Magyar, Peter Standard monomial theory for Bott–Samelson varieties, Compositio Math., Volume 130 (2002) no. 3, pp. 293-318 | Article | MR: 1887117 | Zbl: 1061.14051
[18] Magyar, Peter Borel-Weil theorem for configuration varieties and Schur modules, Adv. Math., Volume 134 (1998) no. 2, pp. 328-366 | Article | MR: 1617793 | Zbl: 0911.14024
[20] Mirković, Ivan; Vilonen, Kari Geometric Langlands duality and representations of algebraic groups over commutative rings, Ann. of Math. (2), Volume 166 (2007) no. 1, pp. 95-143 | Article | MR: 2342692 | Zbl: 1138.22013
[22] Reiner, Victor; Shimozono, Mark Percentage-avoiding, northwest shapes and peelable tableaux, J. Combin. Theory Ser. A, Volume 82 (1998) no. 1, pp. 1-73 | Article | MR: 1616579 | Zbl: 0909.05049
[23] Reiner, Victor; Shimozono, Mark Flagged Weyl modules for two column shapes, J. Pure Appl. Algebra, Volume 141 (1999) no. 1, pp. 59-100 | Article | MR: 1705973 | Zbl: 0929.05089
[24] Rouquier, Raphael 2-Kac-Moody Algebras (2008) (https://arxiv.org/abs/0812.5023)
[25] Webster, Ben; Weekes, Alex; Yacobi, Oded A Quantum Mirković–Vybornov Isomorphism (2017) (https://arxiv.org/abs/1706.03841)
|
Brodsky, Sarah B.1; Stump, Christian1
1 Institut für Mathematik, Technische Universität Berlin, Germany
It has been established in recent years how to approach acyclic cluster algebras of finite type using subword complexes. We continue this study by uniformly describing the c- and g-vectors, and by providing a conjectured description of the Newton polytopes of the F-polynomials. We moreover show that this conjectured description would imply that finite type cluster complexes are realized by the duals of the Minkowski sums of the Newton polytopes of either the F-polynomials or of the cluster variables, respectively. We prove this conjectured description to hold in type A and in all types of rank at most 8, including all exceptional types, leaving types B, C, and D conjectural.
Keywords: cluster algebra, F-polynomial, subword complexes.
Brodsky, Sarah B.; Stump, Christian. Towards a uniform subword complex description of acyclic finite type cluster algebras. Algebraic Combinatorics, Volume 1 (2018) no. 4, pp. 545-572. doi : 10.5802/alco.25. https://alco.centre-mersenne.org/articles/10.5802/alco.25/
|
Extract a frequency subband using a one-sided (complex) bandpass decimator - Simulink - MathWorks España
Reduce number of complex coefficients
Mix signal to baseband
The Complex Bandpass Decimator block extracts a specific subband of frequencies using a one-sided, multistage, complex bandpass decimator. The block determines the bandwidth of interest using the specified center frequency, decimation factor, and bandwidth values.
This port is unnamed unless you select the Specify center frequency from input port parameter.
Center frequency of the desired band in Hz, specified as a real, finite numeric scalar in the range [–Fs/2, Fs/2]. The value of Fs depends on the setting of the Inherit sample rate from input parameter. When you select this parameter, Fs is the value the block inherits from the input signal. When you clear this parameter, Fs is the value you specify in the Input sample rate (Hz) parameter.
This port is only available if you select the Specify center frequency from input port parameter.
Port_1 — Filtered output
Output of the complex bandpass decimator, returned as a vector or a matrix. The output contains the subband of frequencies specified by the parameters on the block dialog. The number of rows (frame size) in the output signal is 1/D times the number of rows in the input signal, where D is the decimation factor. The number of channels (columns) does not change.
Filter specification — Filter design parameters
Decimation factor (default) | Bandwidth | Decimation factor and bandwidth
Filter design parameters, specified as one of the following:
Decimation factor –– The block specifies the decimation factor through the Decimation factor parameter. The bandwidth of interest (BW) is computed using the following equation:
BW=Fs/D
Fs –– Sample rate specified through the Input sample rate (Hz) parameter.
Bandwidth –– The block specifies the bandwidth through the Bandwidth (Hz) parameter. The decimation factor (D) is computed using the following equation:
D=\text{floor}\left(\frac{Fs}{BW+TW}\right)
TW –– Transition width specified through the Transition width (Hz) parameter.
Decimation factor and bandwidth –– The decimation factor and the bandwidth of interest are specified through the Decimation factor and Bandwidth (Hz) parameters.
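The two formulas above can be checked with a small numeric sketch (shown in Python rather than MATLAB; the sample rate, bandwidth, and transition width values here are arbitrary examples, not block defaults):

```python
import math

fs = 48000.0   # input sample rate Fs in Hz (example value)
bw = 8000.0    # desired bandwidth BW in Hz (example value)
tw = 2000.0    # transition width TW in Hz (example value)

# "Bandwidth" mode: the block derives the decimation factor.
D = math.floor(fs / (bw + tw))    # D = floor(Fs / (BW + TW))

# "Decimation factor" mode: the block derives the bandwidth of interest.
bw_from_D = fs / D                # BW = Fs / D
```

With these numbers, D works out to 4 and the derived bandwidth to 12 kHz, showing that the two parameters trade off against each other through the sample rate.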
This parameter applies when you set Filter specification to either Decimation factor or Decimation factor and bandwidth.
Bandwidth (Hz) — Bandwidth in Hz
This parameter applies when you set Filter specification to either Bandwidth or Decimation factor and bandwidth.
Specify center frequency from input port — Flag to specify center frequency
When you select this check box, the center frequency is input through the Fc port. When you clear this check box, the center frequency is specified on the block dialog through the Center frequency (Hz) parameter.
When you select this check box, the block does not compute the filter response. To view the filter response, clear this check box, specify the center frequency on the block dialog, and click the View Filter Response button.
Center frequency (Hz) — Center frequency in Hz
Center frequency of the desired band in Hz, specified as a real, finite numeric scalar in the range [–Fs/2, Fs/2].
Stopband attenuation (dB) — Stopband attenuation in dB
Stopband attenuation of the filter in dB, specified as a finite positive scalar.
Passband ripple (dB) — Passband ripple in dB
Transition width (Hz) — Transition width in Hz
Reduce number of complex coefficients — Minimize number of complex coefficients
Minimize the number of complex coefficients. When you select this parameter, the first stage of the multistage filter is bandpass (with complex coefficients) centered at the specified center frequency. The first stage is followed by a mixing stage that heterodynes the signal to DC. The remaining filter stages, all with real coefficients, follow.
When you clear the parameter, the input signal is first passed through the different stages of the multistage filter. All stages are bandpass (complex coefficients). The signal is then heterodyned to DC if Mix signal to baseband parameter is selected and the frequency offset resulting from the decimation is nonzero.
Mix signal to baseband — Mix signal to baseband
Mix the signal to baseband. When you select this parameter, the block heterodynes the filtered, decimated signal to DC. This mixing stage runs at the output sample rate of the filter. When you clear this parameter, the block skips the mixing stage.
This parameter applies when you clear the Reduce number of complex coefficients parameter.
Inherit sample rate from input — Flag to specify input sample rate
When you select this parameter, the block inherits its sample rate from the input signal. The block calculates the sample rate based on the sample time of the input port. When you clear this parameter, specify the sample rate in Input sample rate (Hz).
Input sample rate (Hz) — Input sample rate in Hz
This parameter applies when you clear the Inherit sample rate from input parameter.
Opens the dynamic filter visualizer and displays the magnitude response of the complex bandpass decimator. The response is based on the parameters you select in the block dialog box. To update the magnitude response while the dynamic filter visualizer is running, modify the parameters in the dialog box and click Apply.
View Info — View filter information
Display filter information of the Complex Bandpass Decimator block:
This button brings the functionality of the info analysis method into the Simulink® environment.
The Complex Bandpass Decimator block supports SIMD code generation using Intel AVX2 technology under these conditions:
|
Properties of Logarithms - Course Hero
College Algebra/Logarithmic Functions/Properties of Logarithms
Properties of logarithms can be used to simplify logarithmic expressions.
Logarithms have several important properties that can be used to combine them or write them in different forms.
\log_b{(xy)}=\log_b{x}+\log_b{y}
\log_b{\left (\frac{x}{y} \right )}=\log_b{x}-\log_b{y}
\log_b{x^p}=p\cdot \log_b{x}
\log_b{x}=\frac{\log_a{x}}{\log_a{b}}
There are also two important forms to remember. When the value of the argument is 1 and the base is a positive value other than 1, then the logarithm is equal to zero:
\log_b{1}=0
When the value of the base is the same as the value of the argument and the base is a positive value other than 1, then the logarithm is equal to 1:
\log_b{b}=1
To determine the logarithm of a product, add the logarithms of the factors.
The product rule of logarithms states that the logarithm of a product is equal to the sum of the logarithms of the factors. In other words, where b, x, and y are positive and b\neq 1, the product rule of logarithms can be written as:
\log_b{(xy)}=\log_b{x}+\log_b{y}
Like other logarithmic properties, the product rule of logarithms can be used to evaluate logarithmic expressions or to solve complex logarithmic equations.
Adding Logarithms to Simplify an Expression
Rewrite the given expression as a single logarithm, and then find the value of the given expression:
\log_6{12}+\log_6{18}
The logarithms have the same base, so use the product rule.
\log_b{x}+\log_b{y}=\log_b{(xy)}
The given expression can be rewritten as:
\begin{gathered}\log_6{12}+\log_6{18}\\\log_6{(12\cdot 18)}\end{gathered}
\begin{gathered}\log_6{(12\cdot 18)}\\\log_6{216}\end{gathered}
Use the relationship between exponents and logarithms to evaluate
\log_6{216}
Write 216 as a power of 6.
\begin{aligned}216&=6\cdot 6\cdot 6\\&=6^{3}\end{aligned}
The exponential equation
6^3=216
can be rewritten as the equivalent logarithmic equation:
\log_6{216}=3
The value of the given logarithmic expression is 3.
Evaluating the Logarithm of a Product
Evaluate the given logarithm:
\log_5{(25 \cdot 625)}
Use the product rule to rewrite the logarithm of a product as the sum of the logarithms of the factors.
\begin{aligned}\log_b{(xy)}&=\log_b{x}+\log_b{y}\\\log_5{(25 \cdot 625)}&=\log_5{25}+\log_5{625}\end{aligned}
Evaluate the first logarithm, using a as the exponent. The exponential form of \log_5{25} is 5^a=25. Since 5^2=25, it follows that \log_5{25}=2.
Evaluate the second logarithm, using b as the exponent. The exponential form of \log_5{625} is 5^b=625. Since 5^4=625, it follows that \log_5{625}=4.
Calculate the sum of the logarithms from Step 2 and Step 3.
\begin{aligned}\log_5{25}+\log_5{625}&=2+4\\&=6\end{aligned}
The value of the given logarithm is 6:
\log_5{(25 \cdot 625)}=6
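The same result can be verified numerically. A quick check with Python's `math.log` (floating point, so the comparison uses a tolerance) confirms that both sides of the product rule give 6:

```python
import math

# Left side: log_5(25 * 625), computed directly.
lhs = math.log(25 * 625, 5)

# Right side: log_5(25) + log_5(625), via the product rule.
rhs = math.log(25, 5) + math.log(625, 5)

# Both equal 6 up to floating-point rounding.
```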
To calculate the logarithm of a quotient, subtract the logarithm of the divisor from the logarithm of the dividend.
The quotient rule of logarithms states that the logarithm of a quotient is equal to the difference of the logarithm of the dividend and the logarithm of the divisor. To calculate the logarithm of a quotient, use the quotient rule of logarithms, where b, x, and y are positive and b\neq 1, by subtracting the logarithm of the divisor from the logarithm of the dividend:
\log_b{\left (\frac{x}{y} \right )}=\log_b{x}-\log_b{y}
Evaluating the Logarithm of a Quotient
\log_2{\left (\frac{8}{64} \right )}
Use the quotient rule to rewrite the logarithm of a quotient as a difference of logarithms.
\begin{aligned}\log_b\left (\frac{x}{y} \right )&=\log_b{x}-\log_b{y}\\ {\log_2{{\left (\frac{8}{64} \right )}}}&=\log_2{8}-\log_2{64}\end{aligned}
Evaluate the first logarithm, using a as the exponent. The exponential form of \log_2{8} is 2^a=8. Since 2^3=8, it follows that \log_2{8}=3.
Evaluate the second logarithm, using b as the exponent. The exponential form of \log_2{64} is 2^b=64. Since 2^6=64, it follows that \log_2{64}=6.
\begin{aligned}\log_2{8}-\log_2{64}&=3-6\\&=-3\end{aligned}
The value of the given logarithm is –3.
\log_2{\left (\frac{8}{64} \right )}=-3
Evaluating the Logarithm of a Quotient That Contains a Sum
\log_7{\left (\frac{7}{x^{^{3}}+5} \right )}
\begin{aligned}\log_b{\left (\frac{x}{y} \right )}&=\log_b{x}-\log_b{y}\\\log_7{\left (\frac{7}{x^{^{3}}+5} \right )}&=\log_7{7}-\log_7{(x^3+5)}\end{aligned}
Examine the first term,
\log_{7}7
The value of the base is the same as the value of the argument. The base is also a positive value other than 1. So, the logarithm is equal to 1: since
\log_b{b}=1
for any positive base that is not equal to 1, the first term is equal to 1.
\begin{aligned}\log_b{b}&=1\\\log_7{7}&=1\end{aligned}
Examine the second term:
\log_7{(x^3+5)}
There is no rule that allows the second term to be further simplified because it is the logarithm of a sum.
Put the examined terms together:
\log_7{\left (\frac{7}{x^{^{3}}+5} \right )}=1-\log_7{(x^3+5)}
To determine the logarithm of a power, multiply the exponent by the logarithm of the base.
The power rule of logarithms states that the logarithm of a power is equal to the product of the exponent and the logarithm of the base of the power. In other words, where b and x are positive, p is real, and b\neq 1, the logarithm of a power can be written as:
\log_b{x^p}=p\cdot \log_b{x}
To determine the logarithm of a power, multiply the exponent of the power by the logarithm of the base of the power. For example:
\log_3{11^5}=5\cdot \log_3{11}
The power rule and other properties of logarithms can be applied to write logarithmic expressions in expanded form.
Using Properties to Expand Expressions
Write the given expression in expanded form:
\log_3{\left ( \frac{4x^{5}}{3y^{7}} \right)}
Use the quotient rule of logarithms to rewrite the logarithm of a quotient as a difference of logarithms.
\begin{aligned}\log_b{\left (\frac{x}{y} \right )}&=\log_b{x}-\log_b{y}\\\log_3{\left ( \frac{4x^{5}}{3y^{7}} \right )}&=\log_3{(4x^5)}-\log_3{(3y^7)}\end{aligned}
The product rule of logarithms is:
\log_b{(xy)}=\log_b{x}+\log_b{y}
Rewrite each logarithm of a product as a sum of logarithms and simplify.
\begin{gathered}&\overbrace{\log_{3}(4x^5)}^{\text{First logarithm}}-\overbrace{\log_3{(3y^7)}}^{\text{Second logarithm}}\\&\overbrace{\log_{3}4+\log_3{(x^5)}}^{\text{First logarithm}}-\overbrace{[\log_{3}3+\log_{3}(y^7)]}^{\text{Second logarithm}}\\&\overbrace{\log_{3}4+\log_{3}(x^5)}^{\text{First logarithm}}-\overbrace{\log_{3}3-\log_{3}(y^{7})}^{\text{Second logarithm}}\end{gathered}
The power rule of logarithms is:
\log_b{x^p}=p\cdot \log_b{x}
Use the power rule of logarithms to rewrite the terms with exponents:
\begin{gathered}\log_{3}4+\overbrace{\log_{3}(x^5)}^\text{First term}-\log_{3}3-\overbrace{\log_{3}(y^7)}^{\text{Second term}}\\\log_{3}4+\overbrace {5\cdot\log_{3}x}^\text{First term}-\log_{3}3-\overbrace{7\cdot \log_{3}y}^{\text{Second term}}\end{gathered}
Since any positive number raised to the power of 1 is equal to that number, it is also true that:
\begin{aligned}\log_b{b}&=1\\\log_3{3}&=1\end{aligned}
Substitute 1 for
\log_3{3}
:
\begin{gathered}\log_3{4}+5\cdot \log_3{x}-{\color{#c42126}{\log_3{3}}}-7\cdot \log_3{y}\\\log_3{4}+5\cdot \log_3{x}-{\color{#c42126}{1}}-7\cdot \log_3{y}\end{gathered}
The expanded form of the given expression is:
\log_3{4}+5\cdot\log_3{x}-1-7\cdot \log_3{y}
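The expansion can be verified at sample values; the values of x and y below are arbitrary choices for illustration, not from the text:

```python
import math

# Check that log_3(4x^5 / (3y^7)) = log_3(4) + 5*log_3(x) - 1 - 7*log_3(y)
# at arbitrary sample values (x, y > 0).
x, y = 2.5, 1.7
original = math.log(4 * x**5 / (3 * y**7), 3)
expanded = math.log(4, 3) + 5 * math.log(x, 3) - 1 - 7 * math.log(y, 3)
assert math.isclose(original, expanded)
```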
A logarithm can be changed from one base to another by using the formula
\log_b{x}=\frac{\log_a{x}}{\log_a{b}}
It can sometimes be convenient to change a logarithm from one base to another. The change of base rule states that a logarithm of a number in base
b
is equal to the quotient of the logarithm of the number in a new base
a
and the logarithm of the original base
b
in the new base
a
. The change of base rule, where
a
,
b
, and
x
are positive and
a\neq 1
and
b \neq 1
, is:
\log_b{x}=\frac{\log_a{x}}{\log_a{b}}
It shows how to change a logarithm in base
b
to a different base
a
. This rule is commonly used to calculate the values of logarithms with bases other than base 10 or
e
when using technology, in particular, calculators with only the common log and the natural log. To use the change of base rule with a calculator, select either log or ln and calculate the quotient of the log of the argument and the log of the original base. For example:
\log_5{12}=\frac{\log{12}}{\log{5}}=\frac{\ln{12}}{\ln{5}}\approx1.544
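The same calculator workflow can be mimicked with Python's standard library, which (like a basic calculator) provides only natural and common logs via `math.log` and `math.log10`:

```python
import math

# The change-of-base rule lets a calculator with only log10/ln evaluate log_5(12).
via_log10 = math.log10(12) / math.log10(5)
via_ln = math.log(12) / math.log(5)

print(round(via_log10, 3))  # 1.544
assert math.isclose(via_log10, via_ln)
```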
Deriving the Change of Base Rule
Derive the change of base rule:
\log_b{x}=\frac{\log_a{x}}{\log_a{b}}
The change of base rule shows how to write
\log_{b}x
in terms of a logarithm with another base,
a
. First, let
y
represent the logarithm being derived, or
\log_{b}x
y=\log_b{x}
Next, rewrite the logarithm from Step 1 in exponential form.
\begin{aligned}y&=\log_b{x}\\x&=b^y\end{aligned}
Then take
\log_a
of both sides of the equation.
\begin{aligned}x&=b^y\\ \log_a{x}&=\log_a{(b^y)}\end{aligned}
Simplify the logarithm using the power rule of logarithms by multiplying the exponent by the logarithm of the base:
\begin{aligned}\log_a{x}&=\log_a{(b^y)}\\\log_a{x}&=y\cdot \log_a{b}\end{aligned}
Solve for
y
by dividing both sides of the equation by
\log_a{b}
:
\begin{aligned}\log_a{x}&=y \cdot \log_a{b}\\\frac{\log_a{x}}{\log_a{b}}&=y\end{aligned}
Recall that
y=\log_b{x}
from Step 1. Substitute
y
with
\log_b{x}
:
\begin{aligned}y&=\frac{\log_a{x}}{\log_a{b}}\\\log_b{x}&=\frac{\log_a{x}}{\log_a{b}}\end{aligned}
Evaluate the logarithm:
\log_4{32}
Identify a different base.
The argument 32 is not a whole-number power of the base 4, but notice that both the argument and the base are powers of 2.
Change the logarithm to a base of 2.
The change of base rule is:
\log_b{x}=\frac{\log_a{x}}{\log_a{b}}
Apply the change of base rule to the logarithm.
\log_4{32}=\frac{\log_2{32}}{\log_2{4}}
Evaluate the logarithm in the numerator. Use
a
as the exponent.
\log_2{32}
2^{a}=32
Since
2^5=32
, the logarithm in the numerator is 5:
\log_2{32}=5
Evaluate the logarithm in the denominator. Use
b
as the exponent.
\log_2{4}
2^{b}=4
Since
2^{2}=4
, the logarithm in the denominator is 2:
\log_2{4}=2
Substitute the simplified logarithms back into the logarithm from Step 1:
\frac{\log_2{32}}{\log_2{4}}=\frac{5}{2}
The value of the given logarithm is
\frac{5}{2}
\log_4{32}=\frac{5}{2}
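This worked example can also be confirmed numerically (a sketch, not part of the original lesson):

```python
import math

# Verify log_4(32) = 5/2 by changing both logs to base 2.
value = math.log(32, 2) / math.log(4, 2)  # 5 / 2
assert math.isclose(value, 2.5)
assert math.isclose(4 ** 2.5, 32)         # equivalent exponential check
```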
|
1.3 Scalar projection and first properties
1.4 Equivalence of the definitions
2.1 Application to the law of cosines
3 Triple product
5.5 Dyadics and matrices
Algebraic definition[edit]
{\displaystyle \mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} =\sum _{i=1}^{n}{\color {red}a}_{i}{\color {blue}b}_{i}={\color {red}a}_{1}{\color {blue}b}_{1}+{\color {red}a}_{2}{\color {blue}b}_{2}+\cdots +{\color {red}a}_{n}{\color {blue}b}_{n}}
{\displaystyle {\begin{aligned}\ [{\color {red}1,3,-5}]\cdot [{\color {blue}4,-2,-1}]&=({\color {red}1}\times {\color {blue}4})+({\color {red}3}\times {\color {blue}-2})+({\color {red}-5}\times {\color {blue}-1})\\&=4-6+5\\&=3\end{aligned}}}
{\displaystyle \mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} =\mathbf {\color {red}a} \mathbf {\color {blue}b} ^{\mathsf {T}},}
{\displaystyle \mathbf {\color {blue}b} ^{\mathsf {T}}}
{\displaystyle \mathbf {\color {blue}b} }
{\displaystyle {\begin{bmatrix}\color {red}1&\color {red}3&\color {red}-5\end{bmatrix}}{\begin{bmatrix}\color {blue}4\\\color {blue}-2\\\color {blue}-1\end{bmatrix}}=\color {purple}3}
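The algebraic definition above amounts to a componentwise sum of products, which is straightforward to sketch in plain Python (no library dependencies assumed):

```python
# The worked example [1, 3, -5] . [4, -2, -1] as a plain component sum.
def dot(a, b):
    """Sum of componentwise products; assumes equal-length sequences."""
    return sum(x * y for x, y in zip(a, b))

result = dot([1, 3, -5], [4, -2, -1])
print(result)  # (1*4) + (3*-2) + (-5*-1) = 4 - 6 + 5 = 3
```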
Geometric definition[edit]
In Euclidean space, a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The magnitude of a vector a is denoted by
{\displaystyle \left\|\mathbf {a} \right\|}
. The dot product of two Euclidean vectors a and b is defined by[3][4][1]
{\displaystyle \mathbf {a} \cdot \mathbf {b} =\|\mathbf {a} \|\ \|\mathbf {b} \|\cos \theta ,}
In particular, if the vectors a and b are orthogonal (i.e., their angle is π / 2 or 90°), then
{\displaystyle \cos {\frac {\pi }{2}}=0}
{\displaystyle \mathbf {a} \cdot \mathbf {b} =0.}
At the other extreme, if they are codirectional, then the angle between them is zero with
{\displaystyle \cos 0=1}
{\displaystyle \mathbf {a} \cdot \mathbf {b} =\left\|\mathbf {a} \right\|\,\left\|\mathbf {b} \right\|}
This implies that the dot product of a vector a with itself is
{\displaystyle \mathbf {a} \cdot \mathbf {a} =\left\|\mathbf {a} \right\|^{2},}
{\displaystyle \left\|\mathbf {a} \right\|={\sqrt {\mathbf {a} \cdot \mathbf {a} }},}
Scalar projection and first properties[edit]
{\displaystyle a_{b}=\left\|\mathbf {a} \right\|\cos \theta ,}
{\displaystyle a_{b}=\mathbf {a} \cdot {\widehat {\mathbf {b} }},}
{\displaystyle {\widehat {\mathbf {b} }}=\mathbf {b} /\left\|\mathbf {b} \right\|}
is the unit vector in the direction of b.
{\displaystyle \mathbf {a} \cdot \mathbf {b} =a_{b}\left\|\mathbf {b} \right\|=b_{a}\left\|\mathbf {a} \right\|.}
{\displaystyle (\alpha \mathbf {a} )\cdot \mathbf {b} =\alpha (\mathbf {a} \cdot \mathbf {b} )=\mathbf {a} \cdot (\alpha \mathbf {b} ).}
{\displaystyle \mathbf {a} \cdot (\mathbf {b} +\mathbf {c} )=\mathbf {a} \cdot \mathbf {b} +\mathbf {a} \cdot \mathbf {c} .}
These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that
{\displaystyle \mathbf {a} \cdot \mathbf {a} }
is never negative, and is zero if and only if
{\displaystyle \mathbf {a} =\mathbf {0} }
—the zero vector.
Equivalence of the definitions[edit]
{\displaystyle {\begin{aligned}\mathbf {a} &=[a_{1},\dots ,a_{n}]=\sum _{i}a_{i}\mathbf {e} _{i}\\\mathbf {b} &=[b_{1},\dots ,b_{n}]=\sum _{i}b_{i}\mathbf {e} _{i}.\end{aligned}}}
{\displaystyle \mathbf {e} _{i}\cdot \mathbf {e} _{i}=1}
{\displaystyle \mathbf {e} _{i}\cdot \mathbf {e} _{j}=0.}
Thus in general, we can say that:
{\displaystyle \mathbf {e} _{i}\cdot \mathbf {e} _{j}=\delta _{ij}.}
Vector components in an orthonormal basis
{\displaystyle \mathbf {a} \cdot \mathbf {e} _{i}=\left\|\mathbf {a} \right\|\,\left\|\mathbf {e} _{i}\right\|\cos \theta _{i}=\left\|\mathbf {a} \right\|\cos \theta _{i}=a_{i},}
{\displaystyle \mathbf {a} \cdot \mathbf {b} =\mathbf {a} \cdot \sum _{i}b_{i}\mathbf {e} _{i}=\sum _{i}b_{i}(\mathbf {a} \cdot \mathbf {e} _{i})=\sum _{i}b_{i}a_{i}=\sum _{i}a_{i}b_{i},}
{\displaystyle \mathbf {a} \cdot \mathbf {b} =\mathbf {b} \cdot \mathbf {a} ,}
which follows from the definition (θ is the angle between a and b):[6]
{\displaystyle \mathbf {a} \cdot \mathbf {b} =\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\cos \theta =\left\|\mathbf {b} \right\|\left\|\mathbf {a} \right\|\cos \theta =\mathbf {b} \cdot \mathbf {a} .}
{\displaystyle \mathbf {a} \cdot (\mathbf {b} +\mathbf {c} )=\mathbf {a} \cdot \mathbf {b} +\mathbf {a} \cdot \mathbf {c} .}
{\displaystyle \mathbf {a} \cdot (r\mathbf {b} +\mathbf {c} )=r(\mathbf {a} \cdot \mathbf {b} )+(\mathbf {a} \cdot \mathbf {c} ).}
{\displaystyle (c_{1}\mathbf {a} )\cdot (c_{2}\mathbf {b} )=c_{1}c_{2}(\mathbf {a} \cdot \mathbf {b} ).}
If a ⋅ b = a ⋅ c and a ≠ 0, then we can write: a ⋅ (b − c) = 0 by the distributive law; the result above says this just means that a is perpendicular to (b − c), which still allows (b − c) ≠ 0, and therefore allows b ≠ c.
If a and b are (vector-valued) differentiable functions, then the derivative (denoted by a prime ′) of a ⋅ b is given by the rule (a ⋅ b)′ = a′ ⋅ b + a ⋅ b′.
Application to the law of cosines[edit]
{\displaystyle {\begin{aligned}\mathbf {\color {orange}c} \cdot \mathbf {\color {orange}c} &=(\mathbf {\color {red}a} -\mathbf {\color {blue}b} )\cdot (\mathbf {\color {red}a} -\mathbf {\color {blue}b} )\\&=\mathbf {\color {red}a} \cdot \mathbf {\color {red}a} -\mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} -\mathbf {\color {blue}b} \cdot \mathbf {\color {red}a} +\mathbf {\color {blue}b} \cdot \mathbf {\color {blue}b} \\&=\mathbf {\color {red}a} ^{2}-\mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} -\mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} +\mathbf {\color {blue}b} ^{2}\\&=\mathbf {\color {red}a} ^{2}-2\mathbf {\color {red}a} \cdot \mathbf {\color {blue}b} +\mathbf {\color {blue}b} ^{2}\\\mathbf {\color {orange}c} ^{2}&=\mathbf {\color {red}a} ^{2}+\mathbf {\color {blue}b} ^{2}-2\mathbf {\color {red}a} \mathbf {\color {blue}b} \cos \mathbf {\color {purple}\theta } \\\end{aligned}}}
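The law-of-cosines identity can be checked numerically for any pair of vectors; the 3-D vectors below are arbitrary examples, not from the article:

```python
import math

# Numeric check that c.c = a.a + b.b - 2(a.b) when c = a - b.
a = (1.0, 3.0, -5.0)
b = (4.0, -2.0, -1.0)
c = tuple(ai - bi for ai, bi in zip(a, b))

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
assert math.isclose(dot(c, c), dot(a, a) + dot(b, b) - 2 * dot(a, b))
```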
Triple product[edit]
The scalar triple product of three vectors is defined as
{\displaystyle \mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )=\mathbf {b} \cdot (\mathbf {c} \times \mathbf {a} )=\mathbf {c} \cdot (\mathbf {a} \times \mathbf {b} ).}
The vector triple product is defined by[2][3]
{\displaystyle \mathbf {a} \times (\mathbf {b} \times \mathbf {c} )=(\mathbf {a} \cdot \mathbf {c} )\,\mathbf {b} -(\mathbf {a} \cdot \mathbf {b} )\,\mathbf {c} .}
Mechanical work is the dot product of force and displacement vectors,
Power is the dot product of force and velocity.
Complex vectors[edit]
{\displaystyle \mathbf {a} \cdot \mathbf {b} =\sum _{i}{{a_{i}}\,{\overline {b_{i}}}},}
{\displaystyle {\overline {b_{i}}}}
{\displaystyle b_{i}}
. When vectors are represented by column vectors, the dot product can be expressed as a matrix product involving a conjugate transpose, denoted with the superscript H:
{\displaystyle \mathbf {a} \cdot \mathbf {b} =\mathbf {b} ^{\mathsf {H}}\mathbf {a} .}
{\displaystyle \mathbf {a} \cdot \mathbf {b} ={\overline {\mathbf {b} \cdot \mathbf {a} }}.}
{\displaystyle \cos \theta ={\frac {\operatorname {Re} (\mathbf {a} \cdot \mathbf {b} )}{\left\|\mathbf {a} \right\|\,\left\|\mathbf {b} \right\|}}.}
The self dot product of a complex vector
{\displaystyle \mathbf {a} \cdot \mathbf {a} =\mathbf {a} ^{\mathsf {H}}\mathbf {a} }
, involving the conjugate transpose of a row vector, is also known as the norm squared,
{\textstyle \mathbf {a} \cdot \mathbf {a} =\|\mathbf {a} \|^{2}}
, after the Euclidean norm; it is a vector generalization of the absolute square of a complex scalar (see also: squared Euclidean distance).
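A minimal sketch of the complex dot product, with conjugation on the second argument as defined above (sample vectors are illustrative):

```python
# Complex dot product matching a.b = sum_i a_i * conj(b_i).
def cdot(a, b):
    return sum(x * y.conjugate() for x, y in zip(a, b))

a = [1 + 2j, 3 - 1j]
b = [2 - 1j, 1j]

# Conjugate symmetry: a.b equals the conjugate of b.a.
assert cdot(a, b) == cdot(b, a).conjugate()
# The self dot product is real and non-negative (the norm squared).
assert cdot(a, a).imag == 0 and cdot(a, a).real >= 0
```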
The inner product generalizes the dot product to abstract vector spaces over a field of scalars, being either the field of real numbers
{\displaystyle \mathbb {R} }
or the field of complex numbers
{\displaystyle \mathbb {C} }
. It is usually denoted using angular brackets by
{\displaystyle \left\langle \mathbf {a} \,,\mathbf {b} \right\rangle }
The dot product is defined for vectors that have a finite number of entries. Thus these vectors can be regarded as discrete functions: a length-n vector u is, then, a function with domain {k ∈
{\displaystyle \mathbb {N} }
∣ 1 ≤ k ≤ n}, and ui is a notation for the image of i by the function/vector u.
{\displaystyle \left\langle u,v\right\rangle =\int _{a}^{b}u(x)v(x)dx}
{\displaystyle \left\langle \psi ,\chi \right\rangle =\int _{a}^{b}\psi (x){\overline {\chi (x)}}dx.}
Weight function[edit]
Inner products can have a weight function (i.e., a function which weights each term of the inner product with a value). Explicitly, the inner product of functions
{\displaystyle u(x)}
{\displaystyle v(x)}
{\displaystyle r(x)>0}
{\displaystyle \left\langle u,v\right\rangle =\int _{a}^{b}r(x)u(x)v(x)dx.}
Dyadics and matrices[edit]
{\displaystyle \mathbf {A} :\mathbf {B} =\sum _{i}\sum _{j}A_{ij}{\overline {B_{ij}}}=\operatorname {tr} (\mathbf {B} ^{\mathsf {H}}\mathbf {A} )=\operatorname {tr} (\mathbf {A} \mathbf {B} ^{\mathsf {H}}).}
{\displaystyle \mathbf {A} :\mathbf {B} =\sum _{i}\sum _{j}A_{ij}B_{ij}=\operatorname {tr} (\mathbf {B} ^{\mathsf {T}}\mathbf {A} )=\operatorname {tr} (\mathbf {A} \mathbf {B} ^{\mathsf {T}})=\operatorname {tr} (\mathbf {A} ^{\mathsf {T}}\mathbf {B} )=\operatorname {tr} (\mathbf {B} \mathbf {A} ^{\mathsf {T}}).}
(For real matrices)
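The real-matrix double-dot identity can be checked directly; the 2x2 matrices below are arbitrary examples (a sketch without external libraries):

```python
# Check the real-matrix identity A:B = sum_ij A_ij B_ij = tr(B^T A).
def frobenius(A, B):
    return sum(A[i][j] * B[i][j]
               for i in range(len(A)) for j in range(len(A[0])))

def trace_BtA(A, B):
    # (B^T A)_jj = sum_i B_ij A_ij, so the trace reproduces the double sum.
    return sum(sum(B[i][j] * A[i][j] for i in range(len(A)))
               for j in range(len(A[0])))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert frobenius(A, B) == trace_BtA(A, B) == 70
```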
A dot product function is included in:
BLAS level 1 real SDOT, DDOT; complex CDOTU, ZDOTU = X^T * Y, CDOTC, ZDOTC = X^H * Y
Matlab as A' * B or conj(transpose(A)) * B or sum(conj(A) .* B)
GNU Octave as sum(conj(X) .* Y, dim)
Intel oneAPI Math Kernel Library real p?dot dot = sub(x)'*sub(y); complex p?dotc dotc = conjg(sub(x)')*sub(y)
Dot product representation of a graph
Euclidean norm, the square-root of the self dot product
^ The term scalar product means literally "product with a scalar as a result". It is also used sometimes for other symmetric bilinear forms, for example in a pseudo-Euclidean space.
^ a b "Dot Product". www.mathsisfun.com. Retrieved 2020-09-06.
^ a b c d e f S. Lipschutz; M. Lipson (2009). Linear Algebra (Schaum's Outlines) (4th ed.). McGraw Hill. ISBN 978-0-07-154352-1.
^ a b c M.R. Spiegel; S. Lipschutz; D. Spellman (2009). Vector Analysis (Schaum's Outlines) (2nd ed.). McGraw Hill. ISBN 978-0-07-161545-7.
^ A I Borisenko; I E Taparov (1968). Vector and tensor analysis with applications. Translated by Richard Silverman. Dover. p. 14.
^ Arfken, G. B.; Weber, H. J. (2000). Mathematical Methods for Physicists (5th ed.). Boston, MA: Academic Press. pp. 14–15. ISBN 978-0-12-059825-0. .
^ Nykamp, Duane. "The dot product". Math Insight. Retrieved September 6, 2020.
^ T. Banchoff; J. Wermer (1983). Linear Algebra Through Geometry. Springer Science & Business Media. p. 12. ISBN 978-1-4684-0161-5.
^ A. Bedford; Wallace L. Fowler (2008). Engineering Mechanics: Statics (5th ed.). Prentice Hall. p. 60. ISBN 978-0-13-612915-8.
^ K.F. Riley; M.P. Hobson; S.J. Bence (2010). Mathematical methods for physics and engineering (3rd ed.). Cambridge University Press. ISBN 978-0-521-86153-3.
^ M. Mansfield; C. O'Sullivan (2011). Understanding Physics (4th ed.). John Wiley & Sons. ISBN 978-0-47-0746370.
^ Berberian, Sterling K. (2014) [1992], Linear Algebra, Dover, p. 287, ISBN 978-0-486-78055-9
"Inner product", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
|
1 Department of Electrical & Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh.
2 Department of Electrical and Electronic Engineering, Bangladesh University of Business and Technology, Dhaka, Bangladesh.
3 Department of Electrical and Electronic Engineering, Jashore University of Science and Technology, Jashore, Bangladesh.
Abstract: This paper presents the analytical design and high performance of a step-constant tapered slot antenna (STSA) for ultra-wideband applications. The return loss, radiation pattern, antenna gain, and level of cross polarization of this antenna are presented and analyzed. Using a Rogers RO3006 substrate with a relative permittivity of 6.15, the proposed antenna covers the ultra-wideband (UWB) range from 3.1 GHz to 10.6 GHz. It is observed that the return loss and gain increase with increasing step size. Simulation results obtained with the CST Microwave Studio commercial software (version 2015) show that the optimum return loss, directivity, and gain are −43 dB, 10.52 dBi, and 10.20 dB, respectively, for a step size of 15. Therefore, the newly proposed antenna is a strong candidate for ultra-wideband applications.
Keywords: Antenna Gain, Voltage Standing Wave Ratio (VSWR), Step Constant Tapered Slot Antenna (STSA), Ultra Wideband (UWB), Reflection Coefficient
\text{VSWR}=\frac{1+|\Gamma |}{1-|\Gamma |}
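The VSWR formula above can be applied directly to a measured return loss. A sketch (the conversion from return loss in dB to |Γ| is standard, but the helper function and its name are illustrative):

```python
# Convert a return loss in dB to |Gamma| and then to VSWR.
# A return loss of -43 dB (the paper's optimum) gives |Gamma| = 10^(-43/20).
def vswr_from_return_loss(rl_db):
    gamma = 10 ** (-abs(rl_db) / 20)  # magnitude of the reflection coefficient
    return (1 + gamma) / (1 - gamma)

print(round(vswr_from_return_loss(-43), 3))  # 1.014, well within the usual VSWR < 2 target
```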
Cite this paper: Aktar, M. , Rana, M. , Hossain, M. and Hossain, M. (2019) Design and Implementation of Step-Constant Tapered Slot Antennas for UWB Application. Journal of Sensor Technology, 9, 91-100. doi: 10.4236/jst.2019.94008.
|
Gas is one of the four fundamental states of matter (the others being solid, liquid, and plasma).[1]
A pure gas may be made up of individual atoms (e.g. a noble gas like neon), elemental molecules made from one type of atom (e.g. oxygen), or compound molecules made from a variety of atoms (e.g. carbon dioxide). A gas mixture, such as air, contains a variety of pure gases. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles. This separation usually makes a colorless gas invisible to the human observer.
The gaseous state of matter occurs between the liquid and plasma states,[2] the latter of which provides the upper temperature boundary for gases. Bounding the lower end of the temperature scale lie degenerative quantum gases[3] which are gaining increasing attention.[4] High-density atomic gases super-cooled to very low temperatures are classified by their statistical behavior as either Bose gases or Fermi gases. For a comprehensive listing of these exotic states of matter see list of states of matter.
The only chemical elements that are stable diatomic homonuclear molecules at STP are hydrogen (H2), nitrogen (N2), oxygen (O2), and two halogens: fluorine (F2) and chlorine (Cl2). When grouped together with the monatomic noble gases – helium (He), neon (Ne), argon (Ar), krypton (Kr), xenon (Xe), and radon (Rn) – these gases are referred to as "elemental gases".
The word gas was first used by the early 17th-century Flemish chemist Jan Baptist van Helmont.[5] He identified carbon dioxide, the first known gas other than air.[6] Van Helmont's word appears to have been simply a phonetic transcription of the Ancient Greek word χάος Chaos – the g in Dutch being pronounced like ch in "loch" (voiceless velar fricative, /x/) – in which case Van Helmont was simply following the established alchemical usage first attested in the works of Paracelsus. According to Paracelsus's terminology, chaos meant something like "ultra-rarefied water".[7]
An alternative story is that Van Helmont's term was derived from "gahst (or geist), which signifies a ghost or spirit".[8] That story is given no credence by the editors of the Oxford English Dictionary.[9] In contrast, French-American historian Jacques Barzun speculated that Van Helmont had borrowed the word from the German Gäscht, meaning the froth resulting from fermentation.[10]
Gas particles are widely separated from one another, and consequently have weaker intermolecular bonds than liquids or solids. These intermolecular forces result from electrostatic interactions between gas particles. Like-charged areas of different gas particles repel, while oppositely charged regions of different gas particles attract one another; gases that contain permanently charged ions are known as plasmas. Gaseous compounds with polar covalent bonds contain permanent charge imbalances and so experience relatively strong intermolecular forces, although the compound's net charge remains neutral. Transient, randomly induced charges exist across non-polar covalent bonds of molecules, and the electrostatic interactions caused by them are referred to as Van der Waals forces. The interaction of these intermolecular forces varies within a substance, which determines many of the physical properties unique to each gas.[11][12] A comparison of boiling points for compounds formed by ionic and covalent bonds leads us to this conclusion.[13] The drifting smoke particles in the image provide some insight into low-pressure gas behavior.
Macroscopic view of gasesEdit
The speed of a gas particle is proportional to its absolute temperature. The volume of the balloon in the video shrinks when the trapped gas particles slow down with the addition of extremely cold nitrogen. The temperature of any physical system is related to the motions of the particles (molecules and atoms) which make up the [gas] system.[16] In statistical mechanics, temperature is the measure of the average kinetic energy stored in a molecule (also known as the thermal energy). The methods of storing this energy are dictated by the degrees of freedom of the molecule itself (energy modes). Thermal (kinetic) energy added to a gas or liquid (an endothermic process) produces translational, rotational, and vibrational motion. In contrast, a solid can only increase its internal energy by exciting additional vibrational modes, as the crystal lattice structure prevents both translational and rotational motion. These heated gas molecules have a greater speed range (wider distribution of speeds) with a higher average or mean speed. The variance of this distribution is due to the speeds of individual particles constantly varying, due to repeated collisions with other particles. The speed range can be described by the Maxwell–Boltzmann distribution. Use of this distribution implies ideal gases near thermodynamic equilibrium for the system of particles being considered.
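The mean speed implied by the Maxwell-Boltzmann distribution is sqrt(8kT/(pi*m)). A sketch of that formula; the choice of N2 at 300 K is an illustrative assumption, not from the text:

```python
import math

# Mean speed from the Maxwell-Boltzmann distribution: <v> = sqrt(8 k T / (pi m)).
k_B = 1.380649e-23                  # Boltzmann constant, J/K
m_N2 = 28.0134e-3 / 6.02214076e23   # mass of one N2 molecule, kg

def mean_speed(T, m):
    return math.sqrt(8 * k_B * T / (math.pi * m))

print(round(mean_speed(300.0, m_N2)))  # about 476 m/s for N2 at 300 K
```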
Specific volumeEdit
When performing a thermodynamic analysis, it is typical to speak of intensive and extensive properties. Properties which depend on the amount of gas (either by mass or volume) are called extensive properties, while properties that do not depend on the amount of gas are called intensive properties. Specific volume is an example of an intensive property because it is the ratio of volume occupied by a unit of mass of a gas that is identical throughout a system at equilibrium.[17] 1000 atoms of a gas occupy the same space as any other 1000 atoms for any given temperature and pressure. This concept is easier to visualize for solids such as iron which are incompressible compared to gases. However, volume itself, as opposed to specific volume, is an extensive property.
Microscopic view of gasesEdit
Gas-phase particles (atoms, molecules, or ions) move around freely in the absence of an applied electric field.
If one could observe a gas under a powerful microscope, one would see a collection of particles without any definite shape or volume that are in more or less random motion. These gas particles only change direction when they collide with another particle or with the sides of the container. This microscopic view of gas is well-described by statistical mechanics, but it can be described by many different theories. The kinetic theory of gases, which makes the assumption that these collisions are perfectly elastic, does not account for intermolecular forces of attraction and repulsion.
For example: Imagine you have a sealed container of a fixed-size (a constant volume), containing a fixed-number of gas particles; starting from absolute zero (the theoretical temperature at which atoms or molecules have no thermal energy, i.e. are not moving or vibrating), you begin to add energy to the system by heating the container, so that energy transfers to the particles inside. Once their internal energy is above zero-point energy, meaning their kinetic energy (also known as thermal energy) is non-zero, the gas particles will begin to move around the container. As the box is further heated (as more energy is added), the individual particles increase their average speed as the system's total internal energy increases. The higher average-speed of all the particles leads to a greater rate at which collisions happen (i.e. greater number of collisions per unit of time), between particles and the container, as well as between the particles themselves.
Thermal motion and statistical mechanicsEdit
In the kinetic theory of gases, kinetic energy is assumed to purely consist of linear translations according to a speed distribution of particles in the system. However, in real gases and other real substances, the motions which define the kinetic energy of a system (which collectively determine the temperature) are much more complex than simple linear translation due to the more complex structure of molecules, compared to single atoms which act similarly to point-masses. In real thermodynamic systems, quantum phenomena play a large role in determining thermal motions. The random, thermal motions (kinetic energy) in molecules are a combination of a finite set of possible motions including translation, rotation, and vibration. This finite range of possible motions, along with the finite set of molecules in the system, leads to a finite number of microstates within the system; we call the set of all microstates an ensemble. Specific to atomic or molecular systems, we could potentially have three different kinds of ensemble, depending on the situation: microcanonical ensemble, canonical ensemble, or grand canonical ensemble. Specific combinations of microstates within an ensemble are how we truly define the macrostate of the system (temperature, pressure, energy, etc.). In order to do that, we must first count all microstates through use of a partition function. The use of statistical mechanics and the partition function is an important tool throughout all of physical chemistry, because it is the key connection between the microscopic states of a system and the macroscopic variables which we can measure, such as temperature, pressure, heat capacity, internal energy, enthalpy, and entropy, just to name a few. (Read: Partition function Meaning and significance)
The energy of a molecule, or of a system of molecules, can sometimes be approximated by the equipartition theorem, which greatly simplifies calculation. However, this method assumes all molecular degrees of freedom are equally populated, and therefore equally utilized for storing energy within the molecule. It would imply that internal energy changes linearly with temperature, which is not the case. This ignores the fact that heat capacity changes with temperature, due to certain degrees of freedom being unreachable (a.k.a. "frozen out") at lower temperatures. As the internal energy of molecules increases, so does the ability to store energy within additional degrees of freedom. As more degrees of freedom become available to hold energy, the molar heat capacity of the substance increases.[19]
Brownian motionEdit
Intermolecular forces - the primary difference between Real and Ideal gasesEdit
Arising from the study of physical chemistry, one of the most prominent intermolecular forces throughout physics is the van der Waals force. Van der Waals forces play a key role in determining nearly all physical properties of fluids such as viscosity, flow rate, and gas dynamics (see physical characteristics section). The van der Waals interactions between gas molecules are the reason why modeling a "real gas" is more mathematically difficult than an "ideal gas". Ignoring these proximity-dependent forces allows a real gas to be treated like an ideal gas, which greatly simplifies calculation.
Isothermal curves depicting the non-ideality of a real gas. The changes in volume (depicted by Z, compressibility factor) which occur as the pressure is varied. The compressibility factor Z, is equal to the ratio Z = PV/nRT. An ideal gas, with compressibility factor Z = 1, is described by the horizontal line where the y-axis is equal to 1. Non-ideality can be described as the deviation of a gas above or below Z = 1.
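The compressibility factor Z = PV/(nRT) described in the caption is a one-line computation; a sketch (the sample pressure and temperature are illustrative):

```python
# Compressibility factor Z = PV/(nRT); Z = 1 for an ideal gas.
R = 8.314462618  # gas constant, J/(mol K)

def compressibility(P, V, n, T):
    return P * V / (n * R * T)

# One mole of an ideal gas at 100 kPa and 300 K occupies V = nRT/P,
# so its compressibility factor is exactly 1.
V_ideal = 1.0 * R * 300.0 / 100e3
assert abs(compressibility(100e3, V_ideal, 1.0, 300.0) - 1.0) < 1e-12
```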
The intermolecular attractions and repulsions between two gas molecules are dependent on the amount of distance between them. The combined attractions and repulsions are well-modelled by the Lennard-Jones potential, which is one of the most extensively studied of all interatomic potentials describing the potential energy of molecular systems. The Lennard-Jones potential between molecules can be broken down into two separate components: a long-distance attraction due to the London dispersion force, and a short-range repulsion due to electron-electron exchange interaction (which is related to the Pauli exclusion principle).
When two molecules are relatively distant (meaning they have a high potential energy), they experience a weak attracting force, causing them to move toward each other, lowering their potential energy. However, if the molecules are too far away, then they would not experience attractive force of any significance. Additionally, if the molecules get too close then they will collide, and experience a very high repulsive force (modelled by Hard spheres) which is a much stronger force than the attractions, so that any attraction due to proximity is disregarded.
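The Lennard-Jones 12-6 potential described above is easy to sketch; the well depth and length scale below are in reduced units (an assumption for illustration):

```python
# Lennard-Jones 12-6 potential: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
# eps (well depth) and sigma (zero-crossing distance) are illustrative parameters.
def lennard_jones(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# The minimum sits at r = 2^(1/6) * sigma with depth -eps.
r_min = 2 ** (1 / 6)
assert abs(lennard_jones(r_min) + 1.0) < 1e-12
# Repulsive wall at short range, weak attraction at long range:
assert lennard_jones(0.8) > 0 and -1 < lennard_jones(2.0) < 0
```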
Ideal and perfect gasEdit
{\displaystyle PV=nRT,}
{\displaystyle P=\rho R_{s}T,}
{\displaystyle R_{s}}
Real gasEdit
21 April 1990 eruption of Mount Redoubt, Alaska, illustrating real gases not in thermodynamic equilibrium.
Permanent gasEdit
Historical researchEdit
Boyle's lawEdit
{\displaystyle PV=k}
{\displaystyle \qquad P_{1}V_{1}=P_{2}V_{2}.}
Charles's lawEdit
{\displaystyle {\frac {V_{1}}{T_{1}}}={\frac {V_{2}}{T_{2}}}}
Gay-Lussac's lawEdit
{\displaystyle {\frac {P_{1}}{T_{1}}}={\frac {P_{2}}{T_{2}}}\,}
Avogadro's law states that the volume occupied by an ideal gas is proportional to the number of moles (or molecules) present in the container. This gives rise to the molar volume of a gas, which at STP is 22.4 dm3 (or liters). The relation is given by
{\displaystyle {\frac {V_{1}}{n_{1}}}={\frac {V_{2}}{n_{2}}}\,}
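The molar volume of 22.4 L at STP quoted above follows directly from the ideal gas law PV = nRT; a sketch using STP as 0 °C and 1 atm:

```python
# Ideal gas law PV = nRT: molar volume at STP (0 C, 1 atm) is about 22.4 L.
R = 8.314462618  # gas constant, J/(mol K)
P = 101325.0     # Pa (1 atm)
T = 273.15       # K

V_molar = R * T / P              # m^3 per mole
print(round(V_molar * 1000, 1))  # 22.4 (liters)
```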
Dalton's lawEdit
Satellite view of weather pattern in vicinity of Robinson Crusoe Islands on 15 September 1999, shows a turbulent cloud pattern called a Kármán vortex street
Delta wing in wind tunnel. The shadows form as the indices of refraction change within the gas as it compresses on the leading edge of this wing.
Boundary layerEdit
Maximum entropy principleEdit
Thermodynamic equilibriumEdit
^ "Gas". Merriam-Webster. {{cite web}}: CS1 maint: url-status (link)
^ The work by T. Zelevinski provides another link to recent research about strontium in this new field of study. See Tanya Zelevinsky (2009). "84Sr—just right for forming a Bose-Einstein condensate". Physics. 2: 94. Bibcode:2009PhyOJ...2...94Z. doi:10.1103/physics.2.94.
^ For the Bose–Einstein condensate see Quantum Gas Microscope Offers Glimpse Of Quirky Ultracold Atoms. ScienceDaily. 4 November 2009.
^ J. B. van Helmont, Ortus medicinae. … (Amsterdam, (Netherlands): Louis Elzevir, 1652 (first edition: 1648)). The word "gas" first appears on page 58, where he mentions: "… Gas (meum scil. inventum) …" (… gas (namely, my discovery) …). On page 59, he states: "… in nominis egestate, halitum illum, Gas vocavi, non longe a Chao …" (… in need of a name, I called this vapor "gas", not far from "chaos" …)
^ Draper, John William (1861). A textbook on chemistry. New York: Harper and Sons. p. 178.
^ Barzun, Jacques (2000). From Dawn to Decadence: 500 Years of Western Cultural Life. New York: HarperCollins Publishers. p. 199.
^ One noticeable exception to this physical property connection is conductivity which varies depending on the state of matter (ionic compounds in water) as described by Michael Faraday in 1833 when he noted that ice does not conduct a current. See page 45 of John Tyndall's Faraday as a Discoverer (1868).
^ John S. Hutchinson (2008). Concept Development Studies in Chemistry. p. 67.
^ Jeschke, Gunnar (26 November 2020). "Canonical Ensemble". Archived from the original on 2021-05-20.
^ "Lennard-Jones Potential - Chemistry LibreTexts". 2020-08-22. Archived from the original on 2020-08-22. Retrieved 2021-05-20.
^ "14.11: Real and Ideal Gases - Chemistry LibreTexts". 2021-02-06. Archived from the original on 2021-02-06. Retrieved 2021-05-20.
^ "Permanent gas". www.oxfordreference.com. Oxford University Press. Retrieved 3 April 2021.
^ John P. Millington (1906). John Dalton. pp. 72, 77–78.
Lewes, Vivian Byam; Lunge, Georg (1911). "Gas" . Encyclopædia Britannica. Vol. 11 (11th ed.). p. 481–493.
|
GraphPolynomial - Maple Help
Home : Support : Online Help : Mathematics : Discrete Mathematics : Graph Theory : GraphTheory Package : GraphPolynomial
construct graph polynomial
GraphPolynomial(G,x)
name or list(algebraic)
GraphPolynomial(G,x) returns a polynomial in the variables
{x}_{1}
{x}_{n}
when x is a symbol and G is a graph with
n
vertices. The polynomial consists only of linear factors of the form
\left({x}_{j}-{x}_{k}\right)
where j and k represent adjacent vertices.
If x is a list of algebraic expressions whose length is equal to the number of vertices of G, the polynomial is formed using linear factors of the form
\left(x\left[j\right]-x\left[k\right]\right)
where j and k represent adjacent vertices.
\mathrm{with}\left(\mathrm{GraphTheory}\right):
G≔\mathrm{Graph}\left([1,2,3,4,5,6],{{1,4},{2,6},{3,4},{3,5},{4,5},{4,6},{5,6}}\right):
\mathrm{GraphPolynomial}\left(G,x\right)
\left(\textcolor[rgb]{0,0,1}{\mathrm{x1}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{x4}}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{\mathrm{x2}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{x6}}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{\mathrm{x3}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{x4}}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{\mathrm{x3}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{x5}}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{\mathrm{x4}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{x5}}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{\mathrm{x4}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{x6}}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{\mathrm{x5}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{\mathrm{x6}}\right)
\mathrm{GraphPolynomial}\left(\mathrm{CycleGraph}\left(4\right),[x,y,z,w]\right)
\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{y}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{w}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{z}\right)\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{z}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{w}\right)
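The same product of edge factors is easy to evaluate numerically outside Maple. A Python sketch (the function name is mine) illustrates the connection to graph coloring studied in the Alon–Tarsi reference below: the polynomial vanishes exactly when some pair of adjacent vertices receives equal values, so a proper coloring gives a nonzero value.

```python
from math import prod

def graph_polynomial_at(edges, values):
    """Evaluate the product over edges {j, k}, j < k, of (values[j] - values[k])."""
    return prod(values[j] - values[k] for j, k in edges)

# The example graph above, with vertices 1..6.
edges = [(1, 4), (2, 6), (3, 4), (3, 5), (4, 5), (4, 6), (5, 6)]

coloring = {1: 1, 2: 0, 3: 2, 4: 0, 5: 1, 6: 2}   # a proper 3-coloring
clash    = {1: 0, 2: 0, 3: 2, 4: 0, 5: 1, 6: 2}   # vertices 1 and 4 equal

print(graph_polynomial_at(edges, coloring) != 0)  # True
print(graph_polynomial_at(edges, clash))          # 0
```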
Noga Alon and Michael Tarsi, "A note on graph colorings and graph polynomials", J. Combin. Theory Ser. B 70 (1997), no. 1, 197–201, doi: 10.1006/jctb.1997.1753
The GraphTheory[GraphPolynomial] command was updated in Maple 2019.
|
Circular_orbit Knowpia
Circular accelerationEdit
{\displaystyle a\,={\frac {v^{2}}{r}}\,={\omega ^{2}}{r}}
{\displaystyle v\,}
{\displaystyle r\,}
{\displaystyle \omega \ }
{\displaystyle \mathbf {a} }
The speed (or the magnitude of velocity) relative to the central object is constant:[1]: 30
{\displaystyle v={\sqrt {GM\! \over {r}}}={\sqrt {\mu \over {r}}}}
{\displaystyle G}
{\displaystyle M}
{\displaystyle (M_{1}+M_{2})}
{\displaystyle \mu =GM}
The orbit equation in polar coordinates, which in general gives r in terms of θ, reduces to:
{\displaystyle r={{h^{2}} \over {\mu }}}
{\displaystyle h=rv}
{\displaystyle \mu =rv^{2}}
Angular speed and orbital periodEdit
{\displaystyle \omega ^{2}r^{3}=\mu }
{\displaystyle T\,\!}
) can be computed as:[1]: 28
{\displaystyle T=2\pi {\sqrt {r^{3} \over {\mu }}}}
{\displaystyle T_{ff}={\frac {\pi }{2{\sqrt {2}}}}{\sqrt {r^{3} \over {\mu }}}}
{\displaystyle T_{par}={\frac {\sqrt {2}}{3}}{\sqrt {r^{3} \over {\mu }}}}
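The speed and period formulas above can be sanity-checked numerically. A Python sketch for a circular orbit 400 km above Earth's surface (μ for Earth is the standard value, not taken from this page):

```python
import math

mu = 3.986004418e14       # m^3/s^2, Earth's standard gravitational parameter
r  = 6378.137e3 + 400e3   # m, orbit radius = Earth's equatorial radius + 400 km

v = math.sqrt(mu / r)                    # circular orbit speed
T = 2 * math.pi * math.sqrt(r**3 / mu)   # orbital period

print(round(v))   # ~7669 m/s
print(round(T))   # ~5554 s (about 92.6 minutes)
```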
The fact that the formulas only differ by a constant factor is a priori clear from dimensional analysis.
{\displaystyle \epsilon \,}
{\displaystyle \epsilon =-{v^{2} \over {2}}}
{\displaystyle \epsilon =-{\mu \over {2r}}}
Thus the virial theorem[1]: 72 applies even without taking a time-average:
The escape velocity from any distance is √2 times the speed in a circular orbit at that distance: the kinetic energy is twice as much, hence the total energy is zero.
Delta-v to reach a circular orbitEdit
Orbital velocity in general relativityEdit
{\displaystyle r}
{\displaystyle v={\sqrt {\frac {GM}{r-r_{S}}}}}
{\displaystyle \scriptstyle r_{S}={\frac {2GM}{c^{2}}}}
{\displaystyle \scriptstyle c=G=1}
{\displaystyle u^{\mu }=({\dot {t}},0,0,{\dot {\phi }})}
{\displaystyle \scriptstyle r}
{\displaystyle \scriptstyle \theta ={\frac {\pi }{2}}}
{\displaystyle \scriptstyle \tau }
{\displaystyle \left(1-{\frac {2M}{r}}\right){\dot {t}}^{2}-r^{2}{\dot {\phi }}^{2}=1}
{\displaystyle {\ddot {x}}^{\mu }+\Gamma _{\nu \sigma }^{\mu }{\dot {x}}^{\nu }{\dot {x}}^{\sigma }=0}
{\displaystyle \scriptstyle \mu =r}
{\displaystyle {\frac {M}{r^{2}}}\left(1-{\frac {2M}{r}}\right){\dot {t}}^{2}-r\left(1-{\frac {2M}{r}}\right){\dot {\phi }}^{2}=0}
{\displaystyle {\dot {\phi }}^{2}={\frac {M}{r^{3}}}{\dot {t}}^{2}}
{\displaystyle \left(1-{\frac {2M}{r}}\right){\dot {t}}^{2}-{\frac {M}{r}}{\dot {t}}^{2}=1}
{\displaystyle {\dot {t}}^{2}={\frac {r}{r-3M}}}
{\displaystyle \scriptstyle r}
{\displaystyle \scriptstyle \partial _{t}}
{\displaystyle v^{\mu }=\left({\sqrt {\frac {r}{r-2M}}},0,0,0\right)}
{\displaystyle \gamma =g_{\mu \nu }u^{\mu }v^{\nu }=\left(1-{\frac {2M}{r}}\right){\sqrt {\frac {r}{r-3M}}}{\sqrt {\frac {r}{r-2M}}}={\sqrt {\frac {r-2M}{r-3M}}}}
{\displaystyle v={\sqrt {\frac {M}{r-2M}}}}
{\displaystyle v={\sqrt {\frac {GM}{r-r_{S}}}}}
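In geometric units (G = c = 1, writing M for GM), the result v = √(M/(r − 2M)) is easy to check numerically: at r = 6M, the innermost stable circular orbit of a Schwarzschild black hole, the locally measured speed is exactly half the speed of light, and at the photon sphere r = 3M it reaches the speed of light. A minimal Python sketch:

```python
import math

def v_circular(r, M=1.0):
    """Locally measured speed of a circular orbit at Schwarzschild
    coordinate radius r, in units of c, with G = c = 1."""
    return math.sqrt(M / (r - 2 * M))

print(v_circular(6.0))   # 0.5 at the ISCO, r = 6M
print(v_circular(3.0))   # 1.0 at the photon sphere, r = 3M
```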
^ a b c Lissauer, Jack J.; de Pater, Imke (2019). Fundamental Planetary Sciences : physics, chemistry, and habitability. New York, NY, USA: Cambridge University Press. p. 604. ISBN 9781108411981.
|
Walsh permutation; bit permutation; calc - Wikiversity
Walsh permutation; bit permutation; calc
Finite permutations p and corresponding bit permutations P are composed in opposite directions:
{\displaystyle p_{a}*p_{b}*...=p_{x}~~~~\Leftrightarrow ~~~~...*P_{b}*P_{a}=P_{x}}
1 p1 * p8
2 p1 * p3 * p8
p1 * p8Edit
{\displaystyle p_{1}*p_{8}=p_{9}~~~~\Leftrightarrow ~~~~P_{8}*P_{1}=P_{9}}
p1 = [0 1 0 0;
p1 * [1:4]'
p1 * p8 * [1:4]'
% The result is p9.
% p1 after p8 = p9
P1 = [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1];
P1 * [0:15]'
P1 * P8 * [0:15]'
% The result is not P9 but P10, the inverse permutation of P9.
% The row permutation matrices of the big permutations have to be multiplied like column permutation matrices from right to left
% to get the result corresponding to the small permutations:
% P1 before P8 = P9
p1 * p3 * p8Edit
{\displaystyle p_{1}*p_{3}*p_{8}=p_{6}~~~~\Leftrightarrow ~~~~P_{8}*P_{3}*P_{1}=P_{6}}
p1 * p3 * p8 * [1:4]'
% p1 after p3 after p8 = p6
P8 * P3 * P1 * [0:15]'
% P1 before P3 before P8 = P6
{\displaystyle p_{8}*p_{3}*p_{1}=p_{22}~~~~\Leftrightarrow ~~~~P_{1}*P_{3}*P_{8}=P_{22}}
% The result is p22.
% p8 after p3 after p1 = p22
% P8 before P3 before P1 = P22
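The order reversal can also be checked without writing out the 16×16 matrices. A small permutation p of bit positions induces a big permutation on integers (here with the assumed convention that output bit i is input bit p[i]); composing two induced maps then corresponds to composing the position permutations in the opposite order. A Python sketch, not tied to the specific p1, p3, p8 above:

```python
def induced(p, x):
    """Big permutation: output bit i of x is input bit p[i] (assumed convention)."""
    return sum(((x >> p[i]) & 1) << i for i in range(len(p)))

p = (1, 2, 0)   # rotate bit positions
q = (1, 0, 2)   # swap bit positions 0 and 1

# Applying induced(q) then induced(p) equals the induced map of the
# composition r[i] = q[p[i]] ("p before q" on positions):
r = tuple(q[p[i]] for i in range(3))
assert all(induced(p, induced(q, x)) == induced(r, x) for x in range(8))

# The opposite composition order gives a genuinely different map:
r2 = tuple(p[q[i]] for i in range(3))
assert any(induced(r, x) != induced(r2, x) for x in range(8))
```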
Retrieved from "https://en.wikiversity.org/w/index.php?title=Walsh_permutation;_bit_permutation;_calc&oldid=2210300"
|
STEP - ICON DevPortal
STEP refers to the unit that measures the amount of computational effort required to execute specific operations on the ICON network.
Each ICON transaction requires computational resources to execute. Because ICON is a financial system, each transaction also requires a fee. In the ICON network, these fees are referred to as STEP.
STEP fees are paid in ICX, the ICON network's native currency. The initial STEP price is approximately as follows:
STEP=1*10^{-8} ICX
However, the ICON network delegates can vote to change the STEP fee as supply and demand for computational power in the ICON network changes.
Why can STEP fees get so high?
STEP fees rise with the increased usage of the ICON network. A smart contract developer should be aware of two aspects of ICON network usage:
How complicated is an individual transaction?
How many applications are attempting to perform transactions during each new block?
For point 1, work to make each transaction efficient, so as to minimize your own costs or the costs to your users.
For point 2, note that as more applications conduct transactions, the block validators will have to use more computational power. If there is a high demand for computational power, then ICON network delegates can vote to increase the STEP fee.
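At the initial price quoted above, the ICX cost of a transaction is simply the STEP consumed times 10⁻⁸. A hedged Python sketch (the step count is an arbitrary illustration, not an ICON constant, and the live price may have been changed by governance):

```python
STEP_PRICE_ICX = 1e-8   # initial STEP price from above; delegates can vote to change it

def tx_fee_icx(steps_used):
    """ICX fee for a transaction consuming `steps_used` STEP."""
    return steps_used * STEP_PRICE_ICX

print(tx_fee_icx(1_000_000))   # 0.01 ICX for a million-STEP transaction
```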
|
Susannah is drawing a card from a standard
52
-card deck. Click playing cards to learn what playing cards are included in a deck.
What is the probability that she draws a card that is less than
5
How many cards in a deck are less than
5
How does that compare to the total number of cards?
What is the probability that the card she draws is
5
or more? Use a complement.
Remember that a complement of an event is all the outcomes in the sample space that are not in the original event.
1-\frac{\large\text{# of cards less than}\ 5}{\large\text{total number of cards}}=\text{the probability of drawing a card that is 5 or more}
What is the probability that the card she draws is a red card or a face card? Show how you can use the Addition Rule to determine this probability.
The Addition Rule is
P(\text{red or face card})=P(\text{red card})+P(\text{face card})-P(\text{red and face card})
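All three probabilities can be checked by enumerating the deck. A Python sketch (assuming aces count as 1, which the problem's "playing cards" link may define differently):

```python
from fractions import Fraction

ranks = list(range(1, 14))   # 1 = ace (assumed low) ... 11-13 = J, Q, K
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]

def P(event):
    return Fraction(sum(1 for c in deck if event(c)), len(deck))

p_less = P(lambda c: c[0] < 5)
print(p_less)       # 16/52 = 4/13
print(1 - p_less)   # complement: 9/13

red = lambda c: c[1] in ("hearts", "diamonds")
face = lambda c: c[0] >= 11
# Addition Rule: P(red or face) = P(red) + P(face) - P(red and face)
print(P(red) + P(face) - P(lambda c: red(c) and face(c)))   # 8/13
```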
|
Treeplanter's Pot - Ring of Brodgar
Treeplanter's Pot
Skill(s) Required Pottery, Plant Lore
Object(s) Required Unfired Treeplanter's Pot
Required By Bush, Tree
Craft > Pots & Pottery > Treeplanter's Pot
A treeplanter's pot is an important tool used with Soil or Bat Guano, Water, a Herbalist Table and any one of the tree "seeds" such as an Apple Core, Fir Cone, or Mulberry to grow a new Tree (full list). The quality is important for treeplanter's pots as it is one of the factors in determining the tree's quality.
Trees grown in a Treeplanter's Pot will never naturally stunt, ensuring the tree will grow to adulthood successfully, unless you harvest materials prematurely.
Materials required: 1x Treeplanter's Pot, Any 4x of Soil, Mulch, Earthworm, or Bat Guano, 1.0L Water, 1x Tree Seed, 1x Herbalist Table
Place 4x units of Soil, Mulch, Earthworms, or Bat Guano into the Treeplanter's Pot (left-click on the soil, right-click on the pot).
Place 1.0L of Water into the pot (left-click on the pot, right-click on the water source, or vice versa).
After the pot is filled with Soil and Water, put the Tree Seed into the pot (left-click on the seed, right-click on the pot).
Place the filled pot onto a Herbalist Table and leave it there for 4 in-game hours (73 minutes real-time).
Plant the sapling within 24 real-life hours of sprouting or else it will die (left click on the tree sapling, right click on the ground).
To empty the contents of the pot at any time, right-click on the pot and then select the Empty option.
By right-clicking on a sprouted Treeplanter's Pot, you can preview the plant inside.
See the Tree page for more details on tree farming.
To make a Treeplanter's Pot, you need to fire an Unfired Treeplanter's Pot in a Kiln:
Use the craft menu to mold the clay, and then fire the unburnt Treeplanter's Pot in a Kiln with 8 branches loaded.
After 36 minutes you will get an empty Treeplanter's Pot.
Unfired Treeplanter's Pot Quality =
{\displaystyle {\frac {_{q}Clay*3+_{q}PottersWheel}{4}}}
Softcapped by {\displaystyle {\sqrt[{3}]{Dexterity*Masonry*Farming}}}
Treeplanter's Pot Quality =
{\displaystyle {\frac {2*_{q}Unburnt+_{q}Fuel+_{q}Kiln}{4}}}
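Putting the two quality formulas together (a Python sketch; function and variable names are mine, not the game's):

```python
def unfired_quality(q_clay, q_potters_wheel):
    """Unfired pot quality: weighted average, clay counted three times."""
    return (q_clay * 3 + q_potters_wheel) / 4

def softcap(dexterity, masonry, farming):
    """Softcap on the unfired pot: cube root of the stat product."""
    return (dexterity * masonry * farming) ** (1 / 3)

def fired_quality(q_unfired, q_fuel, q_kiln):
    """Finished pot quality after firing in a kiln."""
    return (2 * q_unfired + q_fuel + q_kiln) / 4

# With every input at quality 10, the finished pot is also quality 10:
print(fired_quality(unfired_quality(10, 10), 10, 10))   # 10.0
```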
Bull Ram (2022-03-20) >"Tree Pots now display the new quality of trees planted in their tooltips."
Retrieved from "https://ringofbrodgar.com/w/index.php?title=Treeplanter%27s_Pot&oldid=93343"
|
Tundra orbit - WikiMili, The Best Wikipedia Reader
A Tundra orbit (Russian : орбита «Тундра») is a highly elliptical geosynchronous orbit with a high inclination (approximately 63.4°), an orbital period of one sidereal day, and a typical eccentricity between 0.2 and 0.3. A satellite placed in this orbit spends most of its time over a chosen area of the Earth, a phenomenon known as apogee dwell, which makes them particularly well suited for communications satellites serving high-latitude regions. The ground track of a satellite in a Tundra orbit is a closed figure 8 with a smaller loop over either the northern or southern hemisphere. [1] [2] This differentiates them from Molniya orbits designed to service high-latitude regions, which have the same inclination but half the period and do not loiter over a single region. [3] [4]
Tundra and Molniya orbits are used to provide high-latitude users with higher elevation angles than a geostationary orbit. This is desirable as broadcasting to these latitudes from a geostationary orbit (above the Earth's equator) requires considerable power due to the low elevation angles, and the extra distance and atmospheric attenuation that comes with it. Sites located above 81° latitude are unable to view geocentric satellites at all, and as a rule of thumb, elevation angles of less than 10° can cause problems, depending on the communications frequency. [5] : 499 [6]
Highly elliptical orbits provide an alternative to geostationary ones, as they remain over their desired high-latitude regions for long periods of time at the apogee. Their convenience is mitigated by cost, however: two satellites are required to provide continuous coverage from a Tundra orbit (three from a Molniya orbit). [3]
A ground station receiving data from a satellite constellation in a highly elliptical orbit must periodically switch between satellites and deal with varying signal strengths, latency and Doppler shifts as the satellite's range changes throughout its orbit. These changes are less pronounced for satellites in a Tundra orbit, given their increased distance from the surface, making tracking and communication more efficient. [7] Additionally, unlike the Molniya orbit, a satellite in a Tundra orbit avoids passing through the Van Allen belts. [8]
Despite these advantages the Tundra orbit is used less often than a Molniya orbit [8] in part due to the higher launch energy required. [1]
In 2017 the ESA Space Debris office released a paper proposing that a Tundra-like orbit be used as a disposal orbit for old high-inclination geosynchronous satellites, as opposed to traditional graveyard orbits. [3]
A typical [7] Tundra orbit has the following properties:
In general, the oblateness of the Earth perturbs a satellite's argument of perigee (
{\displaystyle \omega }
) such that it gradually changes with time. [1] If we only consider the first-order coefficient
{\displaystyle J_{2}}
, the perigee will change according to equation 1 , unless it is constantly corrected with station-keeping thruster burns.
{\displaystyle {\dot {\omega }}={\frac {3}{4}}nJ_{2}\left({\frac {R_{E}}{a}}\right)^{2}{\frac {4-5\sin ^{2}i}{(1-e^{2})^{2}}},}
{\displaystyle i} is the orbital inclination,
{\displaystyle e}
{\displaystyle n}
is the mean motion in degrees per day,
{\displaystyle J_{2}}
is the perturbing factor,
{\displaystyle R_{E}}
is the radius of the Earth,
{\displaystyle a}
is the semimajor axis, and
{\displaystyle {\dot {\omega }}}
is in degrees per day.
To avoid this expenditure of fuel, the Tundra orbit uses an inclination of 63.4°, for which the factor
{\displaystyle (4-5\sin ^{2}i)}
is zero, so that there is no change in the position of perigee over time. [9] [10] : 143 [7] This is called the critical inclination, and an orbit designed in this manner is called a frozen orbit.
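The critical inclination follows directly from setting 4 − 5 sin²i = 0; a quick Python check:

```python
import math

# 4 - 5 sin^2(i) = 0  =>  sin(i) = sqrt(4/5)
i_crit = math.degrees(math.asin(math.sqrt(4 / 5)))
print(round(i_crit, 1))   # 63.4 degrees

# The supplementary solution, 180 - 63.4 = 116.6 degrees, is the
# corresponding critical inclination for retrograde orbits.
```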
An argument of perigee of 270° places apogee at the northernmost point of the orbit. An argument of perigee of 90° would likewise serve the high southern latitudes. An argument of perigee of 0° or 180° would cause the satellite to dwell over the equator, but there would be little point to this as this could be better done with a conventional geostationary orbit. [7]
The period of one sidereal day ensures that the satellites follows the same ground track over time. This is controlled by the semi-major axis of the orbit. [7]
The eccentricity is chosen for the dwell time required, and changes the shape of the ground track. A Tundra orbit generally has an eccentricity of about 0.2; one with an eccentricity of about 0.4, changing the ground track from a figure 8 to a teardrop, is called a Supertundra orbit. [11]
The exact height of a satellite in a Tundra orbit varies between missions, but a typical orbit will have a perigee of approximately 25,000 kilometres (16,000 mi) and an apogee of 39,700 kilometres (24,700 mi), for a semi-major axis of 46,000 kilometres (29,000 mi). [7]
From 2000 to 2016, Sirius Satellite Radio, now part of Sirius XM Holdings, operated a constellation of three satellites in Tundra orbits for satellite radio. [12] [13] The RAAN and mean anomaly of each satellite were offset by 120° so that when one satellite moved out of position, another had passed perigee and was ready to take over. The constellation was developed to better reach consumers in far northern latitudes, reduce the impact of urban canyons and required only 130 repeaters compared to 800 for a geostationary system. After Sirius' merger with XM it changed the design and orbit of the FM-6 replacement satellite from a tundra to a geostationary one. [14] [15] This supplemented the already geostationary FM-5 (launched 2009), [16] and in 2016 Sirius discontinued broadcasting from tundra orbits. [17] [18] [19] The Sirius satellites were the only commercial satellites to use a Tundra orbit. [20]
The Japanese Quasi-Zenith Satellite System uses a geosynchronous orbit similar to a Tundra orbit, but with an inclination of only 43°. It includes four satellites following the same ground track. It was tested from 2010 and became fully operational in November 2018. [21]
The Tundra orbit has been considered for use by the ESA's Archimedes project, a broadcasting system proposed in the 1990s. [13] [22]
Sirius FM-5, also known as Radiosat 5, is an American communications satellite which is operated by Sirius XM Radio. It was constructed by Space Systems Loral, based on the LS-1300 bus, and carries a single transponder designed to transmit in the NATO E, F and I bands. It is currently being used to provide satellite radio broadcasting to North America.
Inmarsat-4A F4, also known as Alphasat and Inmarsat-XL, is a large geostationary communications I-4 satellite operated by UK based Inmarsat in partnership with the European Space Agency. Launched in 2013, it is used to provide mobile communications to Africa and parts of Europe and Asia.
1 2 3 Fortescue, P. W.; Mottershead, L. J.; Swinerd, G.; Stark, J. P. W. (2003). "Section 5.7: highly elliptic orbits". Spacecraft Systems Engineering. John Wiley and Sons. ISBN 978-0-471-61951-2.
↑ Dickinson, David (2018). The Universe Today Ultimate Guide to Viewing The Cosmos: Everything You Need to Know to Become an Amateur Astronomer. Page Street Publishing. p. 203. ISBN 9781624145452.
1 2 3 Jenkin, A. B.; McVey, J. P.; Wilson, J. R.; Sorge, M. E. (2017). Tundra Disposal Orbit Study. 7th European Conference on Space Debris. ESA Space Debris Office. Archived from the original on 2017-10-02. Retrieved 2017-10-02.
↑ Mortari, D.; Wilkins, M. P.; Bruccoleri, C. (2004). The Flower Constellations (PDF) (Report). p. 4. Archived from the original (PDF) on 2017-08-09. Retrieved 2017-10-02.
↑ Ilčev, Stojče Dimov (2017). Global Satellite Meteorological Observation (GSMO) Theory. Vol. 1. Springer International Publishing. p. 57. Bibcode:2018gsmo.book.....I. ISBN 978-3-319-67119-2 . Retrieved 16 April 2019.
↑ Soler, Tomás; Eisemann, David W. (August 1994). "Determination of Look Angles To Geostationary Communication Satellites" (PDF). Journal of Surveying Engineering. 120 (3): 123. doi:10.1061/(ASCE)0733-9453(1994)120:3(115). ISSN 0733-9453. Archived (PDF) from the original on 4 March 2016. Retrieved 16 April 2019.
1 2 3 4 5 6 Maral, Gerard; Bousquet, Michel (2011-08-24). "2.2.1.2 Tundra Orbits". Satellite Communications Systems: Systems, Techniques and Technology. ISBN 9781119965091.
1 2 Capderou, Michel (2005). Satellites. p. 228. ISBN 9782287213175.
↑ Kidder, Stanley Q.; Vonder Haar, Thomas H. (18 August 1989). "On the Use of Satellites in Molniya Orbits of Meteorological Observation of Middle and High Latitudes". Journal of Atmospheric and Oceanic Technology. 7 (3): 517. doi: 10.1175/1520-0426(1990)007<0517:OTUOSI>2.0.CO;2 .
↑ Wertz, James Richard; Larson, Wiley J. (1999). Larson, Wiley J.; Wertz, James R. (eds.). Space Mission Analysis and Design. Microcosm Press and Kluwer Academic Publishers. Bibcode:1999smad.book.....W. ISBN 978-1-881883-10-4.
↑ Capderou, Michel (2006-01-16). Satellites: Orbits and Missions (PDF). p. 224. ISBN 978-2-287-27469-5. Archived (PDF) from the original on 2018-05-17. Retrieved 2019-04-30.
↑ "Sirius Rising: Proton-M Ready to Launch Digital Radio Satellite Into Orbit". AmericaSpace. 2013-10-18. Archived from the original on 28 June 2017. Retrieved 8 July 2017.
1 2 Capderou, Michel (2014-04-23). Handbook of Satellite Orbits: From Kepler to GPS. p. 290. Bibcode:2014hso..book.....C. ISBN 9783319034164.
↑ Selding, Peter B. de (October 5, 2012). "Sirius XM Needs To Install 600 New Ground Repeaters". SpaceNews.com.
↑ Binkovitz, Leah (24 October 2012). "Sirius Satellite Comes to Udvar-Hazy". Smithsonian. Archived from the original on 8 May 2019. Retrieved 8 May 2019.
↑ Clark, Stephen (30 June 2009). "New Sirius XM Radio Satellite Launches to Orbit". Space.com. Archived from the original on 8 May 2019. Retrieved 8 May 2019.
↑ Wiley Rein (19 November 2009). Application for Modification (Report). Federal Communications Commission. Archived from the original on 2 October 2017. Retrieved 2 February 2017.
↑ Meyer, James E.; Frear, David J., eds. (2 February 2016). Sirius XM Holdings 10-K 2015 Annual Report (PDF) (Report). Sirius XM Holdings. Archived (PDF) from the original on 29 August 2016. Retrieved 2 February 2017.
↑ Meyer, James E.; Frear, David J., eds. (2 February 2017). Sirius XM Holdings Inc. 10-K Feb. 2, 2017 11:57 AM. Seeking Alpha (Report). Sirius XM Holdings Inc.
↑ Bruno, Michael J.; Pernicka, Henry J. (2005). "Tundra Constellation Design and Stationkeeping". Journal of Spacecraft and Rockets. 42 (5): 902–912. Bibcode:2005JSpRo..42..902B. doi:10.2514/1.7765.
↑ "Quasi-Zenith Satellite Orbit (QZO)". Archived from the original on 2018-03-09. Retrieved 2018-03-10.
↑ Hoeher, P.; Schweikert, R.; Woerz, T.; Schmidbauer, A.; Frank, J.; Grosskopf, R.; Schramm, R.; Gale, F. C. T.; Harris, R. A. (1996). "Digital Audio Broadcasting (DAB) via Archimedes/Media Star HEO-Satellites". Mobile and Personal Satellite Communications 2. pp. 150–161. doi:10.1007/978-1-4471-1516-8_13. ISBN 978-3-540-76111-2.
|
Nonlinear Heat Transfer in Thin Plate - MATLAB & Simulink - MathWorks India
{Q}_{c}={h}_{c}\left(T-{T}_{a}\right)
{T}_{a}
T
is the temperature at a particular x and y location on the plate surface, and
{h}_{c}
{Q}_{r}=ϵ\sigma \left({T}^{4}-{T}_{a}^{4}\right)
ϵ
\sigma
\rho {C}_{p}{t}_{z}\frac{\partial T}{\partial t}-k{t}_{z}{\nabla }^{2}T+2{Q}_{c}+2{Q}_{r}=0
\rho
{C}_{p}
is the specific heat,
{t}_{z}
is the plate thickness, and the factors of two account for the heat transfer from both plate faces.
\rho {C}_{p}{t}_{z}\frac{\partial T}{\partial t}-k{t}_{z}{\nabla }^{2}T+2{h}_{c}T+2ϵ\sigma {T}^{4}=2{h}_{c}{T}_{a}+2ϵ\sigma {T}_{a}^{4}
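The T⁴ radiation term is what makes this problem nonlinear. As an illustration, the steady-state surface balance 2hc(T − Ta) + 2εσ(T⁴ − Ta⁴) = q (for some absorbed flux q per unit area) has no closed-form solution for ε > 0 and must be solved iteratively. A Python sketch with bisection (the numerical values of q, hc, and ε are illustrative, not from this example):

```python
def surface_loss(T, Ta, hc, eps, sigma=5.670374419e-8):
    """Convective + radiative loss per unit area from both plate faces."""
    return 2 * hc * (T - Ta) + 2 * eps * sigma * (T**4 - Ta**4)

def steady_temperature(q, Ta, hc, eps, tol=1e-9):
    """Bisection for the T with surface_loss(T) = q (loss is monotone in T)."""
    lo, hi = Ta, Ta + q / (2 * hc) + 1.0   # radiation only lowers the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if surface_loss(mid, Ta, hc, eps) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With radiation switched off (eps = 0) the balance is linear:
# T = Ta + q / (2 hc) = 300 + 100 / 2 = 350 K.
print(round(steady_temperature(100.0, 300.0, 1.0, 0.0), 3))   # 350.0
# Radiation lowers the steady temperature:
print(steady_temperature(100.0, 300.0, 1.0, 0.8) < 350.0)     # True
```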
|
Brundan, Jonathan1
1 Department of Mathematics University of Oregon Eugene OR 97403 USA
We revisit the definition of the Heisenberg category of central charge
k\in ℤ
. For central charge
-1
, this category was introduced originally by Khovanov, but with some additional cyclicity relations which we show here are unnecessary. For other negative central charges, the definition is due to Mackaay and Savage, also with some redundant relations, while central charge zero recovers the affine oriented Brauer category of Brundan, Comes, Nash and Reynolds. We also discuss cyclotomic quotients.
Classification: 17B10, 18D10
Keywords: Heisenberg category, string calculus
Brundan, Jonathan 1
title = {On the definition of {Heisenberg} category},
TI - On the definition of Heisenberg category
%T On the definition of Heisenberg category
Brundan, Jonathan. On the definition of Heisenberg category. Algebraic Combinatorics, Volume 1 (2018) no. 4, pp. 523-544. doi : 10.5802/alco.26. https://alco.centre-mersenne.org/articles/10.5802/alco.26/
[1] Ariki, Susumu On the decomposition numbers of the Hecke algebra of
G\left(m,1,n\right)
, J. Math. Kyoto Univ., Volume 36 (1996) no. 4, pp. 789-808 | Article | MR: 1443748 | Zbl: 0888.20011
[2] Brundan, Jonathan On the definition of Kac-Moody 2-category, Math. Ann., Volume 364 (2016) no. 1-2, pp. 353-372 | Article | MR: 3451390 | Zbl: 06540658
[3] Brundan, Jonathan Representations of oriented skein categories (2017) (https://arxiv.org/abs/1712.08953)
[4] Brundan, Jonathan; Comes, Jonathan; Nash, David; Reynolds, Andrew A basis theorem for the affine oriented Brauer category and its cyclotomic quotients, Quantum Topol., Volume 8 (2017) no. 1, pp. 75-112 | Article | MR: 3630282 | Zbl: 06718140
[5] Brundan, Jonathan; Davidson, Nicholas Categorical actions and crystals, Categorification and higher representation theory (Contemporary Mathematics), Volume 684, American Mathematical Society, 2017, pp. 116-159 | Zbl: 06708134
[6] Brundan, Jonathan; Kleshchev, Alexander Graded decomposition numbers for cyclotomic Hecke algebras, Adv. Math., Volume 222 (2009) no. 6, pp. 1883-1942 | Article | MR: 2562768 | Zbl: 1241.20003
[7] Brundan, Jonathan; Savage, Alistair On the definition of quantum Heisenberg category (in preparation)
[8] Cautis, Sabin; Lauda, Aaron; Licata, Anthony; Samuelson, Peter; Sussan, Joshua The elliptic Hall algebra and the deformed Khovanov Heisenberg category (2016) (https://arxiv.org/abs/1609.03506) | Zbl: 06976959
[9] Cautis, Sabin; Lauda, Aaron; Licata, Anthony; Sussan, Joshua
W
-algebras from Heisenberg categories, J. Inst. Math. Jussieu (2016), pp. 1-37 | Article | Zbl: 06963839
[10] Cautis, Sabin; Licata, Anthony Heisenberg categorification and Hilbert schemes, Duke Math. J., Volume 161 (2012) no. 13, pp. 2469-2547 | Article | MR: 2988902 | Zbl: 1263.14020
[11] Comes, Jonathan; Kujawa, Jonathan Higher level twisted Heisenberg supercategories (in preparation)
[12] Hill, David; Sussan, Joshua A categorification of twisted Heisenberg algebras, Adv. Math., Volume 295 (2016), pp. 368-420 | Article | MR: 3488039 | Zbl: 06570861
[13] Khovanov, Mikhail Heisenberg algebra and a graphical calculus, Fundam. Math., Volume 225 (2014), pp. 169-210 | Article | MR: 3205569 | Zbl: 1304.18019
[14] Khovanov, Mikhail; Lauda, Aaron A categorification of quantum
\mathrm{𝔰𝔩}\left(n\right)
, Quantum Topol., Volume 1 (2010) no. 1, pp. 1-92 | Article | MR: 2628852 | Zbl: 1206.17015
[15] Kleshchev, Alexander Linear and projective representations of symmetric groups, Cambridge University Press, 2005, xiv+277 pages | MR: 2165457 | Zbl: 1080.20011
[16] Licata, Anthony; Savage, Alistair Hecke algebras, finite general linear groups, and Heisenberg categorification, Quantum Topol., Volume 4 (2013) no. 2, pp. 125-185 | Article | MR: 3032820 | Zbl: 1279.20006
[17] Mackaay, Marco; Savage, Alistair Degenerate cyclotomic Hecke algebras and higher level Heisenberg categorification, J. Algebra, Volume 505 (2018), pp. 150-193 | Article | MR: 3789909 | Zbl: 06893263
[18] Queffelec, Hervé; Savage, Alistair; Yacobi, Oded An equivalence between truncations of categorified quantum groups and Heisenberg categories, J. Éc. Polytech., Math., Volume 5 (2018), pp. 192-238 | MR: 3738513 | Zbl: 06988578
[19] Rosso, Daniele; Savage, Alistair A general approach to Heisenberg categorification via wreath product algebras, Math. Z., Volume 286 (2017) no. 1-2, pp. 603-655 | Article | MR: 3648512 | Zbl: 1366.18006
[21] Rouquier, Raphael Quiver Hecke algebras and
2
-Lie algebras, Algebra Colloq., Volume 19 (2012) no. 2, pp. 359-410 | Article | MR: 2908731 | Zbl: 1247.20002
[22] Rui, Hebing; Su, Yucai Affine walled Brauer algebras and super Schur-Weyl duality, Adv. Math., Volume 285 (2015), pp. 28-71 | Article | MR: 3406495 | Zbl: 1356.17012
[23] Savage, Alistair Frobenius Heisenberg categorification (2018) (https://arxiv.org/abs/1802.01626)
[24] Webster, Ben Canonical bases and higher representation theory, Compos. Math., Volume 151 (2015) no. 1, pp. 121-166 | Article | MR: 3305310 | Zbl: 06417584
|
Traveller's Sack - Ring of Brodgar
Object(s) Required Hardened Leather x5, String x8
Craft > Clothes & Equipment > Packs & Sacks > Traveler's Sack
A Traveller's Sack takes the spot of one of your hands and increases a player's inventory space by both a vertical and horizontal row. Two of them can be worn at once.
Quality is softcapped by
{\displaystyle {\sqrt {Sewing*Dexterity}}}
Retrieved from "https://ringofbrodgar.com/w/index.php?title=Traveller%27s_Sack&oldid=87861"
|
Chimney - 3D BIM Objects - 3D BIM Components
Chimney (Architecture & BIM & MEP)
A chimney is a structure that provides ventilation for hot flue gases or smoke from a boiler, stove, furnace or fireplace to the outside atmosphere. Chimneys are typically vertical, or as near as possible to vertical, to ensure that the gases flow smoothly, drawing air into the combustion in what is known as the stack, or chimney effect. The space inside a chimney is called a flue. Chimneys may be found in buildings, steam locomotives and ships. In the United States, the term smokestack (colloquially, stack) is also used when referring to locomotive chimneys or ship chimneys, and the term funnel can also be used. The height of a chimney influences its ability to transfer flue gases to the external environment via stack effect. Additionally, the dispersion of pollutants at higher altitudes can reduce their impact on the immediate surroundings. In the case of chemically aggressive output, a sufficiently tall chimney can allow for partial or complete self-neutralization of airborne chemicals before they reach ground level. The dispersion of pollutants over a greater area can reduce their concentrations and facilitate compliance with regulatory limits.
3D BIM Models - schiedel
Licensed under GNU Free Documentation License (Dutchbelted5).
Romans used tubes inside the walls to draw smoke out of bakeries, but chimneys only appeared in large dwellings in northern Europe in the 12th century. The earliest extant example of an English chimney is at the keep of Conisbrough Castle in Yorkshire, which dates from 1185 AD.[3] However, they did not become common in houses until the 16th and 17th centuries.[4] Smoke hoods were an early method of collecting the smoke into a chimney (see image). Another step in the development of chimneys was the use of built-in ovens, which allowed the household to bake at home. Industrial chimneys became common in the late 18th century.
Metal liners may be stainless steel, aluminum, or galvanized iron and may be flexible or rigid pipes. Stainless steel is made in several types and thicknesses: type 304 is used with firewood, wood pellet fuel, and non-condensing oil appliances; types 316 and 321 with coal; and type AL 29-4C with non-condensing gas appliances. Stainless steel liners that serve solid-fuel appliances must have a cap and be insulated, and the manufacturer's instructions should be followed carefully.[7] Aluminum and galvanized steel chimneys are known as class A and class B chimneys. Class A chimneys are either an insulated, double-wall stainless steel pipe or a triple-wall, air-insulated pipe often known by its genericized trade name Metalbestos. Class B chimneys are uninsulated double-wall pipes, often called B-vent, and are only used to vent non-condensing gas appliances; these may have an aluminum inside layer and a galvanized steel outside layer. Condensing boilers do not need a chimney.
{\displaystyle Q=C\;A\;{\sqrt {2\;g\;H\;{\frac {T_{i}-T_{e}}{T_{e}}}}}}
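The displayed equation is the standard natural-draught (stack-effect) flow formula: Q is the flue-gas flow rate, C a discharge coefficient, A the flue cross-sectional area, H the chimney height, and Ti, Te the inside and outside absolute temperatures. A minimal sketch of the computation (the numeric values below are illustrative assumptions, not from the article):

```python
import math

def stack_flow(C, A, H, T_inside, T_outside, g=9.81):
    """Volumetric flue-gas flow Q = C * A * sqrt(2*g*H*(Ti - Te)/Te).

    Temperatures must be absolute (kelvin); Ti > Te gives an upward draught.
    """
    return C * A * math.sqrt(2 * g * H * (T_inside - T_outside) / T_outside)

# Illustrative values: 10 m chimney, 0.1 m^2 flue, 120 C gas into 20 C air.
Q = stack_flow(C=0.65, A=0.1, H=10.0, T_inside=393.15, T_outside=293.15)
print(f"{Q:.3f} m^3/s")  # roughly half a cubic metre per second
```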
Roof tilesFlue-gas stackFlueStack effectCowl (chimney)Solar chimneyConcrete masonry unitRoofHip roofClock tower
This article uses material from the Wikipedia article "Chimney", which is released under the Creative Commons Attribution-Share-Alike License 3.0. There is a list of all authors in Wikipedia
|
Volume of a Pyramid | Brilliant Math & Science Wiki
Abhineet Goel, Julian Poon, Mahindra Jain, and
The volume of a pyramid can be expressed as \frac{1}{3}Ah, where A is the base area of the pyramid and h is the height of the pyramid. Refer to the image below.
What is the volume of a pyramid with a height of 10 and a square base with sides of length 12?
Since the area of the base is 12 \times 12 = 144, the volume of the pyramid is \frac{1}{3} A \times h = \frac{1}{3} \times 144 \times 10 = 480. \ _\square
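The worked example can be checked with a one-line helper (a sketch; the function name is our own):

```python
def pyramid_volume(base_area, height):
    """V = (1/3) * A * h for any pyramid with base area A and height h."""
    return base_area * height / 3

# Square base of side 12, height 10, as in the example above:
print(pyramid_volume(12 * 12, 10))  # 480.0
```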
In a playground, some children used
200 \text{ cm}^3
of sand to build a pyramid. If the side length of the square base is
10\text{ cm},
what is the height of the pyramid?
Since the area of the base is A = 10 \times 10 = 100, the volume of the pyramid in \text{cm}^3 satisfies \frac{1}{3} A \times h = \frac{1}{3} \times 100 \times h = 200. Thus, the height of the pyramid is h=6 \text{ cm}. \ _\square
In the above diagram, if the base of the pyramid is a square, what is the volume of the pyramid?
In order to get the volume of the pyramid, we need to find the side length of the base by cutting the pyramid into half.
The above diagram is the cross section of the pyramid cut through A, B, D, and C. Since the side length of the base is equal to \lvert\overline{BC}\rvert, which is twice the length \lvert \overline{CD}\rvert, we use the Pythagorean theorem as follows to calculate \lvert\overline{BC}\rvert:
\begin{aligned} \lvert \overline{CD} \rvert^2 &= \lvert \overline{AC} \rvert^2 - \lvert \overline{AD} \rvert^2 \\ &= 25 - 16 = 9. \\ \Rightarrow \lvert \overline{CD} \rvert &= 3 \\ \Rightarrow \lvert \overline{BC} \rvert &= 2 \times \lvert \overline{CD} \rvert = 6. \end{aligned}
Thus, the side length of the base is 6\text{ cm}. Then the volume of the pyramid is \frac{1}{3} A \times h = \frac{1}{3} \times 6 \times 6 \times 4 = 48~ (\text{cm}^3).\ _\square
Find the volume of the blue part in the above pyramid.
The volume of the blue part is \text{(volume of whole pyramid)} - \text{(volume of small pyramid on top)}. Since the height ratio between the blue part and the small pyramid on top is 1 : 1, the side length of the base is 20 \text{ cm}. Let A be the volume of the blue part of the whole pyramid; then
\begin{aligned} A &= \frac{1}{3} (20 \times 20) \times (15 + 15) - \frac{1}{3} (10 \times 10) \times (15) \\ &= 4000 - 500 \\ &= 3500 \ (\text{cm}^3). \ _\square \end{aligned}
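The subtraction above can be checked with a short helper (a sketch; the function name is ours):

```python
def pyramid_volume(base_area, height):
    """V = (1/3) * A * h."""
    return base_area * height / 3

# Whole pyramid: 20x20 base, height 15 + 15; small top pyramid: 10x10 base, height 15.
blue = pyramid_volume(20 * 20, 15 + 15) - pyramid_volume(10 * 10, 15)
print(blue)  # 3500.0
```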
A regular pyramid of square base 7\times 7 is cut such that another regular pyramid of square base 5\times 5 is removed, leaving a shape with a height of 3 between the parallel square planes as shown above. What is the volume of this shape?
Square ABCD has points E and F as its midpoints on AD and AB, respectively. The square is then folded such that the vertices A, B, and D, joined together, become a new vertex of the pyramid with triangular base EFC. If the square has a side length of 6\text{ cm}, what is the volume of the pyramid (in \text{cm}^3)?
Proof by Integration
Derive the formula for the volume of a pyramid using calculus.
First, we want to find A(x), where A(x) is the function of the areas of the cross sections of the pyramid. Let h be the height of the solid, and z a constant such that { z }^{ 2 }\propto A. Let x and y be variables which will be used later to define A(x). Since { z }^{ 2 }\propto A, we have A=k{z}^{2}, where k is another constant.
From the image above, it can be seen that both triangles are similar. So, finding the equation relating y and x gives \frac { z }{ h } =\frac { y }{ x } \Rightarrow y =\frac { xz }{ h }. Hence A(x)=k{y}^{2}=\frac{k{z}^{2}{x}^{2}}{{h}^{2}}.
Now, here comes the integrating part.
The volume of the object is
\begin{aligned} \int _{ 0 }^{ h }{ A(x)\,dx }&=\int _{ 0 }^{ h }{ \frac { k{ z }^{ 2 }{ x }^{ 2 }\,dx }{ { h }^{ 2 } } } \\ &=\frac { 1 }{ 3 } k{ z }^{ 2 }h. \end{aligned}
Since A=k{ z }^{ 2 }, this gives \frac { 1 }{ 3 } k{ z }^{ 2 }h=\boxed{\frac { 1 }{ 3 } Ah}.
Since the volume is based on the area of the cross section, the point at the top of the pyramid can be anywhere and this formula would still work.
_\square
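As a numerical sanity check of the derivation (the values k = 2, z = 3, h = 5 are our own illustrative choices), a midpoint Riemann sum of the cross-sectional area reproduces (1/3)kz²h:

```python
# Verify that the integral of A(x) = k*z^2*x^2/h^2 from 0 to h equals (1/3)*k*z^2*h.
k, z, h = 2.0, 3.0, 5.0
n = 100_000
dx = h / n
# Midpoint Riemann sum of the cross-sectional area function A(x).
riemann = sum(k * z**2 * ((i + 0.5) * dx) ** 2 / h**2 for i in range(n)) * dx
exact = (k * z**2) * h / 3  # (1/3) * A * h with A = k*z^2
print(abs(riemann - exact) < 1e-6)  # True
```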
Cite as: Volume of a Pyramid. Brilliant.org. Retrieved from https://brilliant.org/wiki/volume-pyramid/
|
RungeKutta - Maple Help
Home : Support : Online Help : Education : Student Packages : Numerical Analysis : Visualization : RungeKutta
numerically approximate the solution to a first order initial-value problem with the Runge-Kutta Method
RungeKutta(ODE, IC, t=b, opts)
RungeKutta(ODE, IC, b, opts)
ODE - an equation of the form \frac{ⅆ}{ⅆt}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}y\left(t\right)=f\left(t,y\right)
IC - the initial condition y\left(a\right)=c
t = b (or b) - the point at which to approximate y\left(t\right)
output = plot returns a plot of the approximate (Runge-Kutta) solution and the solution from one of Maple's best numeric DE solvers, plotted as y\left(t\right) against t.
submethod = midpoint, rk3, rk4, rkf, heun, or meuler
The Runge-Kutta submethod used to solve this initial-value problem:
midpoint = Midpoint Method
rk3 = Order Three Method
rk4 = Order Four Method
rkf = Runge-Kutta-Fehlberg Method
heun = Heun Method
meuler = Modified Euler Method
By default the Runge-Kutta Midpoint Method is used.
Given an initial-value problem consisting of an ordinary differential equation ODE, a range a <= t <= b, and an initial condition y(a) = c, the RungeKutta command computes an approximate value of y(b) using the Runge-Kutta methods.
The RungeKutta command is a shortcut for calling the InitialValueProblem command with the method = rungekutta option.
To approximate the solution to an initial-value problem using a method other than the Runge-Kutta Method, see InitialValueProblem.
\mathrm{with}\left(\mathrm{Student}[\mathrm{NumericalAnalysis}]\right):
\mathrm{RungeKutta}\left(\mathrm{diff}\left(y\left(t\right),t\right)=\mathrm{cos}\left(t\right),y\left(0\right)=0.5,t=3,\mathrm{submethod}=\mathrm{rk4}\right)
\textcolor[rgb]{0,0,1}{0.6411}
\mathrm{RungeKutta}\left(\mathrm{diff}\left(y\left(t\right),t\right)=\mathrm{cos}\left(t\right),y\left(0\right)=0.5,t=3,\mathrm{submethod}=\mathrm{rk4},\mathrm{output}=\mathrm{plot}\right)
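For readers without Maple, the classical rk4 scheme can be sketched in plain Python (our own sketch, not Maple's implementation), applied to the same problem y' = cos(t), y(0) = 0.5, whose exact solution is y(t) = sin(t) + 0.5:

```python
import math

def rk4(f, t0, y0, t_end, n):
    """Classical fourth-order Runge-Kutta with n uniform steps."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Same IVP as the Maple example: y' = cos(t), y(0) = 0.5, approximate y(3).
approx = rk4(lambda t, y: math.cos(t), 0.0, 0.5, 3.0, 30)
print(round(approx, 4))  # 0.6411, matching the Maple output above
```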
|
Periodic Table of the Elements | Brilliant Math & Science Wiki
Jordan Calmes, Skanda Prasad, Ashish Menon, and
The modern periodic table organizes elements into a grid based on their atomic number. Both the horizontal and vertical positionings of an element within the table give clues as to that element's behavior, making the periodic table a quick and useful reference for predicting how certain elements will react with each other.
Cupcakes from the first-anniversary celebration of the Chemical Heritage Foundation's museum. Photographer: Conrad Erb. Creative Commons license for reuse and modification.
Each box on the table represents one element. Basic information about the element is included on every periodic table, including the following:
The one- or two-letter atomic symbol
The atomic number, which is the number of protons in the atom's nucleus
The atomic mass in atomic mass units \big(or AMUs, where one AMU is equal to \frac1{12} the mass of a carbon-12 atom, or about 1.66 \times 10^{-27} kg\big).
(Note: The terms atomic mass and atomic weight are often interchanged, but they do have distinct definitions. Atomic weight is calculated based on the atomic masses and relative abundances of all naturally occurring isotopes of an element.)
In the 1800s scientists across Europe were working on the same puzzle: making sense of the patterns of behavior observed in chemical elements and developing a systematic way of organizing those elements.
John Newlands of the United Kingdom, Alexandre Béguyer de Chancourtois of France, and Julius Lothar Meyer and Johann Wolfgang Döbereiner of Germany were among the scientists who contributed to developing a periodic table. They noticed trends and similarities among elements and started dividing them into discrete groups, the best-known of which are Döbereiner's triads and Newlands' octaves.
While specific pieces of these early classifications fit well, no system accommodated all of the approximately 60 known elements.
Mendeleev's periodic table was first published in the German chemistry journal Zeitschrift fϋr Chemie in 1869.
These early chemists faced two hurdles. First, they knew there were more elements to be discovered and incorporated into the periodic table. Second, some of the published information about the elements was known to be wrong. The puzzle had both missing and torn pieces, making it even harder to put together.
Russian chemist Dmitri Mendeleev is often called "the father of the periodic table." In 1869, he published a version with 63 elements arranged by atomic mass, showing that when the elements were arranged that way, certain characteristics were periodically repeated.
Putting the elements in the correct place on the table still sometimes required correcting their atomic mass. Sometimes Mendeleev decided the atomic mass must be wrong because the elements seemed to appear in the wrong order. He placed tellurium before iodine, for example, even though tellurium is heavier. Iodine’s properties are much more similar to those of fluorine, chlorine, and bromine than to oxygen, sulfur, and selenium, and the opposite is true for tellurium.
Mendeleev's table had impressive predictive power. He left blank spaces on the table where he thought undiscovered elements would fit. He predicted several properties for five of those elements, including atomic weight, melting point, density of the solid, and valency. By 1886, three of those elements (gallium, germanium, and scandium) had been discovered by independent researchers in France, Germany, and Sweden, giving further credibility to Mendeleev's periodic table.
The modern periodic table was devised by Henry Moseley in 1913. Moseley, a young English physicist, carried out research regarding the structure of the atom and came to the conclusion that atomic number, the number of protons in the nucleus, is the fundamental property of an element rather than the atomic mass. This explained some of the inconsistencies Mendeleev was finding. Tellurium is number 52 and iodine is 53, for example.
Moseley's Modern Periodic Law
Properties of the elements are the periodic function of their atomic numbers.
Periodic tables can contain a variety of extra information. This one tells the phase of the pure element at room temperature and notes which elements were created synthetically, rather than discovered in nature. Public domain image from the National Institute of Standards and Technology (NIST).
Advantages of the modern periodic table include the following:
The classification of the elements is based on the fundamental property of their atomic number.
The position of an element is determined by the electronic configuration of the outer valence, which naturally groups elements with similar chemical properties.
Inert gases, which have completely filled valence shells, are placed at the end of each period.
It provides a clear demarcation between different kinds of elements such as metals, non-metals, metalloids, transition elements, inert gases, lanthanides, and actinides.
Deficiencies of the modern periodic table include the following:
The position of hydrogen is unresolved.
There is no place for lanthanides and actinides in the main body of the table.
The arrangement is unable to reflect the electronic configuration of many elements in the transition group, lanthanides, and actinides.
The vertical columns of the table are called groups. Elements in a group have the same electron configuration in their outermost shell, or valence. Roman numerals are used to indicate the group number, which is also the number of electrons in the outer valence. There are two sets of groups: A and B. The A groups, the representative elements, have their valence electrons in s and p sub-shells. The non-representative elements of group B are the transition metals, which have partially filled d sub-shells, and the lanthanides and actinides, which have partially filled f sub-shells. Groups with distinct properties the general chemistry student should be familiar with are discussed below:
Alkali metals, group IA, have many of the characteristic physical properties of metals. Each alkali metal has a single electron in its valence.
Alkaline earth metals, group IIA, have two electrons in their outer valence.
Halogens, group VIIA. With seven valence electrons, halogens are close to having a complete octet, making them highly reactive. They can easily strip an electron from another atom, forming anions with a -1 charge (\ce{Cl^-}, \ce{F^-}, etc.).
Noble gases, group VIII, are also called the inert gases. Their complete valence shell is energetically favorable, making them relatively nonreactive. They have low boiling points and are gases at room temperature.
The horizontal rows of the table are called periods. There are seven periods, and they are filled sequentially. They represent the principal quantum numbers n=1 through n=7. Arabic numerals are used when referring to the periods.
Periodicity is the repetition of the similar properties of the elements placed in a group separated by certain definite gaps of atomic numbers.
In other words, periodicity is the idea that elements in a group have similar chemical properties because they have the same valence shell electronic configuration. All elements will gain or lose electrons to reach a stable octet, the valence configuration of the noble gases, leading to some general trends across the periodic table.
Going from left to right, electrons are added to the outer valence one at a time. The more electrons there are in the outer shell, the stronger the nuclear attraction. Properties of the vertical groups also factor into periodic trends. Most notably, the number of filled shells increases going down the periodic table. With more electrons between the outer valence and the nucleus, the outer valence is less tightly bound. Together, these two factors explain the following periodic trends:
Atomic radius describes the size of the electrons' orbitals around the nucleus of an atom. The atomic radii decrease from left to right within a period and decrease from bottom to top within a group.
Ionization energy is the energy required to remove an electron from an atom or ion. The closer the electron is to the nucleus, the more difficult it is to remove. Ionization energy increases from left to right within a period and from bottom to top within a group.
Electronegativity measures how strongly an atom is attracted to electrons when it forms a chemical bond. Electronegativity increases from left to right within a period and also from bottom to top within a group.
Electron affinity is a way of measuring how easily an atom can accept an electron. A positive electron affinity indicates that energy is being released when the electron is added. Generalizations for electron affinity can be made for groups within the periodic table by looking at their valence state. For example, the halogens have a high electron affinity because they need one electron to make an octet, while the noble gases have an electron affinity around 0 because they already have an octet.
Alkaline earth metals have which of the following sets of traits?
(a) Low electronegativity and positive electron affinity (b) Low electronegativity and negative electron affinity (c) High electronegativity and positive electron affinity (d) High electronegativity and negative electron affinity

Considering its chemical properties and the ions it makes, what is the best location for hydrogen on the periodic table?
(a) Group I A, the alkali metals (b) Group II A, the alkaline earth metals (c) Group VII A, the halogens (d) There is no perfectly suited place for hydrogen
In December 2015, the International Union of Pure and Applied Chemistry (IUPAC) announced that four new elements had been discovered and synthesized by laboratories in the United States, Russia, and Japan.
With these new elements, the seventh period of the table is now complete.
The process of naming the elements and giving them two-letter symbols is complete, and the elements are now labeled as Nihonium (Nh, atomic number 113), Moscovium (Mc, atomic number 115), Tennessine (Ts, atomic number 117), and Oganesson (Og, atomic number 118). New elements can be named after a mythological concept, a mineral, a place, or a scientist.[1]
The standard layout of the periodic table does not capture all the patterns and relationships found among the elements. Several alternatives have been proposed to highlight electron configurations or quantum numbers, among other traits. Circular and 3-dimensional versions have been drawn, which eliminate some of the questions regarding the placement of hydrogen and the exclusion of lanthanides and actinides from the main body of the table.
The ADOMAH periodic table highlights electron configurations
Theodore Benfey's spiral format was first published in the 1960s
The periodic table[2] hosted by Los Alamos National Laboratory includes a short article describing the history, properties, and uses of each element.
The University of Nottingham has a website called Periodic Videos[3] that features experiments exploring the properties of the different elements, including plenty of explosions.
[1] IUPAC. Discovery and Assignment of Elements with Atomic Numbers 113, 115, 117 and 118. Retrieved from http://www.iupac.org/news/news-detail/article/discovery-and-assignment-of-elements-with-atomic-numbers-113-115-117-and-118.html
[2] Los Alamos National Laboratory. Periodic Table of Elements. Retrieved from http://periodic.lanl.gov/index.shtml
[3] University of Nottingham. Periodic Videos. Retrieved from http://www.periodicvideos.com
Cite as: Periodic Table of the Elements. Brilliant.org. Retrieved from https://brilliant.org/wiki/periodic-table-of-the-elements/
|
Liquid hydrocarbon characterization of the lacustrine Yanchang Formation, Ordos Basin, China: Organic-matter source variation and thermal maturity | Interpretation | GeoScienceWorld
. E-mail: xun.sun@beg.utexas.edu; daniel.enriquez@beg.utexas.edu; tongwei.zhang@beg.utexas.edu.
Quansheng Liang;
. E-mail: liangqsh0833@163.com; petrojcf@163.com.
Xun Sun, Quansheng Liang, Chengfu Jiang, Daniel Enriquez, Tongwei Zhang, Paul Hackley; Liquid hydrocarbon characterization of the lacustrine Yanchang Formation, Ordos Basin, China: Organic-matter source variation and thermal maturity. Interpretation 2017;; 5 (2): SF225–SF242. doi: https://doi.org/10.1190/INT-2016-0114.1
Source-rock samples from the Upper Triassic Yanchang Formation in the Ordos Basin of China were geochemically characterized to determine variations in depositional environments, organic-matter (OM) source, and thermal maturity. Total organic carbon (TOC) content varies from 4 wt% to 10 wt% in the Chang 7, Chang 8, and Chang 9 members — the three OM-rich shale intervals. The Chang 7 has the highest TOC and hydrogen index values, and it is considered the best source rock in the formation. Geochemical evidence indicates that the main sources of OM in the Yanchang Formation are freshwater lacustrine phytoplanktons, aquatic macrophytes, aquatic organisms, and land plants deposited under a weakly reducing to suboxic depositional environment. The elevated C_{29} sterane concentration and depleted δ^{13}C values of OM in the middle of the Chang 7 may indicate the presence of freshwater cyanobacteria blooms that corresponds to a period of maximum lake expansion. The OM deposited in deeper parts of the lake is dominated by oil-prone type I or type II kerogen or a mixture of both. The OM deposited in shallower settings is characterized by increased terrestrial input with a mixture of types II and III kerogen. These source rocks are in the oil window, with maturity increasing with burial depth. The measured solid-bitumen reflectance and the vitrinite reflectance calculated from the temperature at maximum release of hydrocarbons during Rock-Eval pyrolysis (T_{max}) and from the methylphenanthrene index (MPI-1) chemical maturity parameters range from 0.8 to 1.05% R_{o}. Because the thermal labilities of OM are associated with the kerogen type, the required thermal stress for oil generation from types I and II mixed kerogen has a higher and narrower range of temperature for hydrocarbon generation than that of OM dominated by type II kerogen or types II and III mixed kerogen deposited in the prodelta and delta front.
|
Huang, Jia1
1 University of Nebraska at Kearney Department of Mathematics and Statistics Kearney, Nebraska 68849, USA
We study the (complex) Hecke algebra {ℋ}_{S}\left(\mathbf{q}\right) of a finite simply-laced Coxeter system \left(W,S\right) with independent parameters \mathbf{q}\in {\left(ℂ\setminus \left\{\text{roots of unity}\right\}\right)}^{S}. We construct its irreducible representations and projective indecomposable representations. We obtain the quiver of this algebra and determine when it is of finite representation type. We provide decomposition formulas for induced and restricted representations between the algebras {ℋ}_{S}\left(\mathbf{q}\right) and {ℋ}_{R}\left(\mathbf{q}{|}_{R}\right) for R\subseteq S. Our results demonstrate an interesting combination of the representation theory of finite Coxeter groups and their 0-Hecke algebras, including a two-sided duality between the induced and restricted representations.
Classification: 16G30, 05E10
Keywords: Hecke algebra, independent parameters, simply-laced Coxeter system, induction and restriction, duality.
Huang, Jia. Hecke algebras of simply-laced type with independent parameters. Algebraic Combinatorics, Volume 3 (2020) no. 3, pp. 667-691. doi : 10.5802/alco.108. https://alco.centre-mersenne.org/articles/10.5802/alco.108/
|
Classical Mechanics Problem on Rotational Work-Kinetic Energy Theorem: Work Done On A Rotating Wheel - Rohit Gupta | Brilliant
Work Done On A Rotating Wheel
A wheel is rotating at an angular speed of 20 rad/s. It is brought to rest in 4 seconds by applying a constant torque. If the moment of inertia of the wheel about its axis is 0.20 kg m^2, then what is the magnitude of the work done by the torque in the first two seconds?
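A sketch of the work-kinetic-energy bookkeeping (our own solution outline, not part of the original problem statement): the constant torque gives α = −20/4 = −5 rad/s², so after 2 s the speed is 10 rad/s, and the work equals the change in rotational kinetic energy.

```python
I = 0.20                       # moment of inertia, kg m^2
omega0 = 20.0                  # initial angular speed, rad/s
alpha = -omega0 / 4.0          # constant angular acceleration over the 4 s stop
omega2 = omega0 + alpha * 2.0  # speed after the first two seconds: 10 rad/s
# Work-kinetic-energy theorem for rotation: W = (1/2) I (omega_f^2 - omega_i^2)
W = 0.5 * I * (omega2**2 - omega0**2)
print(round(abs(W), 6))  # 30.0 J
```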
|
Metal Axe - Ring of Brodgar
Object(s) Required Bar of Hard Metal x2, Block of Wood
Required By Block of Mirkwood, Block of Wood, Log, Old Trunk, Strange Root, Stump, (Sharp Tool: Ant Meat, Bat Wings, Beast Unborn, Boreworm Meat, Cave Louse Meat, Chasm Conch Meat, Chicken Meat, Eagle Owl Meat, Entrails, Fresh Adder Hide, Fresh Aurochs Hide, Fresh Badger Hide, Fresh Bat Hide, Fresh Bear Hide, Fresh Beaver Hide, Fresh Boarhide, Fresh Boreworm Hide, Fresh Cattle Hide, Fresh Cave Angler Scales, Fresh Caverat Hide, Fresh Goat Hide, Fresh Grey Seal Hide, Fresh Hedgehog Skin, Fresh Horse Hide, Fresh Lynx Hide, Fresh Mammoth Hide, Fresh Moose Hide, Fresh Mouflon Hide, Fresh Otter Hide, Fresh Pigskin, Fresh Red Deer Hide, Fresh Reindeer Hide, Fresh Sheepskin, Fresh Sly Ear of the Fox, Fresh Squirrel Hide, Fresh Troll Hide, Fresh Walrus Hide, Fresh Wildgoat Hide, Fresh Wildhorse Hide, Fresh Wolf Hide, Fresh Wolverine Hide, Golden Eagle Meat, Intestines, Mallard Meat, Mohair, Mole Carcass, Opened Oyster, Oyster Pearl, Ptarmigan Meat, Quail Meat... further results)
Craft > Clothes & Equipment > Tools > Axes > Metal Axe
A Metal Axe is the metal equivalent of the Stone Axe. When felling trees, it is faster and costs less stamina, and it can produce more blocks out of one log than the stone axe: with a Metal Axe, you get ~25% more blocks per log. Different types of logs provide different amounts of blocks.
It can also be used to shear a sheep, skin and butcher an animal or for combat. Identical to the stone axe, it increases the effectiveness of chipping stone, destroying objects (adds 1 damage when destroying) and affects quality of skins, raw meats, intestines, and entrails when skinning and butchering.
Metal Axe Quality =
{\displaystyle {\frac {{\frac {_{q}Metal+_{q}Wood}{2}}*9+{_{q}Anvil*4+_{q}Smithy'sHammer}*3}{16}}}
and is softcapped by
{\displaystyle {\sqrt[{2}]{Strength*Smithing}}}
{\displaystyle Damage=45*{\sqrt {{\sqrt {Strength*{q}Axe}}/10}}}
The quality of blocks of wood you can produce is not capped by axe quality, but skinning, butchering, and splitting logs into branches does take the axe quality into account.
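Reading the weights in the quality formula above as 9 + 4 + 3 = 16, the calculation can be sketched as follows. This is one interpretation of the wiki formulas, not official game code, and the `min`-style softcap is an assumption:

```python
import math

def metal_axe_quality(q_metal, q_wood, q_anvil, q_hammer):
    """Weighted average per the wiki formula: materials weight 9, anvil 4, hammer 3."""
    return ((q_metal + q_wood) / 2 * 9 + q_anvil * 4 + q_hammer * 3) / 16

def softcap(quality, strength, smithing):
    """Assumed softcap: quality limited by sqrt(Strength * Smithing)."""
    return min(quality, math.sqrt(strength * smithing))

def axe_damage(strength, q_axe):
    """Damage per the wiki formula: 45 * sqrt(sqrt(Strength * qAxe) / 10)."""
    return 45 * math.sqrt(math.sqrt(strength * q_axe) / 10)

q = metal_axe_quality(40, 40, 40, 40)  # all-q40 inputs give a q40 axe
print(q)  # 40.0
print(axe_damage(100, 40))
```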
Retrieved from "https://ringofbrodgar.com/w/index.php?title=Metal_Axe&oldid=89935"
|
Pickaxe - Ring of Brodgar
Object(s) Required Bar of Hard Metal x4, Tree Bough x2
Required By Cellar, Ice Block, Petrified Seashell, Strange Crystal
In-game example of a pickaxe
You need to equip a Smithy's Hammer and you need an Anvil to make a pickaxe.
Pickaxe Quality =
{\displaystyle {\frac {{\frac {q_{Metal}+q_{Bough}}{2}}\cdot 9+q_{Anvil}\cdot 4+q_{Smithy'sHammer}\cdot 3}{16}}}
and is softcapped by
{\displaystyle {\sqrt[{3}]{Strength\cdot Smithing\cdot Masonry}}}
Increases Mining speed.
Increases Chip Stone speed.
Reduces stamina drain (needs confirmation).
Allows boulders to be removed from a Cellar so it can be used.
Increases destruction damage: +2 damage to buildings.
Two-handed tool.
Your mining efficiency is the square root of (tool quality × strength).
Example: a q40 pickaxe with 100 strength (√4000 ≈ 63) is better than a q10 pickaxe with 300 strength (√3000 ≈ 55).
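The stated rule can be checked with a small sketch (the numbers are those from the example above):

```python
import math

def mining_efficiency(q_tool, strength):
    # Stated rule: efficiency is the square root of tool quality times strength.
    return math.sqrt(q_tool * strength)

good_tool = mining_efficiency(40, 100)   # ~63.2
raw_power = mining_efficiency(10, 300)   # ~54.8
print(good_tool > raw_power)  # True: the q40 pickaxe wins despite less strength
```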
Strawberry Turpentine (2021-08-01) >"The "Pick Axe" can now be used as a weapon, counting as an edged weapon to use axe-style attacks like "Chop"."
Kicksleddin' on Thin Ice (2019-09-04) >"You can now, using a pick-axe, chop away ice, and get an ice block in the process. You can shave snow from ice blocks."
Retrieved from "https://ringofbrodgar.com/w/index.php?title=Pickaxe&oldid=92735"
|
A-4 Alg Expressions and Ops - Maple Help
Section A-4: Algebraic Expressions and Operations
The essential concepts of the calculus are few; the manipulative skills needed to successfully put these concepts into use are many. The following examples demonstrate ways to implement some routine operations on algebraic expressions.
The ExpandSteps command in the Student Basics package can provide an annotated stepwise version of some of the algebraic manipulations involved in expanding or simplifying polynomial and rational functions. If the package has been loaded, the functionality of the ExpandSteps command is available via the Context Panel. The use of this command is illustrated in several of the following examples.
{\left(3 x-2\right)}^{2} \left({x}^{3}+2 x\right)
{a}_{n}{x}^{n}+⋯+{a}_{1}x+{a}_{0}
{x}^{3}
{x}^{4}
{\left(2 x-a\right)}^{4} {\left(x+a\right)}^{2}
{\left(3 x-1\right)}^{5}
Express the difference
\frac{{x}^{2}-x}{{x}^{3}-x}-\frac{{x}^{2}-1}{{x}^{2}+x}
as a single fraction.
Obtain the real root of
{\left(-8\right)}^{1/3}
3 {x}^{2}-5 {y}^{2}+7 x+4 y-9
{\left(x+y\right)}^{5}
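Outside Maple, the first expansion and the rational-function exercise above can be checked with a general-purpose CAS. Here is a sketch using Python's SymPy, an alternative tool that is not part of the Maple study guide:

```python
from sympy import symbols, expand, together, simplify, real_root

x = symbols('x')

# Expand (3x - 2)^2 * (x^3 + 2x)
p = expand((3*x - 2)**2 * (x**3 + 2*x))
print(p)  # 9*x**5 - 12*x**4 + 22*x**3 - 24*x**2 + 8*x

# Combine the difference of rational functions into a single fraction
f = (x**2 - x)/(x**3 - x) - (x**2 - 1)/(x**2 + x)
print(simplify(together(f)))

# Real cube root of -8 (avoiding the principal complex root)
print(real_root(-8, 3))  # -2
```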
|
Estimate the measure of each angle below.
The corner of your book is a right angle. Do you think this angle is bigger or smaller than the angle at the corner of your book? About how many of these angles could you put in the corner of your book?
This angle is approximately
40°
Do you think this angle is bigger than the angle at the corner of your book? About how much bigger is it?
100°
A square in an angle means that the angle is
90°
How does this angle compare to the angles in parts (a), (b), and (c)? If it is bigger, how much bigger? If it is smaller, how much smaller?
|
Electroweak and Strong Interactions - Wina's Examenwiki
4 Old exams
This course is taught at the VUB, but is frequently followed by students from Leuven (and from Ghent). It is a successor to Quantum Field Theory, also given by prof. Sevrin and in the same style. You have to write a short paper (5-10 pages) about a topic related to the course, which is due in the exam period.
The exam is completely analogous to the exam of QFT. Make sure to give a complete (i.e. say everything you know) answer to his questions in the oral part.
It consisted of two questions. The maximum time allowed was 4.5 hours, but in the end the exam ran for 5-5.5 hours.
Please note that in 2021, the format of the exam was changed due to the corona pandemic; it was oral and lasted only one hour.
Same questions as below. New questions:
What is the experimental status of CP violation in the leptonic sector?
Why does the negative pion decay almost exclusively to a muon and a muon antineutrino?
To introduce the BEH mechanism in extensions of the standard model, an additional scalar has to be added. Which degrees of freedom does one obtain?
All four oral defences were the same as those of the 28th of May.
Explain hierarchy problem
- Is it also a problem for the electron mass?
Explain why the W decays to hadrons 2/3 of the time and to leptons only 1/3 of the time.
What experimental observation led to the introduction of non-diagonal Yukawa couplings?
Explain how LEP determined that there are only 3 generations of (light) neutrinos.
Explain running coupling constants
- What’s the difference for abelian vs non-abelian theory?
Discuss the figure about decay channels of Higgs Boson:
- At low Higgs-mass: why is the branching ratio to bb- about 10 times bigger than tau+tau-?
- How can a Higgs-particle decay into 2 photons?
- How is it possible that the Higgs decays into WW starting from 80 GeV, even though the combined WW mass is bigger than that of the Higgs?
- Explain the dip of ZZ at ±160 GeV.
And many more little questions I’ve already forgotten…
Credit to Students of UGent Media:Examenvragen-Electroweak-and-strong-interaction.pdf
Give a qualitative explanation for relative strength of the branching ratio to bb with respect to
{\displaystyle \tau \tau }
for low BEH masses
Give an explanation with Feynman diagrams of leading order decay of a scalar boson to a gluon-gluon pair and a photon-photon pair.
Define the concepts of partial decay width, total decay width and branching ratio of a decay process and explain the dip in the ZZ producing process branching ratio
Discuss the production of a WW and ZZ boson pair from the decay of scalar boson
and explain when these modes are possible and give the respective decay process. Justify why the WW branching ratio is significantly higher than the ZZ branching ratio.
Explain what the Hierarchy problem is for the Higgs Boson.
What are the running couplings? And why does one coupling increase with increasing energy while the other becomes smaller?
The exam had similar questions as the exams of the previous years. Professor Sevrin considered this as the easiest exam he made during this examination period, compared to the exams in Gent and Brussels and the other exam in Leuven.
Media:EWS-2019-06-28.pdf
Consider a Higgs triplet
{\displaystyle \Phi (x)=\left({\begin{matrix}\phi _{1}(x)\\\phi _{2}(x)\\\phi _{3}(x)\end{matrix}}\right),}
where the lagrangian density in the scalar sector is given by
{\displaystyle {\mathcal {L}}=\partial _{\mu }\Phi ^{\dagger }\partial ^{\mu }\Phi +m^{2}\Phi ^{\dagger }\Phi -\lambda (\Phi ^{\dagger }\Phi )^{2},}
{\displaystyle \Phi }
transforms under the three dimensional representation of
{\displaystyle SU(2)}
{\displaystyle SU(2)\times U(1)}
gauge transformation given by
{\displaystyle \Phi (x)\to \Phi '(x)=\exp(ig\,t_{a}\alpha ^{a}(x))\exp(ig'q_{S}\beta (x))\Phi (x).}
{\displaystyle t_{a}}
obey the usual
{\displaystyle SU(2)}
commutation relations and are given by
{\displaystyle t_{1}={\frac {1}{\sqrt {2}}}\left({\begin{matrix}0&1&0\\1&0&1\\0&1&0\end{matrix}}\right),\qquad t_{2}={\frac {i}{\sqrt {2}}}\left({\begin{matrix}0&-1&0\\1&0&-1\\0&1&0\end{matrix}}\right),\qquad t_{3}=\left({\begin{matrix}1&0&0\\0&0&0\\0&0&-1\end{matrix}}\right).}
1) Discuss the groundstate(s) of the system.
2) Take as groundstate for the system
{\displaystyle \Phi ={\frac {1}{\sqrt {2}}}\left({\begin{matrix}0\\0\\v\end{matrix}}\right),}
{\displaystyle v={\sqrt {m^{2}/\lambda }}}
. Determine the weak hypercharge
{\displaystyle q_{S}}
such that the photon remains massless.
3) Give the Higgs multiplet in the unitary gauge. An extra question on the oral part of the exam was to determine the electric charge of the
{\displaystyle \phi _{1}}
part of the Higgs multiplet.
4) Determine the masses of the W and the Z gauge bosons.
5) Can you write down Yukawa couplings in this representation?
We consider now the theory as it is used in the standard model and as we built it during the lectures. The coupling of the Brout-Englert-Higgs particle
{\displaystyle \sigma }
to the leptons and the quarks
{\displaystyle \psi }
{\displaystyle {\mathcal {L}}_{HF}=-{\frac {1}{v}}m_{\psi }\sigma {\bar {\psi }}\psi ,}
and the coupling to the W- and Z-bosons is given by
{\displaystyle {\mathcal {L}}_{HVB}={\frac {vg^{2}}{2}}\sigma W_{\mu }^{\dagger }W^{\mu }+{\frac {g^{2}}{4}}\sigma ^{2}W_{\mu }^{\dagger }W^{\mu }+{\frac {vg^{2}}{4\cos ^{2}\theta _{W}}}\sigma Z_{\mu }Z^{\mu }+{\frac {g^{2}}{8\cos ^{2}\theta _{W}}}\sigma ^{2}Z_{\mu }Z^{\mu }.}
The fermion masses are given by
{\displaystyle m_{e}=0.51\cdot 10^{-3}\quad m_{\mu }=0.11\quad m_{\tau }=1.8\quad m_{u}=1-5\cdot 10^{-3}\quad m_{d}=3-9\cdot 10^{-3}}
{\displaystyle m_{c}=1.15-1.35\quad m_{s}=0.075-0.17\quad m_{t}=170\quad m_{b}=4.0-4.4}
A plot of the branching ratios of the decay of the Higgs particle was given, similar to that on the last page of the exam of 2011.
1) Compare the Higgs-fermion coupling strength to the electromagnetic coupling strength of the fermions. Use
{\displaystyle g\sin \theta _{W}=e=g'\cos \theta _{W}}
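For question 1), a rough numerical comparison can be sketched using the masses listed above. The vacuum expectation value v ≈ 246 GeV and the fine-structure constant α ≈ 1/137 are supplied here as assumed inputs; they are not given in the exam text:

```python
import math

v = 246.0                # Higgs vev in GeV (assumed standard value)
alpha_em = 1 / 137.0     # fine-structure constant (assumed)
e = math.sqrt(4 * math.pi * alpha_em)   # electromagnetic coupling, ~0.303

# Higgs-fermion coupling m_psi / v for a few fermions (masses in GeV/c^2, from the list above)
masses = {"e": 0.51e-3, "mu": 0.11, "tau": 1.8, "b": 4.2, "t": 170.0}
for name, m in masses.items():
    g_hff = m / v
    print(f"{name}: g_Hff = {g_hff:.2e}, g_Hff / e = {g_hff / e:.2e}")
```

Except for the top quark, the Higgs coupling to fermions comes out far weaker than the electromagnetic one, which is why the light-fermion decay channels are so suppressed.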
2) Consider the
{\displaystyle b{\overline {b}}}
and
{\displaystyle \tau \tau }
decay channels and explain.
3) Consider the
{\displaystyle gg}
and
{\displaystyle \gamma \gamma }
decay channels and explain. It is not necessary to explain the detailed form of the curves but a few Feynman diagrams would be nice.
4) Explain the form of the curves of the decay to WW and ZZ. The masses are approximately
{\displaystyle m_{W}=80\,GeV/c^{2}}
{\displaystyle m_{Z}=90\,GeV/c^{2}}
. An extra question on the oral part was to explain why the branching ratio of the Higgs decaying to the W boson is bigger than the branching ratio of the decay to the Z boson, since the latter is heavier and one could expect the opposite situation.
Very similar questions as in the previous years. An extra question on the oral part was to explain why the branching ratio of the Higgs decaying to the W boson is bigger than the branching ratio of the decay to the Z boson, since the latter is heavier and one could expect the opposite situation.
Exactly the same exam as that of 11 June 2012.
Media:EWS11juni.pdf
Application of the Higgs mechanism.
Given are two scalar fields
{\displaystyle \Phi _{1}}
and
{\displaystyle \Phi _{2}}
which transform as doublets under
{\displaystyle SU(2)_{L}}
and both carry weak hypercharge
{\displaystyle Q_{Y}=1/2}
. We use the notation:
{\displaystyle \Phi _{1}=\left(\phi _{11}\quad \phi _{12}\right)^{T}\quad \Phi _{2}=\left(\phi _{21}\quad \phi _{22}\right)^{T}}
where all the
{\displaystyle \phi }
components are complex numbers.
Suppose the vacuum expectation values of the scalar fields are given by
{\displaystyle \Phi _{1}=1/{\sqrt {2}}\left(0\quad v_{1}\right)^{T}}
{\displaystyle \Phi _{2}=1/{\sqrt {2}}\left(0\quad v_{2}\right)^{T}}
with v1 and v2 real. What are the masses of the W, the Z, and the photon?
Suppose the Lagrangian density for the scalars is given by
{\displaystyle {\mathcal {L}}=\partial _{\mu }\Phi _{1}^{\dagger }\partial ^{\mu }\Phi _{1}+\partial _{\mu }\Phi _{2}^{\dagger }\partial ^{\mu }\Phi _{2}+m_{1}^{2}\Phi _{1}^{\dagger }\Phi _{1}+m_{2}^{2}\Phi _{2}^{\dagger }\Phi _{2}-\lambda _{1}(\Phi _{1}^{\dagger }\Phi _{1})^{2}-\lambda _{2}(\Phi _{2}^{\dagger }\Phi _{2})^{2}-\lambda _{3}(\Phi _{1}^{\dagger }\Phi _{1})(\Phi _{2}^{\dagger }\Phi _{2})}
where m1 and m2 are real and λ_j > 0 for j in {1,2,3}. Determine v1 and v2 as functions of the various coupling constants.
Discuss the physical degrees of freedom of the scalar fields (i.e. give a concise analysis of the unitary gauge). Show, for example, that the physical electrically charged field is given by
{\displaystyle -\sin \beta \phi _{11}+\cos \beta \phi _{21}}
with
{\displaystyle \tan \beta =v_{2}/v_{1}}
In search of the Brout-Englert-Higgs particle. (A large plot of the various branching ratios was given: [1], fig. 3 on p. 8; the figure on the exam was slightly different, but amounted to the same thing.)
The coupling of the Higgs (
{\displaystyle \sigma }
) to leptons or quarks (
{\displaystyle \psi }
) has the form
{\displaystyle {\mathcal {L}}_{HF}=-(1/v)m_{\psi }\sigma {\bar {\psi }}\psi }
and that of the Higgs to the W and Z fields
{\displaystyle {\mathcal {L}}_{HVB}=(vg^{2}/2)\sigma W_{\mu }^{\dagger }W^{\mu }+(g^{2}/4)\sigma ^{2}W_{\mu }^{\dagger }W^{\mu }+(vg^{2}/4\cos ^{2}\theta _{W})\sigma Z_{\mu }Z^{\mu }+(g^{2}/8\cos ^{2}\theta _{W})\sigma ^{2}Z_{\mu }Z^{\mu }}
Here the masses are given by (all masses are expressed in GeV/c^2 and come from the PDG)
{\displaystyle m_{e}=0.51\cdot 10^{-3}\quad m_{\mu }=0.11\quad m_{\tau }=1.8\quad m_{u}=1-5\cdot 10^{-3}\quad m_{d}=3-9\cdot 10^{-3}}
{\displaystyle m_{c}=1.15-1.35\quad m_{s}=0.075-0.17\quad m_{t}=170\quad m_{b}=4.0-4.4}
On the next page you find the branching ratios for the Higgs decay as a function of the Higgs mass (
{\displaystyle m_{H}=v{\sqrt {2\lambda }}}
). Give a qualitative discussion of the bb, ττ, γγ, gg, WW and ZZ channels.
Retrieved from "https://examens.wina.be/examens/index.php?title=Electroweak_and_Strong_Interactions&oldid=20462"
|
Hydrogen Bonds | Brilliant Math & Science Wiki
Jordan Calmes, Aditya Virani, and Christopher Williams
Hydrogen bonds are an unusually strong form of dipole-dipole interaction that can occur when hydrogen is bonded to a highly electronegative atom. (The most common are
\ce{N}
,
\ce{O}
, and
\ce{F}
, though
\ce{S}
and
\ce{Cl}
can also form hydrogen bonds.) Hydrogen bonds always form between hydrogen and an electronegative atom.[1] Compounds with hydrogen bonds have higher-than-predicted boiling points, which helps explain some of the unique behaviors of water, including why liquid water exists on Earth at all! Hydrogen bonding is also important for understanding the properties and behavior of many organic compounds, including alcohols, carboxylic acids, and amines.
How Hydrogen Bonds Form
Water: How Hydrogen Bonds Make Life on Earth Possible
As illustrated in the figure below, the highly electronegative atom pulls the electron cloud from the hydrogen atom, which exposes a part of the hydrogen atom's nucleus, giving the atom a partial positive charge. The highly electronegative atoms on nearby molecules can then interact with the hydrogen atom, and since the hydrogen atoms are so small, the dipoles can approach each other more closely than most dipole-dipole interactions.
Hydrogen bonds are the strongest of the intermolecular forces. This explains the high boiling point of water compared to similar group 16 compounds. Normally, the boiling point increases with higher molecular mass, due to increasing van der Waals forces.
\ce{HI}, \ce{HF}
\ce{HI}, \ce{HBr}
\ce{HF}, \ce{HI}
\ce{HF}, \ce{HBr}
\ce{HF}, \ce{HF}
\ce{HCl}, \ce{HCl}
\ce{HI}, \ce{HI}
If hydrogen bonding is NOT considered, which of the following compounds would you expect to have the highest boiling point:
\ce{HF}, ~~\ce{HCl}, ~~\ce{HBr}, ~~\ce{or} ~~\ce{HI}?
If hydrogen bonding IS considered, which of the following compounds would you expect to have the highest boiling point:
\ce{HF}, ~~\ce{HCl}, ~~\ce{HBr}, ~~\text{or} ~~\ce{HI}?
Which of the following molecules can form hydrogen bonds with its own kind?
For hydrogen bonds to occur, a hydrogen atom directly attached to an
\text{N}
,
\text{O}
, or
\text{F}
atom must interact with another
\text{N}
,
\text{O}
, or
\text{F}
atom. According to the above structural illustration of the given compounds, only (B) ammonia contains a hydrogen atom directly attached to an
\text{N}
atom.
_\square
In the above example, why doesn't acetone form hydrogen bonds with itself? Couldn't the hydrogen from a
\ce{CH_3}
group attach to the oxygen of another acetone molecule?
In acetone, all the hydrogens are attached to carbon, rather than to oxygen, nitrogen, or another highly electronegative element. Since carbon and hydrogen have roughly the same electronegativity, the electrons distribute more evenly in the bond than they do in a hydrogen-oxygen bond, for example. Hydrogen bonds require that hydrogen be bonded directly to an electronegative element, leaving a large partial positive charge on the hydrogen that can attract another electronegative species.
_\square
\ce{F_2}
\ce{(CH_3)_{3}N}
\ce{CH_3CO_2H}
\ce{C_3H_8}
Which of these compounds can form hydrogen bonds with itself?
Plots showing the boiling points of covalent hydrides for groups 16 and 17.[2]
Boiling point tends to increase as the molecular weight of the compound increases. Notice the general trend for the halides of group 16 and 17, ignoring the first point in each line. The boiling point increases in a near-linear fashion moving from the lightest atom in a group to the heaviest. The compounds that form hydrogen bonds (
\ce{HF} \text{ and } \ce{H_2O}
) do not follow the pattern. They have significantly higher boiling points, despite having the smallest molecular weights of the hydrides in their respective groups. This anomaly is a direct result of hydrogen bonding. If the trend line for the rest of group 16 were extrapolated, the boiling point for water would be around
-130^\circ\text{C}
instead of the actual
100^\circ\text{C}.
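The extrapolation can be sketched numerically. The group 16 boiling points below are approximate literature values supplied here for illustration (H2S ≈ −60 °C, H2Se ≈ −41 °C, H2Te ≈ −2 °C), and the exact extrapolated value depends on whether one fits against period number or molar mass; either way the prediction lands far below the actual 100 °C:

```python
# Least-squares line through the group 16 hydrides below water,
# using the period number as x, then extrapolated back to water's period.
periods = [3, 4, 5]            # H2S, H2Se, H2Te
bps = [-60.0, -41.0, -2.0]     # approximate boiling points in Celsius

n = len(periods)
mean_x = sum(periods) / n
mean_y = sum(bps) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(periods, bps)) \
        / sum((x - mean_x) ** 2 for x in periods)
intercept = mean_y - slope * mean_x

predicted_water = slope * 2 + intercept   # water sits in period 2
print(round(predicted_water, 1))          # about -92: far below the actual 100 C
```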
Each molecule of water has two hydrogen atoms and two pairs of unshared electrons, allowing it to participate in four hydrogen bonds. The tetrahedral electron configuration of the water molecules causes the hydrogen bonds on each molecule to be spatially separated from the other bonds on that molecule, leading to a very loose packing of molecules in the typical crystal structure of ice. As a result, the density of solid ice is lower than the density of liquid water.
When a lake freezes over in the winter, ice layers form on the top of the lake first because the ice is less dense. The liquid water underneath the surface maintains a stable temperature as the top layers undergo this phase change, allowing the plant, animal, insect, and microbial life inside the lake to survive the winter. If ice were denser than water and sunk to the bottom instead, the lake would eventually freeze solid, guaranteeing a short life for the pond critters and a gruesome sight by spring.
A 3-D model of hydrogen bonding between water molecules. Hydrogen bonds are usually noted by dotted lines.[3]
Baaden, M. Dynamic water hydrogen-bond network using HyperBalls. Retrieved from https://www.youtube.com/watch?v=pBK22hN7cIM
Jkwchui, . Boiling Points Chalcogen Halogen. Retrieved from https://commons.wikimedia.org/wiki/File:Boiling-points_Chalcogen-Halogen.svg
Porto, A. 3D_model_hydrogen_bonds_in_water-es.jpg. Retrieved from https://commons.wikimedia.org/wiki/File:3D_model_hydrogen_bonds_in_water-es.jpg
Cite as: Hydrogen Bonds. Brilliant.org. Retrieved from https://brilliant.org/wiki/hydrogen-bonds/
|