Well, the title of the question pretty much says everything. To be more precise: I'd like to know (1) what font would be visually suitable to use with Linux Libertine and/or Linux Biolinum as the main text font (I haven't decided which of the two I'll use) and (2) how to enable it (as a math font) in XeTeX. A good guide on what factors to consider when mixing fonts is Thierry Bouche's article Diversity in math fonts in TUGboat, Volume 19 (1998), No. 2. The most important point is to use the same font for text and math letters (as well as for letter-like symbols such as \partial or \infty). This has drawbacks, as some letters will suffer from spacing problems, but compared to the other option (using totally different math letters), it is really the lesser evil. Of course, if this is not acceptable to you, then you should first choose the math font and then use the same font for text, but that limits your font choices dramatically. Once you've assigned the text font to the math letters, the remaining choices you face are for the geometric symbols, the delimiters, and the big operators (\sum, \int, \bigcup, etc.). The main considerations are color (how bold the symbols are) and the shape of the symbols (mainly the shape of the sum and integral signs, especially if you use them often). Compared to Libertine, XITS and Asana are a bit too bold (especially the sum symbol), Latin Modern is a bit too light (especially +, \otimes, etc.), and Cambria has a huge \sum, huge \otimes and \oplus, as well as a very bold \bigcup. Which font will look better therefore depends on the type of math you're typing, and none will be perfect. Here's a sample to show the results of this font mixing with Libertine. Notice the spacing problems around the f in f(r_k) and \Sigma_c f(r), due to the fact that it's a text font we're using for math. I haven't set all letter-like symbols to come from Libertine (only \infty), so there's still room for improvement. 
(Note also the missing parenthesis in one of the formulas with Latin Modern Math.)

\documentclass{article}
\usepackage{amsmath}
\usepackage{fontspec}
\usepackage{unicode-math}
\setmainfont{Linux Libertine O}
\newcommand{\setlibertinemath}{%
  % use Libertine for the letters
  \setmathfont[range=\mathit/{latin,Latin,num,Greek,greek}]{Linux Libertine O Italic}
  \setmathfont[range=\mathup/{latin,Latin,num,Greek,greek}]{Linux Libertine O}
  \setmathfont[range=\mathbfup/{latin,Latin,num,Greek,greek}]{Linux Libertine O Bold}
  %\setmathfont[range={"2202}]{Linux Libertine O}% "02202 = \partial % doesn't work
  \setmathfont[range={"221E}]{Linux Libertine O}% "0221E = \infty
  % etc. (list should be completed depending on needs)
}
\newcommand{\sample}{%
  When computing the sums $\sum_{k=0}^{+\infty}{f(r_k)}$ of $f$ the integral
  representation of $K_0(x)$ may be used.
  \[ \eta(r)\frac{\partial f}{\partial r} + 2\Sigma_c f(r)
     = \sum_{k=0}^{+\infty}{K_0\mathopen{}\left(\frac{\lvert r - r_k \rvert}{L}\right)}
     = \int_{0}^{\infty}{e^{-\left(z+\frac{r^2}{4L^2\pi}\right)} \frac{dz}{2z}}. \]
  We then use
  \[ \bigcup_{\lambda \in \Lambda}{U_\lambda} \cap \bigsqcup_{\delta > 0}{G_\delta}
     = \bigcap_{i \in I}{\mathbf{A}_i} \quad \text{so that} \quad
     u \otimes w \oplus v = 0. \]
}
\pagestyle{empty}
\begin{document}
\section{Libertine + Latin Modern}
\setmathfont{Latin Modern Math}\setlibertinemath
\sample
\section{Libertine + Cambria}
\setmathfont{Cambria Math}\setlibertinemath
\sample
\section{Libertine + XITS}
\setmathfont{XITS Math}\setlibertinemath
\sample
\section{Libertine + Asana}
\setmathfont{Asana Math}\setlibertinemath
\sample
\end{document}

A late answer (and a shameless plug), but I have been working on a math companion to the Linux Libertine fonts, which got more attention recently (thanks to support from TUG) and is starting to take shape. The character coverage is still a bit limited and there may be bugs in the existing glyphs, but testing and bug reports are appreciated. 
I’m currently forking the whole Linux Libertine and Linux Biolinum family (and have changed the name, to avoid confusion and to follow the reserved-name clause in the license), as I need a way to quickly fix bugs I see in the text fonts (there are quite a few of them), but the idea is to merge this back into the original fonts once the dust settles. Here is a sample (the full document is here): In the MWE below, a pangram is set first in text italics (with Linux Libertine) and then in four different math alphabets -- Asana Math, Cambria Math (not entirely free, but quite cheap), XITS Math, and Latin Modern Math. The exercise is repeated with Linux Biolinum and the same four math alphabets, this time in sans-serif mode. I'd say that the overall closest, though by no means perfect, fit is between Linux Libertine and Asana Math. Should you, however, wish to give considerable weight to the shapes of the letters f, p, and q, XITS Math may be your best choice. Or, should you care much about the compatibility of the shape of the letter w (but not much about the letters f and g), Cambria Math may be best for you. In any case, Latin Modern is not visually compatible with Linux Libertine. 
% !TEX program = xelatex
\documentclass[letterpaper]{article}
\usepackage[no-math]{fontspec}
\setmainfont{Linux Libertine O}
\setsansfont{Linux Biolinum O}
\usepackage{unicode-math}
\setmathfont[version=asana]{Asana Math}
\setmathfont[version=cambria]{Cambria Math}
\setmathfont[version=xits]{XITS Math}
\setmathfont[version=lm]{Latin Modern Math}
\newcommand{\qbf}{The\ quick\ brown\ fox\ jumps\ over\ the\ lazy\ dog.}
\begin{document}
\noindent\emph{\qbf} --- Linux Libertine O, italics\newline
\mathversion{asana} $\qbf$ --- Asana Math\newline
\mathversion{cambria} $\qbf$ --- Cambria Math\newline
\mathversion{xits} $\qbf$ --- XITS Math\newline
\mathversion{lm} $\qbf$ --- Latin Modern Math
\bigskip
\noindent\textsf{\qbf} --- \textsf{Linux Biolinum O}\newline
\mathversion{asana} $\mathsf{\qbf}$ --- Asana Math-sf\newline
\mathversion{cambria} $\mathsf{\qbf}$ --- Cambria Math-sf\newline
\mathversion{xits} $\mathsf{\qbf}$ --- XITS Math-sf\newline
\mathversion{lm} $\mathsf{\qbf}$ --- Latin Modern Math-sf
\end{document}

Incidentally, the weird "tz" character in the first line (text italics, Linux Libertine) seems to be the product of an unfortunate interaction between Linux Libertine and XeLaTeX. The problem does not occur if one either uses a font other than Linux Libertine or runs the MWE under LuaLaTeX. Addendum: To learn more about the various math symbols that the various math fonts provide, please refer to Will Robertson's write-up, Every symbol defined by unicode-math. You'll find out quickly that just about all Unicode math fonts provide all of the "standard" math symbols. However, the math font packages tend to differ considerably in the sets of specialized symbols, e.g., arrows, that they provide. Obviously, the font with the most math symbols (AFAICT, XITS Math at present) need not be the one that's best for you, simply because you may have no need for most of the symbols that the most feature-laden package provides. 
Finally, then, suppose that you end up deciding to use the XITS Math font because it comes with all the special symbols you need (and the other math font packages do not). In that case, you should probably be willing to use the XITS text font, rather than Linux Libertine O, because XITS harmonizes very well (by design!) with XITS Math. You can consider newtxmath with the libertine option. The math fonts used are not OpenType, so no unicode-math, but the result is pretty good. The order of packages is important.

\documentclass{article}
\usepackage{amsmath}
\usepackage[libertine]{newtxmath}
\usepackage[no-math]{fontspec}
\usepackage{mleftright}
\setmainfont{Linux Libertine O}
\pagestyle{empty}
\begin{document}
When computing the sums $\sum_{k=0}^{+\infty}{f(r_k)}$ of $f$ the integral
representation of $K_0(x)$ may be used.
\[ \eta(r)\frac{\partial f}{\partial r} + 2\Sigma_c f(r)
   = \sum_{k=0}^{+\infty}{K_0 \mleft(\frac{\lvert r - r_k \rvert}{L}\mright)}
   = \int_{0}^{\infty}{e^{-\mleft(z+\frac{r^2}{4L^2\pi}\mright)} \frac{dz}{2z}}. \]
We then use
\[ \bigcup_{\lambda \in \Lambda}{U_\lambda} \cap \bigsqcup_{\delta > 0}{G_\delta}
   = \bigcap_{i \in I}{\mathbf{A}_i} \quad \text{so that} \quad
   u \otimes w \oplus v = 0. \]
\end{document}

You could also wait until the Lucida Math OpenType font is published by TUG; I suppose that will happen at the end of this year at the earliest. (Answer copied from here: https://tex.stackexchange.com/a/364502/75284) Libertinus is a fork of Linux Libertine with bug fixes and pretty nice math support (see this example document). It is the perfect match for Linux Libertine because, well, it is Linux Libertine, just forked. And I personally much prefer the upright integral symbol, in keeping with the spirit of the ISO recommendations that only variables should be italic. 
\documentclass[varwidth,border=1mm]{standalone}
\usepackage[
  math-style=ISO,
  bold-style=ISO,
  partial=upright,
  nabla=upright
]{unicode-math}
\setmainfont{Libertinus Serif}
\setsansfont{Libertinus Sans}
\setmathfont{Libertinus Math}
\begin{document}
The formula \(E=mc^2\) is arguably the most famous formula in physics.
In mathematics, it could be \(\mathrm{e}^{\mathrm{i}\uppi}+1=0\).
\(\displaystyle \sum_{k=1}^\infty \frac{1}{k^2} = \frac{\uppi^2}{6}\), and
\(\displaystyle \int\displaylimits_{-\infty}^\infty \exp\left(-\frac{x^2}{2}\right)\,dx = \sqrt{2\uppi}\).
\(\alpha\beta\gamma\delta\epsilon\zeta\eta\theta\iota\kappa\lambda\mu\nu\xi\pi\rho\sigma\tau\upsilon\phi\chi\psi\omega \varepsilon\vartheta\varrho\varsigma\varphi\varkappa\)
\(\upalpha\upbeta\upgamma\updelta\upepsilon\upzeta\upeta\uptheta\upiota\upkappa\uplambda\upmu\upnu\upxi\uppi\uprho\upsigma\uptau\upupsilon\upphi\upchi\uppsi\upomega \upvarepsilon\upvartheta\upvarrho\upvarsigma\upvarphi\upvarkappa\)
\(\Alpha\Beta\Gamma\Delta\Epsilon\Zeta\Eta\Theta\Iota\Kappa\Lambda\Mu\Nu\Xi\Pi\Rho\Sigma\Tau\Upsilon\Phi\Chi\Psi\Omega\)
\(\upAlpha\upBeta\upGamma\upDelta\upEpsilon\upZeta\upEta\upTheta\upIota\upKappa\upLambda\upMu\upNu\upXi\upPi\upRho\upSigma\upTau\upUpsilon\upPhi\upChi\upPsi\upOmega\)
\end{document}

Be aware, though: XeTeX currently (TeX Live 2016) has a bug, which is visible in my screenshot: Why is the fraction off the math axis in XeTeX? LuaLaTeX in TeX Live 2016 has a bug as well: spacing in LuaLaTeX with unicode-math
The Mean Value Theorem, Higher Order Partial Derivatives, and Taylor's Formula Review We will now review some of the recent material regarding the Mean Value Theorem, higher order partial derivatives of functions, and Taylor's formula for functions from $\mathbb{R}^n$ to $\mathbb{R}$. Recall from The Mean Value Theorem for Differentiable Functions from Rn to Rm page that if $S \subseteq \mathbb{R}^n$ is open and $\mathbf{f} : S \to \mathbb{R}^m$ is differentiable on all of $S$, then if $\mathbf{x}, \mathbf{y} \in S$ are such that the line segment joining these points is contained in $S$, i.e., $L(\mathbf{x}, \mathbf{y}) \subset S$, then for every $\mathbf{a} \in \mathbb{R}^m$ there exists a point $\mathbf{z} \in L(\mathbf{x}, \mathbf{y})$ such that: \begin{align} \quad \mathbf{a} \cdot [ \mathbf{f}(\mathbf{y}) - \mathbf{f}(\mathbf{x}) ] = \mathbf{a} \cdot [ \mathbf{f}'(\mathbf{z}) (\mathbf{y} - \mathbf{x})] \end{align} Note that the notation "$L(\mathbf{x}, \mathbf{y})$", denoting the line segment joining the points $\mathbf{x}$ and $\mathbf{y}$, can be written as: \begin{align} \quad L(\mathbf{x}, \mathbf{y}) = \{ (1 - t)\mathbf{x} + t\mathbf{y} : t \in [0, 1] \} \end{align} We then looked at some corollaries to the Mean Value Theorem on the Corollaries to the Mean Value Theorem for Differentiable Functions from Rn to Rm page. 
We proved the less general Mean Value Theorem for multivariable real-valued functions, which states that if $S \subseteq \mathbb{R}^n$ is open, $f : S \to \mathbb{R}$ is differentiable on $S$, and $\mathbf{x}, \mathbf{y} \in S$ are such that $L(\mathbf{x}, \mathbf{y}) \subset S$, then there exists a point $\mathbf{z} \in L(\mathbf{x}, \mathbf{y})$ such that: \begin{align} \quad f(\mathbf{y}) - f(\mathbf{x}) = \nabla f(\mathbf{z}) \cdot (\mathbf{y} - \mathbf{x}) \end{align} We also proved that if $\mathbf{f} : S \to \mathbb{R}^m$ is differentiable on $S$ then for any two points $\mathbf{x}, \mathbf{y} \in S$ with $L(\mathbf{x}, \mathbf{y}) \subset S$ there exists a point $\mathbf{z} \in L(\mathbf{x}, \mathbf{y})$ such that: \begin{align} \quad \| \mathbf{f}(\mathbf{y}) - \mathbf{f}(\mathbf{x}) \| \leq M \| \mathbf{y} - \mathbf{x} \| \end{align} where $\displaystyle{M = \sum_{k=1}^{m} \| \nabla f_k(\mathbf{z}) \|}$. On the A Sufficient Condition for the Differentiability of Functions from Rn to Rm page we looked at a very important result which gives a sufficient condition for the differentiability of a function from $\mathbb{R}^n$ to $\mathbb{R}^m$. We saw that if $\mathbf{f} : S \to \mathbb{R}^m$ and $\mathbf{c} \in S$, then if one of the partial derivatives of $\mathbf{f}$ at $\mathbf{c}$ exists, i.e., one of $D_1 \mathbf{f}(\mathbf{c})$, $D_2 \mathbf{f}(\mathbf{c})$, …, $D_n \mathbf{f} (\mathbf{c})$ exists, and the remaining $n - 1$ of these partial derivatives are continuous on an open ball centered at $\mathbf{c}$, then $\mathbf{f}$ is differentiable at $\mathbf{c}$. On the Higher Order Partial Derivatives of Functions from Rn to Rm page we defined higher order partial derivatives of functions from $\mathbb{R}^n$ to $\mathbb{R}^m$. Essentially, they are partial derivatives of partial derivatives, and so on. We looked at a couple of examples of computing these higher order partial derivatives. 
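As a quick numerical sanity check of the real-valued Mean Value Theorem stated above (added here for illustration; the function $f(u, v) = u^2 v + v$ and the endpoints are arbitrary sample choices), a simple bisection locates the intermediate point $\mathbf{z}$ on the segment:

```python
# Find z on the segment L(a, b) with f(b) - f(a) = grad f(z) . (b - a),
# for the sample function f(u, v) = u^2 v + v on R^2.
def f(p):
    u, v = p
    return u * u * v + v

def grad(p):
    u, v = p
    return (2 * u * v, u * u + 1)   # gradient of f

a, b = (0.0, 0.0), (1.0, 2.0)
d = (b[0] - a[0], b[1] - a[1])      # direction b - a
target = f(b) - f(a)                # the MVT requires grad f(z) . d = target

def g(t):
    # directional derivative at the point a + t(b - a), minus the target
    z = (a[0] + t * d[0], a[1] + t * d[1])
    gz = grad(z)
    return gz[0] * d[0] + gz[1] * d[1] - target

# g(0) = -2 < 0 < 4 = g(1), so a root exists in [0, 1]; bisect for it
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) <= 0:
        lo = mid
    else:
        hi = mid
t_star = (lo + hi) / 2
z = (a[0] + t_star * d[0], a[1] + t_star * d[1])
```

Here $g(t) = 6t^2 - 2$, so the root is $t^* = 1/\sqrt{3}$, giving a concrete $\mathbf{z}$ on $L(\mathbf{a}, \mathbf{b})$ satisfying the theorem.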
However, on the Equality and Inequality of Mixed Partial Derivatives of Functions from Rn to Rm page we noted that the mixed partial derivatives of a function from $\mathbb{R}^n$ to $\mathbb{R}^m$ need not equal each other. In particular, we looked at a prime counterexample: the function $f : \mathbb{R}^2 \to \mathbb{R}$ defined for all $(x, y) \in \mathbb{R}^2$ by: \begin{align} \quad \quad f(x, y) = \begin{cases} xy \dfrac{x^2 - y^2}{x^2 + y^2} & \mathrm{if} \: (x, y) \neq (0, 0)\\ 0 & \mathrm{if} \: (x, y) = (0, 0) \end{cases} \end{align} We proved that $D_{1, 2} f(0, 0) = 1 \neq -1 = D_{2, 1} f(0, 0)$, which shows that mixed partial derivatives need not be equal. Nevertheless, we stated an important result which gives a sufficient condition for when second order mixed partial derivatives are equal. We saw that if $\mathbf{f} : S \to \mathbb{R}^m$ and $\mathbf{c} \in S$, then if the partial derivatives $D_j \mathbf{f}$ and $D_k \mathbf{f}$ exist on an open ball $B(\mathbf{c}, r)$ centered at $\mathbf{c}$, and if $D_j \mathbf{f}$ and $D_k \mathbf{f}$ are differentiable at $\mathbf{c}$, then: \begin{align} \quad D_{j, k} \mathbf{f}(\mathbf{c}) = D_{k, j} \mathbf{f} (\mathbf{c}) \end{align} On the Taylor's Formula for Functions from Rn to R page we said that if $f : S \to \mathbb{R}$, $\mathbf{x} \in S$, and all of the second order partial derivatives of $f$ at $\mathbf{x}$ exist, i.e., $D_{i, j} f(\mathbf{x})$ exists for $i, j \in \{ 1, 2, ..., n \}$, then the Second Order Directional Derivative of $f$ at $\mathbf{x}$ in the Direction of $\mathbf{t} \in \mathbb{R}^n$ is defined as: \begin{align} \quad f''(\mathbf{x}, \mathbf{t}) = \sum_{i=1}^{n} \sum_{j=1}^{n} D_{i, j} f(\mathbf{x}) t_j t_i \end{align} We similarly defined the Third Order Directional Derivative of $f$ at $\mathbf{x}$ in the Direction of $\mathbf{t}$ (which exists provided that all of the third order partial derivatives of $f$ exist) as: \begin{align} \quad 
f'''(\mathbf{x}, \mathbf{t}) = \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{k=1}^{n} D_{i, j, k} f(\mathbf{x}) t_k t_j t_i \end{align} An analogous definition was also made for even higher order directional derivatives of $f$ at $\mathbf{x}$ in the direction of $\mathbf{t}$. We then stated a very nice result known as Taylor's formula, which states that if $S \subseteq \mathbb{R}^n$ is open, $f : S \to \mathbb{R}$, and all of the partial derivatives of $f$ of order less than $m$ are differentiable on $S$, then for all $\mathbf{a}, \mathbf{b} \in S$ with $L(\mathbf{a}, \mathbf{b}) \subset S$ there exists a point $\mathbf{z} \in L(\mathbf{a}, \mathbf{b})$ such that: \begin{align} \quad f(\mathbf{b}) - f(\mathbf{a}) = \sum_{k=1}^{m-1} \frac{1}{k!} f^{(k)} (\mathbf{a}, \mathbf{b} - \mathbf{a}) + \frac{1}{m!} f^{(m)} (\mathbf{z}, \mathbf{b}-\mathbf{a}) \end{align}
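The mixed-partials counterexample reviewed above can be verified symbolically with SymPy (an added check, using the convention $D_{1,2} f = D_1(D_2 f)$; along the axes one finds $f_x(0, y) = -y$ and $f_y(x, 0) = x$, and the mixed partials at the origin are taken as limits of difference quotients):

```python
# Symbolic verification that D_{1,2} f(0,0) = 1 != -1 = D_{2,1} f(0,0)
# for f(x,y) = xy (x^2 - y^2) / (x^2 + y^2), f(0,0) = 0.
import sympy as sp

x, y, h = sp.symbols('x y h', real=True)
f = x * y * (x**2 - y**2) / (x**2 + y**2)

# First-order partials away from the origin
fx = sp.simplify(sp.diff(f, x))
fy = sp.simplify(sp.diff(f, y))

# Restrict to the axes: f_x(0, y) and f_y(x, 0)
fx_on_y_axis = sp.simplify(fx.subs(x, 0))   # expected: -y
fy_on_x_axis = sp.simplify(fy.subs(y, 0))   # expected:  x

# Mixed partials at the origin via difference quotients
# (f_x(0,0) = f_y(0,0) = 0 directly from the definition of f)
D21 = sp.limit((fx_on_y_axis.subs(y, h) - 0) / h, h, 0)  # D_2 D_1 f(0,0)
D12 = sp.limit((fy_on_x_axis.subs(x, h) - 0) / h, h, 0)  # D_1 D_2 f(0,0)
```

The computation confirms the claim: `D12` evaluates to $1$ and `D21` to $-1$.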
Eigenvalues of the Adjoint of a Linear Map In the following proposition we will see that the eigenvalues of $T^*$ are the complex conjugates of the eigenvalues of $T$. Proposition 1: Let $V$ be a finite-dimensional nonzero inner product space and let $T \in \mathcal L (V)$. Then $\lambda$ is an eigenvalue of $T$ if and only if $\overline{\lambda}$ is an eigenvalue of $T^*$. Proof: We will prove the contrapositive of each direction of Proposition 1. $\Rightarrow$ First suppose that $\lambda$ is not an eigenvalue of $T$. Then $(T - \lambda I)$ is injective, which (since $V$ is finite-dimensional) implies that $(T - \lambda I)$ is invertible, and so there exists a linear map $S \in \mathcal L (V)$ such that $S(T - \lambda I) = I = (T - \lambda I)S$. If we take the adjoint of both sides of the equation above, then we have that: \begin{align} \quad S(T - \lambda I) = I = (T - \lambda I)S \\ \quad (S(T - \lambda I))^* = I^* = ((T - \lambda I)S)^* \\ \quad (T - \lambda I)^* S^* = I = S^* (T - \lambda I)^* \end{align} Thus $(T - \lambda I)^*$ is invertible, which implies that $(T - \lambda I)^* = (T^* - \overline{\lambda}I)$ is injective, so $\overline{\lambda}$ is not an eigenvalue of $T^*$. $\Leftarrow$ Now suppose that $\overline{\lambda}$ is not an eigenvalue of $T^*$. Then $(T^* - \overline{\lambda}I) = (T - \lambda I)^*$ is injective, which implies that $(T - \lambda I)^*$ is invertible, and so there exists a linear map $S^* \in \mathcal L (V)$ such that $S^* (T - \lambda I)^* = I = (T - \lambda I)^* S^*$. If we take the adjoint of both sides of the equation above, then we have that: \begin{align} \quad S^* (T - \lambda I)^* = I = (T - \lambda I)^* S^* \\ \quad (S^* (T - \lambda I)^*)^* = I^* = ((T - \lambda I)^* S^*)^* \\ \quad (T - \lambda I)^{**} S^{**} = I = S^{**} (T - \lambda I)^{**} \\ \quad (T - \lambda I) S = I = S (T - \lambda I) \end{align} Therefore $(T - \lambda I)$ is invertible, which implies that $(T - \lambda I)$ is injective, and so $\lambda$ is not an eigenvalue of $T$. $\blacksquare$
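In coordinates, Proposition 1 can be checked numerically: with respect to an orthonormal basis the matrix of $T^*$ is the conjugate transpose of the matrix of $T$, so the spectrum of $T^*$ should be the conjugate of the spectrum of $T$. (An added illustration; the matrix below is a random sample.)

```python
# Check: eigenvalues of the conjugate transpose A^H are the conjugates
# of the eigenvalues of A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # matrix of T

ev_T = np.sort_complex(np.linalg.eigvals(A))                # spectrum of T
ev_Tstar = np.sort_complex(np.linalg.eigvals(A.conj().T))   # spectrum of T*

# The two spectra agree up to complex conjugation
match = np.allclose(np.sort_complex(ev_T.conj()), ev_Tstar)
```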
Geometric Series of Real Numbers Examples 1 Recall from the Geometric Series of Real Numbers page the following test for convergence/divergence of a geometric series: We will now look at some examples of applying the test to geometric series. Example 1 Determine whether the series $\displaystyle{\sum_{n=1}^{\infty} 2 \cdot 3^n}$ converges or diverges, and if it converges, find the sum of the series. Notice that $x = 3$ in this example, and so $| x | = | 3 | \geq 1$. Therefore $\displaystyle{\sum_{n=1}^{\infty} 2 \cdot 3^n}$ diverges. Example 2 Determine whether the series $\displaystyle{\sum_{n=1}^{\infty} \frac{2e^2}{\pi^n}}$ converges or diverges, and if it converges, find the sum of the series. In this example, notice that $a = 2e^2$ and $\displaystyle{x = \frac{1}{\pi}}$. Since $\pi > 1$, we have $0 < \frac{1}{\pi} < 1$ and hence $\biggr \lvert \frac{1}{\pi} \biggr \rvert < 1$. So the series $\displaystyle{\sum_{n=1}^{\infty} \frac{2e^2}{\pi^n}}$ converges, and:(1) Example 3 Determine whether the series $\displaystyle{\sum_{n=1}^{\infty} \frac{e 5^n}{e^{2n}}}$ converges or diverges, and if it converges, find the sum of the series. In this example, $a = e$ and $x = \frac{5}{e^2}$. Notice that $e^2 > 5$ and so $\displaystyle{\biggr \lvert \frac{5}{e^2} \biggr \rvert < 1}$. Therefore the series $\displaystyle{\sum_{n=1}^{\infty} \frac{e 5^n}{e^{2n}}}$ converges and:(2) Example 4 Let $S = \{ x_1, x_2, ... \}$ be the collection of all natural numbers that do not contain the digit $0$, i.e. $6 \in S$ but $60 \not \in S$. Prove that $\sum_{k=1}^{\infty} \frac{1}{x_k} < 90$. Partition $S$ into an infinite collection of sets $\{ S_1, S_2, ..., S_j, ... \}$ where $S_j \subset S$ is the set of all natural numbers that do not contain the digit $0$ and have $j$ digits total. If $j = 1$, notice that:(3) Furthermore, we see that:(4) If $j = 2$, then notice that:(5) There are $90$ numbers between $10$ and $99$ (inclusive), and $9$ of these numbers contain a $0$. 
So there are $81$ valid reciprocals to sum and:(6) If $j = 3$, then notice that:(7) There are $900$ numbers between $100$ and $999$ (inclusive), of which $9^3 = 729$ contain no $0$. So there are $729$ valid reciprocals to sum and:(8) In general, for $j \in \mathbb{N}$ we see that:(9) So we see that the sum of the reciprocals of the numbers in $S$ is bounded, since:(10)
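The block estimates in Example 4 can be brute-forced for small $j$ (an added check): there are $9^j$ numbers with $j$ digits and no digit $0$, each at least $10^{j-1}$, so each block contributes at most $9^j / 10^{j-1}$, and summing the geometric bound gives $\sum_{j \geq 1} 9^j/10^{j-1} = 9/(1 - 9/10) = 90$.

```python
# Verify the per-block counts and bounds for j = 1..5 digits.
def has_no_zero(n):
    return '0' not in str(n)

total = 0.0
for j in range(1, 6):
    block = [n for n in range(10 ** (j - 1), 10 ** j) if has_no_zero(n)]
    assert len(block) == 9 ** j                    # 9 choices per digit
    block_sum = sum(1.0 / n for n in block)
    assert block_sum <= 9 ** j / 10 ** (j - 1)     # each term <= 1/10^(j-1)
    total += block_sum

# Partial sums stay under the geometric bound of 90
assert total < 90
```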
Question Repeat Exercise 22.48, but with the loop lying flat on the ground with its current circulating counterclockwise (when viewed from above) in a location where the Earth’s field is north, but at an angle $45.0^\circ$ below the horizontal and with a strength of $6.00 \times 10^{-5} \textrm{ T}$. Exercise 22.48 (a) A 200-turn circular loop of radius 50.0 cm is vertical, with its axis on an east-west line. A current of 100 A circulates clockwise in the loop when viewed from the east. The Earth’s field here is due north, parallel to the ground, with a strength of $3.00 \times 10^{-5} \textrm{ T}$. What are the direction and magnitude of the torque on the loop? (b) Does this device have any practical applications as a motor? Final Answer $0.666 \textrm{ N}\cdot\textrm{m}$ This is a very small torque, so it's unlikely to have a practical application. This torque would be able to lift only a 0.27 kg weight placed on the loop. Video Transcript This is College Physics Answers with Shaun Dychko. We have a side view here of this loop lying flat on the ground. The magnetic field due to the Earth is pointing north-ish; by that I mean it's at an angle of 45 degrees to the ground, pointing into the ground at 45 degrees. And so this angle theta, which is between the perpendicular to the loop and the magnetic field lines, is also 45 degrees, because this is 90 and we have 90 minus 45 here, which gives 45 there. And the top-down view of this loop shows the current going counterclockwise. And so the component of the magnetic field which is parallel to the plane of the loop is what is going to create a torque. And so it's going to be the magnetic field directed upwards here like this, and this is north. And at this position at the top of the loop, we can point our thumb to the left in the direction of the current. And our fingers point to the top of the page, in the direction of the magnetic field, and the palm is facing into the page. 
And so there's going to be a force into the page there, which is going to cause a torque such that it goes into the page at the top. And on the bottom, the thumb points to the right and the fingers still point upwards in the direction of the magnetic field, and the palm is facing out of the page, and so we have dots to represent that force coming out of the page there. And so we have the torque on this loop. And the size of the torque we get from the number of loops times the current times the area times the magnetic field strength times sine of the angle between the perpendicular to the loop and the magnetic field. And the area is going to be pi r squared because it's a circle. So we have 200 times a hundred amps times pi times 50 times ten to the minus two meters radius squared, times six times ten to the negative five tesla, times sine of 45 degrees, giving us 0.666 newton meters. And this is a very small torque, and so it's unlikely to have any practical application as a motor.
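The arithmetic in the transcript can be double-checked directly (values taken from the problem statement; the variable names are mine):

```python
# tau = N I A B sin(theta), with A = pi r^2 for the circular loop
import math

N = 200                  # number of turns
I = 100.0                # current, A
r = 50.0e-2              # radius, m
B = 6.00e-5              # Earth's field strength, T
theta = math.radians(45.0)

tau = N * I * math.pi * r ** 2 * B * math.sin(theta)
# tau comes out to about 0.666 N*m, matching the final answer
```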
\(\quad\)In the following few sections we will often drop the subscript \(I\) for convenience, but remember that all the computation involves interaction-picture fields. Def: \(\quad\)Such a term (e.g., \(a_{\boldsymbol{p}}^{\dagger}a_{\boldsymbol{q}}^{\dagger}a_{\boldsymbol{k}}a_{\boldsymbol{l}}\)) is said to be in \(\text{normal order}\): all creation operators stand to the left of all annihilation operators, so its vacuum expectation value vanishes. Denote the operator that re-orders the creation and annihilation operators as \(:\cdot:\) or, more clearly, \(N(\cdot)\), i.e. $$N(a_{\boldsymbol{p}}a_{\boldsymbol{k}}^{\dagger}a_{\boldsymbol{l}})\equiv a_{\boldsymbol{k}}^{\dagger}a_{\boldsymbol{p}}a_{\boldsymbol{l}}.$$ \(\quad\)Consider again the case of two fields, \(\displaystyle\langle0|T\{\phi(x)\phi(y)\}|0\rangle\). In order to generalize to the case of more than two fields, we rewrite it by decomposing \(\phi\) into positive- and negative-frequency parts: $$\phi(x)=\phi^{+}(x)+\phi^{-}(x),$$ where \begin{align*}\phi^{+}(x)=\int\dfrac{d^{3}p}{(2\pi)^{3}}\dfrac{1}{\sqrt{2E_{\boldsymbol{p}}}}a_{\boldsymbol{p}}e^{-ip\cdot x},\quad\phi^{-}(x)=\int\dfrac{d^{3}p}{(2\pi)^{3}}\dfrac{1}{\sqrt{2E_{\boldsymbol{p}}}}a_{\boldsymbol{p}}^{\dagger}e^{ip\cdot x}.\end{align*} Note: \(\quad\)The superscripts \(\pm\) refer to the sign of the frequency in the exponent, not to creation or annihilation: \(\phi^{+}\) contains the annihilation operator \(a\), while \(\phi^{-}\) contains the creation operator \(a^{\dagger}\). \(\quad\)Consider the case \(x^{0}>y^{0}\). 
The time-ordered product of two fields is then \begin{align*}T\{\phi(x)\phi(y)\}&=\phi^{+}(x)\phi^{+}(y)+\underbrace{\phi^{+}(x)\phi^{-}(y)}_{\text{needs re-ordering}}+\phi^{-}(x)\phi^{+}(y)+\phi^{-}(x)\phi^{-}(y)\\&=\phi^{+}(x)\phi^{+}(y)+\phi^{-}(y)\phi^{+}(x)+\phi^{-}(x)\phi^{+}(y)+\phi^{-}(x)\phi^{-}(y)+\left[\phi^{+}(x),\phi^{-}(y)\right].\end{align*} \(\quad\)The \(\text{contraction}\) of two fields is defined as: (Since the plugin does not support the \(\text{simplewick}\) package of \(\TeX\), I have to denote contractions as \(\langle\cdot\rangle\), which is different from what is widely used in formal textbooks; I hope you do not mistake it for a quantum state.) $$\langle\phi(x)\phi(y)\rangle\equiv\begin{cases}\left[\phi^{+}(x),\phi^{-}(y)\right],\quad\text{for }x^{0}>y^{0};\\ \left[\phi^{+}(y),\phi^{-}(x)\right],\quad\text{for }y^{0}>x^{0}.\end{cases}$$ \(\quad\)So the connection between the time-ordered and normal-ordered products is extremely simple, at least for two fields: $$T\{\phi(x)\phi(y)\}=N\{\phi(x)\phi(y)+\langle\phi(x)\phi(y)\rangle\}.$$ More generally, we have Theorem (Wick): $$T\{\phi(x_{1})\cdots\phi(x_{n})\}=N\{\phi(x_{1})\cdots\phi(x_{n})+\text{all possible contractions}\}.$$ Proof: \(\quad\)Skipped…
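As a concrete instance of the theorem beyond two fields (the standard four-field case found in QFT textbooks, written here in the same \(\langle\cdot\rangle\) notation for contractions, with \(\phi_{i}\equiv\phi(x_{i})\)):

```latex
T\{\phi_{1}\phi_{2}\phi_{3}\phi_{4}\}
 = N\{\phi_{1}\phi_{2}\phi_{3}\phi_{4}\}
 + \langle\phi_{1}\phi_{2}\rangle N\{\phi_{3}\phi_{4}\}
 + \langle\phi_{1}\phi_{3}\rangle N\{\phi_{2}\phi_{4}\}
 + \langle\phi_{1}\phi_{4}\rangle N\{\phi_{2}\phi_{3}\}
 + \langle\phi_{2}\phi_{3}\rangle N\{\phi_{1}\phi_{4}\}
 + \langle\phi_{2}\phi_{4}\rangle N\{\phi_{1}\phi_{3}\}
 + \langle\phi_{3}\phi_{4}\rangle N\{\phi_{1}\phi_{2}\}
 + \langle\phi_{1}\phi_{2}\rangle\langle\phi_{3}\phi_{4}\rangle
 + \langle\phi_{1}\phi_{3}\rangle\langle\phi_{2}\phi_{4}\rangle
 + \langle\phi_{1}\phi_{4}\rangle\langle\phi_{2}\phi_{3}\rangle .
```

Taking the vacuum expectation value kills every term containing a normal-ordered factor, leaving only the three fully contracted terms.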
Mazur's Theorem Theorem 1 (Mazur's Theorem): Let $X$ be a normed linear space and let $K$ be a convex subset of $X$. Then $K$ is norm closed if and only if $K$ is weakly closed. Mazur's Theorem says that the concepts of norm closed and weakly closed are the same for convex subsets of a normed linear space $X$. Since we know that the weak and norm topologies on a normed linear space $X$ coincide if and only if $X$ is finite-dimensional, we see that Mazur's theorem is only really interesting when discussing infinite-dimensional normed linear spaces. Proof: $\Rightarrow$ Clearly the norm-closed $\emptyset$ is also weakly closed. So let $K$ be a nonempty, norm-closed convex subset of $X$ and let $x_0 \in X \setminus K$. Since $X$ is a normed linear space, by one of the corollaries on The Separation Theorems page there exists a bounded linear functional $f \in X^*$ such that: \begin{align} f(x_0) < c = \inf_{k \in K} f(k) \end{align} Consider the set $N = \{ x \in X : f(x) < c \}$. Note that $x_0 \in N$. Furthermore, $N \subset X \setminus K$, and $N$ is weakly open, since $N = f^{-1}((-\infty, c))$ is the preimage of an open set under the weakly continuous map $f$. Hence $x_0$ is in the weak interior of $X \setminus K$. So $X \setminus K$ is weakly open, and thus $K$ is weakly closed. $\Leftarrow$ Let $K$ be weakly closed. Since the weak topology on $X$ is weaker than the norm topology on $X$, we automatically have that $K$ is norm closed. $\blacksquare$ Corollary 1: Let $X$ be a normed linear space. If $K \subseteq X$ is convex then $\text{weak-closure} (K) = \text{norm-closure}(K)$. 
Proof: Since $\text{weak-closure}(K)$ is weakly closed, we have by Mazur's theorem that $\text{weak-closure}(K)$ is norm closed, and thus: \begin{align} \quad \text{norm-closure} (K) \subseteq \text{weak-closure}(K) \end{align} Similarly, since $\text{norm-closure}(K)$ is norm closed (and convex, being the closure of a convex set), we have by Mazur's theorem that $\text{norm-closure}(K)$ is weakly closed, and thus: \begin{align} \quad \text{norm-closure}(K) \supseteq \text{weak-closure}(K) \end{align} Hence $\text{weak-closure}(K) = \text{norm-closure}(K)$. $\blacksquare$ Corollary 2: Let $X$ be a normed linear space. If $K$ is a norm closed convex subset of $X$ and $(x_n)$ is a sequence in $K$ that weakly converges to $x \in X$, then $x \in K$. Proof: By Mazur's Theorem, since $K$ is a convex subset of $X$ that is norm closed, it is also weakly closed. So $K$ is equal to the weak closure of $K$. If $(x_n)$ is a sequence in $K$ that weakly converges to $x \in X$, then since $K = \bar{K}_{\mathrm{weak}}$ we have that $x \in K$. $\blacksquare$
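A standard illustration in \(\ell^2\) of why convexity matters (a well-known example, added here for context): the orthonormal vectors \(e_n\) converge weakly to \(0\) while keeping norm \(1\), so the unit sphere (a norm-closed, non-convex set) fails to be weakly closed; but averaging, i.e. passing to the convex hull, recovers norm convergence:

```latex
e_n \rightharpoonup 0 \quad\text{while}\quad \|e_n\|_{\ell^2} = 1,
\qquad
\sigma_n = \frac{1}{n}\sum_{k=1}^{n} e_k \in \operatorname{conv}\{e_k : k \in \mathbb{N}\},
\qquad
\|\sigma_n\|_{\ell^2} = \frac{1}{\sqrt{n}} \to 0 .
```

So \(0\) lies in the weak closure of the sphere but not in its norm closure, while for the convex hull the weak limit \(0\) is also a norm limit, exactly as Corollary 1 predicts.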
Dirichlet's Test for Convergence of Complicated Series of Real Numbers Recall from the Dirichlet's Test for Convergence of Series of Real Numbers page that if $(a_n)_{n=1}^{\infty}$ and $(b_n)_{n=1}^{\infty}$ are two sequences of real numbers, where $(A_n)_{n=1}^{\infty}$ denotes the sequence of partial sums of $(a_n)_{n=1}^{\infty}$, and if: $(A_n)_{n=1}^{\infty}$ is bounded; $(b_n)_{n=1}^{\infty}$ is a decreasing sequence such that $\displaystyle{\lim_{n \to \infty} b_n = 0}$; then we can conclude that $\displaystyle{\sum_{n=1}^{\infty} a_nb_n}$ converges. We will now look at a complicated example of applying Dirichlet's test for convergence. Consider the following series:(1) Suppose that we want to find all $x$ for which the series above converges. Notice that $\sin x = 0$ if $x = m\pi$ for some $m \in \mathbb{Z}$, and in that case $\frac{\sin nx}{n} = 0$ for every $n$, so trivially all of the terms in the series equal $0$ and the series converges to $0$. So assume that $x$ is not an integer multiple of $\pi$. Using Dirichlet's test, we will show that the series converges for all such $x \in \mathbb{R}$. Let:(2) Let $(A_n)_{n=1}^{\infty}$ denote the sequence of partial sums of $(a_n)_{n=1}^{\infty}$. Then for each $n \in \mathbb{N}$ we have that:(3) It can be shown that:(4) Notice that the right hand side of the inequality above is independent of $n$ (though it depends on $x$). This shows that the sequence of partial sums $(A_n)_{n=1}^{\infty}$ is bounded. We now establish that the sequence $(b_n)_{n=1}^{\infty}$ is decreasing. Consider the difference $b_{n+1} - b_n$:(5) This shows that $b_{n+1} \leq b_n$ for all $n \in \mathbb{N}$, so $(b_n)_{n=1}^{\infty}$ is a decreasing sequence. We now show that this sequence converges to $0$. Notice that:(6) So, by Dirichlet's test we must have that the series $\displaystyle{\sum_{n=1}^{\infty} a_nb_n = \sum_{n=1}^{\infty} \left ( \left ( 1 + \frac{1}{2} + \frac{1}{3} + ... 
+ \frac{1}{n} \right ) \frac{\sin nx}{n} \right )}$ converges.
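Both hypotheses of Dirichlet's test can be probed numerically for this series (an added check, using the standard bound $\left| \sum_{k=1}^{n} \sin kx \right| \leq 1/|\sin(x/2)|$ at a sample point $x$ that is not a multiple of $\pi$):

```python
# Check: (i) the partial sums A_n of a_n = sin(nx) are bounded by
# 1/|sin(x/2)|, and (ii) b_n = (1 + 1/2 + ... + 1/n)/n decreases to 0.
import math

x = 1.0                          # sample point, not a multiple of pi
bound = 1.0 / abs(math.sin(x / 2))

A = 0.0                          # running partial sum of sin(kx)
H = 0.0                          # running harmonic number H_n
prev_b = float('inf')
for n in range(1, 5001):
    A += math.sin(n * x)
    assert abs(A) <= bound       # (i) bounded partial sums
    H += 1.0 / n
    b = H / n                    # b_n = H_n / n
    assert b <= prev_b           # (ii) decreasing
    prev_b = b

# b_n -> 0 since H_n grows only like log n
assert b < 0.01
```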
The Range and Kernel of Linear Operators Definition: Let $X$ and $Y$ be linear spaces and let $T : X \to Y$ be a linear operator. The Range of $T$, denoted $\mathrm{range}(T)$, is the image of $X$ under $T$, that is, $\mathrm{range} (T) = T(X)$. The Kernel of $T$, denoted $\ker (T)$, is the set of all points $x \in X$ for which $T(x) = 0$, that is, $\ker (T) = \{ x \in X : T(x) = 0 \}$. We will now prove some results regarding the range and kernel of linear operators. Proposition 1: Let $X$ and $Y$ be linear spaces and let $T : X \to Y$ be a linear operator. Then $\ker (T)$ is a subspace of $X$ and $\mathrm{range} (T)$ is a subspace of $Y$. Proposition 2: Let $(X, \| \cdot \|_X)$ and $(Y, \| \cdot \|_Y)$ be normed linear spaces and let $T : X \to Y$ be a linear operator. If $T$ is bounded then $\ker (T)$ is closed in $X$. Proof: Let $(x_n)$ be a sequence in $\ker (T)$ such that $(x_n)$ converges to some $x \in X$. Then $T(x_n) = 0$ for all $n \in \mathbb{N}$. Since $T$ is bounded (hence continuous) and since $(x_n)$ converges to $x$, we have that $(T(x_n))$ converges to $T(x)$. But $(T(x_n)) = (0)$. So $T(x) = 0$ and hence $x \in \ker (T)$. Alternatively, note that if $T$ is bounded then it is continuous. Since $\{ 0 \} \subset Y$ is closed (as it is a singleton set), we have that $T^{-1}(\{ 0 \}) = \ker (T)$ is closed in $X$. $\blacksquare$ For an analogous result regarding the range, we will see later that The Open Mapping Theorem tells us that if $(X, \| \cdot \|_X)$ and $(Y, \| \cdot \|_Y)$ are Banach spaces and $T : X \to Y$ is a bounded linear operator then the range $T(X)$ is closed if and only if $T$ is an open mapping onto its range. Proposition 3: Let $(X, \| \cdot \|_X)$ be a normed linear space and let $f : X \to \mathbb{R}$ be a linear functional on $X$. Then $f$ is a bounded linear functional if and only if $\ker (f)$ is closed. Proof: $\Rightarrow$ Suppose $f$ is bounded. Set $Y = \mathbb{R}$. Then by Proposition 2 we have that $\ker (f)$ is closed in $X$. 
$\Leftarrow$ Suppose that $\ker(f)$ is closed and suppose instead that $f$ is not bounded. Then for each $n \in \mathbb{N}$ there exists an $x_n \in X$ such that \begin{align} \quad |f(x_n)| \geq n \| x_n \|_X. \end{align} From our assumption that $\| f \| = \infty$, where $\| f \| = \sup_{x \in X, \| x \|_X=1} |f(x)|$, we may assume that $\| x_n \|_X = 1$ for each $n \in \mathbb{N}$, and so: \begin{align} \quad |f(x_n)| \geq n \quad (*) \end{align} Observe that if $f = 0$ then $f$ is bounded. So we may assume that $f \neq 0$. Let $x \in X$ be such that $f(x) \neq 0$, and for each $n \in \mathbb{N}$ let: \begin{align} \quad s_n = x - \frac{x_n}{f(x_n)} f(x) \end{align} Note that each $s_n$ is well-defined since $f(x_n) \neq 0$ by $(*)$. Also observe that $f(s_n) = 0$ for all $n \in \mathbb{N}$. So $(s_n)$ is a sequence in $\ker(f)$. Also note that $(s_n)$ converges to $x$ since: \begin{align} \quad \lim_{n \to \infty} \| s_n - x \|_X = \lim_{n \to \infty} \left \| \frac{x_n}{f(x_n)} f(x) \right \|_X = \lim_{n \to \infty} \left ( |f(x)| \frac{\| x_n \|_X}{|f(x_n)|} \right ) \leq \lim_{n \to \infty} \frac{|f(x)|}{n} = 0 \end{align} So $(s_n)$ is a sequence in $\ker(f)$ that converges to $x \in X$. Since $\ker(f)$ is closed, $x \in \ker(f)$. But $f(x) \neq 0$, which is a contradiction. So the assumption that $f$ is not bounded is false, and $f$ is a bounded linear functional on $X$. $\blacksquare$
In the book Bayesian Data Analysis by Gelman et al. (3rd edition, 2014), a hierarchical model (or one-way random-effects ANOVA) is presented in section 5.4 as follows, \begin{equation}\label{eq:lme1} y_{ij} = b_0 + \lambda_i + \varepsilon_{ij}, \end{equation} where the data $y_{ij}$ come from the $i$th measuring entity (e.g., student performance in a school district) collected under the $j$th condition (e.g., a school within the district), $b_0$ is the population mean, $\lambda_i$ is the deviation of the $i$th measuring entity from the population mean, and $\varepsilon_{ij}$ is the measuring error ($i=1, 2, \ldots, k;\ j=1, 2, \ldots, n$). A posterior inference is derived in the book for the effect of each measuring entity $\theta_i=b_0 + \lambda_i$ based on a Gaussian assumption with a known variance $\sigma^2$ for the residuals $\varepsilon_{ij}$ and a prior distribution $G(0, \tau^2)$ for $\lambda_i$. Specifically, the mean and variance for $\theta_i$ are estimated as below: \begin{align} {\rm mean}(\theta_i) &= \frac{\frac{n}{\sigma^2}\bar{y}_{i\cdot}+\frac{1}{\tau^2}b_0}{\frac{n}{\sigma^2}+\frac{1}{\tau^2}} \\[7pt] {\rm Var}(\theta_i) &= \frac{1}{\frac{n}{\sigma^2}+\frac{1}{\tau^2}} \end{align} where $\bar{y}_{i\cdot}=\frac{1}{n}\sum_{j=1}^n y_{ij}$. Even though the variance for the $\lambda_i$ is assumed to be known, I could solve the model as a mixed-effects model through, for example, function lmer() in the R package lme4, and use the estimated variances $\tau^2$ and $\sigma^2$ to obtain the posterior distribution using the formulation above. Is this a reasonable and solid approach? I know that I could directly obtain the posterior distribution through R packages such as brms and rstanarm. 
However, the computational cost is too heavy in my case, which is why I'm trying to see whether the closed form above is a reasonable way to obtain the posterior distribution directly by plugging in the variance estimates $\hat{\tau}^2$ and $\hat{\sigma}^2$ from lmer(), rather than going through the typical Bayesian route.
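For concreteness, the precision-weighted formulas above can be evaluated directly once lmer() has produced plug-in estimates. A minimal sketch; the numeric inputs are invented for illustration, not from any real fit:

```python
# Posterior mean and variance of theta_i in the one-way random-effects model,
# using the closed-form precision-weighted average from the question.
# All numeric values below are hypothetical stand-ins for lmer() output.

def posterior_theta(ybar_i, n, sigma2, tau2, b0):
    """Return (mean, var) of theta_i given the group mean ybar_i."""
    prec_data = n / sigma2      # precision contributed by the n observations
    prec_prior = 1.0 / tau2     # precision of the prior on theta_i
    var = 1.0 / (prec_data + prec_prior)
    mean = (prec_data * ybar_i + prec_prior * b0) * var
    return mean, var

# Example with made-up plug-in estimates
mean_i, var_i = posterior_theta(ybar_i=4.2, n=25, sigma2=2.0, tau2=0.5, b0=3.0)
```

As $\tau^2 \to \infty$ the posterior mean approaches $\bar{y}_{i\cdot}$ (no shrinkage), and as $\tau^2 \to 0$ it collapses to $b_0$, which is the usual sanity check on the formula.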
I guess this is more a math question than a statistics question. I do not understand how the value of a first derivative can be larger than the range of the original function. I must be making a fundamental mistake, but I fail to locate it. The logistic CDF is $$\Lambda(X\beta) \in [0,1]$$ and in this case $$X\beta = \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1^2x_2.$$ Then the marginal effects of $x_1$ and $x_2$ are $\frac{\partial \Lambda(X\beta)}{\partial x_1} = (\beta_1 + 2\beta_3x_1x_2)\lambda(X\beta)$ and $\frac{\partial \Lambda(X\beta)}{\partial x_2} = (\beta_2 + \beta_3x_1^2)\lambda(X\beta)$. The R example below shows that there are many (82 out of 1000) cases where the first derivatives are larger than $\pm 1$:

set.seed(123)
library(data.table)
n = 1000
b1 = .72
b2 = -.86
b3 = 2.91
x1 <- rnorm(n = n, sd = 1)
x2 <- rnorm(n = n, sd = 1)
xb = (b1 * x1) + (b2 * x2) + (b3 * x1^2 * x2)
dx1 = (b1 + 2 * b3 * x1 * x2) * dlogis(xb)
dx2 = (b2 + b3 * x1^2) * dlogis(xb)
res <- data.table(x1, x2, dx1, dx2)
head(res[abs(dx1) >= 1 | abs(dx2) >= 1])

           x1          x2        dx1         dx2
1:  1.5587083 -0.01798024  0.1089530  1.21497093
2:  0.3598138  2.83222602  1.2401926 -0.09011081
3: -0.6250393 -1.64849482  1.3765467  0.05674045
4: -1.6866933  0.22855697 -0.3596850  1.75134047
5:  2.1689560 -0.26083224 -0.3165865  1.57885530
6:  0.6443765 -1.69186241 -1.4007296  0.08673243
Detailed analytic study of the compact pairwise model for SIS epidemic propagation on networks

Institute of Mathematics, Eötvös Loránd University Budapest, Numerical Analysis and Large Networks Research Group, Hungarian Academy of Sciences, Pázmány Péter sétány 1/C, H-1117 Budapest, Hungary

The global behaviour of the compact pairwise approximation of SIS epidemic propagation on networks is studied. It is shown that the system can be reduced to two equations enabling us to carry out a detailed study of the dynamic properties of the solutions. It is proved that transcritical bifurcation occurs in the system at $ \tau = \tau _c = \frac{\gamma n}{\langle n^{2}\rangle-n} $, where $ \tau $ and $ \gamma $ are infection and recovery rates, respectively, $ n $ is the average degree of the network and $ \langle n^{2}\rangle $ is the second moment of the degree distribution. For subcritical values of $ \tau $ the disease-free steady state is stable, while for supercritical values a unique stable endemic equilibrium appears. We also prove that for subcritical values of $ \tau $ the disease-free steady state is globally stable under certain assumptions on the graph that cover a wide class of networks.

Keywords: SIS epidemic, pairwise approximation, transcritical bifurcation, global stability, network process.

Mathematics Subject Classification: Primary: 34C23, 34D23, 92C42.

Citation: Noémi Nagy, Péter L. Simon. Detailed analytic study of the compact pairwise model for SIS epidemic propagation on networks. Discrete & Continuous Dynamical Systems - B, 2020, 25 (1) : 99-115. doi: 10.3934/dcdsb.2019174
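The bifurcation threshold $\tau_c$ in the abstract depends only on the mean degree and the second moment of the degree distribution, so it is straightforward to evaluate for a given network. A minimal sketch; the degree sequences and recovery rate below are made-up illustrative values:

```python
# Epidemic threshold tau_c = gamma * n / (<n^2> - n) from the abstract, where
# n is the average degree and <n^2> the second moment of the degree
# distribution. Degree sequences and gamma are illustrative only.

def tau_critical(degrees, gamma):
    n_mean = sum(degrees) / len(degrees)
    n2_mean = sum(d * d for d in degrees) / len(degrees)
    return gamma * n_mean / (n2_mean - n_mean)

# Regular graph of degree 4: <n^2> = 16, so tau_c = gamma * 4 / 12
tau_c_regular = tau_critical([4] * 100, gamma=1.0)

# A heterogeneous degree sequence with the same size network
tau_c_hetero = tau_critical([1] * 50 + [10] * 50, gamma=1.0)
```

Degree heterogeneity inflates $\langle n^{2}\rangle$ relative to $n$, which lowers the threshold; this is consistent with the role the second moment plays in the formula.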
Under the auspices of the Computational Complexity Foundation (CCF) We describe a new proof of the PCP theorem that is based on a combinatorial amplification lemma. The *unsat value* of a set of constraints $C=\{c_1,\ldots,c_n\}$, denoted unsat(C), is the smallest fraction of unsatisfied constraints, ranging over all possible assignments for the underlying variables. We prove a new combinatorial amplification lemma that doubles the unsat value of a constraint system, with only a linear blowup in the size of the system. Iterative application of this lemma yields a proof for the PCP theorem. The amplification lemma relies on a new notion of "graph powering" that can be applied to systems of constraints. This powering amplifies the unsat value of a constraint system provided that the underlying graph structure is an expander. We also apply the amplification lemma to construct PCPs and locally-testable codes whose length is linear up to a *polylog* factor, and whose correctness can be probabilistically verified by making a *constant* number of queries. Namely, we prove $\mathrm{SAT} \in \mathrm{PCP}_{\frac{1}{2},1}[\log_2(n\cdot \mathrm{poly}\log n),\,O(1)]$. This answers an open question of Ben-Sasson et al. (STOC '04). Let $C=\{c_1,\ldots,c_n\}$ be a set of constraints over a set of variables. The {\em satisfiability gap} of $C$ is the smallest fraction of unsatisfied constraints, ranging over all possible assignments for the variables. We prove a new combinatorial amplification lemma that doubles the satisfiability gap of a constraint system, with only a linear blowup in the size of the system. Iterative application of this lemma yields a self-contained (combinatorial) proof for the PCP theorem. The amplification lemma relies on a new notion of "graph powering" that can be applied to systems of constraints. This powering amplifies the satisfiability gap of a constraint system provided that the underlying graph structure is an expander.
We also apply the amplification lemma to construct PCPs and locally-testable codes whose length is {\em quasi-linear}, and whose correctness can be probabilistically verified by making a {\em constant} number of queries. Namely, we prove $\mathrm{SAT} \in \mathrm{PCP}_{\frac{1}{2},1}[\log(n\cdot \mathrm{poly}\log n), O(1)]$. This answers an open question of Ben-Sasson et al. (STOC '04). The gap amplification lemma of Dinur (ECCC TR05-046) states that the satisfiability gap of every $d$-regular constraint expander graph $G$ (with self-loops) can be amplified by graph powering, as long as the satisfiability gap of $G$ is not too large. We show that the last requirement is necessary. Namely, for infinitely many $d$ and every $t$, there exists an integer $n$ and a $d$-regular constraint expander $G$ on $n$ vertices over alphabet $\{0, 1\}$ such that sat-gap$(G) > 1/2 - o(d)$, but sat-gap$(G^t) < 1/2$. This note is based on the original version of Irit Dinur's paper (ECCC TR05-046). It contains two suggestions concerning the product construction. First, instead of using paths of a fixed length $t$, one can use paths with varying lengths in order to simplify some calculations. Second, one can view Proposition 2.4 as guaranteeing some sort of pairwise independence, and use the second-moment method instead of explicitly bounding the overcount (as in Lemma 5.3). An important extension of the proof of the PCP theorem by Irit Dinur (J. ACM 54(3), also ECCC TR05-046) is a gap amplification theorem for Assignment Testers. Specifically, this theorem states that the rejection probability of an Assignment Tester can be amplified by a constant factor, at the expense of increasing the output size of the Assignment Tester by a constant factor. We point out a gap in the proof of this theorem, and show that this gap can be filled.
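The unsat value (satisfiability gap) defined above can be computed by brute force on toy instances, which makes the definition concrete. A minimal sketch; the tiny constraint system is invented for illustration:

```python
from itertools import product

# Brute-force unsat(C): the minimum fraction of violated constraints over all
# assignments. A constraint is a predicate over named boolean variables. The
# toy system below (an odd cycle of inequality constraints) is illustrative:
# no 2-coloring satisfies all three edges of a triangle.

def unsat(variables, constraints):
    best = 1.0
    for values in product([0, 1], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        violated = sum(1 for c in constraints if not c(assignment))
        best = min(best, violated / len(constraints))
    return best

def neq(u, v):
    return lambda a: a[u] != a[v]

triangle = [neq("x", "y"), neq("y", "z"), neq("z", "x")]
gap = unsat(["x", "y", "z"], triangle)  # one edge must fail, so gap = 1/3
```

Amplification in the paper's sense would roughly double such a gap while only growing the system linearly; the exhaustive search here is of course exponential and serves only to illustrate the quantity being amplified.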
Under which condition is the largest eigenvalue of a positive semi-definite matrix strictly larger than the largest of the matrix's diagonal entries? I doubt that there's a nice necessary and sufficient condition, but a sufficient condition is that at least one off-diagonal entry in the row with the largest diagonal entry is nonzero. EDIT: Let your matrix be $A = (a_{ij})$. Suppose the largest diagonal entry is $a_{ii}$ and $a_{ij} \ne 0$ for some $j \ne i$. Consider the $2 \times 2$ submatrix consisting of the $i$'th and $j$'th rows and columns. This is also positive semidefinite; if $p(\lambda)$ is its characteristic polynomial, then $p(a_{ii}) = -a_{ij}^2 < 0$, while of course $p(\lambda) \to +\infty$ as $\lambda \to +\infty$, so there must be a root $\mu$ of $p$ greater than $a_{ii}$. This translates to there being a nonzero vector $u$ having only its $i$'th and $j$'th entries nonzero such that $\langle A u, u \rangle = \mu \langle u, u \rangle$, and then the largest eigenvalue of $A$ is at least $\mu$. Hint: $$\lambda_{max} = \max_{|v|=1} \langle Av, v\rangle$$
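The $2 \times 2$ submatrix argument can be checked numerically with the closed form for the eigenvalues of a symmetric $2 \times 2$ matrix. A small sketch; the matrix entries are arbitrary illustrative values:

```python
import math

# For a symmetric 2x2 PSD matrix [[a, b], [b, d]], the largest eigenvalue is
# (a + d)/2 + sqrt(((a - d)/2)^2 + b^2). If b != 0 this strictly exceeds
# max(a, d), matching the sufficient condition in the answer above.

def lambda_max_2x2(a, b, d):
    return (a + d) / 2 + math.sqrt(((a - d) / 2) ** 2 + b ** 2)

a, b, d = 3.0, 0.5, 1.0               # PSD: trace > 0 and det = 3 - 0.25 > 0
lam = lambda_max_2x2(a, b, d)          # strictly larger than max diagonal 3

lam_diag = lambda_max_2x2(a, 0.0, d)   # b = 0: eigenvalues are the diagonal
```

With $b = 0$ the square root collapses to $|a - d|/2$ and the largest eigenvalue equals the largest diagonal entry, which is why the nonzero off-diagonal entry is essential to the sufficient condition.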
In what follows $X$ will be a scheme and $G$ a group scheme. In the examples I will take $X=\mathbb{P}^1_k$ and $G=\mathbb{G}_{m}$. When reading about "the torsor..." I found many definitions, and I am not sure which are equivalent and why exactly: (Principal $G$-bundle) A $G$-torsor $P$ over a scheme $X$ is a scheme $P\to X$ with a $G$-action $G\times P\to P$, which is locally trivial in the sense that there is a covering map $Y\to X$ (in the Zariski, etale, fppf, ... topology), s.t. $Y\times _X P\to Y$ is isomorphic to $Y\times G\to Y$. Example: $P=X\times G$, or the Hopf bundle $\mathbb{A}^2_k-\{0\} \to X$. (Sheaf version) A $G$-torsor $P$ is a sheaf on $(Sch/X)$ (with the Zariski, etale, fppf, ... topology) such that there is a covering $\{U_i\to X\}$ with $P|_{U_i}$ a trivial $G_{U_i}$-torsor (for sheaves, a trivial torsor is a sheaf with a simply transitive $G$-action and non-empty global sections). Example: $Hom(-,X)$ for a principal $G$-bundle as in 1) ? (Small sheaf version) Let $\mathcal{G}$ be a sheaf of abelian groups on the topological space $X$. A $\mathcal{G}$-torsor is a sheaf $P$ on $X$ with a $\mathcal{G}$-action and non-empty stalks on which $\mathcal{G}$ acts simply transitively. (Tannakian version) A $G$-torsor is an exact tensor functor $Rep_k(G)\to Bun_X$, where $X$ is a scheme over $k$. (cf. this paper, 4.1.) If $G$ happens to be $GL_n$ then we even have two more notions: $P$ is a vector bundle, i.e. a locally free sheaf of rank $n$; $P$ is a geometric vector bundle, i.e. $P\to X$ is a morphism which is locally of the form $pr_2:\mathbb{A}^n\times U_i\to U_i$ in a compatible way (cf. Hartshorne, ex. 5.19 for more details). Now here is what I think is equivalent: If we consider the etale topology, 1. and 4. are equivalent. I think I read that 1. and 2. are equivalent in some good cases, if some affinity condition is fulfilled. In the $GL_n$ case 3., 5. and 6. are all equivalent. I am not sure if 1., 2. or 4. are connected to 3.
(well, $Hom(-,\mathbb{G}_m)=\mathcal{O}_X^*$ when restricted to $X$ seems to be no coincidence?) Further questions: What do $\check{\mathrm{H}}^1(X,\mathcal{G})$ and $\check{\mathrm{H}}^1(X,G)$ really capture? Is the line bundle corresponding to the Hopf bundle in 1. $\mathcal{O}(-1)$?
I've been struggling with this problem for the last couple of days. The main goal is to use the probabilistic classification output $p(C_k|x)$, from for example a logistic regression, to enhance and make the overall classification performance more robust over a given set of measurements. In other words: I want to make a recursive Bayesian estimation. Now, I'm wondering if there is any chance to do this. Assuming I would just count $p(C_k|x) > 0.5$ as a success, $y=1$ (interpreting it as a Bernoulli distribution), I could easily set up a $Beta(\alpha_0,\beta_0)$ prior and start updating, as the $Beta$ is a conjugate prior, with \begin{align*} \alpha_1 &= \alpha_0 + y, \\ \beta_1 &= \beta_0 + 1 - y. \end{align*} So this works fine and is mathematically correct, but I would really like to incorporate the probabilistic output, as it provides valuable additional information. When directly assuming \begin{equation*} y = p(C_k|x), \end{equation*} and then updating $\alpha$ and $\beta$, I get exactly the kind of behavior I'm trying to achieve. Unfortunately, I guess this violates the assumption of being $Bernoulli$ distributed, and hence the $Beta$ would not be the right conjugate prior, or is it? I could set up a $Normal$ prior and interpret the output of the logistic regression also as a $Normal$ distribution (somehow), but the $Beta$ fits my needs perfectly as it is bounded to $[0,1]$.
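The two update rules being compared can be sketched side by side. The sequence of predicted probabilities below is invented for illustration:

```python
# Compare the hard Beta update (y in {0, 1}) with the "soft" update that
# feeds the classifier probability p(C_k | x) in directly, as described in
# the question. The probabilities in `preds` are made-up illustrative values.

def beta_update(alpha, beta, y):
    return alpha + y, beta + 1 - y

preds = [0.9, 0.8, 0.55, 0.3, 0.95]

a_hard, b_hard = 1.0, 1.0       # uniform Beta(1, 1) prior
a_soft, b_soft = 1.0, 1.0
for p in preds:
    a_hard, b_hard = beta_update(a_hard, b_hard, 1.0 if p > 0.5 else 0.0)
    a_soft, b_soft = beta_update(a_soft, b_soft, p)

mean_hard = a_hard / (a_hard + b_hard)
mean_soft = a_soft / (a_soft + b_soft)
```

Both rules add exactly one pseudo-observation per step, so the posterior concentrates at the same rate; the soft update only shifts where the mass goes, which is why it reproduces the intuitively desirable behavior even though the fractional $y$ is no longer a Bernoulli draw.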
In the colloquial sense of the word "justified," it is not justified. I will describe why it is justified mathematically, under what circumstances, and in what case it is not justified. Let me begin with the simplest of equations $$\tilde{w}=R\bar{w}+\epsilon,\qquad\epsilon\sim\mathcal{N}(0,\sigma^2).$$ Let us assume that this equation is an element of our problem. From a static model, it maps to $w_{t+1}=Rw_t+\epsilon_{t+1}$. Through Donsker's scale invariance, it can then be shown that this, in turn, could be mapped to a continuous time model, but we won't do that here as it won't add any value to the discussion. Using either Ito or Stratonovich calculus, we can correctly solve a wide variety of problems, though both methods of calculus assume that all parameters are known. That is a very important assumption, because the above equation has no solution within Frequentist axioms while remaining consistent with mean-variance finance. To understand why, note that if $R$ is unknown, then Mann and Wald have shown that the maximum likelihood estimator is the ordinary least squares estimator for any $\epsilon$ drawn from any distribution centered on zero with a defined, fixed variance. However, note that if $R<1$, then capital will go to zero. If $R=1$, then the asset is essentially currency and bears no interest, so nobody would "invest" in it, though they may hold money for a variety of other reasons. It must be the case that $R>1$. Expecting a positive return is not a surprising thing. The estimator for $R$ is the least squares estimator in all circumstances, so it is both the best estimator in Fisher's Likelihood-based method of statistics and in Pearson and Neyman's Frequentist method of statistics. So far, so good. The question then becomes, "what is the sampling distribution of $\hat{R}$?" That is the rub. White in 1958 was able to show that the limiting distribution is the Cauchy distribution, which has neither a mean nor, as a consequence, a variance.
Any use of least squares therefore has zero power to find the parameter. In other words, if mean-variance models are true, there cannot exist a test to measure them with positive power, as the distribution-free methods available are Theil's polynomial regression and quantile regression, both of which are median based. So, if you assume normality and all other assumptions are met, the models are mathematically valid, but they are inapplicable to a world where the parameters are not known with certainty. I have proposed a new stochastic calculus that first-order stochastically dominates Ito methods, but it is in peer review right now. I will try and remember to come back and post if it is published. I dropped the assumption in Ito calculus that the parameters were known and proposed both a Bayesian and a Frequentist stochastic calculus. If the parameters are unknown, then it is possible to derive the distribution of returns. That is because if $$r_t=\frac{p_{t+1}q_{t+1}}{p_tq_t}-1,$$ then $r$ is a function of prices and quantities, which are data. The definition of a statistic is any function of data. As such, returns are not data; they are statistics. Their distribution should be derived. As $r$ is the product of the ratio of prices and the ratio of quantities, $r$ decomposes as a sum over the existential states the quantities could finish in, with the price ratio entering each state. Note also that we just ignored dividends and liquidity costs. That is ill-advised, but accounting for them would make this a very long post. The existential states are bankruptcy, where $q_{t+1}=0$; cash-for-stock mergers, where $q_{t+1}=w$; stock-for-stock mergers, where $q^f_{t+1}=kq^j_{t+1}$; and the going concern state, where $q_{t+1}=mq_t$ and $m$ corrects for splits and stock dividends. The remainder concerns the ratio of prices. The distribution of prices can be derived by combining auction theory with the terms and conditions of the contract or asset.
As such, antiques should have a different return than stocks, which should have a different return than bonds. From auction theory, in equilibrium, we know that in a double auction there is no winner's curse so the optimal solution is for each bidder to bid their expectation. The sampling distribution of very many expectations is the normal distribution. For argument purposes here I am ignoring thin markets because the answer comes out the same, but it takes another forty pages of proofs. If we restrict ourselves to the case where $q_t=q_{t+1}$ and impose an equilibrium assumption, which is overly restrictive, but again, it is a length of discourse issue, then prices are the ratio of two normal distributions that are truncated at -100%. If one treats the equilibrium prices as (0,0) by translating them as $p_t-p_t^*,\forall{t}$, then by well-known theorems, the distribution of returns will converge to the Cauchy distribution, though truncated. For the going concern case, the distribution of returns, ignoring dividends and not correcting for liquidity costs, must be $$\left[\frac{\pi}{2}+\tan^{-1}\left(\frac{\mu}{\sigma}\right)\right]^{-1}\frac{\sigma}{\sigma^2+(r_t-\mu)^2}-1.$$ If you want to test it, I would suggest downloading Carnival Cruise Lines daily prices. Construct daily returns correcting for weekends. Construct the Bayesian posterior predictive distribution and you will find it nearly perfectly overlaps the kernel density estimate. The problem with using the normal distribution is that the Cauchy distribution has no first or higher moments that are defined. The consequence of this is that estimates of $\beta$ are completely without power and have perfect asymptotic relative inefficiency when compared to any valid median estimator. With respect to the log-normal distribution, everything that is listed above still holds. Because log-normal models can be derived from normal models, nothing is different. 
For example, you can derive Black-Scholes from the Capital Asset Pricing Model. That is because, while the normal distribution assumes additive errors, they can be converted to multiplicative errors with a model change by noting the relationship between difference equations and models using exponential constructions. This counter-intuitive observation does depend on knowledge of the parameters. When they are unknown, the concave nature of the logarithm will create a different result. See

Curtiss, J. H. (1941) On the Distribution of the Quotient of Two Chance Variables. Annals of Mathematical Statistics, 12, 409-421.

Gurland, J. (1948) Inversion Formulae for the Distribution of Ratios. The Annals of Mathematical Statistics, 19, 228-237.

Harris, D. E. (2017) The Distribution of Returns. Journal of Mathematical Finance, 7, 769-804.

Marsaglia, G. (1965) Ratios of Normal Variables and Ratios of Sums of Uniform Variables. Journal of the American Statistical Association, 60, 193-204.

Marsaglia, G. (2006) Ratios of Normal Variables. Journal of Statistical Software, 16, 1-10.

Mann, H. and Wald, A. (1943) On the Statistical Treatment of Linear Stochastic Difference Equations. Econometrica, 11, 173-200.

White, J. S. (1958) The Limiting Distribution of the Serial Correlation Coefficient in the Explosive Case. The Annals of Mathematical Statistics, 29, 1188-1197.
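The ratio-of-normals argument above is easy to probe by simulation: the ratio of two independent centered normals is standard Cauchy, whose tails are far heavier than any normal's. A minimal sketch; the sample size and tail threshold are arbitrary choices:

```python
import random

# Simulate the ratio of two independent centered normals. The ratio follows a
# standard Cauchy distribution, so a visible fraction of draws lands far out
# in the tails -- behavior no normal model can reproduce.

random.seed(0)
N = 20000
ratios = []
for _ in range(N):
    x = random.gauss(0.0, 1.0)
    y = random.gauss(0.0, 1.0)
    ratios.append(x / y)

# For Cauchy(0, 1): P(|R| > 10) = 1 - (2/pi) * atan(10), roughly 0.063,
# while for a standard normal the same probability is astronomically small.
tail_frac = sum(1 for r in ratios if abs(r) > 10) / N
```

Because such samples have no mean or variance, moment-based summaries of them are unstable by construction, which is the practical content of the "zero power" claim about least squares in this setting.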
The set of $2\times 2$ Symmetric Matrices is a Subspace

Problem 586

Let $V$ be the vector space over $\R$ of all real $2\times 2$ matrices. Let $W$ be the subset of $V$ consisting of all symmetric matrices. (a) Prove that $W$ is a subspace of $V$. (b) Find a basis of $W$. (c) Determine the dimension of $W$.

Proof. Recall that $A$ is symmetric if $A^{\trans}=A$.

(a) Prove that $W$ is a subspace of $V$.

We verify the following subspace criteria: The zero vector in $V$ is the $2\times 2$ zero matrix $O$. It is clear that $O^{\trans}=O$, and hence $O$ is symmetric. Thus $O\in W$ and condition 1 is met. Let $A, B$ be arbitrary elements in $W$. That is, $A$ and $B$ are symmetric matrices. We show that the sum $A+B$ is also symmetric. We have \begin{align*} (A+B)^{\trans}=A^{\trans}+B^{\trans}=A+B. \end{align*} The second equality follows as $A, B$ are symmetric. Hence $A+B$ is symmetric and $A+B\in W$. Condition 2 is met. To check condition 3, let $A\in W$ and $r\in \R$. We have \begin{align*} (rA)^{\trans}=rA^{\trans}=rA, \end{align*} where the second equality follows since $A$ is symmetric. This implies that $rA$ is symmetric, and hence $rA\in W$. So condition 3 is met, and we conclude that $W$ is a subspace of $V$ by the subspace criteria.

(b) Find a basis of $W$.

Let \[A=\begin{bmatrix} a_{11} & a_{12}\\ a_{21}& a_{22} \end{bmatrix}\] be an arbitrary element in the subspace $W$. Then since $A^{\trans}=A$, we have \[\begin{bmatrix} a_{11} & a_{21}\\ a_{12}& a_{22} \end{bmatrix}=\begin{bmatrix} a_{11} & a_{12}\\ a_{21}& a_{22} \end{bmatrix}.\] This implies that $a_{12}=a_{21}$, and hence \begin{align*} A&=\begin{bmatrix} a_{11} & a_{12}\\ a_{12}& a_{22} \end{bmatrix}\\[6pt] &=a_{11}\begin{bmatrix} 1 & 0\\ 0& 0 \end{bmatrix}+a_{12}\begin{bmatrix} 0 & 1\\ 1& 0 \end{bmatrix}+a_{22}\begin{bmatrix} 0 & 0\\ 0& 1 \end{bmatrix}. \end{align*} Let $B=\{v_1, v_2, v_3\}$, where $v_1, v_2, v_3$ are the $2\times 2$ matrices appearing in the above linear combination of $A$.
Note that these matrices are symmetric. Hence we showed that any element in $W$ is a linear combination of matrices in $B$. Thus $B$ is a spanning set for the subspace $W$. We show that $B$ is linearly independent. Suppose that we have \[c_1v_1+c_2v_2+c_3v_3=\begin{bmatrix} 0 & 0\\ 0& 0 \end{bmatrix}.\] Then it follows that \[\begin{bmatrix} c_1 & c_2\\ c_2& c_3 \end{bmatrix}=\begin{bmatrix} 0 & 0\\ 0& 0 \end{bmatrix}.\] Thus $c_1=c_2=c_3=0$ and the set $B$ is linearly independent. As $B$ is a linearly independent spanning set, we conclude that $B$ is a basis for the subspace $W$.

(c) Determine the dimension of $W$.

Recall that the dimension of a subspace is the number of vectors in a basis of the subspace. In part (b), we found that $B=\{v_1, v_2, v_3\}$ is a basis for the subspace $W$. As $B$ consists of three vectors, the dimension of $W$ is $3$.

Related Question (Skew-Symmetric Matrices) A matrix $A$ is called skew-symmetric if $A^{\trans}=-A$. The solution is given in the post Subspace of Skew-Symmetric Matrices and Its Dimension.
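The basis found in part (b) can be verified mechanically: write an arbitrary symmetric matrix in the coordinates of $B$ and check that the coefficients are forced entrywise. A small sketch; the sample entries are arbitrary:

```python
# Verify that v1 = [[1,0],[0,0]], v2 = [[0,1],[1,0]], v3 = [[0,0],[0,1]]
# span the symmetric 2x2 matrices and are linearly independent, so dim W = 3.

v1 = [[1, 0], [0, 0]]
v2 = [[0, 1], [1, 0]]
v3 = [[0, 0], [0, 1]]

def combine(c1, c2, c3):
    """Return the matrix c1*v1 + c2*v2 + c3*v3."""
    return [[c1 * v1[i][j] + c2 * v2[i][j] + c3 * v3[i][j] for j in range(2)]
            for i in range(2)]

# Spanning: any symmetric [[a, b], [b, d]] equals a*v1 + b*v2 + d*v3.
A = [[5, -2], [-2, 7]]                   # arbitrary symmetric example
assert combine(5, -2, 7) == A

# Independence: c1*v1 + c2*v2 + c3*v3 = 0 reads entrywise as c1 = c2 = c3 = 0.
assert combine(0, 0, 0) == [[0, 0], [0, 0]]

dim_W = 3
```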
Direct Sum Theorems

Recall the following definition and lemma regarding the direct sum of a set of subspaces $U_1, U_2, ..., U_m$ of a vector space $V$.

Definition: Let $U_1, U_2, ..., U_m$ all be vector subspaces of the $\mathbb{F}$-vector space $V$. The sum of these subspaces is said to be a direct sum, written $\bigoplus_{i=1}^{m} U_i = U_1 \oplus U_2 \oplus ... \oplus U_m = V$, if each element in $V$ can be uniquely written as $u_1 + u_2 + ... + u_m$ where $u_i \in U_i$ for $i = 1, 2, ..., m$.

Lemma 1: Let $U_1, U_2, ..., U_m$ be vector subspaces of the $\mathbb{F}$-vector space $V$. Then these subspaces form a direct sum $\bigoplus_{i=1}^{m} U_i = V$ if and only if the sum of these subspaces is equal to $V$, that is $\sum_{i=1}^{m} U_i = V$, and $\sum_{i=1}^{m} u_i = 0$ where $u_i \in U_i$ implies that $u_i = 0$ for every $i = 1, 2, ..., m$.

We will now look at some more important theorems regarding the direct sum of subspaces.

Theorem 1: If $U_1$ and $U_2$ are subspaces of the vector space $V$ over the field $\mathbb{F}$ with $U_1 + U_2 = V$, then $U_1 + U_2 = U_1 \oplus U_2 = V$ if and only if $U_1 \cap U_2 = \{ 0 \}$.

Intuitively this should make sense. The subspaces $U_1$ and $U_2$ can only form a direct sum of $V$, that is $V = U_1 \oplus U_2$, if these subspaces share no vectors in common apart from the zero vector. For example, if $U_1$ contains a nonzero vector from $U_2$, call it $x$, that is $x \in U_1 \cap U_2$, then: \begin{align} \quad x = x + 0 = 0 + x \end{align} Therefore not all sums of vectors $u_1 + u_2$ are uniquely determined, since $x$ can be derived in two different ways. We will now demonstrate the formal proof.

Proof: $\Rightarrow$ Suppose that $U_1 + U_2 = U_1 \oplus U_2 = V$. We first want to show that $U_1 \cap U_2 = \{ 0 \}$, so let $x \in U_1 \cap U_2$. We note that since $V = U_1 \oplus U_2$, $u_1 + u_2 = 0$ implies that $u_1 = 0$ and $u_2 = 0$. Now $0 = x + (-x)$, with $x \in U_1$ and $-x \in U_2$, and since $V = U_1 \oplus U_2$ this implies that $x = -x = 0$.
$\Leftarrow$ Suppose that $U_1 \cap U_2 = \{ 0 \}$. We want to show that $U_1 + U_2 = U_1 \oplus U_2 = V$. Suppose that for $u_1 \in U_1$ and $u_2 \in U_2$ we have $0 = u_1 + u_2$. Then $u_1 = -u_2$, so $u_1 = -u_2 \in U_1 \cap U_2 = \{ 0 \}$, and hence $u_1 = -u_2 = 0$, or rather $u_1 = u_2 = 0$. Thus by Lemma 1, since $u_1 + u_2 = 0$ implies $u_1 = u_2 = 0$, we have that $U_1 + U_2 = U_1 \oplus U_2 = V$. $\blacksquare$

Corollary 1: If $U_1, U_2, ..., U_m$ are subspaces of the vector space $V$ over the field $\mathbb{F}$ with $\sum_{i=1}^{m} U_i = V$, then $\sum_{i=1}^{m} U_i = \bigoplus_{i=1}^{m} U_i = V$ if and only if $U_i \cap \left ( \sum_{j \neq i} U_j \right ) = \{ 0 \}$ for $i = 1, 2, ..., m$.

Proof: $\Rightarrow$ Suppose that $\sum_{i=1}^{m} U_i = \bigoplus_{i=1}^{m} U_i = V$, that is, the sum is direct. We first want to show that $U_i \cap \left ( \sum_{j \neq i} U_j \right ) = \{ 0 \}$ for $i = 1, 2, ..., m$. Since $V = \bigoplus_{i=1}^{m} U_i$, $u_1 + u_2 + ... + u_m = 0$ implies $u_1 = u_2 = ... = u_m = 0$. Without loss of generality consider $U_1 \cap \left ( \sum_{j=2}^{m} U_j \right )$ and let $x$ be an element of this intersection. Then $x \in U_1$ and $-x \in \sum_{j=2}^{m} U_j$, so writing $-x = u_2 + u_3 + ... + u_m$ with $u_j \in U_j$ gives $0 = x + u_2 + ... + u_m$, and since the sum is direct this implies that $x = 0$.

$\Leftarrow$ Now suppose that $U_i \cap \left ( \sum_{j \neq i} U_j \right ) = \{ 0 \}$ for every $i$. We want to show that $\sum_{i=1}^{m} U_i = \bigoplus_{i=1}^{m} U_i = V$. Suppose $u_1 + u_2 + ... + u_m = 0$ with $u_i \in U_i$. For each $i$ we have $u_i = -\sum_{j \neq i} u_j \in U_i \cap \left ( \sum_{j \neq i} U_j \right ) = \{ 0 \}$, so $u_i = 0$ for every $i$, and by Lemma 1 we have that $\sum_{i=1}^{m} U_i = \bigoplus_{i=1}^{m} U_i = V$. $\blacksquare$
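Theorem 1 can be illustrated concretely in $\mathbb{R}^2$: two distinct lines through the origin intersect only in $\{0\}$, so every vector decomposes uniquely. A small sketch; the two spanning vectors are arbitrary choices:

```python
# U1 = span{(1, 0)} and U2 = span{(1, 1)} in R^2 satisfy U1 n U2 = {0},
# so R^2 = U1 (+) U2: every (x, y) is uniquely (x - y)*(1, 0) + y*(1, 1).

def decompose(x, y):
    """Return (c1, c2) with (x, y) = c1*(1, 0) + c2*(1, 1)."""
    c2 = y          # the second coordinate can only come from (1, 1)
    c1 = x - y      # the remainder lies along (1, 0)
    return c1, c2

c1, c2 = decompose(3.0, 5.0)
recombined = (c1 * 1 + c2 * 1, c1 * 0 + c2 * 1)

# Trivial intersection: a*(1, 0) = b*(1, 1) forces b = 0 (second coordinate)
# and then a = 0 (first coordinate), matching the theorem's hypothesis.
```

The uniqueness of $(c_1, c_2)$ is exactly the uniqueness of the decomposition demanded by the definition of a direct sum.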
Difference between revisions of "Main Page"

Revision as of 23:25, 13 February 2009

The Problem

Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].

[math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]

The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.

Useful background materials

Some background to the project can be found here. General discussion on massively collaborative "polymath" projects can be found here. A cheatsheet for editing the wiki may be found here.
Finally, here is the general Wiki user's guide.

Threads

* (1-199) A combinatorial approach to density Hales-Jewett (inactive)
* (200-299) Upper and lower bounds for the density Hales-Jewett problem (inactive)
* (300-399) The triangle-removal approach (inactive)
* (400-499) Quasirandomness and obstructions to uniformity (inactive)
* (500-599) Possible proof strategies (active)
* (600-699) A reading seminar on density Hales-Jewett (active)
* (700-799) Bounds for the first few density Hales-Jewett numbers, and related quantities (active)

There is also a chance that we will be able to improve the known bounds on Moser's cube problem. Here are some unsolved problems arising from the above threads. Here is a tidy problem page.

Proof strategies

It is natural to look for strategies based on one of the following:

* Szemerédi's original proof of Szemerédi's theorem.
* Szemerédi's combinatorial proof of Roth's theorem.
* Ajtai-Szemerédi's proof of the corners theorem.
* The density increment method.
* The triangle removal lemma.
* Ergodic-inspired methods.
* The Furstenberg-Katznelson argument.

Bibliography

Density Hales-Jewett

* H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem for k=3", Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241.
* H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem", J. Anal. Math. 57 (1991), 64–119.
* R. McCutcheon, "The conclusion of the proof of the density Hales-Jewett theorem for k=3", unpublished.

Behrend-type constructions

* M. Elkin, "An Improved Construction of Progression-Free Sets", preprint.
* B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint.
* K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint.

Triangles and corners

* M. Ajtai, E. Szemerédi, Sets of lattice points that form no squares, Stud. Sci. Math. Hungar. 9 (1974), 9--11 (1975). MR369299
* I. Ruzsa, E. Szemerédi, Triple systems with no six points carrying three triangles. Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), Vol. II, pp. 939--945, Colloq. Math. Soc. János Bolyai, 18, North-Holland, Amsterdam-New York, 1978. MR519318
* J. Solymosi, A note on a question of Erdős and Graham, Combin. Probab. Comput. 13 (2004), no. 2, 263--267. MR2047239
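The combinatorial-line construction defined in the problem statement above is easy to sketch in code. The helper below is hypothetical (not part of the wiki); it substitutes each of 1, 2, 3 for all wildcards simultaneously:

```python
def combinatorial_line(template):
    """Given a string over {1,2,3,x} with at least one wildcard 'x',
    return the combinatorial line obtained by replacing ALL wildcards
    simultaneously by 1, then 2, then 3."""
    assert 'x' in template
    return [template.replace('x', d) for d in '123']

# The example from the text (with the trailing "..." omitted):
print(combinatorial_line('112x1xx3'))
# → ['11211113', '11221223', '11231333']
```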
(Sorry, I was asleep at that time but forgot to log out, hence the apparent lack of response.)

Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE question for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
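As a sanity check of that conversion ($\Delta\varphi = k \,\Delta x$ with $k = 2\pi/\lambda$), here is a small sketch with illustrative values (the wavelength and path difference below are made up for the example):

```python
import math

def phase_difference(path_difference, wavelength):
    """Phase difference (radians) from a path difference:
    multiply the path difference by k = 2*pi / wavelength."""
    k = 2 * math.pi / wavelength
    return k * path_difference

# Half a wavelength of path difference gives a phase difference of pi:
print(phase_difference(250e-9, 500e-9))  # ≈ 3.14159 rad, i.e. pi
```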
Let $\phi:R \rightarrow S$ be a ring homomorphism and let $J$ be an ideal in $S$; then it is quite easy to prove that $\phi^{-1}(J)$ is an ideal in $R$. But can someone help me with proving the opposite? I.e., if $\phi^{-1}(J)$ is an ideal in $R$, is $J$ an ideal in $S$? (Of course, it could be that the statement I am making is false, in which case, can someone explain why that is so?)

The statement you want to prove is wrong; think for example of $\phi \colon \mathbf Z \to \mathbf Q$ the inclusion and $J = 2\mathbf Z$. Then $\phi^{-1}(J) = 2\mathbf Z \subseteq \mathbf Z$ is an ideal, but $J = 2\mathbf Z\subseteq \mathbf Q$ is not: $\mathbf Q$ is a field, so its only ideals are $0$ and $\mathbf Q$, and $2\mathbf Z$ is neither.

The converse is not true. Let $\phi:\mathbb{Z}\rightarrow \mathbb{Q}$ be the inclusion homomorphism. Then $\phi^{-1}(\mathbb{Z})= \mathbb{Z}$ is an ideal in $\mathbb{Z}$ but $\mathbb{Z}$ is not an ideal in $\mathbb{Q}$.

Ideals are kernels of ring homomorphisms. Viewing them like this might help explain why this only works one way: the pre-image along a map $g$ of the kernel of a map $f$ is the kernel of the composition $fg$, but knowing that the pre-image of a subset $X$ is the kernel of some homomorphism $g$ does not help to get a homomorphism $f$ that has $X$ as kernel.
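The counterexample can be checked mechanically. The sketch below (an illustration using Python's `fractions`, not a formal proof) verifies that $2\mathbf Z$ is closed under multiplication by integers, but fails the ideal absorption property inside $\mathbf Q$:

```python
from fractions import Fraction

# 2Z viewed inside Z: closed under multiplication by any integer.
assert all((n * 2) % 2 == 0 for n in range(-10, 11))

# 2Z viewed inside Q: an ideal J of Q would need q*j in J for EVERY
# rational q, but e.g. (1/3) * 2 = 2/3 is not an even integer.
q, j = Fraction(1, 3), Fraction(2)
product = q * j
is_even_integer = product.denominator == 1 and product.numerator % 2 == 0
print(product, is_even_integer)  # → 2/3 False
```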
Suppose we have a positive point charge, and we put a positive test charge in its electric field. How can we keep the test charge static, and how can external work keep it static? I think it will just continue to be repelled.

There's a mathematical result called Earnshaw's Theorem that proves that keeping a point charge stably confined in an electrostatic field alone is impossible (specifically, it proves that there are no local maxima/minima in the electric potential in free space, only saddle points). If you use both an electrostatic and a magnetic field, however, confinement to a particular region is possible. In fact, this is the basis of the Penning trap, which is used in many laboratory situations to trap ions (see the diagram at https://en.wikipedia.org/wiki/Penning_trap#/media/File:Penning_Trap.jpg). Since the uniform magnetic field makes charges travel in a helix, it confines the point charge trajectories to a thin cylinder, and the electric fields deflect the charges away from the ends of that cylinder.

In this context, I take "static" to mean zero velocity and acceleration. Zero velocity is a question of choosing the right reference frame. Zero acceleration means zero net force, so you will have to apply a force to the test charge that is equal and opposite to the force applied by the electric field. If the only (non-negligible) force acting on the test charge is the one applied by the electric field, then the charge will accelerate and move. In the situation you describe, it will be repelled by the positive point charge.
We usually neglect gravity in these scenarios, for a variety of reasons. But we can easily imagine a situation in which the attractive gravitational force on the test charge (due to the other particle) exactly balances the repulsive force applied by the electric field, yielding a zero net force. You mentioned "external work." The work done by the net force acting on an object will equal that object's change in kinetic energy. If the object is static, the work done by the net force must be zero.

You can't have a stationary test charge if it only experiences the field of one other charge. It is impossible. As others have pointed out, if you include gravity or some other force, then it is possible. But using only a single electric field, you cannot keep a charge in this field stationary. This can be seen using Newton's second law. The magnitude of the net force experienced by the test charge is $$\sum F=F_1=q_{test}\cdot E=m_{test}\cdot a\neq0$$ Therefore, the test charge will accelerate. If you want $0$ acceleration, then you need some other force equal and opposite to the force produced by the field of the positive point charge $$\sum F=F_1+F_2=0$$ so that $$F_2=-q_{test}\cdot E$$ This force can be anything: another electric field, gravity. As long as the forces balance out you can get equilibrium, but you cannot do it with just a single force.
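To make the balance condition $F_2 = -q_{test}\cdot E$ concrete, here is a small sketch computing the magnitude of the external force needed to hold the test charge static; the charges and separation are illustrative values, not from the question:

```python
# Magnitude of the external force needed to hold a test charge static in the
# field of a point charge (SI units; illustrative example values below).
K_E = 8.9875517923e9   # Coulomb constant, N*m^2/C^2

def holding_force(Q, q_test, r):
    """F_ext must cancel F = q_test * E = K_E * Q * q_test / r**2
    (equal in magnitude, opposite in direction)."""
    E = K_E * Q / r**2          # field of the source charge at distance r
    return q_test * E

# Two 1-microcoulomb charges held 1 m apart:
F = holding_force(1e-6, 1e-6, 1.0)
print(F)  # ≈ 8.99e-3 N, i.e. about 9 mN
```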
1. Combination of inclusive and differential $ \mathrm{t}\overline{\mathrm{t}} $ charge asymmetry measurements using ATLAS and CMS data at $ \sqrt{s}=7 $ and 8 TeV
Journal of High Energy Physics (Online), ISSN 1029-8479, 04/2018, Volume 2018, Issue 4
Journal Article

2. Search for heavy particles decaying into top-quark pairs using lepton-plus-jets events in proton–proton collisions at $\sqrt{s} = 13$ $\text {TeV}$ with the ATLAS detector
European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 07/2018, Volume 78, Issue 7
Here, a search for new heavy particles that decay into top-quark pairs is performed using data collected from proton–proton collisions at a centre-of-mass...
PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article

3. Combinations of single-top-quark production cross-section measurements and $|f_{\rm LV}V_{tb}|$ determinations at $\sqrt{s}=7$ and 8 TeV with the ATLAS and CMS experiments
02/2019, JHEP 05 (2019) 088
This paper presents the combinations of single-top-quark production cross-section measurements by the ATLAS and CMS Collaborations, using...
Physics - High Energy Physics - Experiment
Journal Article

4. Combined Measurement of the Higgs Boson Mass in pp Collisions at $\sqrt{s}=7$ and 8 TeV with the ATLAS and CMS Experiments
Physical Review Letters, ISSN 0031-9007, 05/2015, Volume 114, Issue 19, p. 191803
Journal Article

5. Combination of inclusive and differential $t\bar{t}$ charge asymmetry measurements using ATLAS and CMS data at $\sqrt{s}=7$ and 8 TeV
ISSN 1029-8479, 2018
mass spectrum: (2top) | experimental results | signature | statistical analysis: Bayesian | mass dependence | CMS | CERN LHC Coll | ATLAS | top: pair production | data analysis method | 7000 GeV-cms, 8000 GeV-cms | p p: colliding beams | p p: scattering | charge: asymmetry: measured | final state: ((n)jet lepton)
Journal Article

6. Observation of Higgs boson production in association with a top quark pair at the LHC with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 09/2018, Volume 784, Issue C, pp. 173 - 191
The observation of Higgs boson production in association with a top quark pair, based on the analysis of proton–proton collision data at a centre-of-mass...
PHYSICS, NUCLEAR | SEARCH | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences
Journal Article

7. Measurements of the Higgs boson production and decay rates and constraints on its couplings from a combined ATLAS and CMS analysis of the LHC $pp$ collision data at $\sqrt{s}=$ 7 and 8 TeV
Journal of High Energy Physics, ISSN 1029-8479, 06/2016, Volume 8, Issue 8, p. 045, JHEP08(2016)045
Combined ATLAS and CMS measurements of the Higgs boson production and decay rates, as well as constraints on its couplings to vector bosons and...
Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | Hadron-Hadron scattering (experiments) | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Higgs physics
Journal Article

8. The European Physical Journal C: Particles and Fields, ISSN 1434-6052, 2017, Volume 77, Issue 5, pp. 1 - 53
During 2015 the ATLAS experiment recorded $3.8\,{\mathrm{fb}}^{-1}$ of proton–proton collision data at a centre-of-mass energy of...
PHYSICS, PARTICLES & FIELDS | Collisions (Nuclear physics) | Protons | Data acquisition systems | Physics - High Energy Physics - Experiment | High Energy Physics - Phenomenology | Physics | PARTICLE ACCELERATORS | Regular - Experimental Physics | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article

9.
Measurements of the Higgs boson production and decay rates and constraints on its couplings from a combined ATLAS and CMS analysis of the LHC pp collision data at $ \sqrt{s}=7 $ and 8 TeV
ISSN 1029-8479, 2016
decay rate: measured | experimental results | Higgs particle: mass | CMS | Higgs particle: coupling | Higgs particle: decay modes | top: associated production | CERN LHC Coll | vector boson: associated production | CERN Lab | ATLAS | channel cross section: measured | 7000 GeV-cms, 8000 GeV-cms | gluon: fusion | Higgs particle: branching ratio | Higgs particle: hadroproduction | p p: colliding beams | p p: scattering | Higgs particle: decay rate | vector boson: fusion
Journal Article

10. European Physical Journal C, ISSN 1434-6044, 2017, Volume 77, Issue 7, pp. 1 - 73
The reconstruction of the signal from hadrons and jets emerging from the proton–proton collisions at the Large Hadron Collider (LHC) and entering the ATLAS...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | HADRONIC CALIBRATION | NUCLEUS | DETECTOR | PHYSICS, PARTICLES & FIELDS | Particle accelerators | Electromagnetism | Collisions (Nuclear physics) | Hadrons | Large Hadron Collider | Particle collisions | Accident reconstruction | Transverse momentum | Clusters | Clustering | Physics - High Energy Physics - Experiment | High Energy Physics - Experiment | PARTICLE ACCELERATORS | Regular - Experimental Physics | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article

11. Search for Higgs boson decays to beyond-the-Standard-Model light bosons in four-lepton events with the ATLAS detector at $\sqrt{s}=13$ TeV
Journal of High Energy Physics (Online), ISSN 1029-8479, 02/2018, Volume 2018, Issue 6
Journal Article

12. 02/2018, Phys. Rev. Lett. 120 (2018) 202007
A search for the narrow structure, $X(5568)$, reported by the D0 Collaboration in the decay sequence $X \to B^0_s \pi^\pm$,...
Physics - High Energy Physics - Experiment
Journal Article

13. Measurement of the cross section for isolated-photon plus jet production in $pp$ collisions at $\sqrt s=13$ TeV using the ATLAS detector
12/2017, Phys. Lett. B 780 (2018) 578
The dynamics of isolated-photon production in association with a jet in proton-proton collisions at a centre-of-mass energy of 13...
Physics - High Energy Physics - Experiment
Journal Article

14. Search for new phenomena in high-mass final states with a photon and a jet from $pp$ collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector
European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 09/2017, Volume 78, Issue 2, Eur. Phys. J. C (2018) 102
A search is performed for new phenomena in events having a photon with high transverse momentum and a jet collected in 36.7...
Physics - High Energy Physics - Experiment | Engineering (miscellaneous); Physics and Astronomy (miscellaneous) | Subatomic Physics | Astrophysics | Settore ING-INF/07 - Misure Elettriche e Elettroniche | dk/atira/pure/researchoutput/pubmedpublicationtype/D016428 | channel cross section: upper limit | final state: (jet photon) | Journal Article | Settore FIS/04 - Fisica Nucleare e Subnucleare | Science & Technology | Engineering (miscellaneous) | Subatomär fysik | Ciências Físicas [Ciências Naturais] | p p: colliding beams | Randall-Sundrum model | 13000 GeV-cms | Physics and Astronomy (miscellaneous) | experimental results | photon: transverse momentum | Settore FIS/01 - Fisica Sperimentale | Nuclear and particle physics. Atomic energy. Radioactivity | CERN LHC Coll | quark: mass: lower limit | space-time: higher-dimensional | Parton distributions; hierarchy problem; LHC | new particle: production | transverse momentum: high | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | new physics: search for | mass spectrum | resonance: production | quark: excited state | p p: scattering | black hole: quantum | black hole: model | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article

15. Search for dark matter produced in association with bottom or top quarks in $\sqrt{s}$ = 13 TeV pp collisions with the ATLAS detector
European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 10/2017, Volume 78, Issue 1
Journal Article

16. Searches for heavy $ZZ$ and $ZW$ resonances in the $\ell\ell qq$ and $\nu\nu qq$ final states in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector
08/2017, JHEP 03 (2018) 009
This paper reports searches for heavy resonances decaying into $ZZ$ or $ZW$ using data from proton--proton collisions at a centre-of-mass...
Physics - High Energy Physics - Experiment
Journal Article

17. Measurement of the photon identification efficiencies with the ATLAS detector using LHC Run-1 data
The European Physical Journal C: Particles and Fields, ISSN 1434-6052, 2016, Volume 76, Issue 12, pp. 1 - 42
The algorithms used by the ATLAS Collaboration to reconstruct and identify prompt photons are described. Measurements of the photon identification efficiencies...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PHYSICS, PARTICLES & FIELDS | Photons | Simulation | Large Hadron Collider | Computer simulation | Collision dynamics | Transverse momentum | Physics - High Energy Physics - Experiment | High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Regular - Experimental Physics | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
K-function

Estimates Ripley's reduced second moment function \(K(r)\) from a point pattern in a window of arbitrary shape.

Usage

Kest(X, …, r=NULL, rmax=NULL, breaks=NULL, correction=c("border", "isotropic", "Ripley", "translate"), nlarge=3000, domain=NULL, var.approx=FALSE, ratio=FALSE)

Arguments

X: The observed point pattern, from which an estimate of \(K(r)\) will be computed. An object of class "ppp", or data in any format acceptable to as.ppp().

…: Ignored.

r: Optional. Vector of values for the argument \(r\) at which \(K(r)\) should be evaluated. Users are advised not to specify this argument; there is a sensible default. If necessary, specify rmax.

rmax: Optional. Maximum desired value of the argument \(r\).

breaks: This argument is for internal use only.

correction: Optional. A character vector containing any selection of the options "none", "border", "bord.modif", "isotropic", "Ripley", "translate", "translation", "rigid", "good" or "best". It specifies the edge correction(s) to be applied. Alternatively correction="all" selects all options.

nlarge: Optional. Efficiency threshold. If the number of points exceeds nlarge, then only the border correction will be computed (by default), using a fast algorithm.

domain: Optional. Calculations will be restricted to this subset of the window. See Details.

var.approx: Logical. If TRUE, the approximate variance of \(\hat K(r)\) under CSR will also be computed.

ratio: Logical. If TRUE, the numerator and denominator of each edge-corrected estimate will also be saved, for use in analysing replicated point patterns.

Details

The \(K\) function (variously called "Ripley's K-function" and the "reduced second moment function") of a stationary point process \(X\) is defined so that \(\lambda K(r)\) equals the expected number of additional random points within a distance \(r\) of a typical random point of \(X\). Here \(\lambda\) is the intensity of the process, i.e. the expected number of points of \(X\) per unit area.
The \(K\) function is determined by the second order moment properties of \(X\). An estimate of \(K\) derived from a spatial point pattern dataset can be used in exploratory data analysis and formal inference about the pattern (Cressie, 1991; Diggle, 1983; Ripley, 1977, 1988). In exploratory analyses, the estimate of \(K\) is a useful statistic summarising aspects of inter-point "dependence" and "clustering". For inferential purposes, the estimate of \(K\) is usually compared to the true value of \(K\) for a completely random (Poisson) point process, which is \(K(r) = \pi r^2\). Deviations between the empirical and theoretical \(K\) curves may suggest spatial clustering or spatial regularity.

This routine Kest estimates the \(K\) function of a stationary point process, given observation of the process inside a known, bounded window. The argument X is interpreted as a point pattern object (of class "ppp", see ppp.object) and can be supplied in any of the formats recognised by as.ppp().

The estimation of \(K\) is hampered by edge effects arising from the unobservability of points of the random pattern outside the window. An edge correction is needed to reduce bias (Baddeley, 1998; Ripley, 1988). The corrections implemented here are:

border: the border method or "reduced sample" estimator (see Ripley, 1988). This is the least efficient (statistically) and the fastest to compute. It can be computed for a window of arbitrary shape.

isotropic/Ripley: Ripley's isotropic correction (see Ripley, 1988; Ohser, 1983). This is implemented for rectangular and polygonal windows (not for binary masks).

translate/translation: Translation correction (Ohser, 1983). Implemented for all window geometries, but slow for complex windows.

rigid: Rigid motion correction (Ohser and Stoyan, 1981). Implemented for all window geometries, but slow for complex windows.

none: Uncorrected estimate. An estimate of the K function without edge correction (i.e.
setting \(e_{ij} = 1\) in the equation below). This estimate is biased and should not be used for data analysis, unless you have an extremely large point pattern (more than 100,000 points).

best: Selects the best edge correction that is available for the geometry of the window. Currently this is Ripley's isotropic correction for a rectangular or polygonal window, and the translation correction for masks.

good: Selects the best edge correction that can be computed in a reasonable time. This is the same as "best" for datasets with fewer than 3000 points; otherwise the selected edge correction is "border", unless there are more than 100,000 points, when it is "none".

The estimates of \(K(r)\) are of the form $$ \hat K(r) = \frac a {n (n-1) } \sum_i \sum_j I(d_{ij}\le r) e_{ij} $$ where \(a\) is the area of the window, \(n\) is the number of data points, and the sum is taken over all ordered pairs of points \(i\) and \(j\) in X. Here \(d_{ij}\) is the distance between the two points, and \(I(d_{ij} \le r)\) is the indicator that equals 1 if the distance is less than or equal to \(r\). The term \(e_{ij}\) is the edge correction weight (which depends on the choice of edge correction listed above).

Note that this estimator assumes the process is stationary (spatially homogeneous). For inhomogeneous point patterns, see Kinhom.

If the point pattern X contains more than about 3000 points, the isotropic and translation edge corrections can be computationally prohibitive. The computations for the border method are much faster, and are statistically efficient when there are large numbers of points. Accordingly, if the number of points in X exceeds the threshold nlarge, then only the border correction will be computed. Setting nlarge=Inf or correction="best" will prevent this from happening. Setting nlarge=0 is equivalent to selecting only the border correction with correction="border". If X contains more than about 100,000 points, even the border correction is time-consuming.
You may want to consider setting correction="none" in this case. There is an even faster algorithm for the uncorrected estimate.

Approximations to the variance of \(\hat K(r)\) are available, for the case of the isotropic edge correction estimator, assuming complete spatial randomness (Ripley, 1988; Lotwick and Silverman, 1982; Diggle, 2003, pp 51-53). If var.approx=TRUE, then the result of Kest also has a column named rip giving values of Ripley's (1988) approximation to \(\mbox{var}(\hat K(r))\), and (if the window is a rectangle) a column named ls giving values of Lotwick and Silverman's (1982) approximation.

If the argument domain is given, the calculations will be restricted to a subset of the data. In the formula for \(K(r)\) above, the first point \(i\) will be restricted to lie inside domain. The result is an approximately unbiased estimate of \(K(r)\) based on pairs of points in which the first point lies inside domain and the second point is unrestricted. This is useful in bootstrap techniques. The argument domain should be a window (object of class "owin") or something acceptable to as.owin. It must be a subset of the window of the point pattern X.

Some writers, particularly Stoyan (1994, 1995), advocate the use of the "pair correlation function" $$ g(r) = \frac{K'(r)}{2\pi r} $$ where \(K'(r)\) is the derivative of \(K(r)\). See pcf on how to estimate this function.

Value

Essentially a data frame containing columns:

r: the vector of values of the argument \(r\) at which the function \(K\) has been estimated

theo: the theoretical value \(K(r) = \pi r^2\) for a stationary Poisson process

If var.approx=TRUE then the return value also has columns rip and ls containing approximations to the variance of \(\hat K(r)\) under CSR. If ratio=TRUE then the return value also has two attributes called "numerator" and "denominator" which are "fv" objects containing the numerators and denominators of each estimate of K(r).
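As a quick numerical check of the pair correlation formula above, differentiating the Poisson value \(K(r) = \pi r^2\) should give \(g(r) \equiv 1\). The sketch below is an illustration of the formula only (for proper estimation from data, use pcf in spatstat); the helper name and step size are made up:

```python
import math

def pair_correlation(K, r, h=1e-6):
    """g(r) = K'(r) / (2*pi*r), with K'(r) approximated by a
    central finite difference with step h."""
    dK = (K(r + h) - K(r - h)) / (2 * h)
    return dK / (2 * math.pi * r)

# For the Poisson K function K(r) = pi r^2, g(r) is identically 1:
K_pois = lambda r: math.pi * r**2
print(round(pair_correlation(K_pois, 0.7), 6))  # → 1.0
```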
Envelopes, significance bands and confidence intervals

To compute simulation envelopes for the \(K\)-function under CSR, use envelope.

Warnings

The estimator of \(K(r)\) is approximately unbiased for each fixed \(r\). Bias increases with \(r\) and depends on the window geometry. For a rectangular window it is prudent to restrict the \(r\) values to a maximum of \(1/4\) of the smaller side length of the rectangle. Bias may become appreciable for point patterns consisting of fewer than 15 points. While \(K(r)\) is always a non-decreasing function, the estimator of \(K\) is not guaranteed to be non-decreasing. This is rarely a problem in practice.

References

Baddeley, A.J. Spatial sampling and censoring. In O.E. Barndorff-Nielsen, W.S. Kendall and M.N.M. van Lieshout (eds) Stochastic Geometry: Likelihood and Computation. Chapman and Hall, 1998. Chapter 2, pages 37--78.

Cressie, N.A.C. Statistics for spatial data. John Wiley and Sons, 1991.

Diggle, P.J. Statistical analysis of spatial point patterns. Academic Press, 1983.

Ohser, J. (1983) On estimators for the reduced second moment measure of point processes. Mathematische Operationsforschung und Statistik, series Statistics, 14, 63--71.

Ohser, J. and Stoyan, D. (1981) On the second-order and orientation analysis of planar stationary point processes. Biometrical Journal 23, 523--533.

Ripley, B.D. (1977) Modelling spatial patterns (with discussion). Journal of the Royal Statistical Society, Series B, 39, 172--212.

Ripley, B.D. Statistical inference for spatial processes. Cambridge University Press, 1988.

Stoyan, D., Kendall, W.S. and Mecke, J. (1995) Stochastic geometry and its applications. 2nd edition. Springer Verlag.

Stoyan, D. and Stoyan, H. (1994) Fractals, random shapes and point fields: methods of geometrical statistics. John Wiley and Sons.

See Also

localK to extract individual summands in the \(K\) function. pcf for the pair correlation. reduced.sample for the calculation of reduced sample estimators.
Aliases Kest Examples X <- runifpoint(50) K <- Kest(X) K <- Kest(cells, correction="isotropic") plot(K) plot(K, main="K function for cells") # plot the L function plot(K, sqrt(iso/pi) ~ r) plot(K, sqrt(./pi) ~ r, ylab="L(r)", main="L function for cells") Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
Tagged: module theory Problem 434 Let $R$ be a ring with $1$. A nonzero $R$-module $M$ is called irreducible if $0$ and $M$ are the only submodules of $M$. (It is also called a simple module.) (a) Prove that a nonzero $R$-module $M$ is irreducible if and only if $M$ is a cyclic module with any nonzero element as its generator. (b) Determine all the irreducible $\Z$-modules. Problem 432 (a) Let $R$ be an integral domain and let $M$ be a finitely generated torsion $R$-module. Prove that the module $M$ has a nonzero annihilator. In other words, show that there is a nonzero element $r\in R$ such that $rm=0$ for all $m\in M$. Here $r$ does not depend on $m$. (b) Find an example of an integral domain $R$ and a torsion $R$-module $M$ whose annihilator is the zero ideal. Problem 431 Let $R$ be a commutative ring and let $I$ be a nilpotent ideal of $R$. Let $M$ and $N$ be $R$-modules and let $\phi:M\to N$ be an $R$-module homomorphism. Prove that if the induced homomorphism $\bar{\phi}: M/IM \to N/IN$ is surjective, then $\phi$ is surjective. Problem 417 Let $R$ be a ring with $1$ and let $M$ be an $R$-module. Let $I$ be an ideal of $R$. Let $M'$ be the subset of elements $a$ of $M$ that are annihilated by some power $I^k$ of the ideal $I$, where the power $k$ may depend on $a$. Prove that $M'$ is a submodule of $M$. Problem 415 (a) Let $R$ be a commutative ring. If we regard $R$ as a left $R$-module, then prove that any two distinct elements of the module $R$ are linearly dependent. (b) Let $f: M\to M'$ be a left $R$-module homomorphism. Let $\{x_1, \dots, x_n\}$ be a subset in $M$. Prove that if the set $\{f(x_1), \dots, f(x_n)\}$ is linearly independent, then the set $\{x_1, \dots, x_n\}$ is also linearly independent. Problem 410 Let $R$ be a ring with $1$ and let $M$ be a left $R$-module. Let $S$ be a subset of $M$.
The annihilator of $S$ in $R$ is the subset of the ring $R$ defined to be \[\Ann_R(S)=\{ r\in R\mid rx=0 \text{ for all } x\in S\}.\] (If $rx=0, r\in R, x\in S$, then we say $r$ annihilates $x$.) Suppose that $N$ is a submodule of $M$. Then prove that the annihilator \[\Ann_R(N)=\{ r\in R\mid rn=0 \text{ for all } n\in N\}\] of $N$ in $R$ is a $2$-sided ideal of $R$. Problem 409 Let $R$ be a ring with $1$. An element of the $R$-module $M$ is called a torsion element if $rm=0$ for some nonzero element $r\in R$. The set of torsion elements is denoted \[\Tor(M)=\{m \in M \mid rm=0 \text{ for some nonzero } r\in R\}.\] (a) Prove that if $R$ is an integral domain, then $\Tor(M)$ is a submodule of $M$. (Remark: an integral domain is a commutative ring by definition.) In this case the submodule $\Tor(M)$ is called the torsion submodule of $M$. (b) Find an example of a ring $R$ and an $R$-module $M$ such that $\Tor(M)$ is not a submodule. (c) If $R$ has nonzero zero divisors, then show that every nonzero $R$-module has a nonzero torsion element. Problem 408 Let $R$ be a ring with $1$ and $M$ be a left $R$-module. (a) Prove that $0_Rm=0_M$ for all $m \in M$. Here $0_R$ is the zero element in the ring $R$ and $0_M$ is the zero element in the module $M$, that is, the identity element of the additive group $M$. To simplify the notation, we ignore the subscripts and simply write \[0m=0.\] You should be able to judge from the context which zero element is meant. (b) Prove that $r0=0$ for all $r\in R$. Here both zeros are $0_M$. (c) Prove that $(-1)m=-m$ for all $m \in M$. (d) Assume that $rm=0$ for some $r\in R$ and some nonzero element $m\in M$. Prove that $r$ does not have a left inverse.
I can give the expectation, but I am not sure about the monotonicity. Let $C \sim \chi^2(d,\theta)$ (where $d$ is the number of degrees of freedom and $\theta$ the non-centrality parameter). If $d > 2$, then $$\mathbb{E}(1/C) = \frac{1}{2}\exp(-\theta/2)\frac{\Gamma(d/2-1)}{\Gamma(d/2)}{}_1\!F_1(d/2-1,d/2,\theta/2).$$ (I don't know about $d \leq 2$; perhaps the expectation is infinite in this case.) Check with R: > df <- 10 > ncp <- 2 > x <- rchisq(1e7, df, ncp) > mean(1/x) [1] 0.1036415 > > 1/2 * gamma(df/2-1)/gamma(df/2) * exp(-ncp/2) * gsl::hyperg_1F1(df/2-1, df/2, ncp/2) [1] 0.1036383 I got this formula from Gupta & Nagar's book Matrix variate distributions. This book provides a formula for the expectation of $\det(W)^h$ where $W$ follows a noncentral Wishart distribution, and the formula for the expectation of $1/C$ is a particular case. Note that ${}_1\!F_1(d/2-1,d/2,\theta/2)$ has the form ${}_1\!F_1(a,a+1,z)$. Wolfram provides some simplification formulas for ${}_1\!F_1(a,a+1,z)$, but they make sense for $z<0$ only, as far as I can see. EDIT Wait... The family of the noncentral $\chi^2$ distributions is stochastically increasing in the noncentrality parameter. Therefore $X'X$ is stochastically decreasing in $\sigma$, and $1/X'X$ is stochastically increasing in $\sigma$. Consequently, $\mathbb{E}(1/X^{'}X)$ is an increasing function of $\sigma$, as is $\mathbb{E}(\sigma^2/X^{'}X)$. EDIT 2 Thanks to a relation given in Wikipedia, the formula for the expectation can be simplified to: $$\mathbb{E}(1/C) = \frac{1}{2}\frac{\Gamma(d/2-1)}{\Gamma(d/2)}{}_1\!F_1(1,d/2,-\theta/2).$$ Now, in view of the integral representation of ${}_1\!F_1$, it is clear that ${}_1\!F_1(a,b,z)$ is increasing in $z$.
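For readers without R at hand, the simplified formula from EDIT 2 is easy to check numerically. The following pure-Python sketch (function names are my own) evaluates ${}_1\!F_1(1,b,z)$ by its series, using the fact that $(1)_k = k!$ cancels the $k!$ in the denominator:

```python
import math

def hyp1f1_1(b, z, terms=80):
    # 1F1(1, b, z) = sum_{k>=0} z^k / (b)_k, where (b)_k is the Pochhammer symbol
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= z / (b + k)   # (b)_{k+1} = (b)_k * (b + k)
    return total

def mean_inv_noncentral_chi2(d, theta):
    # E[1/C] for C ~ noncentral chi^2(d, theta); requires d > 2
    return 0.5 * math.gamma(d/2 - 1) / math.gamma(d/2) * hyp1f1_1(d/2, -theta/2)

val = mean_inv_noncentral_chi2(10, 2)   # compare with the R results above
```

For df = 10 and ncp = 2 this reproduces the value 0.1036383 obtained in R, and evaluating at larger $\theta$ confirms that $\mathbb{E}(1/C)$ decreases in the noncentrality parameter, consistent with the stochastic-ordering argument.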
Linear Maps Review We will now review some of the recent content regarding linear maps. Recall from the Linear Maps page that a Linear Map or Linear Transformation from a vector space $V$ to another vector space $W$ is a function $T : V \to W$ with the property that $T(au + bv) = aT(u) + bT(v)$ for all vectors $u, v \in V$ and for all scalars $a, b \in \mathbb{F}$. The simplest example of a linear map is the zero map, which takes every vector $v \in V$ and maps it to the zero vector in $W$, that is, $0(v) = 0$ for all $v \in V$. Another example of a linear map is the identity map $I : V \to V$ which takes every vector $v \in V$ and maps it back to itself, that is, $I(v) = v$ for all $v \in V$. A more complex example of a linear map is the differentiation of polynomials map $T : \wp (\mathbb{R}) \to \wp (\mathbb{R})$ which takes every polynomial and maps it to its derivative, that is, $T(p(x)) = p'(x)$ for every $p(x) \in \wp (\mathbb{R})$. The set of all linear maps from a vector space $V$ to another vector space $W$ is denoted $\mathcal L (V, W)$. On the Addition, Multiples, and Compositions of Linear Maps page we saw that if $S$ and $T$ are both linear maps from $V$ to $W$ then their sum $S + T : V \to W$ defined by $(S + T)(v) = S(v) + T(v)$ is also a linear map from $V$ to $W$. Furthermore, we saw that if $T$ is a linear map from $V$ to $W$ and if $a \in \mathbb{F}$ then the scalar product $aT : V \to W$ defined by $(aT)(v) = aT(v)$ is also a linear map from $V$ to $W$. We also looked at compositions of linear maps. If $T \in \mathcal L (U, V)$ and $S \in \mathcal L (V, W)$ then the composition $S \circ T : U \to W$ defined by $(S \circ T)(v) = S(T(v))$ is also a linear map, but from $U$ to $W$.
On the Null Space of a Linear Map page we defined the Null Space of a linear map as the set of vectors in $V$ which are mapped to $0$ in $W$, that is: \begin{align} \quad \mathrm{null} (T) = \{ v \in V : T(v) = 0 \} \end{align} For example, the null space of the zero map is $V$ since every vector $v \in V$ is mapped to the zero vector. As another example, the null space of the identity map is $\{ 0 \}$ since $I(v) = v$ implies that only the zero vector is mapped to $0$. We also saw that $\mathrm{null} (T)$ is always a subspace of the domain vector space $V$. On the Range of a Linear Map page we defined the Range of a linear map as the set of vectors in $W$ which are mapped to from $V$, that is: \begin{align} \quad \mathrm{range} (T) = \{ T(v) : v \in V \} \end{align} For example, the range of the zero map is $\{ 0 \}$ since every vector $v \in V$ is mapped to $0 \in W$. The range of the identity map is $V$ since $I(v) = v$ for every vector $v \in V$. We also saw that $\mathrm{range} (T)$ is always a subspace of the codomain vector space $W$. On the Injective and Surjective Linear Maps page we defined a linear map $T \in \mathcal L (V, W)$ to be Injective or One-to-one if whenever $T(u) = T(v)$ we have that $u = v$. One example of an injective linear map is the identity map. We saw that a linear map $T$ is injective if and only if $\mathrm{null} (T) = \{ 0 \}$. We also defined a linear map $T \in \mathcal L (V, W)$ to be Surjective or Onto if $\mathrm{range} (T) = W$, that is, for every vector $w \in W$ there exists a vector $v \in V$ such that $T(v) = w$. One example of a surjective linear map is the differentiation of polynomials map.
On The Dimension of The Null Space and Range page we noted that if $T \in \mathcal L (V, W)$ and $V$ is a finite-dimensional vector space then the following formula for the dimension of $V$ holds: \begin{align} \quad \mathrm{dim} (V) = \mathrm{dim} (\mathrm{null} (T)) + \mathrm{dim} ( \mathrm{range} (T)) \end{align} As one important corollary to this formula, we noted that if $\mathrm{dim} V > \mathrm{dim} W$ then $T$ cannot be injective. This is because if $\mathrm{dim} V > \mathrm{dim} W$ then: \begin{align} \quad \mathrm{dim} (\mathrm{null} (T)) = \mathrm{dim} (V) - \mathrm{dim} (\mathrm{range} (T)) \geq \mathrm{dim} (V) - \mathrm{dim} (W) > 0 \quad \Leftrightarrow \quad \mathrm{null} (T) \neq \{ 0 \} \end{align} Similarly, if $\mathrm{dim} V < \mathrm{dim} W$ then $T$ cannot be surjective. This is because if $\mathrm{dim} V < \mathrm{dim} W$ then: \begin{align} \quad \mathrm{dim} (\mathrm{range} (T)) = \mathrm{dim} (V) - \mathrm{dim} (\mathrm{null} (T)) \leq \mathrm{dim} (V) < \mathrm{dim} (W) \quad \Leftrightarrow \quad \mathrm{range} (T) \neq W \end{align}
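The rank–nullity formula above is easy to verify computationally for a concrete map. The sketch below (pure Python; the matrix and function names are illustrative choices of my own) computes the rank of a matrix representing $T : \mathbb{R}^3 \to \mathbb{R}^2$ by Gaussian elimination:

```python
def rank(mat):
    # Rank via Gaussian elimination over floats; mat is a list of row lists.
    m = [row[:] for row in mat]
    r = 0
    for c in range(len(m[0])):
        # find a pivot row for column c among the unprocessed rows
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-12), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > 1e-12:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# T : R^3 -> R^2 given by this matrix, so dim V = 3 and dim W = 2
A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0]]
rk = rank(A)        # dim range(T); here the rows are proportional, so rk = 1
nullity = 3 - rk    # dim null(T), by the rank-nullity formula
```

Here $\mathrm{dim}(V) = 3 > 2 = \mathrm{dim}(W)$, and indeed the null space is nontrivial (nullity 2), so $T$ is not injective, exactly as the corollary predicts.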
4:28 AM @MartinSleziak Here I am! Thank you for opening this chat room and all your comments on my post, Martin. They are really good feedback to this project. @MartinSleziak Yeah, using a chat room to exchange ideas and feedback makes a lot of sense compared to leaving comments in my post. BTW. Anyone finds a \oint\frac{1}{1-z^2}dz expression in old posts? Send to me and I will investigate why this issue occurs. @MartinSleziak It is OK, don't feel anything bad. As long as there is a place that comes to people's mind if they want to report some issue on Approach0, I am willing to come to that place and discuss. I am really interested in pushing Approach0 forward. 4:57 AM Hi @WeiZhong thanks for joining the room. I will write a bit more here when I have more time. For now two minor things. I just want to make sure that you know that the answer on meta is community wiki. Which means that various users are invited to edit it, you can see from revision history who added what to the question. You can see in revision history that this bullet point was added by Workaholic: "I searched for \oint $\oint$, but I only got results related to \int $\int$. I tried for \oint \frac{dz}{1-z^2} $\oint \frac{dz}{1-z^2}$ which is an integral that appears quite often but it did not yield any correct results." So if you want to make sure that this user is notified about your comments, you can simply add @Workaholic. Any of the editors can be pinged. And I noticed also this about one of the quizzes (I did not check whether some of the other quizzes have similar problem.) I suppose that the quizzes are supposed to be chosen in such way that Approach0 indeed helps to find the question. I.e., each quiz was created with some specific question in mind, which should be among the search results. Is that correct? I guess the quiz saying "Please list all positive integers $i,j,k$ such that $i^5 + j^6 = k^7$." 
was made with this question in mind: Find all positive integers satisfying: $x^5+y^6=z^7$. However when I try the query from this quiz, I get completely different results. I vaguely recall that I tried some quizzes, including this one, and they worked. (By which I mean that the answer to the question from the quiz could be found among the search results.) So is this perhaps due to some changes that were made since then? Or is that simply because when I tried the quiz last time, less questions were indexed. (And now that question is still somewhere among the results, but further down.) I was wondering whether to add the word to my last message, but it is probably not a bug. It is simply that search results are not exactly as I would expect. My impression from the search results is that not only x, y, z are replaced by various variables, but also 5,6,7 are replaced by various numbers. 5:40 AM I think that this implicitly contains a question whether when searching for $x^5+y^6=z^7$ also the questions containing $x^2+y^2=z^2$ or $a^3+b^3=c^3$ should be matches. For the sake of completeness I will copy here the part of quiz list which is relevant to the quiz I mentioned above: "Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ", Hmm, I should have posted this as a single multiline message. But now I see that it is already too late to delete the above messages. Sorry for the duplication: { /* 4 */ "Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ", "hints": [ "This should be easy, the only thing I need to do is do some calculation...", "I can use my computer to enumerate...", "... (10 minutes after) ...", "OK, I give up. Why borther list them <b>all</b>?", "Is that possible to <a href=\"#\">search it</a> on Internet?" 
], "search": "all positive integers, $i^5 + j^6 = k^7$" }, "Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ", "hints": [ "This should be easy, the only thing I need to do is do some calculation...", "I can use my computer to enumerate...", "... (10 minutes after) ...", "OK, I give up. Why borther list them <b>all</b>?", "Is that possible to <a href=\"#\">search it</a> on Internet?" ], "search": "all positive integers, $i^5 + j^6 = k^7$" }, 8 hours later… 1:19 PM @MartinSleziak OK, I get it. So next time I would definitely reply to whom actually makes the revision. @MartinSleziak Yes, remember the first time when we talk in a chat room? At that version of approach0, when a very limited posts have been indexed, you can actually get relevant posts on $i^5+j^6=k^7$. However, when I has enlarged the index (now almost the entire MSE), that piece of quiz (in fact, some quiz I selected earilier like [this one]()) does not find relevant posts anymore. I have noticed that "quiz" does not work, but I am really lazy and have not investigated it. Instead of change that "quiz", I agree to investigate on why that relevant result has gone. As far as I can guess, there can be two reasons: 1) the crawler missed that one (I did the crawling in China, the network condition is not always good, sometimes crawler fails to fetch random posts and have to skip them) 2) there is a bug in approach0 that I am not aware 1) the crawler missed that one (I did the crawling in China, the network condition is not always good, sometimes crawler fails to fetch random posts and have to skip them) 2) there is a bug in approach0 that I am not aware In order to investigate this problem, I am trying to find the original posts that you and me have seen (as you remember vaguely) which is relevant to $i^5+j^6=k^7$ quiz, if you find that post, please send me the URL. 
@MartinSleziak It can be a bug, but I need to know if my index does contain a relevant post, so first let us find that post we think relevant. And I will have a look whether or not it is in my index, perhaps the crawler just missed that one. If it is in our index currently, then I should spend some time to find out the reason. @MartinSleziak As for you last question, I need to illustrate it a little more. Approach0 will first find expressions that are structurallyrelevant to query. So $x^5+y^6=z^7$ will get you $x^2+y^2=z^2$ or $a^3+b^3=c^3$, because they (more specifically, their operator tree representation) are considered structurally identical. After filtering out these structurally relevant expressions, Approach0 will evaluate their symbolic relevance degree with regarding to query expression. Suppose $x^5+y^6=z^7$ gives you $x^2+y^2=z^2$, $a^3+b^3=c^3$ and also $x^5+y^6=z^7$, expression $x^5+y^6=z^7$ will be ranked higher than $x^2+y^2=z^2$ and $a^3+b^3=c^3$, this is because $x^5+y^6=z^7$ has higher symbolic score (in fact, since it has identical symbol set to query, it has the highest possible symbolic score). I am sorry, I should use "and" instead of "or". Let me repeat the message before previous one below: As for you last question, I need to illustrate it a little more. Approach0 will first find expressions that are structurallyrelevant to query. So $x^5+y^6=z^7$ will get you both$x^2+y^2=z^2$ and$a^3+b^3=c^3$, because they (more specifically, their operator tree representation) are considered structurally identical. Now the next things for me to do is to investigate some "missing results" suggested by you. 1. Try to find `\oint` expression in an old post (by old I mean at least 5 weeks old, so that it is possible been indexed) 1. 
Try to find `\oint` expression in an old post (by old I mean at least 5 weeks old, so that it is possible been indexed) 2:23 PM Unfortunately, I fail to find any relevant old post in neither case 1 nor case 2 after a few tries (using MSE default search). So the only thing I can do now is to do an "integrated test" (see the new code I have just pushed to Github: github.com/approach0/search-engine/commit/…) An "integrated test" means I make a minimal index with a few specified math expressions and search a specified query, and see if the results is expected. For example, the test case tests/cases/math-rank/oint.txt specified the query $\oint \frac{dz}{1-z^2}$, and the entire index has just two expressions: $\oint \frac{dz}{1-z^2}$ and $\oint \frac{dx}{1-x^2}$, and the expected search result is both these two expressions are HIT (i.e. they should appear in search result) 10 hours ago, by Martin Sleziak I guess the quiz saying "Please list all positive integers $i,j,k$ such that $i^5 + j^6 = k^7$." was made with this question in mind: Find all positive integers satisfying: $x^5+y^6=z^7$. 2:39 PM For anyone interested, I post the screenshot of integrated test results here: imgur.com/a/xYBD5 3:04 PM For example like this: chat.stackexchange.com/transcript/message/32711761#32711761 You get the link by clicking on the little arrow next to the message and then clicking on "permalink". I am mentioning this because (hypothetically) if Workaholic only sees your comment a few days later and then they come here to see what the message you refer to, they might have problem with finding it if there are plenty of newer messages. However, this room does not have that much traffic, so very likely this is not going to be a problem in this specific case. Another possible way to linke to a specific set of messages is to go to the transcript and then choose a specific day, like this: chat.stackexchange.com/transcript/46148/2016/10/1 Or to bookmark a conversation. 
This can be done from the room menu on the right. This question on meta.SE even has some pictures. This is also briefly mentioned in chat help: chat.stackexchange.com/faq#permalink 3:25 PM @MartinSleziak Good to learn this. I just posted another comment with permalink in that meta post for Workaholic to refer. I just checked the index on server, yes, that post is indeed indexed. (for my own reference, docID = 249331) 2 hours later… 5:13 PM Update: I have fixed that quiz problem. See: approach0.xyz/search/… That is not strictly a bug, it is because I put a restriction on the number of document to be searched in one posting list (not trying to be very technical). I have pushed my new code to GitHub (see commit github.com/approach0/search-engine/commit/…), this change gets rid of that restriction and now that relevant post is shown as the 2nd search result.
A cryptographic hash function $f : \{0,1\}^{*} \to \{0,1\}^n$ has three properties: (1) preimage resistance, (2) second-preimage resistance, and (3) collision resistance. Furthermore, these properties form a hierarchy in which each property implies the one before it, i.e., a collision-resistant function is also second-preimage resistant, and a second-preimage resistant function is also preimage resistant (with a condition on $f$). In the case of (3) ⇒ (2), it's not too hard to see why: if an adversary cannot find any colliding message pairs, then they certainly cannot find a colliding message when one of the messages is fixed. However, (2) ⇒ (1) is substantially trickier. For some intuition, consider a second-preimage resistant hash function $f$ that was not preimage resistant (modeled by access to a preimage-finding oracle). Suppose you were given a message $m_1$; then you could compute $f(m_1)$ and consult the oracle for a preimage of $f(m_1)$. The oracle would then return an $m_2$ such that $f(m_1) = f(m_2)$. This is very nearly a second preimage. The only question is whether $m_1 \ne m_2$. Intuitively, given that $f$ maps infinitely many inputs to a finite number of outputs, there "should be" a high probability that $m_1 \ne m_2$. For all real-life hash functions, this is pretty much the case, so a second-preimage resistant hash function should not lack preimage resistance. However, it is possible to define "pathological" hash functions that have perfect, provable second-preimage resistance but not preimage resistance. The example given in chapter 9 of the Handbook of Applied Cryptography is this: $$f(x) = \begin{cases} 0 || x & \text{if } x \text{ is } n \text{ bits long}\\ 1 || g(x) & \text{otherwise}\end{cases}$$ where $g(x)$ is a collision-resistant hash function.
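A minimal sketch of this construction in Python (using SHA-256 as a stand-in for the collision-resistant $g$, bit strings for readability, and an arbitrary choice of $n$; all names here are mine, not from the Handbook):

```python
import hashlib

N = 128  # the special input length n, in bits (illustrative choice)

def g(bits: str) -> str:
    # stand-in collision-resistant hash: SHA-256, rendered as a 256-char bit string
    h = hashlib.sha256(bits.encode()).digest()
    return format(int.from_bytes(h, "big"), "0256b")

def f(bits: str) -> str:
    # the pathological hash: identity (prefixed with 0) on n-bit inputs,
    # 1 || g(x) on everything else
    if len(bits) == N:
        return "0" + bits
    return "1" + g(bits)

# Finding a preimage of any digest starting with "0" is trivial...
digest = f("01" * 64)        # some 128-bit input
preimage = digest[1:]        # just strip the leading "0"
assert f(preimage) == digest
# ...yet no second preimage for it can exist: f is injective on n-bit inputs,
# and every other input produces a digest starting with "1".
```

Digests beginning with 1, by contrast, inherit whatever second-preimage and collision resistance $g$ provides.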
In this case, for digests beginning with $0$, it's trivial to find a preimage (indeed, on that branch $f$ is just the identity function), but such cases are provably second-preimage resistant, as there are no possible second preimages: this $f$ is injective across the space of $n$-bit inputs. To be more precise about when (2) ⇒ (1), Rogaway and Shrimpton present a theoretical analysis of the various relations between the three properties listed above in their Cryptographic Hash-Function Basics. Essentially, their analysis treats a hash function as having a finite, fixed-length domain, i.e. $f : \{0,1\}^m \to \{0,1\}^n$. They distinguish "conventional implications", like (3) ⇒ (2), which are essentially "true" implications in the sense that they are unconditional, from "provisional implications", like (2) ⇒ (1), which are conditional in nature, relying on how much $f$ compresses the message space (as the message space gets larger relative to the digest space, the implication becomes "stronger" in a probabilistic sense). So, provisional implications are essentially true if a hash function compresses the message space to a sufficient degree. (The "sufficient" example they provide is a hash compressing 256-bit messages to 128 bits.) Hence, second-preimage resistance implies preimage resistance only if the function in question compresses its input sufficiently. For length-preserving, length-extending, or low-compression functions, second-preimage resistance does not necessarily imply preimage resistance (as stated by the authors about halfway down page 8). This should be intuitive given the above algorithm for finding second preimages given a preimage oracle: if you are expanding 6-bit inputs to 256 bits, it's actually quite unlikely that a preimage oracle would return a second preimage. This isn't a formal argument, by any means, but it's a nice heuristic one. Now, back to real life.
Given the above algorithm for using a preimage oracle to find second preimages, I would not expect any real-life hash functions to have preimage attacks and not second-preimage attacks, especially since real hash functions typically compress data well. On the other hand, I'm not personally aware of any historically-used, non-toy cryptographic hash function which has a second-preimage attack but not a preimage attack. Typically, collision resistance is the first thing attacked by cryptanalysts since it is (in a sense) the "hardest" property to satisfy. But if a hash function is found to be broken with regard to collisions, cryptanalysts typically go straight for the heart: preimage attacks. So, I don't know how much luck you'll have trying to find such a hash function. You can look at the hash function lounge for some historic hash functions; it hasn't been updated since 2008, apparently, but still contains some useful info. I glanced through a few attacks and found mostly collision and preimage attacks, but you may have more luck.
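The pathological construction from the Handbook of Applied Cryptography is easy to sketch. In this illustrative Python version, the branch length N and the choice of SHA-256 as the stand-in collision-resistant $g$ are my own assumptions, not part of the book's definition:

```python
import hashlib

N = 32  # bit length of the "identity" branch (illustrative choice)

def g(bits: str) -> str:
    # stand-in for a collision-resistant hash: SHA-256, rendered as 256 bits
    return format(int(hashlib.sha256(bits.encode()).hexdigest(), 16), "0256b")

def f(bits: str) -> str:
    # pathological hash: identity (prefixed with "0") on exactly N-bit inputs
    if len(bits) == N:
        return "0" + bits
    return "1" + g(bits)

# Preimages of digests starting with "0" are trivial: strip the prefix.
digest = f("1" * N)
assert digest.startswith("0")
assert f(digest[1:]) == digest  # preimage found with no work at all
```

Yet on the N-bit branch $f$ is injective, so no second preimage of such a digest can exist at all, matching the argument above.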
I'm trying to figure out what the theoretical and practical implications and limitations are when a high-dimensional chaotic process is modeled as a random process. I understand how low-dimensional chaos (LD chaos from now on) is different from a stochastic process. However, I'm unclear on the pitfalls of approximating high-dimensional chaos (HD chaos) as a random process, in terms of linear stochastic PDEs. In my field of earth system modeling, weather systems or ocean mesoscale turbulence are (I think) good examples of HD chaos. Some modelers will use stochastic processes to represent unresolved turbulence. For example, a noisy diffusion equation has been studied to describe the statistical evolution of the ocean surface temperature field $T(x,t)$: $$\frac{\partial T(x,t)}{\partial t}=\kappa\nabla^2 T(x,t)+\eta(x,t)$$ $$\langle\eta\rangle=0$$ $$\langle\eta(x,t)\eta(x',t')\rangle\propto \delta(x-x')\delta(t-t')$$ where $\eta$ is white noise representing turbulent weather forcing from the atmosphere. Or likewise you could have a conservative noise term appearing in the temperature flux. If the process that $\eta$ represents is really LD chaos, I see how this would be a bad model, as LD chaotic dynamics are governed by an attractor, which is definitely not random. My question then is: if $\eta$ is a stochastic process that is approximating HD chaos, is this approximation going to lead to trouble or caveats? I know you could also do things like change the autocorrelation function (maybe to decay as a power law), but is that actually enough? Is HD chaos actually distinguishable from noise? Does HD chaos have some complicated attractor? If HD chaos is represented as a random process, can it be proven that this affects the statistics of the solution?
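For concreteness, the kind of model being asked about can be integrated in a few lines. This is a hedged sketch of the 1-D noisy diffusion equation with periodic boundaries; the grid size, κ, and noise amplitude are arbitrary choices, and the 1/√(dx·dt) scaling is the standard discretization of space-time (delta-correlated) white noise:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nt = 64, 2000
dx = 1.0 / nx
kappa = 1e-3
dt = 0.2 * dx**2 / kappa          # well inside the explicit stability limit
amp = 0.01                        # noise amplitude (arbitrary)

T = np.zeros(nx)                  # temperature field T(x, t)
for _ in range(nt):
    # periodic Laplacian via circular shifts
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    # variance ~ 1/(dx*dt) so <eta eta'> ~ delta(x-x') delta(t-t')
    eta = rng.standard_normal(nx) / np.sqrt(dx * dt)
    T = T + dt * (kappa * lap + amp * eta)
```

Replacing `eta` here with the output of a deterministic high-dimensional chaotic system (e.g. a coupled-map lattice) is one way to probe numerically whether the solution statistics actually differ.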
The amsmath package provides a handful of options for displaying equations. You can choose the layout that best suits your document, even if the equations are really long, or if you have to include several equations in the same line. The standard LaTeX tools for equations may lack some flexibility, causing overlapping or even trimming part of the equation when it's too long. We can overcome these difficulties with amsmath. Let's check an example: \begin{equation} \label{eq1} \begin{split} A & = \frac{\pi r^2}{2} \\ & = \frac{1}{2} \pi r^2 \end{split} \end{equation} You have to wrap your equation in the equation environment if you want it to be numbered; use equation* (with an asterisk) otherwise. Inside the equation environment, use the split environment to split the equation into smaller pieces; these smaller pieces will be aligned accordingly. The double backslash works as a newline character. Use the ampersand character & to set the points where the equations are vertically aligned. This is a simple step: if you use LaTeX frequently, surely you already know this. In the preamble of the document, include the code: \usepackage{amsmath} To display a single equation, as mentioned in the introduction, you have to use the equation* or equation environment, depending on whether you want the equation to be numbered or not. Additionally, you might add a label for future reference within the document. \begin{equation} \label{eu_eqn} e^{\pi i} + 1 = 0 \end{equation} The beautiful equation \ref{eu_eqn} is known as Euler's identity. For equations longer than a line, use the multline environment. Insert a double backslash to set a point for the equation to be broken. The first part will be aligned to the left and the second part will be displayed in the next line and aligned to the right. Again, the use of an asterisk * in the environment name determines whether the equation is numbered or not.
\begin{multline*} p(x) = 3x^6 + 14x^5y + 590x^4y^2 + 19x^3y^3\\ - 12x^2y^4 - 12xy^5 + 2y^6 - a^3b^3 \end{multline*} split is very similar to multline. Use the split environment to break an equation and to align it in columns, just as if the parts of the equation were in a table. This environment must be used inside an equation environment. For an example, check the introduction of this document. If there are several equations that you need to align vertically, the align environment will do it. Usually the relation symbols (>, < and =) are the ones aligned for a nice-looking document. As mentioned before, the ampersand character & determines where the equations align. Let's check a more complex example: \begin{align*} x&=y & w &=z & a&=b+c\\ 2x&=-y & 3w&=\frac{1}{2}z & a&=b\\ -4 + 5x&=2+y & w+2&=-1+w & ab&=cb \end{align*} Here we arrange the equations in three columns. LaTeX assumes that each equation consists of two parts separated by &; also that each equation is separated from the one before by an &. Again, use * to toggle the equation numbering. When numbering is allowed, you can label each row individually. If you just need to display a set of consecutive equations, centered and with no alignment whatsoever, use the gather environment. The asterisk trick to set/unset the numbering of equations also works here. For more information see
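For completeness, here is a minimal gather example in the same spirit as the snippets above; the equations themselves are arbitrary placeholders:

```latex
\begin{gather*}
 2x - 5y = 8 \\
 3x^2 + 9y = 3a + c
\end{gather*}
```

Each line is centered independently, with no & alignment points; drop the asterisk to number every line.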
The definition that you are using is not the most general. If you insist on applying it the way you do, it only applies to a uniform flow between two counter-moving walls. Then the velocity profile is indeed linear. A Newtonian fluid is defined by the approximation that local stress (or drag) is proportional to local strain. I would write your equation as $$ \tau_{ij}(\vec{y}) = \mu \left(\frac{\partial u_i}{\partial y_j}(\vec{y}) + \frac{\partial u_j}{\partial y_i}(\vec{y}) \right)\, . \hspace{2cm} (*)$$ $\vec{u}(\vec{y})$ is the local velocity field. Note that $\tau_{ij}(\vec{y})$ can be a complicated function of $\vec{y}$. The left-hand side of equation $(*)$ is the stress on the fluid at position $\vec{y}$. It has two indices because strain affects each component of $\vec{u}(\vec{y})$ in each direction differently. Under stress alone, the local velocity field will change according to $$ \partial_t u_i(\vec{y}) = \sum_j \partial_{y_j} \tau_{ij}(\vec{y}) \, . \hspace{2cm} (**)$$ The general case will of course involve the full Navier–Stokes equation with this term included. Note that $(**)$ is just a definition of $\tau_{ij}(\vec{y})$ which applies in the most general cases. It just states that the local velocity field will be affected by its neighbours. The right-hand side of $(*)$ represents the Newtonian approximation. It is natural to assume that if the fluid is uniform (i.e. its spatial derivatives vanish), it will not be under strain. If it is not uniform (i.e. when different parts of the fluid move at different speeds), however, there will be a strain, and $\tau_{ij}(\vec{y})$ may be a complicated function of the derivatives of $\vec{u}(\vec{y})$. Assuming that this function can be Taylor expanded, the leading term (for small inhomogeneities, $\partial_{y_i}u_j(\vec{y})$ small) is given by the right-hand side of equation $(*)$. Note that your equation follows from $(*)$ if the fluid only changes in one direction, $\vec{u}(x,y,z) = \vec{u}(x)$.
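To make equation $(*)$ concrete, here is a small numerical check for simple shear flow $\vec{u} = (\gamma y, 0, 0)$, for which the only nonzero stress components should be $\tau_{xy} = \tau_{yx} = \mu\gamma$; the values of $\mu$ and $\gamma$ are illustrative, not from the text:

```python
import numpy as np

mu, gamma = 1.0e-3, 2.0   # viscosity (Pa s) and shear rate (1/s), illustrative
# velocity gradient tensor (du_i / dy_j) for u = (gamma * y, 0, 0)
grad_u = np.array([[0.0, gamma, 0.0],
                   [0.0, 0.0,   0.0],
                   [0.0, 0.0,   0.0]])
# Newtonian approximation, equation (*): symmetrized gradient times viscosity
tau = mu * (grad_u + grad_u.T)
```

As expected, `tau` is symmetric with `tau[0, 1] == mu * gamma` and all other entries zero, recovering the one-dimensional formula from the question.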
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Abel's Test for Convergence of Series of Real Numbers Examples 1 Recall from the Abel's Test for Convergence of Series of Real Numbers page the following test for convergence of series of real numbers. Let's now look at some examples of using Abel's test. Example 1 Show that the series $\displaystyle{\sum_{n=1}^{\infty} \frac{n^3n! \cos \left ( \frac{1}{n^2} \right )}{e^n(n+2)!}}$ converges. Let $\displaystyle{(a_n)_{n=1}^{\infty} = \left (\frac{n^3n!}{e^n(n+2)!} \right )_{n=1}^{\infty}}$ and let $\displaystyle{(b_n)_{n=1}^{\infty} = \left ( \cos \left ( \frac{1}{n^2} \right ) \right )_{n=1}^{\infty}}$. Using the ratio test, we see that: $$\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \lim_{n \to \infty} \frac{(n+1)^3(n+1)!}{e^{n+1}(n+3)!} \cdot \frac{e^n(n+2)!}{n^3n!} = \lim_{n \to \infty} \frac{(n+1)^4}{e\,n^3(n+3)} = \frac{1}{e} < 1 \quad (1)$$ So by the ratio test we have that the series $\displaystyle{\sum_{n=1}^{\infty} a_n}$ converges. Furthermore, the sequence $(b_n)_{n=1}^{\infty}$ is monotone (increasing) and converges to $1$. So by Abel's test, $\displaystyle{\sum_{n=1}^{\infty} \frac{n^3n! \cos \left ( \frac{1}{n^2} \right )}{e^n(n+2)!}}$ converges. Example 2 Show that the series $\displaystyle{\sum_{n=1}^{\infty} \left ( \frac{n^2 + 3n + 1}{n^4 + 2n^2} \cdot \sum_{k=1}^{n} \frac{2}{k^2} \right )}$ converges. Let $\displaystyle{(a_n)_{n=1}^{\infty} = \left ( \frac{n^2 + 3n + 1}{n^4 + 2n^2} \right )_{n=1}^{\infty}}$ and let $\displaystyle{(b_n)_{n=1}^{\infty} = \left ( \sum_{k=1}^{n} \frac{2}{k^2} \right )_{n=1}^{\infty}}$. Notice that for sufficiently large $n$: $$0 < \frac{n^2 + 3n + 1}{n^4 + 2n^2} \le \frac{2}{n^2} \quad (2)$$ The series $\displaystyle{\sum_{n=1}^{\infty} \frac{1}{n^2}}$ converges, and using the limit comparison test with this series we see that: $$L = \lim_{n \to \infty} \frac{(n^2 + 3n + 1)/(n^4 + 2n^2)}{1/n^2} = \lim_{n \to \infty} \frac{n^4 + 3n^3 + n^2}{n^4 + 2n^2} = 1 \quad (3)$$ So by the limit comparison test, since $0 < L = 1 < \infty$, we have that $\displaystyle{\sum_{n=1}^{\infty}\frac{n^2 + 3n + 1}{n^4 + 2n^2}}$ converges. Now look at the sequence $(b_n)_{n=1}^{\infty}$. Notice that this is simply the sequence of partial sums of the series $\displaystyle{\sum_{n=1}^{\infty} \frac{2}{n^2}}$, which converges. So $(b_n)_{n=1}^{\infty}$ converges and is clearly monotonic.
So, by Abel's test we conclude that $\displaystyle{\sum_{n=1}^{\infty} \left ( \frac{n^2 + 3n + 1}{n^4 + 2n^2} \cdot \sum_{k=1}^{n} \frac{2}{k^2} \right )}$ converges.
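A quick numerical sanity check of Example 1, using the simplification $n!/(n+2)! = 1/((n+1)(n+2))$; the partial-sum cutoffs are arbitrary choices:

```python
import math

def term(n):
    # n^3 n! cos(1/n^2) / (e^n (n+2)!) = n^3 cos(1/n^2) / (e^n (n+1)(n+2))
    return n**3 * math.cos(1 / n**2) / (math.exp(n) * (n + 1) * (n + 2))

s100 = sum(term(n) for n in range(1, 101))
s200 = sum(term(n) for n in range(1, 201))
# partial sums stabilize quickly because of the e^n in the denominator
assert abs(s200 - s100) < 1e-12
```

The near-identical partial sums are consistent with the convergence established by Abel's test (they do not prove it, of course).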
I assume this is a thought experiment. Your question is more complicated than you think because of the ill-defined parameters of this situation. For one, the word 'amount' here could be taken to mean mass, volume or number of particles, and each would be a different scenario. This question also raises a common myth that most people still believe, which is 'steam burns are a lot more dangerous/serious than hot water burns because you need to consider the latent heat of vaporisation'. In everyday life, this is almost always not true. Now, first things first: $1$ kg of steam at $100°C$ does have more heat in it than $1$ kg of water at $100°C$. That being said, steam "usually" does not pose a source of danger that is significantly more hazardous than boiling water does. This is because of many things. Firstly, if your steam is not being forcibly convected or circulated at a high speed, the rate of heat transfer in steam is much lower than in boiling water. This is because steam is a gas and water is a liquid. The great distance between particles in a gas just does not allow for efficient heat flow. Boiling water almost always boils through nucleate boiling, and that itself lends a great increase in the convection of boiling liquid water. In other words, boiling water creates bubbles that make the water churn and roll. Unless of course you're assuming that the water is kept exactly at $100°C$ with just enough heat to maintain its temperature but no more to trigger boiling, but even then water wins in passing the heat by virtue of being a liquid. Barring the odd case of superheated, high-pressure steam, you're not conducting heat well enough. Secondly, not many people often interact with "real" steam in their everyday lives. And by "real", I mean steam that is transparent, hot, and dense, and that highly saturates the air. The white mist most people perceive as steam is not steam, it is "wet steam".
It is usually a mix of steam, air, water droplets and steam that has cooled down but remains in a gaseous state, i.e. water vapour. Let me be clear: water vapour is not steam. This has to do with phases of water that vary with temperature and pressure. You can easily picture this mentally by observing the fact that clouds are not boiling hot fluffs of water. Unless you're working at a factory of sorts, you're not going to encounter pure, hot, compressed steam. Additionally, there are other things to consider. Let's say you get steam that condenses and releases all of its energy into your skin to form a water droplet at, say, $50°C$. Assuming no strong currents in the steam, that droplet of water is now actually protecting your skin from further steam burns, a sort of reverse-Leidenfrost effect. The water droplet will have to rise to $100°C$ first and then vaporise before any new steam can touch and burn your skin. I assume near-constant pressure and no significant movement of air/steam that would "wick away" the droplet or evaporate it to expose more skin. The water droplet would not conduct the heat energy from the steam to your skin any more efficiently than boiling water would conduct heat from the surrounding water to your skin. If you're going by number of particles, you'd need a lot more steam than you would hot water, volume-wise. If you're going by mass, you'd also need a lot more steam than you would water, volume-wise. I assume the time of exposure here isn't nearly enough for those huge volumes of steam to fully transfer heat, nor is the scenario feasible. You're not going to compare dunking your hand in a pot of boiling water to standing in a pressurised room of $100°C$ steam, are you?
The only case to consider here is if you take 'amount' to mean equal volumes of steam vs water, and even then the math is against you. Given $Q=mc \Delta T$, $Q=mL_v$ (latent heat of vaporisation) and $\rho =\frac{m}{V}$: $1m^3$ of steam at $100°C$ condensing to $50°C$ water = $(1m^3\times0.645kg/m^3)(2257kJ/kg + 50K\times4.2kJ/kgK)$ ≈ $1591kJ$ of energy; $1m^3$ of water at $100°C$ cooling to $50°C$ = $(1m^3\times970kg/m^3)(50K\times4.2kJ/kgK)$ ≈ $203{,}700kJ$ of energy. (I'm estimating here, but you get the point.) But isn't the heat of vaporisation of water absurdly bigger than the specific heat capacity of water? Yes, it is. In fact, making $1$ kg of steam from $1$ kg of $100°C$ water requires more than five times the amount of energy as taking $1$ kg of water from $0°C$ to $100°C$. But here's the thing: do you know how hard it is to boil a kettle of water empty? You'd have to boil that kettle for a solid half-hour, potentially longer. You'd never run into anything nearly like that much steam. Both my parents are doctors and have been for decades. Neither one of them has treated close to any bad steam burns, but lots of hot water burns and spills. I practically lived in the clinic equipment room because I had afternoon school and my parents had to take me to work in the mornings. While my parents were seeing patients one wall away, I was told to be quiet and locked in that room with a table, chairs, boxes of stuff, my books and a huge, silver cylindrical autoclave. The autoclave was used to sterilise surgical equipment and it was built like a vault. The pressure needed to sterilise equipment with steam was so high that the turning door-lock sometimes got stuck and the nurses had to call my dad to crank it open. One day I tried messing around with it and took a shot of steam to my hand: first-degree burns that only needed a plaster.
Years later, I remember a much more intense pain, bigger blisters, more bandages and antibiotic creams because I spilt spaghetti water over my hand trying to help my mum in the kitchen. Still not convinced? Ask any cook whether boiling or steaming is a faster cooking method. Unless you've got some kind of space-tech steamer, boiling's going to be faster, with a pressure cooker being the fastest. Picture pouring a kettle of boiling water and the steam from that wafting up on your fingers. Okay, now picture pouring that same water onto your fingers. Which would you choose? Pure steam isn't easy to get. Steam dissipates super quickly because steam expands a lot. The expansion of steam is the main force that drives steam turbines around the world. In fact, steam so wants to get "up and out" that it's an effective lifting gas. Lifting gas! Of course, given the right conditions, steam can and will burn worse than water, but the pressure needed is industrial standard. You'd have to break a steam-carrying pipe or hold your hand over a steam vent for a silly amount of time. Given a short amount of time, hot water will burn you worse. High-speed, pressurised movement of the steam is the dominating factor here. TLDR; if you had a box of $100°C$ steam and a box of $100°C$ water and you had to put your hand in one of them for 10 seconds, choose the box of steam.
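The back-of-the-envelope volume comparison above can be reproduced directly, using the densities and heat values as quoted in the text:

```python
rho_steam, rho_water = 0.645, 970.0   # kg/m^3, as quoted above
c_w, L_v = 4.2, 2257.0                # kJ/(kg K) and kJ/kg
dT = 50.0                             # cooling from 100 C to 50 C

# steam must first condense (releasing L_v), then cool as water
q_steam = rho_steam * (L_v + c_w * dT)   # kJ per m^3 of steam
q_water = rho_water * (c_w * dT)         # kJ per m^3 of water
```

Per cubic metre, the water releases on the order of a hundred times more energy than the steam, despite the large latent heat, simply because the steam carries so little mass.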
Permanent link: https://www.ias.ac.in/article/fulltext/pram/068/01/0043-0049 Parametric dependence of the intensity of the 182 Å Balmer-α line $(C^{5+};\ n = 3 \rightarrow 2)$, relevant to XUV/soft X-ray lasing schemes, from laser-produced carbon plasma is studied in circular spot focusing geometry using a flat-field grating spectrograph. The maximum spectral intensity for this line in space-integrated mode occurred at a laser intensity of $1.2 \times 10^{13}$ W cm$^{-2}$. At this laser intensity, the space-resolved measurements show that the spectral intensity of this line peaks at $\sim 1.5$ mm from the target surface, indicating the maximum population of C$^{5+}$ ions $(n = 3)$ at this distance. From a comparison of the spatial intensity variation of this line with that of the C$^{5+}$ Ly-α $(n = 2 \rightarrow 1)$ line, it is inferred that the $n = 3$ state of C$^{5+}$ ions is predominantly populated through three-body recombination pumping of C$^{6+}$ ions of the expanding plasma, consistent with quantitative estimates of the recombination rates of different processes.
Newform invariants Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2,\beta_3\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form. Basis of the coefficient ring in terms of a root \(\nu\) of \(x^4 - x^3 - 4x^2 + 2x + 1\): \(\beta_0 = 1\), \(\beta_1 = \nu\), \(\beta_2 = \nu^2 - \nu - 2\), \(\beta_3 = \nu^3 - \nu^2 - 3\nu + 1\). Conversely, \(1 = \beta_0\), \(\nu = \beta_1\), \(\nu^2 = \beta_2 + \beta_1 + 2\), \(\nu^3 = \beta_3 + \beta_2 + 4\beta_1 + 1\). For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label. This newform does not admit any (nontrivial) inner twists. Signs: \(p = 2\): \(-1\); \(p = 7\): \(-1\); \(p = 11\): \(-1\); \(p = 13\): \(-1\). This newform can be constructed as the kernel of the linear operator \(T_3^4 + 3T_3^3 - 2T_3^2 - 6T_3 + 3\) acting on \(S_2^{\mathrm{new}}(\Gamma_0(4004))\).
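The change-of-basis identities above are easy to verify numerically at the roots of the defining polynomial; a quick sketch (the tolerance is an arbitrary choice):

```python
import numpy as np

# roots of the defining polynomial x^4 - x^3 - 4x^2 + 2x + 1
for nu in np.roots([1, -1, -4, 2, 1]):
    b1 = nu
    b2 = nu**2 - nu - 2
    b3 = nu**3 - nu**2 - 3 * nu + 1
    # inverse relations: nu^2 = b2 + b1 + 2 and nu^3 = b3 + b2 + 4*b1 + 1
    assert abs((b2 + b1 + 2) - nu**2) < 1e-9
    assert abs((b3 + b2 + 4 * b1 + 1) - nu**3) < 1e-9
```

In fact both relations are polynomial identities in ν, so they hold at every root (and indeed for any ν) up to floating-point error.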
A myriad (from Ancient Greek μυριάδες, myriades) is technically the number ten thousand; in that sense, the term is used almost exclusively in translations from Greek, Latin, or Chinese, or when talking about ancient Greek numbers. More generally, a myriad may be an indefinitely large number of things. [1] History The Aegean numerals of the Minoan and Mycenaean civilizations included a single unit to denote tens of thousands. It was written with a symbol composed of a circle with four dashes: 𐄢. [2] In Classical Greek numerals, a myriad was written as a capital mu, Μ, as lower-case letters did not exist in Ancient Greece. To distinguish this numeral from letters, it was sometimes given an overbar: M̄. Multiples were written above this sign, so that for example $\stackrel{\delta\phi\pi\beta}{\mathrm{M}}$ would equal 4,582 × 10,000 = 45,820,000. The etymology of the word myriad itself is uncertain: it has been variously connected to PIE *meu- ("damp") in reference to the waves of the sea and to Greek myrmex (μύρμηξ, "ant") in reference to their swarms. [3] The largest number named in Ancient Greek was the myriad myriad (written MM) or hundred million. In his Sand Reckoner, Archimedes of Syracuse used this quantity as the basis for a numeration system of large powers of ten, which he used to count grains of sand. Usage Greek In Modern Greek, the word "myriad" is rarely used to denote 10,000, but a million is ekatommyrio (εκατομμύριο, lit. 'hundred myriad') and a thousand million is disekatommyrio (δισεκατομμύριο, lit. 'twice hundred myriad'). English In English, "myriad" is most commonly used to mean "some large but unspecified number". It may be either an adjective or a noun: both "there are myriad people outside" and "there is a myriad of people outside" are in use. [4] (There are small differences: the former could imply that it is a diverse group of people; the latter does not, but could possibly indicate a group of exactly ten thousand.)
The Merriam-Webster Dictionary notes that confusion over the use of myriad as a noun "seems to reflect a mistaken belief that the word was originally and is still properly only an adjective ... however, the noun is in fact the older form, dating to the 16th century. The noun 'myriad' has appeared in the works of such writers as Milton (plural 'myriads') and Thoreau ('a myriad of'), and it continues to occur frequently in reputable English." [4] "Myriad" is also infrequently used in English as the specific number 10,000. Owing to the possible confusion with the generic meaning of "large quantity", however, this is generally restricted to translation of other languages like ancient Greek, Chinese, and Hindi, where numbers may be grouped into sets of 10,000 (myriads). Such use permits the translator to remain closer to the original text and avoid repeated and unwieldy mentions of "tens of thousands": for example, "the original number of the crews supplied by the several nations I find to have been twenty-four myriads" [5] and "What is the distance between one bridge and another? Twelve myriads of parasangs". [6] In British English, a "myriad" is a 100 km × 100 km (62 mi × 62 mi) area, that is, 10,000 square kilometers, particularly on the Ordnance Survey's National Grid. In other languages Europe Most European languages include variations of "myriad" with similar meanings to the English word. Additionally, the prefix myria-, indicating multiplication by ten thousand (×10⁴), was part of the original metric system adopted by France in 1795. [7] Although it was not retained after the 11th CGPM conference in 1960, "myriameter" is sometimes still encountered as a translation of the Scandinavian mile (Swedish & Norwegian: mil) of 10 kilometers (6.21 mi). The myriagramme was a French approximation of the avoirdupois quartier of 25 pounds (11 kg), and the "myriaton" appears in Isaac Asimov's Foundation trilogy.
East Asia In East Asia, the traditional numeral systems of China, Korea, and Japan are all decimal-based but grouped into ten thousands rather than thousands. The character for myriad is 萬 in traditional script and 万 in simplified form in both mainland China and Japan. The pronunciation varies within China and abroad: wàn (Mandarin), wan⁵ (Hakka), bān (Minnan), maan⁶ (Cantonese), man (Japanese and Korean), and vạn (Vietnamese). Vietnam is peculiar within the Sinosphere in largely rejecting Chinese numerals in favor of its own: vạn is less common than the native mười nghìn ("ten thousand"), and its numerals are grouped in threes. Because most East Asian numerals are grouped into fours, higher orders of numbers are provided by the powers of 10,000: 10,000² was 萬萬 in ancient texts but is now written as 1,0000,0000 and called 億; 10,000³ is 1,0000,0000,0000 or 兆; 10,000⁴ is 1,0000,0000,0000,0000 or 京; and so on. Conversely, Chinese, Japanese, and Korean generally don't have native words for powers of one thousand: what is called "one million" in English is "one hundred wan" in Chinese and "one billion" is "ten chō" in Japanese. Unusually, Vietnam employs its former translation of 兆, một triệu, to mean 1,000,000 rather than the Chinese figure. 萬 and 万 are also employed colloquially in myriad expressions, clichés, and chengyu (idioms) in the senses of "vast", "numerous", "numberless", and "infinite". A skeleton key is a 万能钥匙 ("myriad-use key"); [8] the emperor was the "lord of myriad chariots" (萬乘之主); [9] Zhu Xi's 月映万川 ("the moon reflects in myriad rivers") had the sense of supporting greater empiricism in Chinese philosophy; [10] and Ha Qiongwen's popular 1959 propaganda poster 毛主席万岁, which could be literally read as "Chairman Mao is 10,000 years old", in fact meant "Long live Chairman Mao". [11] See also References ^ Oxford English Dictionary, Third Edition, June 2003, s.v. 'myriad'. ^ Samuel Verdan (20 Mar 2007).
"Systèmes numéraux en Grèce ancienne: description et mise en perspective historique" (in French). Retrieved 2 Mar 2011. ^ Schwartzman, Steven. The Words of Mathematics: An Etymological Dictionary of Mathematical Terms Used in English, p. 142. The Mathematical Assoc. of America, 1994. ^ a b Merriam-Webster Online. "Myriad". 2013. Accessed 1 November 2013. ^ Herodotus. The History of Herodotus, VII.184. Translation by G.C. Macaulay, 1890. Accessed 1 Nov 2013. ^ Janowitz, Naomi. The Poetics of Ascent: Theories of Language in a Rabbinic Ascent Text, p. 118. SUNY Press (New York), 1989. Accessed 1 November 2013. ^ L'Histoire Du Mètre: "La Loi Du 18 Germinal An 3". 2005. Accessed 1 November 2013. (French) ^ Nciku.com. "万能钥匙". Accessed 1 November 2013. ^ Wai Keung Chan, Timothy. Considering the End: Mortality in Early Medieval Chinese Poetic Representation, p. 23. Brill, 2012. Accessed 1 November 2013. ^ Chen Derong. Metaphorical Metaphysics in Chinese Philosophy, p. 29. Lexington Books (Lanham), 2011. Accessed 1 November 2013. ^ Yeh Wen-hsin & al. Visualizing China, 1845–1965: Moving and Still Images in Historical Narratives, pp. 416 ff. Brill, 2012. Accessed 1 November 2013.
Probably it's not a good fit for this site, but I'll make a CW answer here (others may add to it). If you want to ask about specific TeX issues, better to make a small example in a new question, asking about one point at a time. I didn't look at your class, but there are several odd things in the example document. (The file is not as bad as the length of this list makes it seem, but since you asked for comments...) \sloppy %Even spacing in text You shouldn't encourage \sloppy, certainly not as a document-wide default. It allows arbitrarily uneven spacing, in order to help with tricky line breaking in special cases. \makecover %Make cover page \newpage You should not need \newpage (if your class is defining \makecover to make a cover page, it should have handled the page breaking internally). \section*{\MakeUppercase{Sisällysluettelo}} There should be no formatting in the argument to headings within the document; if the design needs uppercased section headings, that can be coded into the class definition of \section*. \underline{\textbf{HUOM!}}\\ This is presumably some kind of fake heading; it would be much better to use a LaTeX sectioning command (perhaps a custom one defined in the class). For example, that markup would allow the page to break immediately after the heading, which a real heading command always prevents. ”sokeaksi” looks odd to me, with two right quotation marks, but language customs vary a lot here, so perhaps that's OK. F = 0,65\times (\mathrm{afk}_{c} sin \kappa_{r}) Probably this is just filler text, but anyway sin should presumably be \sin. \begin{thebibliography}{99} It's normally better to use BibTeX (or biber/biblatex) to manage the bibliography rather than manage it by hand. \MakeUppercase{PROJEKTISUUNNITELMA} Again, this seems to be a fake heading. \value{chapter} That will expand to \c@chapter and be the left side of an assignment; don't you get a missing number error at that point?
\begin{table}[H] \centering \label{my-label}

The \label here will reference the current section, not the table, as it is used with no \caption.

\subsection*{\textbf{3 TYÖNSUORITTAMINEN}}

As above, there shouldn't be \textbf formatting in a heading; also, why use unnumbered sections and then number them by hand?
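Several of these points are fixed in the class file (or preamble), not in the document body. A rough sketch of the fake-heading fix; the macro name \noteheading is illustrative, not taken from the asker's class:

```latex
\documentclass{article}
\usepackage[T1]{fontenc}

% A real unnumbered heading in place of \underline{\textbf{HUOM!}}\\ :
% \par\nobreak forbids a page break right after the heading, so the
% heading cannot be orphaned from the text that follows it.
\newcommand*{\noteheading}[1]{%
  \par\medskip\noindent\textbf{#1}\par\nobreak\smallskip}

\begin{document}
\noteheading{HUOM!}
Body text that now stays on the same page as its heading.

% \sin, not sin; {,} gives a decimal comma without the comma's space
$F = 0{,}65\times (\mathrm{afk}_{c} \sin \kappa_{r})$
\end{document}
```

The same idea extends to the uppercased \section* titles: redefine the sectioning command once in the class so the document body passes plain text.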
Table of contents. Volume: 8, Issue: 5, 2011. Special Issue: Fuzzy Mathematics. Publication date: 1390/09/17. Number of titles: 11

Page 1
The starting point of this paper is given by Priestley’s papers, where a theory of representation of distributive lattices is presented. The purpose of this paper is to develop a representation theory of fuzzy distributive lattices in the finite case. In this way, some results of Priestley’s papers are extended. In the main theorem, we show that the category of finite fuzzy Priestley spaces is equivalent to the dual of the category of finite fuzzy distributive lattices. Several examples are also presented.

Page 13
Motivated by the recent study on categorical properties of lattice-valued topology, the paper considers a generalization of the notion of topological system introduced by S. Vickers, providing an algebraic and a coalgebraic category of the new structures. As a result, the nature of the category TopSys of S. Vickers gets clarified, and a metatheorem is stated, claiming that (lattice-valued) topology can be embedded into algebra.

Page 31
In this paper, let $L$ be a complete residuated lattice, and let {\bf Set} denote the category of sets and mappings, $LF$-{\bf Pos} denote the category of $LF$-posets and $LF$-monotone mappings, and $LF$-{\bf CSLat}$(\sqcup)$, $LF$-{\bf CSLat}$(\sqcap)$ denote the category of $LF$-complete lattices and $LF$-join-preserving mappings and the category of $LF$-complete lattices and $LF$-meet-preserving mappings, respectively. It is proved that there are adjunctions between {\bf Set} and $LF$-{\bf CSLat}$(\sqcup)$, between $LF$-{\bf Pos} and $LF$-{\bf CSLat}$(\sqcup)$, and between $LF$-{\bf Pos} and $LF$-{\bf CSLat}$(\sqcap)$, that is, {\bf Set}$\dashv LF$-{\bf CSLat}$(\sqcup)$, $LF$-{\bf Pos}$\dashv LF$-{\bf CSLat}$(\sqcup)$, and $LF$-{\bf Pos}$\dashv$ $LF$-{\bf CSLat}$(\sqcap)$.
And a usual mapping $f$ generates the traditional Zadeh forward powerset operator $f_L^\rightarrow$ and the fuzzy forward powerset operators $\widetilde{f}^\rightarrow, \widetilde{f}_\ast^\rightarrow, \widetilde{f}^{\ast\rightarrow}$ defined by the author et al. via these adjunctions. Moreover, it is also shown that all the fuzzy powerset operators mentioned above can be generated by the underlying algebraic theories.

Page 59
In this paper, our focus of attention is the proper propagation of fuzzy degrees in the determinization of $Nondeterministic$ $Fuzzy$ $Finite$ $Tree$ $Automata$ (NFFTA). Initially, two determinization methods are introduced which have some limitations (one in behavior preserving, the other in the type of fuzzy operations). In order to eliminate these limitations and increase the efficiency of FFTA, we define the notion of fuzzy complex state and $Complex$ $FFTA$ (CFFTA). Also, we define the $\nabla$-normalization operation in the algebra of fuzzy complex states to solve the multi-membership state problem in fuzzy automata. Furthermore, we discuss the relationship between FFTA and CFFTA. Finally, determinization of CFFTA is presented.

Page 69
This paper deals with some equivalence relations in fuzzy subgroups. Further, the probability of commuting two fuzzy subgroups of some finite abelian groups is defined.

Page 81
By means of a kind of new idea, we consider the $(\in,\ivq)$-fuzzy $h$-ideals of a hemiring. First, the concepts of $(\in,\ivq)$-fuzzy left (right) $h$-ideals of a hemiring are provided and some related properties are investigated. Then, a kind of quotient hemiring of a hemiring by an $(\in,\ivq)$-fuzzy $h$-ideal is presented and studied. Moreover, the notions of generalized $\varphi$-compatible $(\in,\ivq)$-fuzzy left (right) $h$-ideals of a hemiring are introduced and some properties of them are provided.
Finally, the relationships among $(\in,\ivq)$-fuzzy $h$-ideals, quotient hemirings and homomorphisms are explored and several homomorphism theorems are provided.

FUZZY REFLEXIVITY OF FELBIN'S TYPE FUZZY NORMED LINEAR SPACES AND FIXED POINT THEOREMS IN SUCH SPACES
Page 103
Abstract. An idea of fuzzy reflexivity of Felbin's type fuzzy normed linear spaces is introduced and its properties are studied. The concept of fuzzy uniform normal structure is given, and using the geometric properties of this concept, fixed point theorems are proved in fuzzy normed linear spaces.

Page 117
In this note, we aim to present some properties of the space of all weakly fuzzy bounded linear operators, with the Bag and Samanta operator norm on Felbin-type fuzzy normed spaces. In particular, the completeness of this space is studied. By some counterexamples, it is shown that the inverse mapping theorem and the Banach–Steinhaus theorem are not valid in this fuzzy setting. Also, finite dimensional fuzzy normed spaces are considered briefly. Next, a Hahn–Banach theorem for weakly fuzzy bounded linear functionals, with some of its applications, is established.

Page 131
In this paper, the gradual real numbers are considered and the notion of the gradual normed linear space is given. Also some topological properties of such spaces are studied, and it is shown that the gradual normed linear space is a locally convex space, in the classical sense. So the results in locally convex spaces can be translated to gradual normed linear spaces. Finally, we give an example of a gradual normed linear space which is not normable in classical analysis.

Page 141
In this paper, the notion of $\psi$-weak contraction [18] is extended to fuzzy metric spaces. The existence of common fixed points for two mappings is established, where one mapping is a $\psi$-weak contraction with respect to another mapping on a fuzzy metric space. Our result generalizes a result of Gregori and Sapena [9].
Page 149
In this paper we investigate the algebraic properties of dimension of fuzzy hypervector spaces. Also, we prove that two isomorphic fuzzy hypervector spaces have the same dimension.
One strength of the human mind is its ability to find patterns and draw connections between disparate concepts, a trait that often enables science, poetry, visual art, and a myriad of other human endeavors. In a more concrete sense, the brain assembles acquired knowledge and links pieces of information into a network. Knowledge networks also seem to have a physical aspect in the form of interconnected neuron pathways in the brain. During her invited address at the 2018 SIAM Annual Meeting, held in Portland, Ore., last July, Danielle Bassett of the University of Pennsylvania illustrated how brains construct knowledge networks. Citing early 20th-century progressive educational reformer John Dewey, she explained that the goal of a talk—and learning in general—is to map concepts from the speaker/teacher’s mind to those of his or her listeners. When the presenter is successful, the audience gains new conceptual networks. More generally, Bassett explored how humans acquire knowledge networks, whether that process can be modeled mathematically, and how such models may be tested experimentally. Fundamental research on brain networks can potentially facilitate the understanding and treatment of conditions as diverse as schizophrenia and Parkinson’s disease. In mathematical terms, a network is a type of graph: a set of points connected by lines. The particular model for knowledge networks is based on the assumption that humans essentially experience phenomena as discrete events or concepts arranged sequentially in time. Each of these is modeled as a node in a graph, with lines called “edges” linking them together. The edges represent possible transitions between the events or concepts; a particular graph thus describes how knowledge networks interconnect ideas. The first big question for this model is whether optimal pathways of connectivity that maximize learning exist.
Hot Thoughts and Modular Thinking

To address this problem, Bassett and her collaborators constructed a knowledge network consisting of nodes that represent random stimuli. Each node was connected to the same number of other nodes—creating a \(k\)-regular graph, in graph theory terms—to ensure equal transition probabilities between nodes. The researchers tested human reactions, assigning key presses or abstract “avatars” to every node. Test subjects “learned” the graph by performing set tasks, and Bassett’s team quantified these performances based on the amount of time the subjects spent responding to each task. The experiments involved two types of graphs: graphs in which nodes clustered together in groups and graphs with evenly connected nodes in a more lattice-like structure. For example, consider a four-regular graph of 15 nodes where each node is connected to four others. The modular graph could simply cluster the nodes into three linked groups of five (see Figure 1), while the lattice-like graph lacks modules [3].

Figure 1. Test subjects had better reaction times (RTs) for switching tasks arranged on nodes within clusters on a graph than for tasks across clusters (left). The graph topology also affected performance, with better behavior on clustered nodes than nodes with a lattice-like topology (right), even when both graphs had the same number of edges per node. Figure adapted from [4].

Although both graphs were \(k\)-regular, subjects learned the modular graphs more efficiently and performed faster on transitions between nodes within a module than on transitions between modules, regardless of the nature of the task. Edges between clusters carry “cross-cluster surprisal”: transitions across them had slower reaction times than transitions within modules. These results suggest that human minds “lump” concepts together, a premise that is borne out by other psychological tests.
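One way to construct a modular \(k\)-regular graph consistent with this description (three five-node clusters, every node of degree four) is to take each cluster as a five-clique with one edge removed and to link the clusters in a ring through the endpoints of the removed edges. This is a sketch of the idea, not the exact stimulus graph used in the experiments:

```python
def modular_4_regular(n_modules=3, module_size=5):
    """Each module is a clique minus one edge; the two endpoints of the
    removed edge link to neighbouring modules, so every node keeps
    degree module_size - 1 (here 4)."""
    edges = set()
    boundaries = []
    for m in range(n_modules):
        nodes = [m * module_size + i for i in range(module_size)]
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                edges.add((u, v))
        edges.remove((nodes[0], nodes[-1]))   # open the module at its boundary
        boundaries.append((nodes[0], nodes[-1]))
    for m in range(n_modules):                # ring of cross-cluster edges
        a = boundaries[m][1]
        b = boundaries[(m + 1) % n_modules][0]
        edges.add((min(a, b), max(a, b)))
    return edges

edges = modular_4_regular()
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
assert all(d == 4 for d in degree.values())   # 4-regular on 15 nodes
```

The lattice-like comparison graph in the study has the same degree sequence but no such cluster structure, which is what makes the reaction-time difference informative.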
Despite having equal transition probabilities for all graph edges, the brain evidently distinguishes between topological “distances” by the type of transition performed in the graph. This indicates that humans implicitly recognize the graph’s topology. To quantify how well test subjects recalled the learned material, Bassett and her colleagues assigned a probability to the time interval \(\Delta t\) between when the event actually happened and when the subjects thought it occurred. Drawing on thermal physics, the team associated \(\Delta t\) with the concept of cognitive “free energy,” which the brain minimizes to reduce computational resources and recall errors [4]. In this language, the probability \(P\) of recall for a given time interval \(\Delta t\) after an event is \[P (\Delta t)= \frac{1}{Z}\textrm{e}^{-\beta \Delta t},\] where \(Z\) is the partition function \[Z = \sum_{\Delta t} \textrm{e}^{- \beta \Delta t}\] and \(\beta\) is the parameter that sets the distribution scale. Bassett suggested that humans operate at a particular “temperature” \(T \thicksim 1/\beta\) that changes over the course of their lives (this is similar to how temperature is defined in information theory). High “temperatures” (limit as \(\beta \rightarrow 0\)) result in a flat probability distribution, which means poor graph recall. Low “temperatures” (\(\beta \rightarrow \infty\)) ensure that the probability drops precipitously for nonzero \(\Delta t\) values, thus corresponding to an accurate memory. For moderate “temperatures,” the subjects reproduce the basic graph — but with some errors (see Figure 2). Figure 2. Recall of tasks can be modeled using statistical mechanics, where the inverse “temperature” parameterizes the probability of recall over time. High “temperatures” (low \(\beta\)) produce a messy recalled graph, while lower temperatures represent increased accuracy.
The darkness of the graphs’ edges indicates how well subjects remember the transitions between tasks on the graph topology from Figure 1. Figure courtesy of [4].

Continuing with the physics metaphor, this model is akin to the difference between materials at various temperatures. Solids are often highly ordered at low temperatures, with atoms arranged in predictable patterns; raising temperatures destroys the order, giving rise to random and time-variable fluid arrangements.

Networks and Learning

Bassett then presented a larger question: Is it possible to design optimal knowledge networks that help people learn? There is, of course, considerable individual variation in cognitive function. At the same time, the mind’s ability to reconstruct modular patterns seems to indicate a general, shared means of operation. Bassett posed the question as follows: Are the actual, physical neural networks in a brain modular? To explore this hypothesis, she and her collaborators tested subjects while they were inside a magnetic resonance imaging (MRI) machine. They found that real brain networks are multilayered and dynamic, as opposed (for example) to one static group of cells that routinely corresponds to learning. More to the point, the flexibility of physical brain networks was apparently linked to individual cognitive capabilities. Low flexibility limited subjects’ ability to learn and retain information, but Bassett’s team noted that certain test subjects with schizophrenia exhibited very high flexibility along with other deficits in function, leading the group to hypothesize an optimal range for flexibility to support cognitive performance [1]. Bassett and her colleagues were interested in connecting their mathematical model with the MRI results to examine a possible correlation between cognitive control and brain dynamics.
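The recall model described earlier is easy to sketch numerically. A minimal illustration over a discrete set of lags \(\Delta t\) (the function name is illustrative, not from the paper):

```python
import math

def recall_probability(lags, beta):
    """P(dt) = exp(-beta*dt) / Z over a discrete set of lags dt,
    with partition function Z = sum_dt exp(-beta*dt)."""
    weights = [math.exp(-beta * dt) for dt in lags]
    Z = sum(weights)                 # normalizes P to a distribution
    return [w / Z for w in weights]

lags = list(range(5))
flat = recall_probability(lags, beta=0.0)    # high "temperature": uniform, poor recall
sharp = recall_probability(lags, beta=10.0)  # low "temperature": mass at dt = 0
```

At `beta=0` every lag is equally likely (the flat distribution of poor recall), while at large `beta` almost all probability sits at \(\Delta t = 0\), corresponding to accurate memory.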
MRI studies indicate that network control increases as children develop, approaching an asymptotic maximum roughly at age 20 in healthy brains [2]. When discussing cognitive abilities and mental health, researchers must always be cautious about ethical complications. Bassett and her collaborators are prudent to argue that one should handle issues involving brain control with caution in order to prevent misuse [5]. Understanding the way the mind works leads to questions of how and when mind control modification is possible or advisable. While control structures can be beneficial in treating some cognitive conditions resulting from lack of internal control, they can give rise to possible misuses as well (and not just science-fiction-style whole-brain hijacking scenarios). Bassett presented a therapeutic argument for this avenue of research: if human brains control their network functions in particular ways, introducing external controls to change or enhance certain behaviors might be possible. For conditions like epilepsy or Parkinson’s disease, these modifications could be extremely helpful. Ultimately, network models facilitate one’s understanding of how the mind, and possibly the brain itself, functions. Thus, researchers could extend many areas of applied mathematics—currently used in information theory, network control, and thermodynamics—to the study of the mind. Such connections satisfy the human impulse to find and transform patterns into something new. Bassett’s presentation is available from SIAM either as slides with synchronized audio or a PDF of slides only.

References

[1] Braun, U., Schäfer, A., Walter, H., Erk, S., Romanczuk-Seiferth, N., Haddad, L.,…Bassett, D.S. (2015). Dynamic reconfiguration of frontal brain networks during executive cognition in humans. PNAS, 112(37), 11678-83.
[2] Gu, S., Pasqualetti, F., Cieslak, M., Telesford, Q.K., Yu, A.B., Kahn, A.E.,…Bassett, D.S. (2015). Controllability of structural brain networks. Nat. Comm., 6, 8414.
[3] Kahn, A., Karuza, E.A., Vettel, J.M., & Bassett, D.S. (2018). Network constraints on learnability of probabilistic motor sequences. Nat. Hum. Behav., 2, 936-947.
[4] Lynn, C.W., Kahn, A.E., & Bassett, D.S. (2018). Structure from noise: Mental errors yield abstract representations of events. Preprint, arXiv:1805.12491.
[5] Medaglia, J.D., Zurn, P., Sinnott-Armstrong, W., & Bassett, D.S. (2017). Mind control as a guide for the mind. Nat. Hum. Behav., 1, 0119.

Further Reading

Kim, J.Z., Soffer, J.M., Kahn, A.E., Vettel, J.M., Pasqualetti, F., & Bassett, D.S. (2018). Role of graph architecture in controlling dynamical networks with applications to neural systems. Nat. Phys., 14, 91-98.
Orthonormal Basis of Null Space and Row Space

Problem 366. Let $A=\begin{bmatrix} 1 & 0 & 1 \\ 0 &1 &0 \end{bmatrix}$.
(a) Find an orthonormal basis of the null space of $A$.
(b) Find the rank of $A$.
(c) Find an orthonormal basis of the row space of $A$.
(The Ohio State University, Linear Algebra Exam Problem)

Solution. First of all, note that $A$ is already in reduced row echelon form.

(a) Find an orthonormal basis of the null space of $A$.

Let us find a basis of the null space of $A$. The null space consists of the solutions of $A\mathbf{x}=\mathbf{0}$. Since $A$ is in reduced row echelon form, the solutions $\mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$ satisfy \[x_1=-x_3 \text{ and } x_2=0,\] hence the general solution is \[\mathbf{x}=\begin{bmatrix} -x_3 \\ 0 \\ x_3 \end{bmatrix}=x_3\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.\] Therefore, the set \[\left\{\, \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \,\right\}\] is a basis of the null space of $A$. Since the length of the basis vector is $\sqrt{(-1)^2+0^2+1^2}=\sqrt{2}$, it is not an orthonormal basis. Thus, we divide the vector by its length and obtain the orthonormal basis \[\left\{\, \frac{1}{\sqrt{2}}\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \,\right\}.\]

(b) Find the rank of $A$.

From part (a), we see that the nullity of $A$ is $1$. The rank-nullity theorem says that \[\text{rank of $A$} + \text{nullity of $A$}=3.\] Hence the rank of $A$ is $2$. A second way to find the rank of $A$ is to use the definition: the rank of a matrix $B$ is the number of nonzero rows in a reduced row echelon matrix that is row equivalent to $B$. Since $A$ is in reduced row echelon form and has two nonzero rows, the rank is $2$. A third way to find the rank is the leading 1 method: the first two columns (the pivot columns) form a basis of the range, hence the rank of $A$ is $2$.

(c) Find an orthonormal basis of the row space of $A$.
By the row space method, the nonzero rows of the reduced row echelon form of $A$ form a basis of the row space of $A$. Thus \[ \left\{\, \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \,\right\}\] is a basis of the row space of $A$. Since the dot (inner) product of these two vectors is $0$, they are orthogonal. The lengths of the vectors are $\sqrt{2}$ and $1$, respectively. Hence an orthonormal basis of the row space of $A$ is \[ \left\{\, \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \,\right\}.\]

Linear Algebra Midterm Exam 2 Problems and Solutions

True or False Problems and Solutions: True or False problems of vector spaces and linear transformations
Problem 1 and its solution: See (7) in the post “10 examples of subsets that are not subspaces of vector spaces”
Problem 2 and its solution: Determine whether trigonometry functions $\sin^2(x), \cos^2(x), 1$ are linearly independent or dependent
Problem 3 and its solution (current problem): Orthonormal basis of null space and row space
Problem 4 and its solution: Basis of span in vector space of polynomials of degree 2 or less
Problem 5 and its solution: Determine value of linear transformation from $R^3$ to $R^2$
Problem 6 and its solution: Rank and nullity of linear transformation from $R^3$ to $R^2$
Problem 7 and its solution: Find matrix representation of linear transformation from $R^2$ to $R^2$
Problem 8 and its solution: Hyperplane through origin is subspace of 4-dimensional vector space
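The computations above are easy to verify numerically. A quick NumPy check, using exactly the matrix and the basis vectors derived in the solution:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

# (a) the orthonormal basis of the null space found above
n1 = np.array([-1.0, 0.0, 1.0]) / np.sqrt(2)
assert np.allclose(A @ n1, 0)               # n1 lies in the null space
assert np.isclose(np.linalg.norm(n1), 1.0)  # unit length

# (b) rank-nullity: rank(A) + nullity(A) = 3, so rank(A) = 2
assert np.linalg.matrix_rank(A) == 2

# (c) the orthonormal basis of the row space
r1 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
r2 = np.array([0.0, 1.0, 0.0])
assert np.isclose(r1 @ r2, 0.0)             # orthogonal pair
assert np.isclose(np.linalg.norm(r1), 1.0)
assert np.isclose(np.linalg.norm(r2), 1.0)
```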
This article is cited in 2 scientific papers. Index sets for $n$-decidable structures categorical relative to $m$-decidable presentations. E. B. Fokina a, S. S. Goncharov bc, V. Harizanov d, O. V. Kudinov bc, D. Turetsky e
a Vienna University of Technology, Institute of Discrete Mathematics and Geometry, Wiedner Hauptstraße 8-10/104, 1040, Vienna, Austria
b Sobolev Institute of Mathematics, pr. Akad. Koptyuga 4, Novosibirsk, 630090, Russia
c Novosibirsk State University, ul. Pirogova 2, Novosibirsk, 630090, Russia
d George Washington University, Washington, DC, 20052, USA
e Kurt Gödel Research Center for Mathematical Logic, University of Vienna, Währinger Straße 25, 1090, Vienna, Austria

Abstract: We say that a structure is categorical relative to $n$-decidable presentations (or autostable relative to $n$-constructivizations) if any two $n$-decidable copies of the structure are computably isomorphic. For $n=0$, we have the classical definition of a computably categorical (autostable) structure. Downey, Kach, Lempp, Lewis, Montalbán, and Turetsky proved that there is no simple syntactic characterization of computable categoricity. More formally, they showed that the index set of computably categorical structures is $\Pi^1_1$-complete. We study index sets of $n$-decidable structures that are categorical relative to $m$-decidable presentations, for various $m,n\in\omega$. If $m\ge n\ge0$, then the index set is again $\Pi^1_1$-complete, i.e., there is no nice description of the class of $n$-decidable structures that are categorical relative to $m$-decidable presentations. In the case $m=n-1\ge0$, the index set is $\Pi^0_4$-complete, while if $0\le m\le n-2$, the index set is $\Sigma^0_3$-complete.

Keywords: index set, structure categorical relative to $n$-decidable presentations, $n$-decidable structure categorical relative to $m$-decidable presentations.
DOI: https://doi.org/10.17377/alglog.2015.54.407. English version: Algebra and Logic, 2015, 54:4, 336–341. UDC: 510.53. Received: 12.09.2015

Citation: E. B. Fokina, S. S. Goncharov, V. Harizanov, O. V. Kudinov, D. Turetsky, “Index sets for $n$-decidable structures categorical relative to $m$-decidable presentations”, Algebra Logika, 54:4 (2015), 520–528; Algebra and Logic, 54:4 (2015), 336–341

This publication is cited in the following articles: N. Bazhenov, “Autostability spectra for decidable structures”, Math. Struct. Comput. Sci., 28:3 (2018), 392–411; M. Harrison-Trainor, “There is no classification of the decidably presentable structures”, J. Math. Log., 18:2 (2018), 1850010
What is Superposition of Waves?

According to the principle of superposition, the resultant displacement of a number of waves in a medium at a particular point is the vector sum of the individual displacements produced by each of the waves at that point.

Principle of Superposition of Waves

Consider two waves travelling simultaneously along the same stretched string in opposite directions, as shown in the figure above. We can see images of the waveforms in the string at each instant of time. It is observed that the net displacement of any element of the string at a given time is the algebraic sum of the displacements due to each wave.

Let us say two waves are travelling simultaneously, and the displacements of any element due to these two waves can be represented by \(y_1(x, t)\) and \(y_2(x, t)\). When these two waves overlap, the resultant displacement can be given as \(y(x, t)\). Mathematically,

\[y(x, t) = y_1(x, t) + y_2(x, t).\]

As per the principle of superposition, we can add the overlapped waves algebraically to produce a resultant wave. Let us say the wave functions of the moving waves are

\[y_1 = f_1(x - vt), \quad y_2 = f_2(x - vt), \quad \ldots, \quad y_n = f_n(x - vt);\]

then the wave function describing the disturbance in the medium can be described as

\[y = f_1(x - vt) + f_2(x - vt) + \cdots + f_n(x - vt) = \sum_{i=1}^{n} f_i(x - vt).\]

Let us consider a wave travelling along a stretched string given by \(y_1(x, t) = A \sin(kx - \omega t)\) and another wave, shifted from the first by a phase \(\varphi\), given as \(y_2(x, t) = A \sin(kx - \omega t + \varphi)\). From the equations we can see that both waves have the same angular frequency \(\omega\), the same angular wave number \(k\) (hence the same wavelength), and the same amplitude \(A\). Now, applying the superposition principle, the resultant wave is the algebraic sum of the two constituent waves and has displacement

\[y(x, t) = A \sin(kx - \omega t) + A \sin(kx - \omega t + \varphi).\]

Using the identity \(\sin A + \sin B = 2 \sin\frac{A+B}{2}\cos\frac{A-B}{2}\), the above equation can be written as

\[y(x, t) = \left[2A \cos\tfrac{\varphi}{2}\right] \sin\left(kx - \omega t + \tfrac{\varphi}{2}\right).\]

The resultant wave is a sinusoidal wave travelling in the positive \(x\) direction, with a phase angle half the phase difference of the individual waves and an amplitude \(2\cos\frac{\varphi}{2}\) times the amplitude \(A\) of the original waves.

What is Interference of Light?

The phenomenon of formation of maximum intensity at some points and minimum intensity at some other points, when two (or more) waves of equal frequency having a constant phase difference arrive at a point simultaneously and superimpose with each other, is known as interference.

Types of Superposition of Waves

According to the phase difference of the superimposing waves, interference is divided into two categories as follows.

Constructive Interference: If two waves superimpose with each other in the same phase, the amplitude of the resultant is equal to the sum of the amplitudes of the individual waves, resulting in the maximum intensity of light; this is known as constructive interference.

Destructive Interference: If two waves superimpose with each other in opposite phase, the amplitude of the resultant is equal to the difference of the amplitudes of the individual waves, resulting in the minimum intensity of light; this is known as destructive interference.

Resultant Intensity in Interference of Two Waves

Let two waves of vertical displacements \(y_1\) and \(y_2\) superimpose at a point P in space as shown in the figure; then the resultant displacement is given by \(y = y_1 + y_2\). The waves meet at point P at the same time, the only difference being in their phases. The displacements of the individual waves are given by

\[y_1 = a \sin \omega t, \qquad y_2 = b \sin(\omega t + \varphi),\]

where \(a\) and \(b\) are their respective amplitudes and \(\varphi\) is the constant phase difference between the two waves. Applying the superposition principle as stated above, we get

\[y = a \sin \omega t + b \sin(\omega t + \varphi). \qquad (1)\]

Representing equation (1) in a phasor diagram, the resultant has amplitude \(A\) and phase angle \(\theta\) with respect to wave 1:

\[y = A \sin(\omega t + \theta), \qquad A \sin(\omega t + \theta) = a \sin \omega t + b \sin(\omega t + \varphi).\]

For destructive interference, the intensity should be minimum, \(I = I_{\min}\), which happens only when \(\cos \varphi = -1\):

\[\varphi = \pi, 3\pi, 5\pi, \ldots \;\Rightarrow\; \varphi = (2n - 1)\pi, \quad n = 1, 2, 3, \ldots\]

If \(\Delta x\) is the path difference between the waves at point P, then \(\Delta x=\frac{\left( 2n-1 \right)}{2}\lambda\).

Condition for Destructive Interference

Phase difference = \((2n - 1)\pi\); path difference = \((2n - 1)\lambda/2\);

\[I_{\min} = I_1 + I_2 - 2\sqrt{I_1 I_2} = \left(\sqrt{I_1} - \sqrt{I_2}\right)^2.\]

Conditions for Interference of Light

Sources must be coherent. Coherent sources must have the same frequency (monochromatic light source). The waves from the coherent sources should be of equal amplitudes.
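The phasor addition above can be checked numerically. A small sketch, using the standard result \(A^2 = a^2 + b^2 + 2ab\cos\varphi\) for the resultant of \(a\sin\omega t\) and \(b\sin(\omega t + \varphi)\):

```python
import math

def resultant_amplitude(a, b, phi):
    """Amplitude of a*sin(wt) + b*sin(wt + phi), from the phasor sum:
    A^2 = a^2 + b^2 + 2*a*b*cos(phi)."""
    return math.sqrt(a * a + b * b + 2 * a * b * math.cos(phi))

# Equal amplitudes reduce to A = 2a*cos(phi/2), matching the text.
for phi in (0.0, math.pi / 3, math.pi / 2, math.pi):
    A = resultant_amplitude(1.0, 1.0, phi)
    assert math.isclose(A, abs(2 * math.cos(phi / 2)), abs_tol=1e-12)

# Constructive interference (phi = 0): amplitudes add.
# Destructive interference (phi = pi): equal amplitudes cancel completely.
assert math.isclose(resultant_amplitude(1.0, 1.0, math.pi), 0.0, abs_tol=1e-9)
```

With unequal amplitudes the destructive case gives \(A = |a - b|\), which is exactly the \(I_{\min} = (\sqrt{I_1} - \sqrt{I_2})^2\) condition stated above, since intensity is proportional to the square of the amplitude.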
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Basic Theorems Regarding the Maximum and Minimum Functions of Two Functions

Recall from The Maximum and Minimum Functions of Two Functions page that if $f$ and $g$ are two functions defined on an interval $I$ then the maximum function of $f$ and $g$, denoted $\max (f, g)$, is defined for all $x \in I$ by:

(1) \[\max (f, g)(x) = \max \{ f(x), g(x) \}\]

Similarly, the minimum function of $f$ and $g$, denoted $\min (f, g)$, is defined for all $x \in I$ by:

(2) \[\min (f, g)(x) = \min \{ f(x), g(x) \}\]

We noted three important results regarding these functions. For $f$, $g$, and $h$ as functions defined on $I$ we saw that: $\min (f, g) \leq \max (f, g)$ for all $x \in I$; $\max (f, g) + \min (f, g) = f + g$; and $\max (f + h, g + h) = \max (f, g) + h$ and $\min (f + h, g + h) = \min (f, g) + h$. We will now look at a few more basic results regarding these functions.

Theorem 1: Let $(f_n(x))_{n=1}^{\infty}$ and $(g_n(x))_{n=1}^{\infty}$ be two increasing sequences of functions on $I$. Then $(\max (f_n, g_n))_{n=1}^{\infty}$ and $(\min (f_n, g_n))_{n=1}^{\infty}$ are both increasing sequences of functions on $I$.

Proof: Since $(f_n(x))_{n=1}^{\infty}$ and $(g_n(x))_{n=1}^{\infty}$ are both increasing sequences of functions on $I$, for all $n \in \mathbb{N}$ and for all $x \in I$ we have that both:

\[f_n(x) \leq f_{n+1}(x) \quad \text{and} \quad g_n(x) \leq g_{n+1}(x)\]

So $\max \{ f_n(x), g_n(x) \} \leq \max \{ f_{n+1}(x), g_{n+1}(x) \}$ for all $n \in \mathbb{N}$ and for all $x \in I$, and hence $(\max (f_n, g_n))_{n=1}^{\infty}$ is an increasing sequence of functions on $I$. Similarly, we also have that $\min \{ f_n(x), g_n(x) \} \leq \min \{ f_{n+1}(x), g_{n+1}(x) \}$ for all $n \in \mathbb{N}$ and for all $x \in I$, so $(\min (f_n, g_n))_{n=1}^{\infty}$ is an increasing sequence of functions on $I$. $\blacksquare$

Theorem 2: Let $(f_n(x))_{n=1}^{\infty}$ and $(g_n(x))_{n=1}^{\infty}$ be two increasing sequences of functions on $I$ that converge to $f$ and $g$ (respectively) almost everywhere on $I$.
Then $(\max (f_n, g_n))_{n=1}^{\infty}$ and $(\min (f_n, g_n))_{n=1}^{\infty}$ converge to $\max (f, g)$ and $\min (f, g)$ (respectively) almost everywhere on $I$. Proof: Suppose that $(f_n(x))_{n=1}^{\infty}$ and $(g_n(x))_{n=1}^{\infty}$ converge pointwise to $f$ and $g$ almost everywhere on $I$. Let $D_f$ and $D_g$ be the sets of points in $I$ at which the sequences $(f_n(x))_{n=1}^{\infty}$ and $(g_n(x))_{n=1}^{\infty}$ do not converge to $f$ and $g$ respectively, where $m(D_f) = m(D_g) = 0$. Since $(f_n(x))_{n=1}^{\infty}$ converges to $f$ almost everywhere on $I$ we have that for all $x \in I \setminus D_f$ and for all $\epsilon > 0$ there exists an $N_1 \in \mathbb{N}$ such that if $n \geq N_1$ then:

$(*) \quad \mid f_n(x) - f(x) \mid < \epsilon$

Similarly, since $(g_n(x))_{n=1}^{\infty}$ converges to $g$ almost everywhere on $I$ we have that for all $x \in I \setminus D_g$ and for all $\epsilon > 0$ there exists an $N_2 \in \mathbb{N}$ such that if $n \geq N_2$ then:

$(**) \quad \mid g_n(x) - g(x) \mid < \epsilon$

Take $N = \max \{ N_1, N_2 \}$. Then both $(*)$ and $(**)$ hold. Moreover, we see that for all $x \in I \setminus (D_f \cup D_g)$ and for all $n \geq N$ that:

$\max \{ f(x), g(x) \} - \epsilon < \max \{ f_n(x), g_n(x) \} < \max \{ f(x), g(x) \} + \epsilon$

Hence $\mid \max \{ f_n(x), g_n(x) \} - \max \{ f(x), g(x) \} \mid < \epsilon$. This shows that $(\max (f_n, g_n))_{n=1}^{\infty}$ converges to $\max (f, g)$ increasingly (by Theorem 1). Similarly, if we take $N = \max \{ N_1, N_2 \}$, we see that for all $x \in I \setminus (D_f \cup D_g)$ and for all $n \geq N$ that:

$\min \{ f(x), g(x) \} - \epsilon < \min \{ f_n(x), g_n(x) \} < \min \{ f(x), g(x) \} + \epsilon$

Hence $\mid \min \{ f_n(x), g_n(x) \} - \min \{ f(x), g(x) \} \mid < \epsilon$. This shows that $(\min (f_n, g_n))_{n=1}^{\infty}$ converges to $\min (f, g)$ increasingly (by Theorem 1). $\blacksquare$
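As a quick sanity check of Theorems 1 and 2, here is a small numerical illustration (my addition; the sequences $f_n(x) = x - 1/n$ and $g_n(x) = x^2 - 2/n$ are example choices, not from the text):

```python
# Numerical illustration of Theorems 1 and 2 on I = [0, 1]:
# max(f_n, g_n) and min(f_n, g_n) are increasing in n and converge
# pointwise to max(f, g) and min(f, g).

def f_n(n, x):
    return x - 1.0 / n          # increases to f(x) = x

def g_n(n, x):
    return x**2 - 2.0 / n       # increases to g(x) = x**2

xs = [i / 10.0 for i in range(11)]

# Theorem 1: max(f_n, g_n) and min(f_n, g_n) are increasing in n.
for n in range(1, 50):
    for x in xs:
        assert max(f_n(n, x), g_n(n, x)) <= max(f_n(n + 1, x), g_n(n + 1, x))
        assert min(f_n(n, x), g_n(n, x)) <= min(f_n(n + 1, x), g_n(n + 1, x))

# Theorem 2: for large n they are close to max(f, g) and min(f, g).
N = 10**6
for x in xs:
    assert abs(max(f_n(N, x), g_n(N, x)) - max(x, x**2)) < 1e-5
    assert abs(min(f_n(N, x), g_n(N, x)) - min(x, x**2)) < 1e-5
```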
Subspaces of Linear Spaces Definition: Let $X$ be a linear space. A subset $S \subseteq X$ is said to be a Subspace of $X$ if $S$ is itself a linear space with the same operations of addition and scalar multiplication defined on $X$. It can easily be checked that $S \subseteq X$ is a subspace of $X$ if and only if $S$ is closed under addition, closed under scalar multiplication, and contains $0$. Proposition 1: Let $X$ be a linear space and let $\{ S_i : i \in I \}$ be a collection of subspaces of $X$. Then $\bigcap_{i \in I} S_i$ is a subspace of $X$. Proof: Let $x, y \in \bigcap_{i \in I} S_i$. Then $x, y \in S_i$ for each $i \in I$. Since $S_i$ is a subspace, $(x + y) \in S_i$ for each $i$. So $(x + y) \in \bigcap_{i \in I} S_i$. So $\bigcap_{i \in I} S_i$ is closed under addition. Let $\alpha \in \mathbb{R}$ and let $x \in \bigcap_{i \in I} S_i$. Then $x \in S_i$ for each $i \in I$. Since $S_i$ is a subspace, $(\alpha x) \in S_i$ for each $i$. So $(\alpha x) \in \bigcap_{i \in I} S_i$. So $\bigcap_{i \in I} S_i$ is closed under scalar multiplication. Lastly, $0 \in S_i$ for each $i \in I$. So $0 \in \bigcap_{i \in I} S_i$. Hence $\bigcap_{i \in I} S_i$ is a subspace of $X$. $\blacksquare$ Proposition 2: Let $X$ be a linear space and let $S \subseteq X$. Then $\mathrm{span}(S)$ is a subspace of $X$. Proof: It can be shown that if $\{ S_i : i \in I \}$ is the collection of subspaces of $X$ for which $S \subseteq S_i$ then $\mathrm{span}(S) = \bigcap_{i \in I} S_i$. Then by Proposition 1, $\mathrm{span}(S)$ is a subspace of $X$. $\blacksquare$ Proposition 3: Let $(X, \| \cdot \|_X)$ be a normed linear space and let $S \subseteq X$. Then $\overline{\mathrm{span}(S)}$ is a closed subspace of $X$. Proof: It can be shown that if $\{ S_i : i \in I \}$ is the collection of closed subspaces of $X$ for which $S \subseteq S_i$ then $\overline{\mathrm{span}(S)} = \bigcap_{i \in I} S_i$. Then by Proposition 1, $\overline{\mathrm{span}(S)}$ is a subspace of $X$, and it is closed since it is an intersection of closed sets. $\blacksquare$
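For a concrete illustration of Proposition 1, here is a small sketch (the two subspaces of $\mathbb{R}^3$ below are my own example, not from the text) checking that the intersection of subspaces is again closed under the operations:

```python
# Toy check of Proposition 1 in R^3:
# S1 = {x : x[2] = 0} and S2 = {x : x[0] = x[1]} are subspaces,
# and their intersection {(t, t, 0)} is again a subspace.

def in_S1(v): return v[2] == 0
def in_S2(v): return v[0] == v[1]
def in_inter(v): return in_S1(v) and in_S2(v)

def add(u, v): return tuple(a + b for a, b in zip(u, v))
def scale(c, v): return tuple(c * a for a in v)

u, v = (1, 1, 0), (2, 2, 0)         # both lie in S1 ∩ S2
assert in_inter(add(u, v))          # closed under addition
assert in_inter(scale(-3, u))       # closed under scalar multiplication
assert in_inter((0, 0, 0))          # contains 0
```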
Abstract: Hermitian complex spaces are a large class of singular spaces that include, for instance, projective varieties endowed with the metric induced by the Fubini-Study metric. Many of the problems raised by Cheeger, Goresky and MacPherson in the case of complex projective varieties admit a natural extension in this setting as well. The aim of this talk is to report on some recent results concerning the Hodge-Kodaira Laplacian acting on the canonical bundle of a compact Hermitian complex space. More precisely, let $(X,h)$ be a compact and irreducible Hermitian complex space of complex dimension $m$. Consider the Dolbeault operator $\bar{\partial}_{m,0} : L^2 \Omega^{m,0}(reg(X),h) \to L^2\Omega^{m,1}(reg(X),h)$ with domain $\Omega_c^{m,0}(reg(X))$, and let $\bar{\mathfrak{d}}_{m,0} : L^2 \Omega^{m,0}(reg(X),h)\to L^2\Omega^{m,1}(reg(X),h)$ be any of its closed extensions. Now consider the associated Hodge-Kodaira Laplacian $\bar{\mathfrak{d}}_{m,0}^* \circ \bar{\mathfrak{d}}_{m,0} : L^2 \Omega^{m,0}(reg(X),h)\to L^2\Omega^{m,0}(reg(X),h)$. We will show that the latter operator is discrete and we will provide an estimate for the growth of its eigenvalues. Finally we will prove some discreteness results for the Hodge-Dolbeault operator in the setting of both isolated singularities and complex projective surfaces (without assumptions on the singularities in the latter case). MSC codes: 53C55 - Hermitian and Kählerian manifolds (global differential geometry); 58J50 - Spectral problems; spectral geometry; scattering theory. Conference information: Analysis, geometry and topology of stratified spaces. Organizers: Mazzeo, Rafe; Leichtnam, Eric; Piazza, Paolo. Dates: 13/06/2016 - 17/06/2016. Year: 2016. Conference URL: http://conferences.cirm-math.fr/1422.html DOI: 10.24350/CIRM.V.19002103 Cite this video as: Bei, Francesco (2016).
On the Hodge-Kodaira Laplacian on the canonical bundle of a compact Hermitian complex space. CIRM. Audiovisual resource. doi:10.24350/CIRM.V.19002103 URI: http://dx.doi.org/10.24350/CIRM.V.19002103 See also: Bibliography
Permutations and Inversions Before we look at the combinatorial definition of a determinant, we will first need to understand some rather simple basic definitions in combinatorics: Definition: A Permutation of a set $S$ is an ordered arrangement of the elements in this set without omissions or repetitions of elements. For example, consider the following set of numbers $S = \{ 3, 4, 0 \}$. One such permutation of this set is $(4, 0, 3)$. There are a few other permutations of this set though; in fact, there are precisely 6 permutations of these three numbers:

Permutation Count | Permutation of $S$
1 | $(4, 0, 3)$
2 | $(4, 3, 0)$
3 | $(3, 0, 4)$
4 | $(3, 4, 0)$
5 | $(0, 3, 4)$
6 | $(0, 4, 3)$

It is rather easy to see that there are no other permutations of $S$ that aren't already listed. However, if we had $n$ many elements in the set, then how many permutations of $S$ would we have? First consider that we will need to arrange all $n$ elements into $n$ spots. The first element could be any of the $n$ numbers, the second element could be one of the other $n - 1$ numbers (excluding the number placed in the first spot since we can't have repetition), and so forth. Eventually we will get to our last element and have only $1$ number to choose for its location. Thus there are $n(n-1)(n-2) \cdots 2 \cdot 1$ permutations of an $n$ element set $S$. This product is often abbreviated as $n! = n(n-1)(n-2) \cdots 2 \cdot 1$ and pronounced "n factorial", that is, the product of the integers from $1$ to $n$. Now let's define what an inversion in a permutation is. Definition: Denote a general permutation by $(j_1, j_2, ..., j_n)$. An Inversion occurs in a permutation whenever a larger number precedes a smaller number. That is, if $i < k$ and $j_i > j_k$, then an inversion has occurred. If there is an even number of inversions in the permutation then we classify the permutation as Even, and if there is an odd number of inversions in the permutation then we classify the permutation as Odd.
To calculate the number of inversions in a permutation:

1. Look at the first element in the permutation, $j_1$, and count how many elements to the right of it are smaller.
2. Look at the second element in the permutation, $j_2$, and count how many elements to the right of it are smaller.
3. Repeat this process for all elements in the permutation and then sum the results to get the number of inversions in the permutation.

For example, consider the permutation $(4, 5, 1, 3, 7)$. The first element is $4$, and there are $2$ numbers to its right smaller than it ($1$ and $3$). Now look at the second number and count the elements to its right that are smaller than it. Once again, there are two numbers smaller than it ($1$ and $3$ again). We continue in this way and sum up all of our results, $2 + 2 + 0 + 0 + 0 = 4$, to get that there are $4$ inversions in this permutation. We are now ready to begin to look at the combinatorial definition of a determinant. Elementary Products Definition: Given an $n \times n$ square matrix $A$, define an Elementary Product of $A$ to be any product of entries from $A$ of which no two entries come from the same row or column. Define the Associated Permutation of an Elementary Product to be the permutation of the columns of the entries in the product. Define a Signed Elementary Product to be an elementary product multiplied by either $1$ (if the associated permutation is even) or $-1$ (if the associated permutation is odd). Let's first look at some elementary products of the following $2 \times 2$ matrix $A = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix}$. There are precisely two elementary products in this matrix, namely $a_{11}a_{22}$ and $a_{12}a_{21}$. There are no other combinations of entries that satisfy our conditions to be an elementary product of $A$.
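The three-step counting procedure above transcribes directly into code; the helper names below are my own:

```python
# Count inversions: for each element, count the smaller elements to
# its right, then sum the results.

def count_inversions(perm):
    return sum(
        1
        for i in range(len(perm))
        for k in range(i + 1, len(perm))
        if perm[i] > perm[k]
    )

def parity(perm):
    return "even" if count_inversions(perm) % 2 == 0 else "odd"

assert count_inversions((4, 5, 1, 3, 7)) == 4   # the worked example above
assert parity((4, 5, 1, 3, 7)) == "even"
assert count_inversions((1, 2)) == 0
assert count_inversions((2, 1)) == 1
```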
Elementary Product | Associated Permutation | Number of Inversions | Even / Odd | Signed Elementary Product
$a_{11}a_{22}$ | $(1, 2)$ | 0 | Even | $a_{11}a_{22}$
$a_{12}a_{21}$ | $(2, 1)$ | 1 | Odd | $-a_{12}a_{21}$

We will note that for any $n \times n$ matrix, there will be exactly $n!$ many elementary products. Consider the general $3 \times 3$ matrix $A = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}$. We know that there will be precisely $n! = 3! = 6$ elementary products for this matrix:

Elementary Product | Associated Permutation | Number of Inversions | Even / Odd | Signed Elementary Product
$a_{11}a_{22}a_{33}$ | $(1, 2, 3)$ | 0 | Even | $a_{11}a_{22}a_{33}$
$a_{11}a_{23}a_{32}$ | $(1, 3, 2)$ | 1 | Odd | $-a_{11}a_{23}a_{32}$
$a_{12}a_{21}a_{33}$ | $(2, 1, 3)$ | 1 | Odd | $-a_{12}a_{21}a_{33}$
$a_{12}a_{23}a_{31}$ | $(2, 3, 1)$ | 2 | Even | $a_{12}a_{23}a_{31}$
$a_{13}a_{21}a_{32}$ | $(3, 1, 2)$ | 2 | Even | $a_{13}a_{21}a_{32}$
$a_{13}a_{22}a_{31}$ | $(3, 2, 1)$ | 3 | Odd | $-a_{13}a_{22}a_{31}$

We will finally construct a formal definition of a determinant using these signed elementary products. Combinatorial Definition of a Determinant Definition: If $A$ is an $n \times n$ square matrix, then $\det (A)$ is the sum of all $n!$ signed elementary products derived from $A$. From the example of a $2 \times 2$ matrix, we get:

(1) $\quad \det (A) = a_{11}a_{22} - a_{12}a_{21}$

Of course, this formula is identical to the one we learned earlier, that is, $\det(A) = ad - bc$. However, now we can derive a general formula for any $3 \times 3$ determinant $A = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}$, that is:

(2) $\quad \det (A) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} - a_{13}a_{22}a_{31}$

While this method for evaluating determinants can be useful sometimes, there are definitely better methods for larger matrices. We note that we will need to calculate $n!$ elementary products to evaluate the determinant of an $n \times n$ matrix; however, as $n$ gets larger, $n!$ grows extremely fast.
This sort of growth is known as combinatorial explosion. For example, calculating the determinant of a $10 \times 10$ matrix using elementary products would require the computation of $10! = 3628800$ elementary products - something that is definitely not practical.
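The combinatorial definition translates almost verbatim into a brute-force routine. This is a sketch for illustration only (and, as noted above, it does $n!$ work, so it is impractical beyond small $n$):

```python
from itertools import permutations

# det(A) as the sum over all n! permutations p of
# sign(p) * a[0][p[0]] * ... * a[n-1][p[n-1]], where the sign is +1 for
# an even number of inversions and -1 for an odd number.

def sign(perm):
    inversions = sum(
        1
        for i in range(len(perm))
        for k in range(i + 1, len(perm))
        if perm[i] > perm[k]
    )
    return 1 if inversions % 2 == 0 else -1

def det(a):
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for row, col in enumerate(p):
            term *= a[row][col]
        total += term
    return total

assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3          # ad - bc
assert det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24    # diagonal matrix
```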
What is Elasticity? When an external force is applied to a rigid body there is a change in its length, volume, or shape. When the external forces are removed, the body tends to regain its original shape and size. The property by virtue of which a body tends to regain its original shape or size when the external forces are removed is called elasticity. Stress and Strain What is Stress? Elastic bodies regain their original shape due to internal restoring forces. The internal restoring force acting per unit area of the deformed body is called stress. \(Stress=\frac{Restoring\,Force}{Area}\) Types of Stress There are three types of stress: longitudinal stress, volume stress, and tangential (or shear) stress. Longitudinal Stress When the stress is normal to the surface area of the body it is known as longitudinal stress. It is further classified into two types: tensile stress and compressive stress. Tensile stress: longitudinal stress produced due to an increase in the length of the object is known as tensile stress. Compressive stress: longitudinal stress produced due to a decrease in the length of the object is known as compressive stress. Volume Stress If equal normal forces are applied to every surface of a body, then a change in volume is produced. The force opposing this change in volume, per unit area, is called volume stress. Tangential Stress When the stress is tangential (or parallel) to the surface of the body it is known as tangential (or shear) stress. Due to this, the shape of the body changes or it gets twisted. What is Strain? The ratio of the change in any dimension to its original dimension is called strain.
\(Strain=\frac{Change\,in\,dimension}{Initial\,dimension}\) Strain is also classified into three types: longitudinal strain, volume strain, and shearing strain. Longitudinal Strain \(Longitudinal\,strain=\frac{Change\,in\,length\,of\,the\,body}{Initial\,length\,of\,the\,body}=\frac{\Delta L}{L}\) Volume Strain \(Volume\,strain=\frac{Change\,in\,volume\,of\,the\,body}{Original\,volume\,of\,the\,body}=\frac{\Delta V}{V}\) Shearing Strain When a deforming force is applied to a body parallel to its surface, its shape (not size) changes; this is known as shearing strain. For the angle of shear \(\phi\), \(\tan \phi =\frac{\ell }{L}=\frac{displacement\,of\,upper\,face}{distance\,between\,two\,faces}\) Stress Strain Graph Proportional limit: The limit within which Hooke's law is valid and stress is directly proportional to strain. Elastic limit: The maximum stress which, on removing the deforming force, still allows the body to recover its original state completely. Yield point: The point beyond the elastic limit at which the length of the wire starts increasing with increasing stress is defined as the yield point. Breaking point: The point at which the strain becomes so large that the wire finally breaks is called the breaking point. Elastic hysteresis The strain persists even when the stress is removed. This lagging of strain behind stress is called elastic hysteresis. This is why the values of strain for the same stress are different while increasing the load and while decreasing the load. Hooke's Law If the deformation is small, the stress in a body is proportional to the corresponding strain; this fact is known as Hooke's law. Within the elastic limit, stress \(\propto\) strain \(\Rightarrow \frac{Stress}{Strain}=Constant\) This constant is known as the modulus of elasticity (or coefficient of elasticity). It depends only on the type of material used.
It is independent of stress and strain. The elastic constants are: Young's modulus of elasticity "y", the bulk modulus of elasticity "B", the modulus of rigidity, and Poisson's ratio. Young's modulus of elasticity "y" Within the elastic limit, the ratio of longitudinal stress to longitudinal strain is called Young's modulus of elasticity (y). \(y=\frac{Longitudinal\,stress}{Longitudinal\,strain}=\frac{\frac{F}{A}}{\frac{\ell }{L}}=\frac{FL}{A\ell }\) Within the elastic limit, the force acting upon a unit area of a wire by which the length of the wire becomes double is equivalent to Young's modulus of elasticity of the material of the wire. If L is the length of the wire, r its radius, and \(\ell\) the increase in the length of the wire on suspending a weight (mg) at its one end, then Young's modulus of elasticity of the wire becomes \(y=\frac{F/A}{\ell /L}=\frac{FL}{A\ell }=\frac{mgL}{\pi {{r}^{2}}\ell }\) (a) The increase in the length of an object under its own weight: Let a rope of mass M and length L hang vertically. As the tension at different points on the rope is different, the stress as well as the strain will be different at different points: maximum stress at the hanging point, minimum stress at the lowest point. Consider an element dx of the rope at distance x from the lower end; the tension there is
\(T=\left( \frac{M}{L} \right)xg\) So stress \(=\frac{T}{A}=\left( \frac{M}{L} \right)\frac{xg}{A}\) Let the increase in length of the element dx be dy; then \(Strain=\frac{Change\,in\,length}{Original\,length}=\frac{dy}{dx}\) Now that we have the stress and the strain, Young's modulus of elasticity "y" gives \(y=\frac{Stress}{Strain}=\frac{\left( \frac{M}{L} \right)\frac{xg}{A}}{\frac{dy}{dx}}\Rightarrow \left( \frac{M}{L} \right)\frac{xg}{A}\,dx=y\,dy\) The total change in length of the wire is \(\frac{Mg}{LA}\int\limits_{0}^{L}{x\,dx}=y\int\limits_{0}^{\Delta \ell }{dy}\) \(\frac{Mg}{LA}\cdot\frac{{{L}^{2}}}{2}=y\,\Delta \ell\) \(\Delta \ell=\frac{MgL}{2Ay}\) (b) Work done in stretching a wire If we need to stretch a wire, we have to do work against its interatomic forces, and this work is stored in the form of elastic potential energy. For a wire of length \(L_0\) stretched by a distance \(x\), the restoring elastic force is \(F=(Stress)(Area)=y\left[ \frac{x}{{{L}_{0}}} \right]A\) The work required for a further increase dx in length is \(dW=F\,dx=\frac{yA}{{{L}_{0}}}x\,dx\) The total work required in stretching the wire is \(W=\int\limits_{0}^{\Delta \ell }{F\,dx}=\frac{yA}{{{L}_{0}}}\int\limits_{0}^{\Delta \ell }{x\,dx}=\frac{yA}{{{L}_{0}}}\left[ \frac{{{x}^{2}}}{2} \right]_{0}^{\Delta \ell }=\frac{yA{{\left( \Delta \ell \right)}^{2}}}{2{{L}_{0}}}\) Equivalently, \(W=\frac{1}{2}\,y\,{{(Strain)}^{2}}(Original\,volume)=\frac{1}{2}(Stress)(Strain)(Volume)\) (c) Analogy of a rod as a spring From the definition of Young's modulus, \(y=\frac{Stress}{Strain}=\frac{FL}{A\,\Delta L}\) \(F=\frac{yA\,\Delta L}{L}\) … (1) This expression is the analogue of the spring force \(F=kx\), with \(k=\frac{yA}{L}=\) constant, a constant that depends only on the material and geometry of the rod. Bulk Modulus (B) Within the elastic limit, the ratio of the volume stress to the volume strain is called the bulk modulus of elasticity.
\(B=\frac{Volume\,stress}{Volume\,strain}=\frac{\frac{F}{A}}{-\frac{\Delta V}{V}}=\frac{\Delta P}{-\frac{\Delta V}{V}}\) Rigidity Modulus Within the elastic limit, the ratio of shearing stress to shearing strain is called the modulus of rigidity. \(\eta =\frac{Shearing\,stress}{Shearing\,strain}=\frac{\frac{{{F}_{tangential}}}{A}}{\phi }=\frac{{{F}_{tangential}}}{A\,\phi }\) where \(\phi\) is the angle of shear. Poisson's Ratio Within the elastic limit, the ratio of lateral strain to longitudinal strain is called Poisson's ratio. \(Poisson's\,ratio\,(\sigma )=\frac{Lateral\,strain}{Longitudinal\,strain}=\frac{\beta }{\alpha }\)
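The self-weight elongation and stored-work results above can be sanity-checked numerically. The material values below are illustrative choices of mine (roughly a thin steel wire), not values from the text:

```python
# Elongation of a rope under its own weight, Δl = MgL/(2Ay), and the
# work stored in a stretched wire, W = yA(Δl)^2 / (2 L0).

g = 9.8          # m/s^2
y = 2.0e11       # Pa, roughly Young's modulus of steel (assumed)
M = 10.0         # kg, mass of the hanging rope (assumed)
L = 10.0         # m, natural length (assumed)
A = 1.0e-6       # m^2, cross-sectional area (assumed)

delta_l = M * g * L / (2 * A * y)     # ≈ 2.45e-3 m

# Work done in stretching the same wire by delta_l:
W = y * A * delta_l ** 2 / (2 * L)    # ≈ 0.060 J

# Cross-check against W = (1/2)(Stress)(Strain)(Volume):
strain = delta_l / L
stress = y * strain
assert abs(W - 0.5 * stress * strain * (A * L)) < 1e-12
```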
It breaks up into three simple sections that are each relatively easy to explain (schematic created using CircuitLab). The first part is the diode that provides reverse voltage protection. If for some reason the polarity of the input voltage is wired opposite to what it is supposed to be, then \$D_1\$ will block it and the output will also be essentially off. Only if the polarity is correct will the rest of the circuit be operational. The price of including this added protection is a voltage drop of perhaps \$700\:\text{mV}\$. (I exaggerated this voltage drop a little in the diagram. But it gets the point across.) The next section is below that. It's a zener regulator. The resistor is there to limit the current. The zener tends to have the same voltage across it, when reverse-biased with sufficient voltage (and \$11-13\:\text{V}\$ is more than sufficient.) With \$R_1\$ as given, you'd expect the current to be somewhere from about \$5\:\text{mA}\$ to \$10\:\text{mA}\$. This is a "normal" operating current for many zeners. (You could go look up the datasheet and find out, exactly. I didn't bother here.) So the voltage at the top of the zener should be close to \$9.1\:\text{V}\$. The exact current through the zener will have a slight impact on this. But not much. (The capacitor, \$C_1\$, is there to "average out" or "smooth out" the zener noise. It's not critical. But it is helpful.) The final section on the right is there to "boost up" the current compliance. Since the zener only has a few milliamps to work with, if you didn't include this added section your load could only draw a very few milliamps, at most, without messing up the zener's regulated voltage. So to get more than that, you need a current boosting section. This is composed of what is often called an "emitter follower" BJT. This BJT's emitter will "follow" the voltage at the base.
Since the base is at \$9.1\:\text{V}\$, and since the base-emitter voltage drop will be about \$600-700\:\text{mV}\$, you can expect the emitter to "follow," but here with a slightly lower voltage (as indicated in the schematic.) This BJT doesn't require much base current in order to allow a lot of collector current. So the BJT here may "draw" current from its collector, by also drawing a much smaller, tiny base current ("stolen" from the zener, so it can't be allowed to be very much), and then this sum of the two becomes the total emitter current. This emitter current can be as much as several hundred times the base current. So here, the BJT might draw \$1\:\text{mA}\$ of base current (which is okay, because there is several times that much available due to \$R_1\$) in order to handle perhaps as much as \$200\:\text{mA}\$ of emitter current. In keeping with the idea of "being conservative" the specification only says \$100\:\text{mA}\$ -- and that's very much the right way to go when telling someone what this is capable of. Be conservative. \$R_2\$ is there as a bit of a short-circuit current limit. It doesn't serve much else. But if the load tries to pull too much current via the emitter then there will be an increasingly larger voltage drop across \$R_2\$ and this will cause the collector to have access to lower remaining voltage. At some point, the emitter will be "cramped." In this case, a drop of more than \$2\:\text{V}\$ (perhaps a little more) will probably begin the process of cramping the output. This means the limit is somewhere above \$\frac{2\:\text{V}}{22\:\Omega}\approx 100\:\text{mA}\$. Overall, \$R_2\$ is a very cheap way to add some modest protection to help make the whole thing just a little more bullet-proof, so to speak. Note: \$C_2\$ is an output capacitor providing some added current compliance if there's a momentary, short-term demand by the load. 
I'd also normally want to include an output resistor across \$C_2\$ (not shown) of perhaps \$4.7\:\text{k}\Omega\$ as a bleed resistor to provide a DC path to ground from the output and to discharge \$C_2\$ after a few seconds, when the input source of power is removed.
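The arithmetic above can be checked in a few lines. Note that \$R_1\$'s actual value appears only on the schematic, not in the text, so the \$330\:\Omega\$ used below is purely an assumed illustration:

```python
# Back-of-the-envelope numbers from the answer: 11-13 V in, ~0.7 V
# across D1, ~9.1 V at the zener, and an emitter one V_BE below that.

V_IN_MIN, V_IN_MAX = 11.0, 13.0   # V, supply range
V_DIODE = 0.7                     # V, reverse-protection diode drop
V_ZENER = 9.1                     # V, zener voltage
V_BE = 0.65                       # V, emitter-follower B-E drop (assumed)
R1 = 330.0                        # ohm (assumed value, not from the text)
R2 = 22.0                         # ohm, short-circuit limiting resistor

def zener_current(v_in):
    """Current through R1 into the zener node (ignoring base current)."""
    return (v_in - V_DIODE - V_ZENER) / R1

i_min = zener_current(V_IN_MIN)   # a few mA at the low end of the supply
i_max = zener_current(V_IN_MAX)
v_out = V_ZENER - V_BE            # the emitter "follows" the base

# Rough current limit set by R2: ~2 V of drop begins to cramp the output.
i_limit = 2.0 / R2
assert 8.0 < v_out < 9.1
assert 0.08 < i_limit < 0.10
```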
While in class, we were proving a limit problem using the Squeeze Theorem, but when I was reviewing my notes, I ran into a problem. The first question was to prove that $$\lim_{n\to \infty}(1+n)^\frac{1}{n}=1$$ Okay, this was easy. The next question was to use the limit proven above to evaluate the following limit: $$\lim_{n\to \infty}(1+n+n\cos n)^\frac{1}{2n+n\sin n}$$ In my notes, this was written: $$1\leq(1+n+n\cos n)^\frac{1}{2n+n\sin n} \leq (1+2n+n\sin n)^\frac{1}{2n+n\sin n}$$ And since $$\lim_{n\to \infty}(1+2n+n\sin n)^\frac{1}{2n+n\sin n}=1$$ therefore, by the Squeeze Theorem, $$\lim_{n\to \infty}(1+n+n\cos n)^\frac{1}{2n+n\sin n}=1$$ My question is that the inequality doesn't seem to make sense. Is the inequality correct? Does it only hold for very large $n$ or something? Then how would I evaluate this limit by using the first limit equation?
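Here is a quick numerical experiment I tried (my addition). The middle expression does appear to tend to $1$, but the claimed sandwich reduces to $\cos n - \sin n \leq 1$, which some integers violate, e.g. $n = 11$:

```python
import math

# Note 2n + n*sin(n) = n*(2 + sin n) >= n, so the exponent goes to 0
# while the base grows at most linearly; the limit itself looks fine.

def middle(n):
    return (1 + n + n * math.cos(n)) ** (1.0 / (2 * n + n * math.sin(n)))

def upper(n):
    return (1 + 2 * n + n * math.sin(n)) ** (1.0 / (2 * n + n * math.sin(n)))

for n in (10**3, 10**6, 10**9):
    assert abs(middle(n) - 1.0) < 1e-2

# ...but the claimed inequality middle <= upper fails for some n:
assert math.cos(11) - math.sin(11) > 1
assert middle(11) > upper(11)
```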
Goldstone's theorem says that if a group $G$ is broken to a subgroup $H$, then massless particles will appear. The number of massless particles is given by the dimension of the coset $G/H$. It is then often said that the Goldstone bosons live in the coset. In what sense is this statement true? The Lagrangian is not invariant under transformations of the coset, so what does this "living" explicitly mean? To be explicit we can consider the linear sigma model: \begin{equation} {\cal L} = \frac{1}{2} \partial _\mu \phi ^i \partial^\mu \phi ^i - \frac{m ^2 }{2} \phi ^i \phi ^i - \frac{ \lambda }{ 4} ( \phi ^i \phi ^i ) ^2 \end{equation} We define, \begin{align} & \phi _i \equiv \pi _i \quad \forall i \neq N\\ & \phi _N \equiv \sigma \end{align} and give $\sigma$ a VEV. The spontaneously broken Lagrangian is, \begin{equation} {\cal L} = \frac{1}{2} \partial _\mu \pi _i \partial ^\mu \pi _i + \frac{1}{2} ( \partial _\mu \sigma ) ^2 - \frac{1}{2} ( 2 \mu ^2 ) \sigma ^2 - \lambda v \sigma ^3 - \frac{ \lambda }{ 4} \sigma ^4 - \frac{ \lambda }{ 2} \pi _i \pi _i \sigma ^2 - \lambda v \pi _i \pi _i \sigma - \frac{ \lambda }{ 4} ( \pi _i \pi _i ) ^2 \end{equation} The Goldstone bosons, $\pi_i$, exhibit an $O(N-1)$ symmetry, but this is not the coset group symmetry. So where in the Lagrangian do we see this symmetry?
Result: Average CSMC Regression Regret For all average constrained CSMC distributions $D$, and all cost-sensitive classifiers $h_g$ derived from a regressor $g$, \[ r_{av} (h_g) \leq \sqrt{2 |K| \epsilon_{L^2} (g)}, \] where $\epsilon_{L^2} (g)$ is the $L^2$ regret on the underlying regression problem. Proof: See Average Analysis below. Result: Minimax CSMC Regression Regret For all minimax constrained CSMC distributions $D$, and all cost-sensitive classifiers $h_g$ derived from a regressor $g$, \[ r_{mm} (h_g) \leq \sqrt{2 |K| \epsilon_{L^2} (g)}, \] where $\epsilon_{L^2} (g)$ is the $L^2$ regret on the underlying regression problem. Proof: See Minimax Analysis below. Those bounds should look familiar: they are the same as in the unconstrained CSMC case. Together these results indicate that a regression reduction is truly indifferent to constraints, even constraints that are adversarially imposed at application time. It will be interesting to see whether other reductions of unconstrained CSMC have different properties for constrained CSMC. Average Analysis In this case, there is a distribution $D = D_x \times D_{\omega|x} \times D_{c|\omega,x}$, where $c: K \to \mathbf{R}$ takes values in the extended reals $\mathbf{R} = \mathbb{R} \cup \{ \infty \}$, and the components of $c$ which are $\infty$-valued for a particular instance are revealed as part of the problem instance via $\omega \in \mathcal{P} (K)$ (i.e., $\omega$ is a subset of $K$). The regret of a particular classifier $h: X \times \mathcal{P} (K) \to K$ is given by \[ r_{av} (h) = E_{(x, \omega) \sim D_x \times D_{\omega|x}} \left[ E_{c \sim D_{c|\omega,x}} \left[ c (h (x, \omega)) - \min_{k \in K}\; E_{c \sim D_{c|\omega,x}} \left[ c (k) \right] \right] \right].
\] An argmax regression strategy to solve cost-sensitive multiclass classifier is a function $g: X \times \mathcal{P} (K) \times K \to \mathbf{R}$ which defines an associated cost-sensitive multiclass classifier $h_g: X \times \mathcal{P} (K) \to K$ according to \[ h_g (x, \omega) = \underset{k \in K}{\operatorname{arg\,min\;}} g (x, \omega, k). \] I would like to bound $r_{av} (h_g)$ in terms of the regret of $g$ on the regression problem, \[ \epsilon_{L} (g) = q_{L} (g) - \min_g\; q_{L} (g), \] where $q$ is the error on the regression problem \[ q_{L} (g) = E_{(x, \omega, c) \sim D} \left[ \frac{1}{|K|} \sum_{k \in K} L \bigl (g (x, \omega, k), c (k) \bigr) \right], \] and $L$ is a loss function for the regression problem (defined on the extended reals). I'll focus on $L^2$ loss for the regressor defined on the extended reals via $L^2 (\infty, \infty) = 0$, $L^2 (\infty, \cdot) = \infty$, and $L^2 (\cdot, \infty) = \infty$. Consider a single instance $(x, \omega)$ with associated conditional per-instance cost-vector distribution $D_{c|\omega,x}$, and suppose our regressor has cost estimates which differ from a minimum error regressor's estimate $c^* (x, \omega, k)$ by $\delta (x, \omega, k)$, \[ g (x, \omega, k) = c^* (x, \omega, k) + \delta (x, \omega, k). \] For $k \in \omega$, $\delta (x, \omega, k) = 0$ since both $c^* (x, \omega, k)$ and our regressor $g (x, \omega, k)$ will be $\infty$. The associated classifier $h_g$ is \[ h_g (x, \omega) = \underset{k \in K}{\operatorname{arg\,min\,}} \bigl( c^* (x, \omega, k) + \delta (x, \omega, k) \bigr). \] Imagine an adversary which attempts to create a certain amount of CSMC regret on this instance while minimizing the amount of regression regret on this instance. 
This adversary is faced with the following family of problems indexed by $k^{**} \in K \setminus \omega$: \[ \begin{aligned} &\min_{\delta}\; \sum_{k \in K} \delta (x, \omega, k)^2 \\ \mathrm{s.t.} \; \forall k \neq k^{**}, \; & c^* (x, \omega, k^{**}) + \delta (x, \omega, k^{**}) \leq c^* (x, \omega, k) + \delta (x, \omega, k). \end{aligned} \] This is the same as the unconstrained CSMC reduction to regression but with $k^{**}$ restricted to the set $K \setminus \omega$. When $|K \setminus \omega| \leq 1$, the CSMC regret is zero; otherwise the adversary's strategy is unchanged: perturb the $k^*$ and $k^{**}$ estimates and leave the others alone. Thus leveraging the previous analysis yields \[ r_{av} (h_g) \leq \sqrt{2 |K| \epsilon_{L^2} (g)}. \] It should also be noted that the regression regret will be at most the regression regret in the unconstrained case, since the additional information contained in $\omega$ allows the regressor to have a perfect estimate for some values of $k$. Minimax Analysis In this case, there is a distribution $D = D_x \times D_{\tilde c|x}$, where $\tilde c: K \to \mathbb{R}$ takes values in the regular reals $\mathbb{R}$. Then, an adversary comes in and manufactures a cost vector $c$ in the extended reals $\mathbf{R}$ by setting some of the components to $\infty$; these choices are revealed via $\omega$ prior to a decision being elicited. In this case the regret of a particular classifier is given by \[ r_{mm} (h) = E_{x \sim D_x} \left[ E_{\tilde c \sim D_{\tilde c|x}} \left[ \max_{\omega \in \mathcal{P} (K)} \left\{ c (h (x, \omega)) - \min_{k \in K}\; E_{\tilde c \sim D_{\tilde c|x}} \left[ c (k) \right] \right\} \right] \right]. \] Consider a single instance $x$ with associated conditional per-instance cost-vector distribution $D_{\tilde c|x}$; in addition the adversary can pick $\omega$ to construct the complete problem instance $(x, \omega)$.
As above the regressor has cost estimates which differ from a minimum error regressor's estimate $c^* (x, \omega, k)$ by $\delta (x, \omega, k)$, and when $k \in \omega$ the estimates are perfect, i.e., $\delta (x, \omega, k) = 0$. Imagine an adversary which attempts to create a certain amount of CSMC regret on this instance while minimizing the amount of regression regret on this instance. This adversary is faced with the following family of problems indexed by $\omega$ and $k^{**} \in K \setminus \omega$: \[ \begin{aligned} &\min_{\delta}\; \sum_{k \in K} \delta (x, \omega, k)^2 \\ \mathrm{s.t.} \; \forall k \neq k^{**}, \; & c^* (x, \omega, k^{**}) + \delta (x, \omega, k^{**}) \leq c^* (x, \omega, k) + \delta (x, \omega, k). \end{aligned} \] Again when $|K \setminus \omega| \leq 1$, the CSMC regret is zero; otherwise the adversary's strategy is the same for each $\omega$, and leveraging the previous analysis yields \[ r_{mm} (h_g) \leq \sqrt{2 |K| \epsilon_{L^2} (g)}. \]
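A toy single-instance check of the bound (my own construction, not from the post), modelling the $\infty$-valued costs by restricting the argmin to $K \setminus \omega$:

```python
import math

# Per-instance check of r(h_g) <= sqrt(2 |K| eps_{L2}(g)).

K = 4

def h_g(estimates, omega):
    """Constrained argmin classifier derived from the regressor's estimates."""
    allowed = [k for k in range(K) if k not in omega]
    return min(allowed, key=lambda k: estimates[k])

# One instance: true conditional costs c*, regressor g = c* + delta,
# with delta = 0 on omega (both g and c* are infinite there).
c_star = [0.2, 0.5, 0.9, 0.4]
omega = {0}                       # class 0 is forbidden on this instance
delta = [0.0, -0.2, 0.0, 0.0]     # regression error pushes class 1 down

estimates = [c + d for c, d in zip(c_star, delta)]
chosen = h_g(estimates, omega)    # picks class 1 instead of class 3

allowed = [k for k in range(K) if k not in omega]
regret = c_star[chosen] - min(c_star[k] for k in allowed)
eps = sum(d * d for d in delta) / K        # per-instance L2 regret of g
assert regret <= math.sqrt(2 * K * eps)
```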
Informally, it is not what you actually know, it is what the symbols say you should "pretend" to know and not know. So $E(X)$ says that you should calculate the expected value of $X$ in a situation where you don't know anything about any event having happened or not having happened. $E(X\mid A)$ on the other hand says that you should calculate the expected value of $X$ in a situation where we assume that event $A$ has happened. Even in a temporal setting, when looking at $E(X_{t})$ we mean "expected value of $X_{t}$ in a situation where we don't know anything", while with $E[X_t \mid \mathcal{F_{t-1}}]$ we mean "expected value of $X_{t}$ in a situation where we know what $\mathcal{F_{t-1}}$ can tell us". WORKED OUT EXAMPLE Following discussion in comments, the OP describes the following situation: Consider an experiment where we throw two fair coins, perhaps sequentially but independently. Let $S_2$ represent the number of heads the second time we do that. Let $S_2$ be stochastically dependent on what happened in the first period. And consider the information set $\mathcal{F_1} = \{ \{HH, HT \}, \{ TT, TH \} \}$. How will we obtain the conditional expectation function $E(S_2 \mid \mathcal{F_1})$? Well, we have to determine the structure of the stochastic dependence first, and this may be many different things. And also, the conditional expectation function should be completely determined once we "feed" it with one of the events that we are considering. So we assume that the two coins are indeed thrown sequentially in each period, and that what matters for the outcome in the second period is whether we got heads on the first coin or not. This is binary, so we can map the information set given by defining the indicator function $$I_1 \equiv I_1\{ (HH, HT )\}$$ So this random variable takes the value $1$ if we get heads on the first coin in the first period, and zero otherwise. We now assume the following: If we get heads on the first coin in the first period, we will get $S_2 =2$.
If we don't get heads in the first coin in the first period, we will get $S_2=0$. Then, we obtain $$E(S_2 \mid \mathcal{F}_1) = 2I_1$$ which is a function, not a specific value. This should satisfy the defining property of the conditional expectation, $E[E(S_2 \mid \mathcal{F}_1)]=E(S_2)$. Does it? Under the assumed dependence, $S_2$ takes only the values $0$ and $2$, each with probability $0.5$, so $$E(S_2) = 2\cdot 0.5 + 0\cdot 0.5 = 1$$ while $$E[E(S_2 \mid \mathcal{F}_1)] = E(2I_1) = 2E(I_1)= 2\cdot 0.5 =1$$ It does. Note that we could use the usual shorthand and write $E(S_2 \mid I_1) = 2I_1$, because given how we assumed the dependence to be, conditioning on $\mathcal{F}_1$ is equivalent to conditioning on (the sigma algebra induced by) $I_1$. I hope this helped. This answer may be relevant here.
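The tower-property check above is easy to verify by brute-force enumeration. The following sketch (not part of the original answer) lists the four equally likely first-period outcomes, applies the assumed dependence $S_2 = 2I_1$, and recomputes both sides:

```python
from fractions import Fraction

# Four equally likely outcomes of the two first-period coins.
outcomes = ["HH", "HT", "TH", "TT"]
p = Fraction(1, 4)

def I1(o):
    # Indicator of the event {HH, HT}: heads on the first coin.
    return 1 if o[0] == "H" else 0

def S2(o):
    # Assumed dependence: S2 = 2 if the first coin was heads, else 0.
    return 2 * I1(o)

e_s2 = sum(p * S2(o) for o in outcomes)          # E(S2)
e_tower = sum(p * 2 * I1(o) for o in outcomes)   # E[E(S2 | F1)] = E(2*I1)
```

Both sums come out to $1$, matching the hand computation.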
The sample variance is nonnegative: \(s^2 \ge 0\), and \(s^2 = 0\) if and only if all of the data values are equal. Dividing by \(n - 1\) rather than \(n\) reflects the fact that there are \(n - 1\) degrees of freedom in the set of deviations from the sample mean. For the sample mean \(M\) of a sample of size \(n\) drawn from a population with mean \(\mu\) and standard deviation \(\sigma\), the mean of the sampling distribution of \(M\) equals the population mean, and \(\text{var}(M) = \sigma^2 / n\); the standard error of the mean is therefore \(\sigma / \sqrt{n}\), so larger sample sizes give smaller standard errors. The standard deviation of any one sample will rarely be exactly equal to the population standard deviation. The standard deviation is often preferred as a measure of spread because it is less susceptible to sampling fluctuation than the (semi-)interquartile range. Finally, the mean absolute error function \(\text{mae}\) is not smooth: in the worked example it fails to be differentiable at \(a \in \{1, 2, 5, 7\}\) and is minimized at \(a = 3\).
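The sampling-distribution facts discussed above can be illustrated with a small simulation (a sketch, not part of the original page; the parameter values are illustrative): draw 20,000 samples of size n = 16 from a normal population and compare the standard deviation of the 20,000 sample means with the theoretical standard error σ/√n.

```python
import math
import random

random.seed(0)
mu, sigma, n, reps = 0.0, 10.0, 16, 20000

# Sample mean of each of the 20,000 samples of size 16.
means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n for _ in range(reps)]

grand_mean = sum(means) / reps
sd_of_means = math.sqrt(sum((m - grand_mean) ** 2 for m in means) / (reps - 1))

theoretical_se = sigma / math.sqrt(n)  # standard error of the mean: 10/4 = 2.5
```

The empirical spread of the sample means should land close to 2.5, and their grand mean close to the population mean.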
After reading Cohen and Voronov's notes on string topology, one can find the following construction: Suppose we have a topological space $X$ with continuous action of $S^1$. This means we have a map $\rho: S^1 \times X \to X$. If we choose a fundamental class $[S^1]$ of the circle, we can form the operator $\Delta: H_\ast(X) \to H_{\ast+1}(X)$ by setting $\Delta(a) = \rho_\ast([S^1] \times a)$. For dimensional reasons, $\Delta$ squares to zero and therefore turns $H_\ast(X)$ into a cochain complex. Let's denote the cohomology of this complex by $H_\ast^\Delta(X)$. Because in this cohomology we have as representatives homology classes in $X$ which are annihilated by the action of $S^1$ "in a homological sense", one would think this is relevant to the $S^1$-equivariant homology of $X$. However, a few simple examples show that $\Delta$-cohomology and equivariant homology are certainly not equal. For example, take the point with trivial action, then $H^\Delta_\ast(pt)$ is $\mathbb{Z}$ in degree 0 and zero in all other degrees, but $H^{S^1}_\ast(pt) = \mathbb{Z}[a_2]$ where $|a_2| = 2$. Another example is $LS^1$. So my question is: is there a different (hopefully geometric) description of $\Delta$-cohomology? Is it related to $S^1$-equivariant homology in any way? If this is not possible in the general case, is it at least possible for $X$ of the form $LM$, with $M$ a manifold? My main motivation for considering $\Delta$-cohomology is string topology, where $\Delta$ is also known as the BV-operator.
You can. By using partitioned-regression results (the Frisch–Waugh–Lovell theorem), if the model includes a constant term, $$y = \alpha + \mathbf x' \beta + u$$ then the "slope coefficients" are in practice calculated by OLS as $$\hat{\beta}_{OLS} = (\hat {\tilde X}'\hat {\tilde X})^{-1}\hat {\tilde X}'\hat {\tilde y}= \beta + (\hat {\tilde X}'\hat {\tilde X})^{-1}\hat {\tilde X}'u$$ where $\hat {\tilde X} = X - \bar X$ (sample mean) and, for later use, $ {\tilde X} = X-E(X)$. Again, here $X$ does not include a series of ones (but the model does). In other words, whether you demean a priori or not, OLS will demean the variables automatically to estimate the slope coefficients, if you also include a constant term in the model. If you multiply and divide the above by the sample size, you get the multivariate analog of $(2)$, since the variables are now in sample mean-deviation form and so these are estimated variance and covariance matrices (and in $(2)$ you should use hats, by the way). Then $$\text{plim}\big( \hat{\beta}_{OLS} -\beta\big) = \text{plim}\left (\frac 1n\hat {\tilde X}'\hat {\tilde X}\right)^{-1}\text{plim}\left(\frac 1n \hat {\tilde X}'u\right)$$ The Law of Large Numbers does not "jump from probability limits to expectations": it is the very essence of the Law that the probability limit is the expected value. Then (under the necessary conditions) $$\text{plim}\big( \hat{\beta}_{OLS} -\beta\big) = E\left (\frac 1n\tilde X'\tilde X\right)^{-1}E\left(\frac 1n \tilde X'u\right)$$ Now the variables are in deviations from their true expected values, and these expected values are the true covariance matrices, so you can write, for example, $$\text{plim}\big( \hat{\beta}_{OLS} -\beta\big) = [ \text {Var}(\mathbf x)]^{-1}\cdot \text{Cov}(\mathbf x, u)$$ which is the probability limit of the multivariate analogue of $(2)$.
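The claim that OLS with a constant "demeans automatically" can be sketched numerically in the one-regressor case (the toy data below are made up for illustration): the slope from the demeaned covariance/variance formula coincides with the slope obtained by solving the normal equations of the regression with an intercept.

```python
# Toy data for y = alpha + beta*x + u (values are hypothetical).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 2.9, 5.2, 4.1, 6.3]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# Slope via the F-W-L / demeaned formula: sample Cov(x, y) / Var(x).
b_fwl = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))

# Slope from the 2x2 normal equations of OLS with a constant (Cramer's rule).
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
b_ols = (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

The two slope estimates agree up to floating-point rounding, which is exactly the F-W-L statement for this simple case.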
De Bruijn-Newman constant (revision as of 20:21, 6 September 2018) For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula [math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math] where [math]\Phi[/math] is the super-exponentially decaying function [math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math] It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes.
One can also express [math]H_t[/math] in a number of different forms, such as [math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math] or [math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math] In the notation of [KKL2009], one has [math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math] De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]). The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients: Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math]. Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-vanishing whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math]. Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math]. [math]t=0[/math] When [math]t=0[/math], one has [math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math] where [math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math] is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function.
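As a sanity check on these definitions (a numerical sketch, not part of the wiki page), one can evaluate the defining integral directly. Since Φ decays like exp(−πe^{4u}), truncating the integral at a modest u and applying the trapezoidal rule already reproduces H_0(0) = ξ(1/2)/8 ≈ 0.06214 to several digits:

```python
import math

def Phi(u, nmax=10):
    # Truncation of the super-exponentially decaying series defining Phi.
    return sum(
        (2 * math.pi**2 * n**4 * math.exp(9 * u)
         - 3 * math.pi * n**2 * math.exp(5 * u))
        * math.exp(-math.pi * n**2 * math.exp(4 * u))
        for n in range(1, nmax + 1)
    )

def H(t, z, umax=2.0, steps=20000):
    # Trapezoidal rule for H_t(z) = int_0^umax e^{t u^2} Phi(u) cos(z u) du;
    # the tail beyond umax = 2 is far below double precision.
    du = umax / steps
    total = 0.0
    for i in range(steps + 1):
        u = i * du
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(t * u * u) * Phi(u) * math.cos(z * u)
    return total * du

h0 = H(0.0, 0.0)  # should be close to xi(1/2)/8
```

In particular the value is strictly positive, consistent with the absence of zeroes on the imaginary axis.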
Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and the Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives [math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math] for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T. The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real. [math]t\gt0[/math] For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis, all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2]. It is known that [math]\xi[/math] is an entire function of order one ([T1986, Theorem 2.12]). Hence by the fundamental solution for the heat equation, the [math]H_t[/math] are also entire functions of order one for any [math]t[/math]. Because [math]\Phi[/math] is positive, [math]H_t(iy)[/math] is positive for any [math]y[/math], and hence there are no zeroes on the imaginary axis. Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have [math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math] for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE [math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math] where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as [math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] where the dependence on [math]t[/math] has been omitted for brevity. In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic [math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math] as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that [math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math] as [math]k \to +\infty[/math]. Threads Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018. Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018. Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018. Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018. Polymath15, fourth thread: closing in on the test problem, Terence Tao, Feb 24, 2018. Polymath15, fifth thread: finishing off the test problem?, Terence Tao, Mar 2, 2018. Polymath15, sixth thread: the test problem and beyond, Terence Tao, Mar 18, 2018.
Polymath15, seventh thread: going below 0.48, Terence Tao, Mar 28, 2018. Polymath15, eighth thread: going below 0.28, Terence Tao, Apr 17, 2018. Polymath15, ninth thread: going below 0.22?, Terence Tao, May 4, 2018. Polymath15, tenth thread: numerics update, Rudolph Dwars and Kalpesh Muchhal, Sep 6, 2018. Other blog posts and online discussion Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017. The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018. Lehmer pairs and GUE, Terence Tao, Jan 20, 2018. A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018. Code and data Writeup Test problem Zero-free regions See Zero-free regions. Wikipedia and other references Bibliography [A2011] J. Arias de Reyna, High-precision computation of Riemann's zeta function by the Riemann-Siegel asymptotic formula, I, Mathematics of Computation, Volume 80, Number 274, April 2011, Pages 995–1009. [B1994] W. G. C. Boyd, Gamma Function Asymptotics by an Extension of the Method of Steepest Descents, Proceedings: Mathematical and Physical Sciences, Vol. 447, No. 1931 (Dec. 8, 1994), pp. 609-630. [B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226. [CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129. [G2004] Gourdon, Xavier (2004), The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height [KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306. [N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251. [P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449-2467. [P1992] G.
Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992. [RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914 [T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986.
The $\delta$ function is not continuous, so it's a priori not differentiable. In fact, it's not even well-defined as an ordinary real-valued function, but can be made so in terms of distributions - linear maps on a space of test functions given by $f\mapsto\int\delta f=f(a)$. It's possible to sensibly define derivatives of distributions by looking at representations as limits of functions: If $\delta_i$ is a family of functions so that $\lim_{i\rightarrow\infty}\int\delta_i(x) f(x)\mathrm dx=f(a)$ for any test function $f$, then it can be considered a representation of the Dirac delta. Now, if we take the family of derivatives $\frac{\mathrm d}{\mathrm dx}\delta_i$ we arrive at$$\int\left[\frac{\mathrm d}{\mathrm dx}\delta_i(x)\right]f(x)\mathrm dx=-\int\delta_i(x)\left[\frac{\mathrm d}{\mathrm dx}f(x)\right]\mathrm dx$$through integration by parts and using the fact that $f$ has by definition compact support (which makes the boundary term vanish). As the derivative is linear as well, this defines another linear map $f\mapsto-\int\delta f'$ on the space of test functions, which we call the derivative of our distribution. Symbolically,$$\left[\frac{\mathrm d}{\mathrm dx}\delta(x-a)\right]f(x)=-\delta(x-a)f'(x)$$which you can just plug in into your formula above without any need for actual computation as it holds true by definition.
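This defining identity can be checked numerically (a sketch, not part of the original answer, assuming a Gaussian family $\delta_\varepsilon$ representing the delta): integrating $\delta_\varepsilon'(x)$ against a smooth $f$ should approach $-f'(a)$.

```python
import math

a, eps = 0.5, 1e-2

def delta_eps(x):
    # Gaussian family representing delta(x - a) as eps -> 0.
    return math.exp(-(x - a) ** 2 / (2 * eps**2)) / (math.sqrt(2 * math.pi) * eps)

def delta_eps_prime(x):
    # Ordinary (classical) derivative of the Gaussian representative.
    return -(x - a) / eps**2 * delta_eps(x)

# Riemann sum for \int delta_eps'(x) f(x) dx with f = sin.
lo, hi, n = a - 1.0, a + 1.0, 200_000
dx = (hi - lo) / n
integral = sum(delta_eps_prime(lo + i * dx) * math.sin(lo + i * dx)
               for i in range(n + 1)) * dx
# Expect approximately -f'(a) = -cos(0.5)
```

The agreement with $-\cos(a)$ up to $O(\varepsilon^2)$ corrections illustrates the integration-by-parts definition above.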
Properly Divergent Sequences Recall that a sequence $(a_n)$ of real numbers is said to be convergent to the real number $A$ if $\forall \epsilon > 0$ there exists an $N \in \mathbb{N}$ such that if $n ≥ N$ then $\mid a_n - A \mid < \epsilon$. If we negate this statement we have that a sequence $(a_n)$ of real numbers is divergent if $\forall A \in \mathbb{R}$ there exists an $\epsilon_0 > 0$ such that $\forall N \in \mathbb{N}$ there exists an $n ≥ N$ for which $\mid a_n - A \mid ≥ \epsilon_0$. However, there are different types of divergent sequences. For example, a sequence can alternate between different points and be divergent such as the sequence $((-1)^n)$, or instead, the sequence can tend to infinity such as $(n)$ or negative infinity such as $(-n)$, or neither, such as $((-1)^n(n))$. We will now define properly divergent sequences. Definition: A sequence of real numbers $(a_n)$ is said to be Properly Divergent to $\infty$ if $\displaystyle{\lim_{n \to \infty} a_n = \infty}$, that is, $\forall M \in \mathbb{R}$ there exists an $N \in \mathbb{N}$ such that if $n ≥ N$ then $a_n > M$. Similarly, $(a_n)$ is said to be Properly Divergent to $-\infty$ if $\displaystyle{\lim_{n \to \infty} a_n = -\infty}$, that is, $\forall M \in \mathbb{R}$ there exists an $N \in \mathbb{N}$ such that if $n ≥ N$ then $a_n < M$. Now let's look at some theorems regarding properly divergent sequences. Theorem 1: An increasing sequence of real numbers $(a_n)$ is properly divergent to $\infty$ if it is unbounded. A decreasing sequence of real numbers $(a_n)$ is properly divergent to $-\infty$ if it is unbounded. Proof: Suppose that $(a_n)$ is a sequence of real numbers that is increasing. Since $(a_n)$ is unbounded, for any $M \in \mathbb{R}$ there exists an index $N$ (dependent on $M$) such that $M < a_N$. Since $(a_n)$ is an increasing sequence, for $n ≥ N$ we have that $M < a_N ≤ a_n$, and since $M$ is arbitrary we have that $\lim_{n \to \infty} a_n = \infty$.
Similarly suppose that $(a_n)$ is a sequence of real numbers that is decreasing. Since $(a_n)$ is unbounded, for any $M \in \mathbb{R}$ there exists an index $N$ (dependent on $M$) such that $M > a_N$. Since $(a_n)$ is a decreasing sequence, for $n ≥ N$ we have that $M > a_N ≥ a_n$, and since $M$ is arbitrary we have that $\lim_{n \to \infty} a_n = -\infty$. $\blacksquare$ Theorem 2: Let $(a_n)$ and $(b_n)$ be sequences of real numbers such that $a_n ≤ b_n$ for all $n \in \mathbb{N}$. Then if $\lim_{n \to \infty} a_n = \infty$ then $\lim_{n \to \infty} b_n = \infty$. Proof: Let $(a_n)$ and $(b_n)$ be sequences of real numbers such that $a_n ≤ b_n$ for all $n \in \mathbb{N}$, and let $\lim_{n \to \infty} a_n = \infty$. Then it follows that for all $M \in \mathbb{R}$ there exists an $N$ (dependent on $M$) such that if $n ≥ N$ then $a_n > M$. But we have that $b_n ≥ a_n$ for all $n \in \mathbb{N}$ and so for $n ≥ N$ we have that $b_n > M$. Since $M$ is arbitrary it follows that $\lim_{n \to \infty} b_n = \infty$. $\blacksquare$ Theorem 3: Let $(a_n)$ and $(b_n)$ be sequences of real numbers such that $a_n ≤ b_n$ for all $n \in \mathbb{N}$. Then if $\lim_{n \to \infty} b_n = -\infty$ then $\lim_{n \to \infty} a_n = -\infty$. Proof: Let $(a_n)$ and $(b_n)$ be sequences of real numbers such that $a_n ≤ b_n$ for all $n \in \mathbb{N}$, and let $\lim_{n \to \infty} b_n = -\infty$. Then it follows that for all $M \in \mathbb{R}$ there exists an $N$ (dependent on $M$) such that if $n ≥ N$ then $b_n < M$. But we have that $a_n ≤ b_n$ for all $n \in \mathbb{N}$ and so for $n ≥ N$ we have that $a_n < M$. Since $M$ is arbitrary it follows that $\lim_{n \to \infty} a_n = -\infty$. $\blacksquare$ Theorem 4: If $(a_n)$ and $(b_n)$ are sequences of positive real numbers and for some real number $L > 0$ we have $\lim_{n \to \infty} \frac{a_n}{b_n} = L$, then $\lim_{n \to \infty} a_n = \infty$ if and only if $\lim_{n \to \infty} b_n = \infty$.
Proof: Suppose that $(a_n)$ and $(b_n)$ are sequences of positive real numbers and that $\lim_{n \to \infty} \frac{a_n}{b_n} = L$ for $L \in \mathbb{R}$ and $L > 0$. Then for $\epsilon = \frac{L}{2} > 0$ we have that for some $N \in \mathbb{N}$ if $n ≥ N$ then $\mid \frac{a_n}{b_n} - L \mid < \frac{L}{2}$ or equivalently: \begin{align} \frac{L}{2} < \frac{a_n}{b_n} < \frac{3L}{2} \\ \frac{L}{2}b_n < a_n < \frac{3L}{2}b_n \\ \end{align} If $\lim_{n \to \infty} a_n = \infty$ then since $a_n < \frac{3L}{2} b_n$ it follows that $\lim_{n \to \infty} b_n = \infty$. Similarly if $\lim_{n \to \infty} b_n = \infty$ then since $\frac{L}{2} b_n < a_n$ it follows that $\lim_{n \to \infty} a_n = \infty$. $\blacksquare$ Theorem 5: If $(a_n)$ is a properly divergent sequence then there exist no convergent subsequences $(a_{n_k})$ of $(a_n)$. Proof: We will first deal with the case where $(a_n)$ is properly divergent to $\infty$. Suppose instead that there exists a subsequence $(a_{n_k})$ that converges to $L$. Then $\forall \epsilon > 0$ $\exists K \in \mathbb{N}$ such that if $k ≥ K$ then $\mid a_{n_k} - L \mid < \epsilon$, and so for $k ≥ K$ we have $L - \epsilon < a_{n_k} < L + \epsilon$, in particular $a_{n_k} < L + \epsilon$. Now if $(a_n)$ diverges to $\infty$ then for $L + \epsilon \in \mathbb{R}$ $\exists N \in \mathbb{N}$ such that if $n ≥ N$ then $a_n > L + \epsilon$. So for $n_k ≥ \mathrm{max} \{ n_K, N \}$ we have both $a_{n_k} > L + \epsilon$ and $a_{n_k} < L + \epsilon$, which is a contradiction. So our assumption that $(a_{n_k})$ converges was false, and so there exist no convergent subsequences $(a_{n_k})$. The case where $(a_n)$ is properly divergent to $-\infty$ is analogous. $\blacksquare$ Example 1 Show that the sequence $(n^2)$ is properly divergent to $\infty$. We want to show that $\forall M \in \mathbb{R}$ there exists an $N \in \mathbb{N}$ such that if $n ≥ N$ then $n^2 > M$. Notice that $n^2 ≥ n$ for all $n \in \mathbb{N}$. By the Archimedean property, since $M \in \mathbb{R}$ there exists an $N \in \mathbb{N}$ such that $M < N$, and so for all $n ≥ N$ we have $n^2 ≥ n ≥ N > M$.
Therefore the sequence $(n^2)$ diverges properly to $\infty$.
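The Archimedean step in Example 1 can be made computational (a small sketch, not part of the original notes; the helper name is hypothetical): given any real $M$, produce a rank $N$ beyond which every $n$ satisfies $n^2 > M$.

```python
import math

def rank_for(M):
    # A rank N, as in Example 1, such that n**2 > M for every n >= N.
    # N = isqrt(floor(M)) + 1 satisfies N**2 >= floor(M) + 1 > M for M >= 0,
    # and N = 1 works trivially for negative M.
    return math.isqrt(max(0, math.floor(M))) + 1
```

For instance, `rank_for(10)` returns 4, and indeed $4^2 = 16 > 10$ while every larger $n$ only increases $n^2$.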
Recall that when a matrix is diagonalizable, the algebraic multiplicity of each eigenvalue is the same as the geometric multiplicity. The geometric multiplicity of an eigenvalue $\lambda$ is the dimension of the eigenspace $E_{\lambda}=\calN(A-\lambda I)$ corresponding to $\lambda$. The nullity of $A$ is the dimension of the null space $\calN(A)$ of $A$. Solution. (a) Find the size of the matrix $A$. In general, if $A$ is an $n\times n$ matrix, then its characteristic polynomial has degree $n$. Since the degree of $p(t)$ is $14$, the size of $A$ is $14 \times 14$. (b) Find the dimension of the eigenspace $E_2$ corresponding to the eigenvalue $\lambda=2$. Note that the dimension of the eigenspace $E_2$ is the geometric multiplicity of the eigenvalue $\lambda=2$ by definition. From the characteristic polynomial $p(t)$, we see that $\lambda=2$ is an eigenvalue of $A$ with algebraic multiplicity $5$. Since $A$ is diagonalizable, the algebraic multiplicity of each eigenvalue is the same as the geometric multiplicity. It follows that the geometric multiplicity of $\lambda=2$ is $5$, hence the dimension of the eigenspace $E_2$ is $5$. (c) Find the nullity of $A$. We first observe that $\lambda=0$ is an eigenvalue of $A$ with algebraic multiplicity $3$ from the characteristic polynomial. By definition, the nullity of $A$ is the dimension of the null space $\calN(A)$, and furthermore the null space $\calN(A)$ is the eigenspace $E_0$. Thus, the nullity of $A$ is the same as the geometric multiplicity of the eigenvalue $\lambda=0$. Since $A$ is diagonalizable, the algebraic and geometric multiplicities are the same. Hence the nullity of $A$ is $3$.
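The multiplicity bookkeeping in this problem can be verified on a small analogue (a sketch with hypothetical data, not part of the original solution): build a diagonalizable matrix $A = SDS^{-1}$ whose eigenvalue $0$ has algebraic multiplicity $3$, and check that its nullity is $3$. Exact rational arithmetic avoids rank-tolerance issues.

```python
from fractions import Fraction

n = 5
eigs = [2, 2, 0, 0, 0]  # eigenvalue 0 with algebraic multiplicity 3

# S: unit lower-triangular matrix of ones (invertible); its inverse is
# the identity with -1 on the first subdiagonal.
S = [[Fraction(1) if i >= j else Fraction(0) for j in range(n)] for i in range(n)]
Sinv = [[Fraction(1) if i == j else (Fraction(-1) if i == j + 1 else Fraction(0))
         for j in range(n)] for i in range(n)]
D = [[Fraction(eigs[i]) if i == j else Fraction(0) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def rank(M):
    # Gauss-Jordan row reduction over the rationals.
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = matmul(matmul(S, D), Sinv)
nullity = n - rank(A)  # geometric multiplicity of the eigenvalue 0
```

Since $A$ is similar to $D$, its rank is the number of nonzero eigenvalues (here $2$), so the nullity equals the multiplicity of the zero eigenvalue, matching the argument in part (c).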
Development/Architecture/KDE3/Low-level Graphics (revision as of 20:35, 29 June 2011) Qt's low level imaging model is based on the capabilities provided by X11 and other windowing systems for which Qt ports exist. But it also extends these by implementing additional features such as arbitrary affine transformations for text and pixmaps. Rendering with QPainter The central graphics class for 2D painting with Qt is QPainter. It can draw on a QPaintDevice. There are three possible paint devices implemented: One is QWidget which represents a widget on the screen. The second is QPrinter which represents a printer and produces Postscript output. The third is the class QPicture which records paint commands and can save them on disk and play them back later. A possible storage format for paint commands is the W3C standard SVG. So, it is possible to reuse the rendering code you use for displaying a widget for printing, with the same features supported. Of course, in practice, the code is used in a slightly different context.
Drawing on a widget is almost exclusively done in the paintEvent() method of a widget class. <syntaxhighlight lang="cpp-qt"> void FooWidget::paintEvent() { QPainter p(this); // Setup painter // Use painter } </syntaxhighlight> When drawing on a printer, you have to make sure to use QPrinter::newPage() to finish with a page and begin a new one - something that naturally is not relevant for painting widgets. Also, when printing, you may want to use the device metrics in order to compute coordinates. Transformations By default, when using the QPainter, it draws in the natural coordinate system of the device used. This means, if you draw a line along the horizontal axis with a length of 10 units, it will be painted as a horizontal line on the screen with a length of 10 pixels. However, QPainter can apply arbitrary affine transformations before actually rendering shapes and curves. An affine transformation maps the x and y coordinates linearly into x' and y' according to <math>\begin{pmatrix} x' & y' & 1\end{pmatrix} = \begin{pmatrix} x & y & 1\end{pmatrix} \begin{pmatrix}m_{11} & m_{12} & 0 \\ m_{21} & m_{22} & 0 \\ dx & dy & 1 \end{pmatrix}</math> Formula for affine transformation The 3x3 matrix in this equation can be set with QPainter::setWorldMatrix() and is of type QWMatrix. Normally, this is the identity matrix, i.e. m<sub>11</sub> and m<sub>22</sub> are one, and the other parameters are zero. There are basically three different groups of transformations: Translations: These move all points of an object by a fixed amount in some direction. A translation matrix can be obtained by calling the method m.translate(dx, dy) for a QWMatrix. This corresponds to the matrix <math>\begin{pmatrix}1 & 0 & 0 \\0 & 1 & 0 \\dx & dy & 1 \end{pmatrix}</math> Formula for translating transformation Scaling: These stretch or shrink the coordinates of an object, making it bigger or smaller without distorting it. A scaling transformation can be applied to a QWMatrix by calling m.scale(sx, sy).
By setting one of the parameters to a negative value, one can achieve a mirroring of the coordinate system. The corresponding matrix looks like this:

<math>\begin{pmatrix} sx & 0 & 0 \\ 0 & sy & 0 \\ 0 & 0 & 1 \end{pmatrix}</math>

Formula for scaling transformation

Shearing: A distortion of the coordinate system with two parameters. A shearing transformation can be applied by calling m.shear(sh, sv), corresponding to the matrix

<math>\begin{pmatrix} 1 & sv & 0 \\ sh & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}</math>

Formula for shearing transformation

Rotating: This rotates an object. A rotation transformation can be applied by calling m.rotate(alpha). Note that the angle has to be given in degrees, not in radians! Note also that a rotation is equivalent to a combination of scaling and shearing. The corresponding matrix is

<math>\begin{pmatrix} \cos{\alpha} & \sin{\alpha} & 0 \\ -\sin{\alpha} & \cos{\alpha} & 0 \\ 0 & 0 & 1 \end{pmatrix}</math>

Formula for rotating transformation

Here are some pictures that show the effect of the elementary transformations on our mascot:

Transformations can be combined by multiplying elementary matrices. Note that matrix operations are not commutative in general, and therefore the combined effect of a concatenation depends on the order in which the matrices are multiplied.

Setting stroking attributes

The rendering of lines, curves and outlines of polygons can be modified by setting a special pen with QPainter::setPen(). The argument of this function is a QPen object. The properties stored in it are a style, a color, a join style and a cap style.

The pen style is a member of the enum Qt::PenStyle and can take one of the following values:

The join style is a member of the enum Qt::PenJoinStyle. It specifies how the junction between multiple lines which are attached to each other is drawn. It takes one of the following values:

The cap style is a member of the enum Qt::PenCapStyle and specifies how the end points of lines are drawn.
It takes one of the values from the following table:

Setting fill attributes

The fill style of polygons, circles or rectangles can be modified by setting a special brush with QPainter::setBrush(). This function takes a QBrush object as argument. Brushes can be constructed in four different ways:

QBrush::QBrush() - This creates a brush that does not fill shapes.

QBrush::QBrush(BrushStyle) - This creates a black brush with one of the default patterns shown below.

QBrush::QBrush(const QColor &, BrushStyle) - This creates a colored brush with one of the patterns shown below.

QBrush::QBrush(const QColor &, const QPixmap) - This creates a colored brush with the custom pattern you give as second parameter.

A default brush style is a member of the enum Qt::BrushStyle. Here is a picture of all predefined patterns:

A further way to customize the brush behavior is to use the function QPainter::setBrushOrigin().

Colors

Colors play a role both when stroking curves and when filling shapes. In Qt, colors are represented by the class QColor. Qt does not support advanced graphics features like ICC color profiles and color correction. Colors are usually constructed by specifying their red, green and blue components, as the RGB model is the way pixels are composed on a monitor. It is also possible to use hue, saturation and value. This HSV representation is what you use in the Gtk color dialog, e.g. in GIMP. There, the hue corresponds to the angle on the color wheel, while the saturation corresponds to the distance from the center of the circle. The value can be chosen with a separate slider.

Other settings

Normally, when you paint on a paint device, the pixels you draw replace those that were there previously. This means, if you paint a certain region with a red color and paint the same region with a blue color afterwards, only the blue color will be visible. Qt's imaging model does not support transparency, i.e. a way to blend the painted foreground with the background.
However, there is a simple way to combine background and foreground with boolean operators. The method QPainter::setRasterOp() sets the used operator, which comes from the enum RasterOp. The default is CopyROP, which ignores the background. Another popular choice is XorROP. If you paint a black line with this operator on a colored image, then the covered area will be inverted. This effect is used, for example, to create the rubber-band selections known in image manipulation programs as "marching ants".

Drawing graphics primitives

In the following we list the elementary graphics elements supported by QPainter. Most of them exist in several overloaded versions which take a different number of arguments. For example, methods that deal with rectangles usually take either a QRect as an argument or a set of four integers.

Drawing a single point - drawPoint().

Drawing lines - drawLine(), drawLineSegments() and drawPolyLine().

Drawing and filling rectangles - drawRect(), drawRoundRect(), fillRect() and eraseRect().

Drawing and filling circles, ellipses and parts of them - drawEllipse(), drawArc(), drawPie() and drawChord().

Drawing and filling general polygons - drawPolygon().

Drawing bezier curves - drawQuadBezier() [drawCubicBezier() in Qt 3.0].

Drawing pixmaps and images

Qt provides two very different classes to represent images. QPixmap directly corresponds to the pixmap objects in X11. Pixmaps are server-side objects and may - on a modern graphics card - even be stored directly in the card's memory. This makes it very efficient to transfer pixmaps to the screen. Pixmaps also act as an off-screen equivalent of widgets - the QPixmap class is a subclass of QPaintDevice, so you can draw on it with a QPainter. Elementary drawing operations are usually accelerated by modern graphics hardware. Therefore, a common usage pattern is to use pixmaps for double buffering.
This means, instead of painting directly on a widget, you paint on a temporary pixmap object and use the bitBlt() function to transfer the pixmap to the widget. For complex repaints, this helps to avoid flicker.

In contrast, QImage objects live on the client side. Their emphasis is on providing direct access to the pixels of the image. This makes them useful for image manipulation, and for things like loading and saving to disk (QPixmap's load() method uses QImage as an intermediate step). On the other hand, painting an image on a widget is a relatively expensive operation, as it implies a transfer to the X server, which can take some time, especially for large images and for remote servers. Depending on the color depth, the conversion from QImage to QPixmap may also require dithering.

Drawing text

Text can be drawn with one of the overloaded variants of the method QPainter::drawText(). These draw a QString either at a given point or in a given rectangle, using the font set by QPainter::setFont(). There is also a parameter which takes an ORed combination of some flags from the enums Qt::AlignmentFlags and Qt::TextFlags. Beginning with version 3.0, Qt takes care of the complete text layout even for languages written from right to left.

A more advanced way to display marked-up text is the QSimpleRichText class. Objects of this class can be constructed with a piece of text using a subset of the HTML tags, which is quite rich and provides even tables. The text style can be customized by using a QStyleSheet (the documentation of the tags can also be found here). Once the rich text object has been constructed, it can be rendered on a widget or another paint device with the QSimpleRichText::draw() method.

Initial Author: Bernd Gehrmann
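The matrix conventions from the Transformations section above can be exercised without Qt at all. The sketch below is plain Python (the helper names mirror QWMatrix methods but are otherwise made up); it uses the row-vector convention of the formulas above and shows that the result of a concatenation depends on the multiplication order:

```python
import math

def translate(dx, dy):
    # Same layout as the translation matrix above (row-vector convention).
    return [[1, 0, 0], [0, 1, 0], [dx, dy, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def rotate(deg):
    a = math.radians(deg)  # QWMatrix::rotate() takes degrees, hence the conversion
    return [[math.cos(a), math.sin(a), 0],
            [-math.sin(a), math.cos(a), 0],
            [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, x, y):
    # (x' y' 1) = (x y 1) * M
    p = [x, y, 1]
    q = [sum(p[k] * M[k][j] for k in range(3)) for j in range(3)]
    return q[0], q[1]

# Matrix multiplication is not commutative: translating then scaling
# is not the same as scaling then translating.
M1 = matmul(translate(10, 0), scale(2, 2))  # translate first, then scale
M2 = matmul(scale(2, 2), translate(10, 0))  # scale first, then translate
print(apply(M1, 1, 1))  # (22, 2)
print(apply(M2, 1, 1))  # (12, 2)
```

Under the row-vector convention, composing matrices left to right matches the order in which the transformations are applied.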
Matthew Lake

Title: The black hole - uncertainty principle correspondence from extended de Broglie relations

Abstract: The Compton wavelength, which gives the minimum radius within which the mass of a particle may be localized due to quantum mechanical effects, and the Schwarzschild radius, which gives the maximum region within which the mass of a black hole may be localized due to classical gravity, become coincident for objects with rest mass equal to the Planck mass, $m_P$. On a $(\log(r),\log(m))$ plot, the two lines intersect near the Planck point $(l_P,m_P)$, at which quantum gravity effects become significant. Since canonical (non-gravitational) quantum mechanics is based on the concept of wave-particle duality, encapsulated mathematically in the de Broglie relations, these relations should break down near $(l_P,m_P)$, and it is unclear what physical interpretation can be given to quantum particles with $E \gg m_P c^2$, since these correspond to wave modes with wavelengths $\lambda \ll l_P$ or time periods $T \ll t_P$ in the standard theory. We therefore propose a modification of the standard de Broglie relations, which gives rise to a modified evolution equation and a modified expression for the Compton wavelength, which may be extended into the region $E \gg m_P c^2$. For the proposed modifications, we recover the expression for the Schwarzschild radius for $E \gg m_P c^2$ and the usual Compton formula for $E \ll m_P c^2$. Importantly, the sign of the inequality flips at $m \sim m_P$, as required, and we interpret the additional terms in the modified de Broglie relations as representing the self-gravitation of the wave packet.
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?

@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).

@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...

Consider the following MWE to be previewed in the built-in PDF previewer in Firefox:

\documentclass[handout]{beamer}
\usepackage{pgfpages}
\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]
\begin{document}
\begin{frame}
\[\bigcup_n \sum_n\]
\[\underbrace{aaaaaa}_{bbb}\]
\end{frame}
\end{d...

@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.

@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now

@yo' that's not the issue.
with the laptop I lose access to the company network and anything I need from there during the next two months, such as the email address of payroll etc etc, needs to be 100% collected first

@yo' I'm sorry, I explain it badly in English :) I mean, if the rule was to use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.

@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.

@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work

@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.

@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can, I am sure, tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.

@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things

@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]

@JosephWright I'm just exploring things myself “for fun”. I don't mean them as serious suggestions, and as you say you already thought of everything. It's just that I'm getting to those points myself so I ask for opinions :)

@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not.
Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
Journal of Symbolic Logic J. Symbolic Logic Volume 55, Issue 3 (1990), 1022-1036. Set Theoretic Properties of Loeb Measure Abstract In this paper we ask the question: to what extent do basic set theoretic properties of Loeb measure depend on the nonstandard universe and on properties of the model of set theory in which it lies? We show that, assuming Martin's axiom and $\kappa$-saturation, the smallest cover by Loeb measure zero sets must have cardinality less than $\kappa$. In contrast to this we show that the additivity of Loeb measure cannot be greater than $\omega_1$. Define $\operatorname{cof}(H)$ as the smallest cardinality of a family of Loeb measure zero sets which cover every other Loeb measure zero set. We show that $\operatorname{card}(\lfloor\log_2(H)\rfloor) \leq \operatorname{cof}(H) \leq \operatorname{card}(2^H)$, where card is the external cardinality. We answer a question of Paris and Mills concerning cuts in nonstandard models of number theory. We also present a pair of nonstandard universes $M \preccurlyeq N$ and hyperfinite integer $H \in M$ such that $H$ is not enlarged by $N, 2^H$ contains new elements, but every new subset of $H$ has Loeb measure zero. We show that it is consistent that there exists a Sierpinski set in the reals but no Loeb-Sierpinski set in any nonstandard universe. We also show that it is consistent with the failure of the continuum hypothesis that Loeb-Sierpinski sets can exist in some nonstandard universes and even in an ultrapower of a standard universe. Article information Source J. Symbolic Logic, Volume 55, Issue 3 (1990), 1022-1036. 
Dates First available in Project Euclid: 6 July 2007 Permanent link to this document https://projecteuclid.org/euclid.jsl/1183743403 Mathematical Reviews number (MathSciNet) MR1071312 Zentralblatt MATH identifier 0721.03050 JSTOR links.jstor.org Subjects Primary: 03H15: Nonstandard models of arithmetic [See also 11U10, 12L15, 13L05] Secondary: 04A15 28A05: Classes of sets (Borel fields, $\sigma$-rings, etc.), measurable sets, Suslin sets, analytic sets [See also 03E15, 26A21, 54H05] Citation Miller, Arnold W. Set Theoretic Properties of Loeb Measure. J. Symbolic Logic 55 (1990), no. 3, 1022--1036. https://projecteuclid.org/euclid.jsl/1183743403
(Wikipedia page for background info: https://en.wikipedia.org/wiki/Principle_of_transformation_groups )

I'm reading Edwin Jaynes's paper, "Prior Probabilities", and on page 17 he describes the principle of transformation groups, wherein he provides a method to derive objective prior distributions for different parameters of a given probability density function. Trouble is, I don't get how it works.

Jaynes seems to be trying to find a change of variables for a pdf such that the new pdf, multiplied by the Jacobian determinant of the new parameterization, results in the exact same pdf as before the transformation. Then, he tries to find a prior distribution for a given set of parameters such that the prior distribution, when run through the same change of variables and multiplied by the same Jacobian determinant, remains the same. His notation is a little vague, so here's my interpretation of one of his examples.

Let's say you have a reparameterization of $\frac{1}{\sigma}f(\frac{x-\mu}{\sigma})dx$, denoted $\Phi$, where $\Phi(x,\mu,\sigma)=(a(x+b),a(\mu+b),a\sigma)$, where $a$ and $b$ are arbitrary values. Plugging these new variables into $\frac{1}{\sigma}f(\frac{x-\mu}{\sigma})dx$, we have $\frac{1}{a\sigma}f(\frac{a(x+b)-a(\mu+b)}{a\sigma})dx'\mathbf{J_{\Phi}}$, which simplifies to:

$\frac{1}{a\sigma}f(\frac{x-\mu}{\sigma})dx' \left| \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & a & 0 \\ 0 & 0 & a \end{array} \right) \right| = \frac{a^{2}}{\sigma}f(\frac{x-\mu}{\sigma})dx'$

... which isn't the same as the original, though Jaynes says that it is. He then proceeds to take the Jacobian and apply it to a prior, which I also don't understand how he's able to do. Aren't Jacobian determinants supposed to be multidimensional versions of the coefficient of $du$ in $u$-substitution? Why is Jaynes able to multiply one seemingly unrelated function by the Jacobian determinant of another?
I'm positive I've done something wrong with my calculations, or made a fumble in my reasoning somewhere. If anyone could run me through how this method works, I'd be very grateful.
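Just to rule out an arithmetic slip on my side: here is a quick check (arbitrary rational test values for $a$ and $\sigma$, exact arithmetic throughout) that $\frac{1}{a\sigma}\,a^{3}=\frac{a^{2}}{\sigma}$, i.e. that the simplification above is at least algebraically consistent:

```python
from fractions import Fraction

# Arbitrary test values for a and sigma; exact rational arithmetic throughout.
for a in (Fraction(2), Fraction(3), Fraction(7, 2)):
    for sigma in (Fraction(1), Fraction(5), Fraction(3, 4)):
        jacobian_det = a ** 3                  # det of diag(a, a, a)
        lhs = 1 / (a * sigma) * jacobian_det   # (1/(a sigma)) * |J_Phi|
        rhs = a ** 2 / sigma
        assert lhs == rhs
print("simplification is algebraically consistent")
```

So if there is a mistake, it is conceptual, not in the algebra.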
According to this site the general form of the Gravitational Potential Energy of mass $m$ is $$U=-\frac{GMm}{r}\tag{1}$$ where $G$ is the gravitational constant, $M$ is the mass of the attracting body, and $r$ is the distance between their centres. However, I am learning Astrophysics at the moment and in the derivation of the Virial Theorem I came across this alternate definition of the Gravitational Potential Energy $\Omega$: $$\Omega=-\int_{m=0}^M \frac{Gm}{r}\mathrm{d}m\tag{2}$$ So my question is as follows: If I go ahead and integrate $(2)$ I find that $$\Omega=-\left[\frac{Gm^2}{2r}\right]_{m=0}^{m=M}=-\frac{GM^2}{2r}\ne U$$ But unless I'm mistaken, $\Omega$ must be equal to $U$. Why are equations $(1)$ and $(2)$ apparently inconsistent, giving different results? I tried searching the internet for an explanation but all sites I found give the same result, like this one on page 6. Therefore, could someone please explain to me why I am finding that $U\ne\Omega\,$?
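(A numerical aside: if $r$ really is held constant inside the integral, then the $-GM^{2}/2r$ result above is forced. The midpoint-rule sketch below, in plain Python with arbitrary placeholder values for $G$, $M$ and $r$ rather than any particular body, reproduces it:)

```python
# Midpoint-rule evaluation of  Omega = -∫_0^M (G m / r) dm  with r constant.
# G, M and r are arbitrary placeholder values, not a real body.
G, M, r = 6.674e-11, 5.0e24, 6.4e6

N = 100_000
dm = M / N
omega = -sum(G * ((i + 0.5) * dm) / r * dm for i in range(N))

closed_form = -G * M**2 / (2 * r)
rel_err = abs(omega - closed_form) / abs(closed_form)
print(rel_err < 1e-6)  # True: the quadrature agrees with -G M^2 / (2 r)
```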
2018-09-02 17:21
Measurement of $P_T$-weighted Sivers asymmetries in leptoproduction of hadrons / COMPASS Collaboration
The transverse spin asymmetries measured in semi-inclusive leptoproduction of hadrons, when weighted with the hadron transverse momentum $P_T$, allow for the extraction of important transverse-momentum-dependent distribution functions. In particular, the weighted Sivers asymmetries provide direct information on the Sivers function, which is a leading-twist distribution that arises from a correlation between the transverse momentum of an unpolarised quark in a transversely polarised nucleon and the spin of the nucleon. [...]
arXiv:1809.02936; CERN-EP-2018-242.- Geneva : CERN, 2019-03 - 20 p. - Published in : Nucl. Phys. B 940 (2019) 34-53

2018-02-14 11:43
Light isovector resonances in $\pi^- p \to \pi^-\pi^-\pi^+ p$ at 190 GeV/${\it c}$ / COMPASS Collaboration
We have performed the most comprehensive resonance-model fit of $\pi^-\pi^-\pi^+$ states using the results of our previously published partial-wave analysis (PWA) of a large data set of diffractive-dissociation events from the reaction $\pi^- + p \to \pi^-\pi^-\pi^+ + p_\text{recoil}$ with a 190 GeV/$c$ pion beam. The PWA results, which were obtained in 100 bins of three-pion mass, $0.5 < m_{3\pi} < 2.5$ GeV/$c^2$, and simultaneously in 11 bins of the reduced four-momentum transfer squared, $0.1 < t' < 1.0$ $($GeV$/c)^2$, are subjected to a resonance-model fit using Breit-Wigner amplitudes to simultaneously describe a subset of 14 selected waves using 11 isovector light-meson states with $J^{PC} = 0^{-+}$, $1^{++}$, $2^{++}$, $2^{-+}$, $4^{++}$, and spin-exotic $1^{-+}$ quantum numbers. [...]
arXiv:1802.05913; CERN-EP-2018-021.- Geneva : CERN, 2018-11-02 - 72 p. - Published in : Phys. Rev.
D 98 (2018) 092003

2018-02-07 15:23
Transverse Extension of Partons in the Proton probed by Deeply Virtual Compton Scattering / Akhunzyanov, R. (Dubna, JINR) ; Alexeev, M.G. (Turin U.) ; Alexeev, G.D. (Dubna, JINR) ; Amoroso, A. (Turin U. ; INFN, Turin) ; Andrieux, V. (Illinois U., Urbana ; IRFU, Saclay) ; Anfimov, N.V. (Dubna, JINR) ; Anosov, V. (Dubna, JINR) ; Antoshkin, A. (Dubna, JINR) ; Augsten, K. (Dubna, JINR ; CTU, Prague) ; Augustyniak, W. (NCBJ, Swierk) et al.
We report on the first measurement of exclusive single-photon muoproduction on the proton by COMPASS using 160 GeV/$c$ polarized $\mu^+$ and $\mu^-$ beams of the CERN SPS impinging on a liquid hydrogen target. [...]
CERN-EP-2018-016 ; arXiv:1802.02739. - 2018. - 13 p.

2017-09-19 08:11
Transverse-momentum-dependent Multiplicities of Charged Hadrons in Muon-Deuteron Deep Inelastic Scattering / COMPASS Collaboration
A semi-inclusive measurement of charged hadron multiplicities in deep inelastic muon scattering off an isoscalar target was performed using data collected by the COMPASS Collaboration at CERN. The following kinematic domain is covered by the data: photon virtuality $Q^{2}>1$ (GeV/$c$)$^2$, invariant mass of the hadronic system $W > 5$ GeV/$c^2$, Bjorken scaling variable in the range $0.003 < x < 0.4$, fraction of the virtual photon energy carried by the hadron in the range $0.2 < z < 0.8$, square of the hadron transverse momentum with respect to the virtual photon direction in the range 0.02 (GeV/$c)^2 < P_{\rm{hT}}^{2} < 3$ (GeV/$c$)$^2$. [...]
CERN-EP-2017-253; arXiv:1709.07374.- Geneva : CERN, 2018-02-08 - 23 p. - Published in : Phys. Rev.
D 97 (2018) 032006

2017-07-08 20:47
New analysis of $\eta\pi$ tensor resonances measured at the COMPASS experiment / JPAC Collaboration
We present a new amplitude analysis of the $\eta\pi$ $D$-wave in $\pi^- p\to \eta\pi^- p$ measured by COMPASS. Employing an analytical model based on the principles of the relativistic $S$-matrix, we find two resonances that can be identified with the $a_2(1320)$ and the excited $a_2^\prime(1700)$, and perform a comprehensive analysis of their pole positions. [...]
CERN-EP-2017-169; JLAB-THY-17-2468; arXiv:1707.02848.- Geneva : CERN, 2018-04-10 - 9 p. - Published in : Phys. Lett. B 779 (2018) 464-472

2017-01-05 16:00
First measurement of the Sivers asymmetry for gluons from SIDIS data / COMPASS Collaboration
The Sivers function describes the correlation between the transverse spin of a nucleon and the transverse motion of its partons. It was extracted from measurements of the azimuthal asymmetry of hadrons produced in semi-inclusive deep inelastic scattering of leptons off transversely polarised nucleon targets, and it turned out to be non-zero for quarks. [...]
CERN-EP-2017-003; arXiv:1701.02453.- Geneva : CERN, 2017-09-10 - 11 p. - Published in : Phys. Lett. B 772 (2017) 854-864
First: Lemma 1 says that $\|\mathbf{x}\| \leq \alpha q \sqrt{n}$ with overwhelming probability if $\mathbf{x}$ is drawn from the discrete Gaussian, since $\frac{1}{2^n}$ is negligible.

Next, from properties of absolute values, $|a + b| \leq |a| + |b|$. So, leaving the $2$ out for now and writing $\mathbf{s_A}^T\mathbf{e_B}$ as $\mathbf{s_A}\cdot\mathbf{e_B}$:

$|\mathbf{s_A}\cdot\mathbf{e_B} + e'_A + \mathbf{s_B}\cdot\mathbf{e_A} + e'_B| \leq |\mathbf{s_A}\cdot\mathbf{e_B}| + |e'_A| + |\mathbf{s_B}\cdot\mathbf{e_A}| + |e'_B|$.

Now, for Euclidean norms, Cauchy-Schwarz says $|\mathbf{a}\cdot\mathbf{b}| \leq \|\mathbf{a}\|\cdot\|\mathbf{b}\|$, so we have, for example, $|\mathbf{s_A}\cdot\mathbf{e_B}| \leq \|\mathbf{s_A}\| \cdot \|\mathbf{e_B}\| \leq (\alpha q \sqrt{n})\cdot (\alpha q \sqrt{n})$, the last inequality coming from Lemma 1.

Let's tackle $e'_A$ and $e'_B$. I could sample a vector $\mathbf{e'}$ from $\mathcal{D}_{\mathbb{Z}^n,\alpha q}$ and Lemma 1 would apply to it; if $e'_A$ is a component of $\mathbf{e'}$, it is certainly no larger in absolute value than $\|\mathbf{e'}\|$: $|e'_A| \leq \|\mathbf{e'}\| \leq \alpha q \sqrt{n} \leq (\alpha q \sqrt{n})\cdot (\alpha q \sqrt{n})$. Same for $e'_B$.

Thus I have four terms, all $\leq (\alpha q \sqrt{n})\cdot (\alpha q \sqrt{n})$. Multiply back in that $2$ and you have the result.

Edit: They do a similar procedure later on in Section 4, and explicitly write out norms, for future reference.
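A numerical illustration of the chain of bounds above (plain Python; a toy clipped Gaussian stands in for $\mathcal{D}_{\mathbb{Z}^n,\alpha q}$ and the value used for $\alpha q \sqrt{n}$ is an arbitrary placeholder, so this only exercises the triangle-inequality and Cauchy-Schwarz steps, not the actual distribution):

```python
import math
import random

random.seed(0)
n = 16
bound = 4.0  # arbitrary stand-in for alpha * q * sqrt(n)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def rand_vec():
    # Toy stand-in for a discrete-Gaussian sample, rejected until ||x|| <= bound
    # (mimicking the "with overwhelming probability" clause of Lemma 1).
    while True:
        v = [random.gauss(0, 1) for _ in range(n)]
        if norm(v) <= bound:
            return v

sA, eB, sB, eA = (rand_vec() for _ in range(4))
eA_prime, eB_prime = random.gauss(0, 1), random.gauss(0, 1)

lhs = abs(dot(sA, eB) + eA_prime + dot(sB, eA) + eB_prime)
# Triangle inequality, then Cauchy-Schwarz on each dot product:
rhs = norm(sA) * norm(eB) + abs(eA_prime) + norm(sB) * norm(eA) + abs(eB_prime)
print(lhs <= rhs)  # True
```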
Let $X$ be an $n$-dimensional Alexandrov space with curvature $\geq k$. Does $X$ satisfy a reverse doubling condition? That is, does there exist a constant $C>1$, such that for any $x\in X$, $0<r<\mathrm{diam}(X)/2$, $\mathrm{vol}(B_x(2r))\geq C\cdot\mathrm{vol}(B_x(r))$?

The answer is no. In fact, it is not even true for Riemannian manifolds. Consider a flat Riemannian manifold $(M,g)$ of dimension $m$ with diameter $d$ realized by the points $x$ and $y$. Let the volume be $Cd^m$. Let $M_n$ denote the Riemannian manifold $(M,n^2 g)$, where $n$ is a positive integer. Attach a cylinder of radius $\delta$, where $\delta \ll d$, at the point $y \in M_n$. (More formally, remove $y$ from $M_n$, and change the metric so that it is a product outside a compact subset.) Do this so that the curvature remains $\geq -1$. The "bending" required to attach the cylinder can be the same for every $n$, because the metric is flat.

Now in $M_n$ we have $\mathrm{vol}(B_x(nd)) = C(nd)^m + \epsilon_1$ and $\mathrm{vol}(B_x(2nd)) = C(nd)^m + C'nd\delta^{m-1} + \epsilon_2$, where $\epsilon_1$ and $\epsilon_2$ are determined by the bending, and so are independent of $n$, and $C'=C'(m)$. Then the ratio of the volumes tends to 1 as $n \to \infty$.

You can attach copies of $M_n$ for every $n$ to the Euclidean plane to get a Riemannian manifold of curvature $\geq -1$ which does not satisfy the condition.
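The last limit can be made concrete with a toy computation (all constants arbitrary; eps1 and eps2 stand for the $n$-independent bending contributions, Cp for $C'$):

```python
# vol(B_x(nd))  = C (nd)^m + eps1
# vol(B_x(2nd)) = C (nd)^m + C' n d delta^(m-1) + eps2
C, Cp, d, delta, m = 1.0, 1.0, 1.0, 1e-3, 3
eps1, eps2 = 0.01, 0.02  # independent of n

def ratio(n):
    vol_small = C * (n * d) ** m + eps1
    vol_big = C * (n * d) ** m + Cp * n * d * delta ** (m - 1) + eps2
    return vol_big / vol_small

print(ratio(10), ratio(10_000))  # the ratio tends to 1 as n grows
```

The extra volume grows linearly in $n$ while the main term grows like $n^m$, which is why the ratio collapses to 1.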
I'm trying to align an equation at two points, my code is the following:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{equation}
  \begin{cases}
    \omega_W \geq \omega_{W0} \hspace{2mm} &\Leftrightarrow \hspace{2mm} S_x \rightarrow \infty & \longrightarrow \text{Wheel is spinning} \\
    \omega_W > \omega_{W0} \hspace{2mm} &\Leftrightarrow \hspace{2mm} S_x > 0 & \longrightarrow \text{Tire driving force} (F_x >0) \\
    \omega_W = \omega_{W0} \hspace{2mm} &\Leftrightarrow \hspace{2mm} S_x =0 & \longrightarrow \text{Free-rolling tire} (F_x=0) \\
    \omega_W \leq \omega_{W0} \hspace{2mm} &\Leftrightarrow \hspace{2mm} S_x < 0 & \longrightarrow \text{Tire braking force} (F_x <0) \\
    \omega_W =0 \hspace{2mm} &\Leftrightarrow \hspace{2mm} S_x= -1 & \longrightarrow \text{Wheel lock-up} \\
  \end{cases}
\end{equation}
\end{document}

The thing is that I want an alignment at "\Leftrightarrow" and at "\longrightarrow". However, the output I'm getting is the following (and of course the error "extra alignment tab has been changed to \cr"):

Any idea why it's not working, or how I can fix it? I wrote all the \\ and I also added the & at the points where I want to get an alignment.
A fairly random example of a great pedagogical technique for mathematics - examples first.

For example, suppose I was trying to explain to you what a ring is in algebra. I could start by telling you that a ring is a set $R$ together with two binary operations $+$ and $\cdot$ such that:

$r+s=s+r$ and $(r+s)+t=r+(s+t)$ for all $r,s,t\in R$

There exists $0\in R$ such that $r+0=r$ for all $r\in R$

etc. ...

However, you are much more likely to understand what a ring is if I give you some concrete examples. For example, I could say, 'Rings are like $\mathbb{Z}$. If we consider the integers, then we have two important operations $+$ and $\times$, which have the following interesting properties...' Then I could point out that other interesting sets like the integers $\mod n$, which also have two operations which behave in a similar way. Only then would I give you the formal axiomatic definition of a ring. The next step would be showing that many interesting things that are true for the integers are true for all rings, and that many other interesting things about the integers (like unique factorization) are true for rings with certain properties. Then you would gain some understanding of what rings are and why they are worth introducing.

Similarly, I don't think anyone would find the following definition at all meaningful if they hadn't studied topology before:

A topological space is a set $X$ together with a collection $\tau$ of subsets of $X$ (called the open sets) such that:

$\varnothing,X\in\tau$.

$\tau$ is closed under taking unions: for all (possibly infinite) collections of sets $(U_\alpha)_{\alpha\in A}$ with $U_\alpha\in\tau$, the union $\bigcup_{\alpha\in A}U_\alpha$ is in $\tau$.

$\tau$ is closed under taking finite intersections: for all $A, B\in\tau$, $A\cap B\in\tau$ (and therefore all intersections of finitely many members of $\tau$ are contained in $\tau$).
Much better is to start off by introducing the more concrete idea of metric spaces (by first showing that a lot of concepts in real analysis, like convergence and continuity, can be expressed entirely in terms of the distance between points, and showing that more abstract ideas of distance can be useful) and then showing that the definition of a continuous function between metric spaces can be expressed entirely in terms of the open sets in that space, and then introducing topology from there. Having given you some examples, I'll now state the general principle: if you want to explain some mathematics to someone, always start by telling them some motivating examples. This serves two purposes. Firstly, it shows them why they should care about the idea you're introducing, since all mathematical concepts were originally created because they were interesting in some way. Secondly, it helps them to understand the ideas that you're telling them about, since they have some concrete examples to link them to in their mind. Of course, that's just one useful pedagogical technique, and it's not necessarily useful in all contexts. But it is often very useful. For a much better exposition of this idea, see this blog post by Timothy Gowers.
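In that spirit, "examples first" can even be pushed into executable form. Here is a brute-force check (plain Python; $n = 6$ chosen arbitrarily) that $\mathbb{Z}/6\mathbb{Z}$ satisfies the ring axioms sketched at the top of this post:

```python
n = 6
R = range(n)

def add(a, b):
    return (a + b) % n

def mul(a, b):
    return (a * b) % n

axioms_hold = all(
    add(a, b) == add(b, a)                              # r + s = s + r
    and add(add(a, b), c) == add(a, add(b, c))          # (r+s)+t = r+(s+t)
    and add(a, 0) == a                                  # r + 0 = r
    and any(add(a, x) == 0 for x in R)                  # additive inverses exist
    and mul(mul(a, b), c) == mul(a, mul(b, c))          # (r.s).t = r.(s.t)
    and mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributivity
    for a in R for b in R for c in R
)
print(axioms_hold)  # True
```

A concrete instance like this is exactly the kind of object a learner can poke at before meeting the abstract definition.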
On Aug 3, 2009, at 8:59 AM, marlene marchena wrote:
> Dear R users,
>
> I'm running an SVR in package e1071 but I was not able to calculate the
> parameters w and b of the regression. I don't know how to do that and if it
> is possible to do it with this package.
>
> Does someone have some idea? Any help would be much appreciated.

While you're not given the W vector directly, you can recover it from the support vectors, no? In LaTeX form, W looks like:

\vec{W} = \sum_{i=1}^N \alpha_i y_i \vec{x}_i

If "model" was the object returned from your SVM: the alpha_i's are the weights of each vector (model$coefs[i]) and the x_i are the support vectors (model$SV[i,]).

You can use some matrix multiplication to calculate this very quickly:

W <- t(model$coefs) %*% model$SV

By default, the x's and y's are scaled when you call svm(..), so I think, by definition, your y intercept (b) should be zero. I think the values in model$y.scale might be what you're looking for, though (not sure).

Hope that helps,
-steve

--
Steve Lianoglou
Graduate Student: Computational Systems Biology
| Memorial Sloan-Kettering Cancer Center
| Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact

I did all that you suggested and now I have some questions.

1) When I use svm.m1$coefs I have 180 values (the number of SVs) of 10 and -10. I think it is related to C=10, because when I change to C=100 I have 180 values of 100 and -100. I can't understand the meaning of that in the SVR model. Do you know what it means?

2) For svm.m1$y.scale I get two values, $`scaled:center` and $`scaled:scale`. The first one is the original value and the second one is the scale value, so my b will be scaled:center. Is that correct?

3) Which is the best form to put the data for SVM: in its original form without scaling, using scale=TRUE, or normalizing the data using (x-min)/(max-min) like for NNs, with scale=FALSE?

Thanks again,
Marlene.
Below my model (I chose the parameters of the model using tune.svm()):

> svm.m1
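The matrix product Steve suggests, `t(model$coefs) %*% model$SV`, is just the weighted sum \vec{W} = \sum_i \alpha_i y_i \vec{x}_i (in e1071, `coefs` already stores the products alpha_i * y_i). A language-neutral sketch of the same computation in Python, with made-up numbers:

```python
# coefs[i] plays the role of alpha_i * y_i; SV[i] is the i-th support vector.
# All values below are made up purely for illustration.
coefs = [1.5, -0.5, -1.0]
SV = [[1.0, 2.0], [0.0, 1.0], [2.0, 0.0]]

# w = t(coefs) %*% SV, i.e. a weighted sum of the support vectors:
w = [sum(c * sv[j] for c, sv in zip(coefs, SV)) for j in range(len(SV[0]))]
print(w)  # [-0.5, 2.5]
```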
I want to use a slanted \sum symbol denoting a meaning different from summation. How do I get a slanted \sum symbol?

If you are using pdfTeX then you can try this code:

\def\itsum{\mathop{\mathpalette\itsumA{}\phantom\sum}}
\def\itsumA#1#2{\pdfsave\pdfliteral{1 0 .2 1 0 0 cm}\rlap{$#1\sum$}\pdfrestore}
$\sum_i^5 \itsum_j^{\,7} a_{ij}$

One way to do it is to use a slanted capital sigma. By using the scalerel package, one can make that symbol (approximately) the same size as the original \sum operator:

\documentclass{article}
\usepackage{amsmath}
\usepackage{scalerel}
\DeclareMathOperator*{\itsum}{\scalerel*{\mathit{\Sigma}}{\sum}}
\begin{document}
\[\sum_i x_i\quad\itsum_i x_i\]
\end{document}
Background: I notice that there is quite a bit of similarity between the standard heat equation and the energy equation (from the Navier-Stokes equations). The heat equation is given as $$\frac{\partial\left(\rho h\right)}{\partial t} + \nabla\cdot\left(u\rho h \right) - \nabla\cdot\left(k\nabla T \right)=0$$ where $\rho,h,u,k,T$ are the density, total specific enthalpy, fluid velocity, thermal conductivity, and temperature, respectively. Note that for many applications, it suffices to write the total specific enthalpy as $h=c_pT$, where $c_p$ is the constant pressure specific heat capacity. On the other hand, the energy equation (from the Navier-Stokes system) is given as $$\frac{\partial\left(\rho E\right)}{\partial t}+\nabla\cdot\left(u\rho E\right)-\nabla\cdot(k\nabla T)+\nabla\cdot(\sigma u)=0$$ where $E,\sigma$ are the total energy and stress tensor, respectively. Here, the total energy $E$ is the sum of internal and kinetic energy, $E=e+K$, with $e=c_vT$ and $K=\frac{1}{2}|u|^2$. Since both equations are derived from the conservation of energy, it is no surprise that both equations share the following terms:

- Rate of change of enthalpy (time-derivative term)
- Advection of internal heat (divergence term)
- Heat diffusion (Laplacian term)

The energy equation has additional terms which are not found in the heat equation. These additional terms are specific to compressible fluid flow and account for:

- Rate of change in and advection of kinetic energy ($K$)
- Stress contributions to energy ($\sigma$)

In many situations, the kinetic energy and stress contributions are very small when the fluid flow is incompressible, and thus can be neglected. This results in the energy equation and heat equation being almost identical except for one subtle difference:

A subtle difference

The heat equation is formulated with the constant pressure specific heat capacity $c_p$ while the energy equation uses the constant volume heat capacity $c_v$.
Yet, I assume that they are derived from the same principle of conservation of energy. If so, I would expect the two expressions to use the same specific heat capacity. This becomes more important when dealing with different types of fluid flows. I have seen models for incompressible flow where the energy equation is simply formulated in terms of the standard heat equation (as above) with the constant pressure heat capacity $c_p$. Since the temperature does not affect incompressible fluid flow, one can think of this as a "scalar transport" which depends on the resolved fluid velocity $u$. Application: Solidification & Melting In my particular case, I want to model phase change (solidification and melting). For incompressible flow, it suffices to rewrite the total specific enthalpy as the sum of internal and latent heat ($h=c_pT+h_L$) and substitute $h$ in the equation. For compressible flows, the equation is written in terms of internal energy $c_vT$, not enthalpy. So I'm not 100% sure that it would suffice to simply add the latent heat term as if it were a part of the total internal energy. The only way to know for sure is to understand why the incompressible energy equation is written in terms of $c_p$ while the compressible energy equation is written in terms of $c_v$. My Question If the energy equation for incompressible flow is formulated in terms of the constant pressure heat capacity $c_p$, why isn't the energy equation for compressible flow also formulated in terms of the constant pressure heat capacity $c_p$? Why is the heat capacity different for incompressible and compressible flow? What accounts for the difference? Furthermore, for my specific application, how do these differences affect how to add latent heat effects for solidification and melting of a compressible fluid? Can I simply add the latent heat to the internal energy term $e$? Or do I need to completely reformulate the compressible energy equation in terms of enthalpy in order to add latent heat effects?
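One standard thermodynamic relation bears directly on this question (textbook material, not taken from the post above): specific enthalpy and internal energy differ by the flow-work term,

```latex
h = e + \frac{p}{\rho},
\qquad\text{and for an ideal gas}\qquad
h = c_p T, \quad e = c_v T, \quad c_p - c_v = R .
```

Since $E = e + K$, this gives $\rho h = \rho E - \rho K + p$: an equation evolving $\rho h$ has already absorbed the pressure part of the work that the $\rho E$ form keeps explicitly inside $\nabla\cdot(\sigma u)$. This bookkeeping difference is one way to see why the enthalpy form naturally carries $c_p$ while the internal-energy form carries $c_v$.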
Mean, mode, median and range of the data set When you get a set of data, there are some terms to describe it. The terms mean, median, and mode are different types of averages. Together with the range, they describe our data set. Mean - the basic average of our data set, calculated by adding all the data values and dividing by how many there are. Median - the middle number of the data set. To find the median, place the numbers of the data set in value order. If there is an odd number of data values, then the median is the middle number. If there is an even number of data values, then you need to add the two middle numbers and divide by two. Mode - the number that appears the most times in the data set. A data set can have several modes, unless all the numbers of the data set appear the same number of times, in which case the data set has no mode. A data set with two modes is called bimodal. A data set with more than two modes is called multimodal. Range - the difference between the highest and the lowest numbers of our given data set. Let's solve some example tasks to see how easy it is to find all of these values. Example No. 1 Find the mean, median, mode and range of the following data set: 1, 5, 18, 3, 8, 10, 18. Mean: (1 + 5 + 18 + 3 + 8 + 10 + 18)/7 = 63/7 = 9 Now we put all the numbers in order: 1, 3, 5, 8, 10, 18, 18. Because we have an odd number of data values, the median is 8. Because 18 appears twice, the mode is 18. Range: 18 - 1 = 17 Example No. 2 A TV watching time survey was conducted in the class. Here we have a frequency table of TV watching time in hours per day: $$\begin{array}{|c|c|c|c|c|c|c|} \hline \text{TV time} & 0 & 1 & 2 & 3 & 4 & 5 \\\hline \text{Frequency} & 2 & 5 & 6 & 5 & 4 & 2 \\\hline \end{array}$$ Using this frequency table, complete the tasks below: a) find the number of children in this class; b) find the range of this data set; c) find the mode of this data set; d) find the median of this data set; e) find the mean of this data set.
a) 2+5+6+5+4+2=24 b) Range = 5 - 0 = 5 c) Mode = 2 d) Median = 2 e) $$\bar{x}=\frac{0 \cdot 2+1 \cdot 5+2 \cdot 6+3 \cdot 5+4 \cdot 4+5 \cdot 2}{24}=\frac{58}{24}\approx 2.4$$ Note! For a better understanding of this task, we can put all the data table numbers in order: 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5. 2019-03-03
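Both examples can be checked with Python's statistics module (a quick sketch; `data_range` is my own helper name, since the standard library has no range statistic):

```python
import statistics

def data_range(data):
    return max(data) - min(data)

# Example No. 1: mean 9, median 8, mode 18, range 17
data1 = [1, 5, 18, 3, 8, 10, 18]

# Example No. 2: expand the frequency table into a flat data set of 24 values
freq = {0: 2, 1: 5, 2: 6, 3: 5, 4: 4, 5: 2}
data2 = [value for value, count in freq.items() for _ in range(count)]
```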
Standard Error of the Difference Between the Means of Two Samples

The standard deviation of the sampling distribution of \(\bar{x}_1 - \bar{x}_2\) is called the standard error of the difference between independent means. Think of the two standard errors as the legs of a right triangle: by the Pythagorean theorem, the standard error of the difference is the square root of the sum of the squares of the two SEMs. It is therefore larger than either SEM alone, but less than their sum.

For example, if the sampling distribution of the difference between two sample means is centered at 165 - 175 = -10 with a standard error of 4, then a sample difference of 0 lies 10/4 = 2.5 standard errors above the mean of -10.

When the population standard deviations are unknown and the samples are small, a t-interval (rather than a z-interval) is used for the difference between means from independent samples. Two requirements must be checked:

Requirement R1: Both samples follow a normal-shaped histogram (or normality plots suggest that both populations may come from normal distributions).
Requirement R2: The population SDs are equal. As a rough check, look at the ratio of the two sample standard deviations; the pooled procedure below should not be used when this assumption of equal variances is violated, especially when either \(n_1\) or \(n_2\) (or both) is small.

In most cases, \(\sigma_1\) and \(\sigma_2\) are unknown, so it seems natural to estimate \(\sigma_1\) by \(s_1\) and \(\sigma_2\) by \(s_2\). When we are reasonably sure that the two populations have equal variance, the two sample standard deviations are pooled:

\[ s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} \]

with \(n_1 + n_2 - 2\) degrees of freedom. (With unequal sample sizes, the larger sample gets proportionately more weight.) The point estimate for \(\mu_1 - \mu_2\) is \(\bar{x}_1 - \bar{x}_2\), and the confidence interval is

\[ {\bar{x}}_1-{\bar{x}}_2\pm t_{\alpha/2}\cdot s_p\cdot \sqrt{\frac{1}{n_1}+\frac{1}{n_2}} \]

A typical example is an experiment designed to compare the mean times it takes to complete a task on a new machine and an old machine; let \(\mu_1\) and \(\mu_2\) denote the mean for the new and the old machine. Random samples of size \(n_1 = n_2 = 10\) give \(\bar{y}_1 = 42.14\), \(s_1 = 0.683\), and \(\bar{y}_2 = 43.23\).

Step 1. Write down the significance level: \(\alpha = 0.01\), and with \(n_1 + n_2 - 2 = 18\) degrees of freedom, \(t_{\alpha/2} = 2.878\).

Step 2. Pool the standard deviations and compute the interval:
\[ s_p = \sqrt{\frac{9\cdot (0.683)^2 + 9\cdot s_2^2}{18}} = 0.717 \]
\[ {\bar{x}}_1-{\bar{x}}_2\pm t_{\alpha/2}\cdot s_p\cdot \sqrt{\frac{1}{n_1}+\frac{1}{n_2}} = (42.14-43.23)\pm 2.878 \cdot 0.717 \cdot \sqrt{\frac{1}{10}+\frac{1}{10}} \]
The 99% confidence interval is (-2.01, -0.17).

To run the corresponding hypothesis test (here \(H_a: \mu_1 - \mu_2 < 0\)), compute the t-statistic in the same way as the z-statistic, except that \(s_1\) and \(s_2\) are replaced by \(s_p\) and z is replaced by t; then determine the p-value associated with the test statistic, and if \(p > \alpha\), fail to reject the null hypothesis. The interval above does not contain zero, which is consistent with rejecting the null hypothesis of no difference between means; a 95% confidence interval that did contain zero would be consistent with a p-value greater than 0.05. In Minitab, the same analysis is available as the 2-Sample t procedure with the appropriate alternative hypothesis.

Note that (-2.01, -0.17) is not the true difference between the population means, but simply an estimate of it: 42.14 and 43.23 are not the true averages, they are computed from random samples.
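The worked interval above can be reproduced in a few lines. A sketch (the second sample SD is not given in the text, so s2 = 0.750 below is a hypothetical value chosen to be consistent with the reported pooled SD of 0.717; t_crit = 2.878 is the 99% critical value for 18 df quoted above):

```python
import math

def pooled_t_interval(m1, s1, n1, m2, s2, n2, t_crit):
    """Pooled two-sample t confidence interval for mu1 - mu2."""
    df = n1 + n2 - 2
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    half = t_crit * sp * math.sqrt(1 / n1 + 1 / n2)
    center = m1 - m2
    return sp, (center - half, center + half)

# s2 = 0.750 is an assumed value, not stated in the text.
sp, (lo, hi) = pooled_t_interval(42.14, 0.683, 10, 43.23, 0.750, 10, 2.878)
```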
$$\begin{array}{|c|c|c|c|c|c|c|c|} \hline
F & \hat f_{\mathrm{r}1} & \text{theor.} & \hat f_{\mathrm{r}2} & \text{theor.} & \hat f_{\mathrm{r}3} & \text{theor.} & \Delta F \\\hline
5.5122 \times 10^{3} & 512.60 & 512 & 5512.65 & 5512 & 5512.90 & 5512 & 1.57 \times 10^{-1} \\\hline
 & 512.58 & & 5512.57 & & 5512.58 & & 1.66 \times 10^{-1} \\\hline
 & 515.04 & & 5516.01 & & 5516.97 & & 3.41 \times 10^{0} \\\hline
3.1157 \times 10^{5} & 1569.27 & 1570 & 3569.69 & 3570 & 5569.80 & 5570 & 2.03 \times 10^{-2} \\\hline
 & 1568.72 & & 3568.70 & & 5568.95 & & 4.99 \times 10^{-2} \\\hline
 & 1574.25 & & 3566.42 & & 5574.39 & & 8.42 \times 10^{-1} \\\hline
\end{array}$$

Table 1 Measurement results (Hz) of the Chinese remainder theorem scheme based on spectrum correction

Citation: Chenghu CAO, Yongbo ZHAO, Zhiling SUO, Xiaojiao PANG, Baoqing XU. Doppler Frequency Estimation Method Based on Chinese Remainder Theorem with Spectrum Correction[J]. Journal of Electronics and Information Technology, doi: 10.11999/JEIT181102

Doppler Frequency Estimation Method Based on Chinese Remainder Theorem with Spectrum Correction

Abstract: The Pulse Doppler (PD) radar is widely applied because it has the obvious advantages of detecting the Doppler frequency of the target and suppressing the clutter effectively. However, it is difficult for the PD radar to detect the target due to velocity ambiguity. Combining the characteristics of the stagger-period model of the PD radar, a Doppler frequency estimation method based on the all-phase DFT Closed-Form Robust Chinese Remainder Theorem (CFRCRT) with spectrum correction is proposed in this paper. Both theoretical analysis and simulation experiments demonstrate that the proposed method can satisfy the engineering demands in measurement accuracy and real-time performance.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Orthonormal Bases of Vector Spaces Examples 1 Recall from the Orthonormal Bases of Vector Spaces page that if $V$ is a finite-dimensional inner product space, then a basis $\{ e_1, e_2, ..., e_n \}$ is said to be an orthonormal basis of $V$ if $<e_i, e_j> = 0$ if $i \neq j$ for $i, j = 1, 2, ..., n$ and $<e_i, e_i> = 1$ for $i = 1, 2, ..., n$. We will now look at some example problems regarding orthonormal bases of vector spaces. Example 1 Prove that the basis $\{ (1, 0), (0, 1) \}$ of the vector space $\mathbb{R}^2$ is orthonormal with the inner product as the dot product. Let $e_1 = (1, 0)$ and $e_2 = (0, 1)$. Then we have that: $$<e_1, e_1> = (1)(1) + (0)(0) = 1, \quad <e_2, e_2> = (0)(0) + (1)(1) = 1, \quad <e_1, e_2> = (1)(0) + (0)(1) = 0$$ Therefore $<e_i, e_j> = 0$ if $i \neq j$ for $i,j = 1, 2$ and $<e_i, e_i> = 1$ for $i = 1, 2$. Thus indeed $\{ (1, 0), (0, 1) \}$ is an orthonormal basis of $\mathbb{R}^2$. Example 2 Consider the vector space $\wp_2 (\mathbb{R})$ of polynomials of degree less than or equal to $2$ with real coefficients. Determine whether or not the basis $\{ 1, x, x^2 \}$ of $\wp_2 (\mathbb{R})$ is orthonormal with the inner product defined as $<p(x), q(x)> = \int_0^1 p(x)q(x) \: dx$ for all $p(x), q(x) \in \wp_2 (\mathbb{R})$. Let $p_1 = 1$, $p_2 = x$, and $p_3 = x^2$. Then we have that: $$<p_1, p_2> = \int_0^1 x \: dx = \frac{1}{2} \neq 0, \quad <p_2, p_2> = \int_0^1 x^2 \: dx = \frac{1}{3} \neq 1$$ Clearly $\{ 1, x, x^2 \}$ is not an orthonormal basis of $\wp_2 (\mathbb{R})$. Example 3 Consider the vector space $\mathbb{C}^2$ over the complex numbers ($\mathrm{dim} (\mathbb{C}^2) = 2$). What values of $a \in \mathbb{C}$ (if any) make $\{ (a, 1), (1, -i) \}$ an orthonormal basis of $\mathbb{C}^2$ with respect to the inner product defined as $<x, y> = x_1\bar{y_1} + \bar{x_2}y_2$ for $x, y \in \mathbb{C}^2$? Let $e_1 = (a, 1)$ and $e_2 = (1, -i)$.
For $\{ (a, 1), (1, -i) \}$ to be an orthonormal basis of $\mathbb{C}^2$ we must find $a$ such that: $$<e_1, e_1> = 1, \quad <e_2, e_2> = 1, \quad <e_1, e_2> = 0$$ Note though that the second equation fails: $<e_2, e_2> = 1\cdot\bar{1} + \overline{(-i)}\cdot(-i) = 1 + i\cdot(-i) = 1 + 1 = 2 \neq 1$, which is independent of what value we choose for $a$. So in fact, there is no value of $a$ which makes $\{ (a, 1), (1, -i) \}$ an orthonormal basis of $\mathbb{C}^2$, since the vector $(1, -i)$ is not even of unit length!
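A quick numeric check of Example 3's inner product with Python complex numbers (the function name `inner` is mine):

```python
def inner(x, y):
    # The inner product as defined in Example 3: <x, y> = x1*conj(y1) + conj(x2)*y2
    return x[0] * y[0].conjugate() + x[1].conjugate() * y[1]

e1 = (1 + 0j, 0j)        # the vector (1, 0)
e2 = (0j, 1 + 0j)        # the vector (0, 1)
assert inner(e1, e1) == 1 and inner(e2, e2) == 1 and inner(e1, e2) == 0

v = (1 + 0j, -1j)        # the vector (1, -i) from Example 3
assert inner(v, v) == 2  # squared "length" is 2, not 1, whatever a is
```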
Time: Venue: A-201 (STCS Seminar Room) Organisers: Given a boolean function $f : \{0, 1\}^n \rightarrow \{0, 1\}$ define the function $f \circ \mathsf{XOR}$ on $2n$ bits by $f \circ \mathsf{XOR} (x_1, \dots, x_n, y_1, \dots, y_n) = f(x_1 \oplus y_1, \dots, x_n \oplus y_n)$. Such a function is called an $\mathsf{XOR}$ function. A natural communication game for such a function is as follows. Alice is given $x = (x_1, \dots, x_n)$, Bob is given $y = (y_1, \dots, y_n)$, and they jointly wish to compute $f \circ \mathsf{XOR}(x, y)$. They have unbounded computational power individually and wish to minimize the amount of communication between them on the worst-case input. We study the communication complexity of $\mathsf{XOR}$ functions under various randomized models, and resolve several open questions in the areas of communication complexity, boolean circuit complexity and analysis of boolean functions. 1) We characterize the weakly-unbounded error communication complexity of $\mathsf{XOR}$ functions in terms of a certain approximation-theoretic property of the outer function. We use this characterization to reprove several known results. Along the way, we also resolve some open questions in the area of analysis of boolean functions. 2) We prove a strong unbounded error communication complexity lower bound for an easily describable function. We then use this to show a boolean circuit complexity class separation that has been open since the early 90's, and was first explicitly posed in 2005. This also resolves a recent open problem in communication complexity by separating two communication complexity classes. We also prove lower bounds against the class of functions efficiently computable by decision lists of linear threshold functions, and prove unbounded error communication complexity lower bounds for $\mathsf{XOR}$ functions when the outer function is symmetric.
3) Finally, we separate two randomized communication complexity classes in the "number on forehead" model of multiparty communication. This also implies boolean circuit class separations.
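The $f \circ \mathsf{XOR}$ construction from the abstract is easy to sketch concretely (majority as the outer function $f$ is my own choice of example):

```python
from itertools import product

def f_majority(bits):
    """An example outer function f: {0,1}^n -> {0,1} (majority)."""
    return int(sum(bits) > len(bits) // 2)

def f_xor(x, y, f=f_majority):
    """(f o XOR)(x, y) = f(x1 ^ y1, ..., xn ^ yn)."""
    return f(tuple(a ^ b for a, b in zip(x, y)))

# The composed function depends on Alice's x and Bob's y only through x XOR y:
n = 3
for x, y in product(product((0, 1), repeat=n), repeat=2):
    z = tuple(a ^ b for a, b in zip(x, y))
    assert f_xor(x, y) == f_majority(z)
```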
I'm attempting to numerically solve the following in order to get a function of 2 variables, just looking at the real part of $$\psi(x,t)=\frac{1}{\pi\sqrt{2}}\int_{-100}^{100}\frac{\sin{(k)}}{k}e^{i\left[kx-\frac{k^2}{2}t\right]}\,dk$$ where I have specific values for $t=0,2,4$ and I'd like to plot the function from $x=-10$ to $x=10$. The way I've tried so far is

func[x_, t_] := Re[Sin[k]/k*Exp[I*(k*x - k^2/2*t)]]
y = Table[NIntegrate[func[x, 0], {k, -100, 100}], {x, -10, 10, 0.01}]

I get several errors when running this but still get some results out. The plotted results aren't what I was expecting. I've done this numerical integration in Matlab, where I specified in the integration function to expect an array:

x = -10:0.01:10;
func = @(k,c) sin(k)./(pi*sqrt(2).*k).*cos(k.*c-k.^2./2*0);
real_0 = integral(@(k)func(k,x),-100,100,'ArrayValued',true);

I'm pretty new to Mathematica and don't know what the equivalent to this Matlab expression would be. Here are the results I get from Mathematica and what I get from Matlab
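For reference, here is a plain-Python version of the same integral (a sketch, not an answer to the Mathematica question: a trapezoidal rule with the $\frac{1}{\pi\sqrt{2}}$ prefactor included, and the removable singularity at $k=0$ handled via $\sin(k)/k \to 1$):

```python
import math

def psi_re(x, t, kmax=100.0, n=40001):
    """Trapezoidal-rule approximation of
    Re psi(x,t) = (1/(pi*sqrt(2))) * integral of (sin k / k) cos(kx - k^2 t/2) dk
    over [-kmax, kmax].  For t != 0 the integrand oscillates rapidly at large
    |k|, so n must grow accordingly."""
    dk = 2.0 * kmax / (n - 1)
    total = 0.0
    for i in range(n):
        k = -kmax + i * dk
        sinc = math.sin(k) / k if k != 0.0 else 1.0
        term = sinc * math.cos(k * x - 0.5 * k * k * t)
        total += term if 0 < i < n - 1 else 0.5 * term
    return total * dk / (math.pi * math.sqrt(2.0))

# At t = 0 the result approximates a box: about 1/sqrt(2) for |x| < 1, about 0 for |x| > 1.
```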
Given natural numbers $m,n$ such that $m^2 + n^2 - m \equiv 0 \pmod{mn}$ and $p$ a prime dividing $m$, I want to show that $p^2$ divides $m$. I have tried multiple approaches: Euclidean division gives that $m = qp^2 + r$ for some $q, r \in \mathbb{N}$ where $0 \leq r < p^2$, and filling this in to show that $r = 0$. I also tried using that $p$ divides $m$ and $mn$ divides $m^2 + n^2 - m$ and again filling this in to find that $p^2$ has to divide $m$, but with no success. Any hints would be appreciated. $\textbf{EDIT}$ Based on the given hint I have the following solution: Since $p$ divides $m$, we have that $p$ divides $mn$ and therefore $m^2 + n^2 - m$. Hence, there is some $k \in \mathbb{Z}$ such that $kp = m^2 + n^2 - m$ and therefore we have that $kp - m^2 + m = n^2$, so that $p$ divides $n^2$. Since $p$ is prime, it must divide $n$. Since $p$ divides both $m$ and $n$, we have that $p^2$ divides $mn$ and therefore $m^2 + n^2 - m$. Also, $p^2$ divides $m^2$ and $n^2$, hence it must divide $m$.
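The claim can also be sanity-checked by brute force over small cases (a quick sketch; e.g. $(m,n)=(4,2)$ satisfies the congruence, since $16+4-4=16$ is divisible by $mn=8$):

```python
def prime_factors(m):
    """Distinct prime factors of m by trial division."""
    ps, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            ps.add(d)
            m //= d
        d += 1
    if m > 1:
        ps.add(m)
    return ps

# All (m, n) in a small box with m^2 + n^2 - m divisible by mn:
solutions = [(m, n) for m in range(1, 200) for n in range(1, 200)
             if (m * m + n * n - m) % (m * n) == 0]
assert (4, 2) in solutions

# For every solution, every prime factor of m appears at least squared:
for m, n in solutions:
    for p in prime_factors(m):
        assert m % (p * p) == 0
```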
Adding or subtracting a multiple of $2\pi$ from $k_j$ does not affect the values of the sum $\sum_{j=0}^N f(k_j)\exp (-ix_n k_j)$ at the points $x_n$. Thus, either form could be used if all we are ever going to plug in for $x_n$ are integers. (Or we could randomly add $10\pi$ to all the $k_j$.) Aliasing is the term used for this agreement of different trigonometric waves on a discrete grid. But if at any point someone will want to recover the sampled signal from the Fourier coefficients, then non-integer values of $x$ may come into play, and then the $2\pi$ frequency shift matters. Lower frequencies are preferred*, which means that, among all $k_j+2\pi \mathbb{Z}$ possibilities we should use the one with the smallest absolute value. This leads to the indexing you noticed. Why should we prefer lower frequencies? Because out of the infinitely many trigonometric polynomials passing through given points, the one with the lowest frequencies possible minimizes the energy $\int |f'|^2$. Indeed, by virtue of orthogonality (Parseval's theorem) each term $c\exp (i\omega t)$ contributes some multiple of $|c|^2|\omega|^2$ to the energy, hence we minimize the energy by associating the Fourier coefficient $c$ with the lowest possible value of $|\omega|$. Minimizing energy eliminates extraneous oscillation in the recovered signal (see the graph below), and is a standard approach in signal/image restoration. Incidentally, if we were not restricted to using trigonometric polynomials, minimization of $\int |f'|^2$ among all functions passing through given points would produce a natural cubic spline (my blog post Connecting dots naturally). I have a small illustration handy (for the discrete cosine transform, but the principle is the same): blue dots are the sampled values, red curve is constructed by using Fourier coefficients in the manner of $k_j = 2\pi j/N$, while in the green curve the high-frequency terms are replaced by their lower-frequency aliases.
Clearly, the green curve is the one we want here. Source: my blog post Poor man’s discrete cosine transform. If the above is not convincing, here is another reason: trigonometric sums$$\sum_{k=0}^n (A_k \cos kx + B_k\sin kx)$$naturally correspond to exponential sums$$\sum_{k=-n}^n C_k \exp(i kx)$$not to $$\sum_{k=0}^{2n} C_k \exp(i kx)$$
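The two indexing conventions can be compared directly in code. A sketch (sample values are made up): both interpolants agree at the integer grid points, but only the centered, low-frequency one minimizes the energy $\int |f'|^2$.

```python
import cmath

def dft_coeffs(samples):
    """c_j = (1/N) * sum_n f_n * exp(-2*pi*i*j*n/N)."""
    N = len(samples)
    return [sum(samples[n] * cmath.exp(-2j * cmath.pi * j * n / N)
                for n in range(N)) / N
            for j in range(N)]

def recon(coeffs, x, centered):
    """Evaluate the trigonometric interpolant at (possibly non-integer) x.
    centered=True replaces each frequency 2*pi*j/N with its alias in (-pi, pi]."""
    N = len(coeffs)
    total = 0j
    for j, c in enumerate(coeffs):
        jj = j - N if (centered and j > N // 2) else j
        total += c * cmath.exp(2j * cmath.pi * jj * x / N)
    return total

samples = [0.0, 1.0, 0.0, 2.0, 0.0, 1.0, 0.0, 3.0]
N = len(samples)
c = dft_coeffs(samples)

# Both reconstructions agree at every grid point (aliasing)...
for n, s in enumerate(samples):
    assert abs(recon(c, n, False) - s) < 1e-9
    assert abs(recon(c, n, True) - s) < 1e-9

# ...but they differ between grid points, and the centered one has lower energy,
# since each |c_j|^2 gets paired with the smallest possible |omega|.
assert abs(recon(c, 0.5, False) - recon(c, 0.5, True)) > 1e-6
def energy(centered):
    return sum(abs(cj) ** 2 * (j - N if (centered and j > N // 2) else j) ** 2
               for j, cj in enumerate(c))
assert energy(True) < energy(False)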
Q1: In layman's terms (hopefully still accurate/correct), and in the light of my attempt below, what is RSM? Q2: Same question as above, but without "in the light of my attempt below". You are free to explain RSM in layman's terms in the perspective that you think is most healthy. Q3: Can I say that Artificial Neural Networks (or their variants, like Recurrent Neural Networks [or their variants like LSTM]) are just special-case implementations of RSM? My attempt (includes more questions): I am reading this: http://www.stat.ufl.edu/personnel/usrpages/RSM-Wiley.pdf I found this equation: $$ y = f'(\mathbf{x})\beta + \epsilon $$ where: $y$ is the target classification/regression label (they call it the response of interest; is there any difference in calling it my way or their way?). $\mathbf{x}$ is a $k$-dimensional vector (they say $(x_1,x_2,\ldots,x_k)'$; I guess $'$ means transpose, to say that it's a column matrix? And it's only a common notational convention, that's all? Am I right?) $f(\mathbf{x})$ they say is a vector function of $p$ elements. What does this even mean? Is this a notational abuse? Did they mean $f$ instead of $f(\mathbf{x})$? Because in my understanding $f$ is a function and $f(\mathbf{x})$ is the value of the function $f$ when given input $\mathbf{x}$. Does vector function simply mean that it's a function whose co-domain is $p$-dimensional? $\beta$ is a $p$-dimensional vector too (they say a vector of $p$ unknown constant coefficients referred to as parameters). $\epsilon$ is some error term that is believed to have a mean of $0$ (do we need to believe that its mean is $0$?) I guess $f'(\mathbf{x})$ means a transposed vector? Am I right? Therefore $f'(\mathbf{x})\beta$ is essentially a multiplication of a $p \times 1$ matrix against a $1 \times p$ matrix? So the output is a $p \times p$ matrix?
If $f'(\mathbf{x})\beta$ is a $p \times p$ matrix, then $$f'(\mathbf{x})\beta + \epsilon = f'(\mathbf{x})\beta + \begin{bmatrix}\epsilon&0&\cdots&0\\0&\epsilon&\cdots&0\\\vdots&&\ddots&\vdots\\0&0&\cdots&\epsilon\end{bmatrix} $$ Am I right? If all is good, then $y$ is a $p \times p$ matrix! I don't understand why this is helpful. Did I make an error somewhere? Or is it that $y$ is modeled as a $p \times p$ matrix?
Venue: A-201 (STCS Seminar Room) We consider the following three problems in the areas of Algorithms, Complexity theory and Streaming algorithms respectively. 1) Given a graph $G=(V,E)$, the Sparsest Cut problem asks for a partition of the vertices (i.e. a cut) $S \subseteq V$ with minimum sparsity: $\phi(S) = \frac{|E(S, \bar{S})|}{|S||\bar{S}|}$. We investigate the power of using a well-known semi-definite programming (SDP) relaxation to solve Sparsest Cut on expander-like graphs, and give an intuitive algorithm that has good guarantees for this class of graphs. Furthermore, we show that our result on expander-like graphs suitably generalizes a theorem due to Goemans on embedding certain metrics into $\ell_1$-space. 2) The Parallel Repetition theorem proven by Raz is a key result that yields a PCP theorem statement useful for proving many hardness of approximation results. Recently, Moshkovitz gave a simple proof of a version of the Parallel Repetition theorem that suffices for many hardness results, by using a transformation called 'fortification'. We investigate this technique further and prove new results about fortification in our work. In particular, we propose a method for fortification using spectral expanders, and prove its optimality. This has important consequences: it rules out the possibility of proving stronger hardness results via fortification than the ones known earlier. 3) An $n$-uniform hypergraph $H=(V,E)$ is said to be two-colorable if we can find a function (coloring) $\chi: V \rightarrow \{Red, Blue\}$ on its vertices such that no hyperedge is monochromatic. For $n=2$, the problem of checking whether $H$ is two-colorable is just the problem of determining whether a graph is bipartite, but even for $n=3$, this is NP-hard. Inspired by work in graph streaming algorithms, we consider the problem of finding a two-coloring for hypergraphs (which are given to be two-colorable) in the streaming model.
Using communication complexity techniques, we prove strong lower bounds on the space requirements of deterministic one-pass streaming algorithms that find a two-coloring of $H$. We also provide efficient communication protocols that utilize two passes, indicating that proving multi-pass lower bounds will require new insights.
If an observer travels away from our Sun, at what distance would our Sun occupy 7 arc seconds of space? From us, the Sun is visible at about half a degree, which is $\approx$ 30 arc minutes = 1800 arc seconds. To get to 7 arc seconds, you need to go 1800/7 times farther away, so the result is around 250 AU. While this might seem to be strictly a math problem, it's really loaded with Astronomy; let's see what we can learn! If you're far enough away that you can see essentially a full hemisphere (which you can't if you're close), the apparent angular width is twice the half-width, and that's given by $$\theta_{HW} = \arcsin\frac{r}{R} \approx \frac{r}{R}$$ for small angles, where $r$ is the physical radius of the object and $R$ is the distance from the viewer to its center. This is also similar to or the same as @peterh's method. There are (at least) two problems with trying to talk about the radius of the Sun: The Sun is oblate. The Sun is diffuse; it doesn't have a well-defined edge. Let's do #2 first. There is a formal working definition for the optical edge of the Sun, and this excellent answer to the question How do you define the diameter of the Sun states: Most literature will define the diameter of the Sun up to the photosphere, the layer of the solar atmosphere you would see if you were to observe the Sun in white light. Of course the true edge of the solar atmosphere could be considered as the heliopause, where the direct influence of the Sun's magnetic field and solar wind end and interstellar space begins. With that definition of the edge, let's look at the shape of the Sun. Wikipedia gives both 695,700 and 696,392 km for the equatorial radius of the Sun, sourcing from IAU 2015 Resolution B3... and Measuring the Solar Radius from Space during the 2003 and 2006 Mercury Transits respectively. Let's use 695,700 km because it's the one that I see most often and "because IAU".
The Wikipedia article gives a flattening of 9E-06, which makes the polar radius only about 10 parts per million smaller: a much smaller equator-to-pole difference than I remembered it to be. I guess we can ignore it after all! Seen from the Earth then, whose orbit takes it from 152.1 million to 147.1 million km from the Sun, the angular width of the Sun would vary from about 1887 to 1951 arcseconds (31.4 to 32.5 arcminutes). At what distance would an object with a radius of 695,700 km have an apparent width of 7 arcseconds (which is 3.39E-05 radians)? Flipping the equation around gives 2×695700/3.39E-05, or about 4.105E+10 kilometers, which is roughly 274 AU. What is it like 274 AU from the Sun? In addition to appearing roughly 274 times smaller from there than from Earth, the Sun is 274×274 times dimmer. Using $2.5 \times \log_{10}(274 \times 274)$ we get that the Sun would appear about 12.2 magnitudes dimmer than its -26.8 magnitude brightness at 1 AU, or -14.6 magnitude, which is still 2 magnitudes brighter than a typical full moon. Your orbital period around the Sun would be over 4,500 years, and you'd be moving at 1.8 km/sec in that orbit, as opposed to 30 km/sec for the Earth. And at 274 AU, you'd be much farther from the Sun than any deep space probe as well. Voyagers 1 and 2 are "only" 118 and 142 AU from the Sun now, and New Horizons is only about 42 AU. I don't know the name of the place you'd be; it looks pretty lonely though!
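All of the arithmetic above fits in a few lines (rounded constants assumed):

```python
import math

# Back-of-the-envelope check of the numbers in the answer above.
r_sun_km = 695_700.0                     # IAU 2015 solar radius
au_km = 1.495978707e8                    # kilometres per astronomical unit

theta = 7 * math.pi / (180 * 3600)       # 7 arcseconds in radians
distance_km = 2 * r_sun_km / theta       # small-angle: full width = 2r / R
distance_au = distance_km / au_km        # comes out near 274 AU

dimming_mag = 2.5 * math.log10(distance_au ** 2)   # inverse-square dimming
apparent_mag = -26.8 + dimming_mag                 # about -14.6

period_years = distance_au ** 1.5        # Kepler's third law (a in AU, T in yr)
speed_km_s = 29.78 / math.sqrt(distance_au)        # circular orbital speed
```

The Kepler and orbital-speed lines assume a circular orbit; they reproduce the "over 4,500 years" and "1.8 km/sec" figures from the text.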
Geometric progression – Introduction: If in a sequence of terms each term is a constant multiple of the preceding term, then the sequence is called a geometric progression (G.P.), where the constant multiplier is called the common ratio. In an A.P, the difference between the \(n^{th}\) term and the \((n-1)^{th}\) term is a constant, known as the common difference of the A.P. An Arithmetic Progression is of the form \(a, a+d, a+2d, a+3d, \ldots, a + (n-1)d\). Now, what if the ratio of the \(n^{th}\) term to the \((n-1)^{th}\) term in a sequence is a constant? For example, consider the following sequence: \(2, 4, 8, 16, 32, \ldots\) You can see that \(\frac{4}{2} = \frac{8}{4} = \frac{16}{8} = \frac{32}{16} = 2\). Similarly, consider the sequence \(1, \frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, \ldots\) Here \(\frac{\frac{1}{2}}{1} = \frac{\frac{1}{4}}{\frac{1}{2}} = \frac{\frac{1}{8}}{\frac{1}{4}} = \frac{\frac{1}{16}}{\frac{1}{8}} = \frac{1}{2}\). In the given examples, the ratio is a constant. Such sequences are called Geometric Progressions, abbreviated as G.P. A sequence \(a_1,a_2,a_3,\ldots,a_n,\ldots\) is a G.P if \(\frac{a_{k+1}}{a_k} = r\) for all \(k \geq 1\), where r is a constant known as the common ratio and none of the terms in the sequence is zero.
General term of a Geometric Progression: We had learned to find the \(n^{th}\) term of an A.P, which was \(a_n = a + (n-1)d\). Similarly, in the case of a G.P, let \(a\) be the first term and \(r\) the common ratio. Then the second term is \(a_2 = a \times r = ar\), the third term is \(a_{3} = a_{2} \times r = ar \times r = ar^{2}\), and similarly the \(n^{th}\) term is \(a_{n} = ar^{n-1}\). Thus, the general term of a G.P is given by \(ar^{n-1}\) and the general form of a G.P is \(a + ar + ar^2 + \cdots\). The common ratio can be read off from any pair of consecutive terms: \(\text{Common ratio} = \frac{\text{Any term}}{\text{Preceding term}} = \frac{a_n}{a_{n-1}} = \frac{ar^{n-1}}{ar^{n-2}} = r\); for example, \(\frac{a_2}{a_1} = \frac{ar}{a} = r\). Finite and Infinite Geometric Progression: The terms of a finite G.P. can be written as \(a, ar, ar^2, ar^3, \ldots, ar^{n-1}\). Terms of an infinite G.P. can be written as \(a, ar, ar^{2}, ar^{3}, \ldots, ar^{n-1}, \ldots\). \(a + ar + ar^2 + ar^3 + \cdots + ar^n\) is called a finite geometric series; \(a + ar + ar^2 + ar^3 + \cdots + ar^n + \cdots\) is called an infinite geometric series. Sum of n terms of a G.P: Consider the G.P \(a, ar, ar^2, \ldots, ar^{n-1}\). Let \(S_n, a, r\) be the sum of n terms, the first term and the ratio of the G.P respectively. Then \(S_n = a + ar + ar^2 + \cdots + ar^{n-1}\) —(1) There are two cases: either \(r = 1\) or \(r \neq 1\). If \(r = 1\), then \(S_n = a + a + a + \cdots + a = na\). When \(r \neq 1\), multiplying (1) by r gives \(rS_n = ar + ar^2 + ar^3 + \cdots + ar^{n-1} + ar^n\) —(2) Subtracting (1) from (2) gives \(rS_n - S_n = (ar + ar^{2} + \cdots + ar^{n-1} + ar^{n}) - (a + ar + \cdots + ar^{n-2} + ar^{n-1})\), so \((r - 1) S_n = ar^{n} - a = a(r^{n}-1)\) and \(S_n = \frac{a(r^{n}-1)}{r - 1} = \frac{a(1 - r^{n})}{1 - r}\), where \(r \neq 1\). Example Problems Example 1: If the \(n^{th}\) term of the G.P 3, 6, 12, ….
is 192, then what is the value of n? Solution: First, we have to find the common ratio: \(r = \frac{6}{3} = 2\). Since the first term is \(a = 3\) and \(a_n = ar^{n-1}\), we have \(192 = 3 \times 2^{n-1}\), so \(2^{n-1} = \frac{192}{3} = 64 = 2^6\), giving \(n - 1 = 6\) and \(n = 7\). Therefore, 192 is the \(7^{th}\) term of the G.P. Example 2: The \(5^{th}\) term and \(3^{rd}\) term of a G.P are 256 and 16 respectively. Find its \(8^{th}\) term. Solution: \(ar^4 = 256\) —(1) and \(ar^2 = 16\) —(2). Dividing (1) by (2) gives \(\frac{ar^4}{ar^2} = \frac{256}{16}\), so \(r^2 = 16\) and \(r = 4\). Substituting \(r = 4\) in (2) gives \(a \times 4^2 = 16\), so \(a = 1\). Hence \(a_8 = ar^7 = 1 \times 4^7 = 16384\). Example 3: Find the sum of 6 terms of the G.P 4, 12, 36, …. Solution: \(a = 4\), the common ratio is \(r = \frac{12}{4} = 3\), and \(n = 6\). Using the sum of n terms of a G.P, \(S_n = \frac{a(r^n-1)}{r-1}\), we get \(S_6 = \frac{4(3^6-1)}{3-1} = \frac{4(729-1)}{2} = 2 \times 728 = 1456\).
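The worked examples can be verified with a couple of helper functions (the function names here are made up for illustration):

```python
# Quick checks of the worked examples, using the formulas from the text.
def gp_term(a, r, n):
    """n-th term of a G.P.: a * r**(n-1)."""
    return a * r ** (n - 1)

def gp_sum(a, r, n):
    """Sum of the first n terms; the r == 1 case is just n * a."""
    if r == 1:
        return n * a
    return a * (r ** n - 1) / (r - 1)

print(gp_term(3, 2, 7))    # Example 1: the 7th term of 3, 6, 12, ... is 192
print(gp_term(1, 4, 8))    # Example 2: a = 1, r = 4, the 8th term is 16384
print(gp_sum(4, 3, 6))     # Example 3: S_6 = 1456.0
```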
Integration by Parts of Indefinite Integrals We are now going to learn another method apart from U-Substitution in order to integrate functions. First recall the product rule for a function in the form $f(x) = u(x)v(x)$:

(1) $$\frac{d}{dx} \left [ u(x)v(x) \right ] = u'(x)v(x) + u(x)v'(x)$$

Now suppose that we integrate both sides of the product rule. By The Fundamental Theorem of Calculus Part 1, it follows that:

(2) $$u(x)v(x) = \int u'(x)v(x) \: dx + \int u(x)v'(x) \: dx$$

Now since $\frac{dv}{dx} = v'(x)$, it follows that $dv = v'(x) \: dx$ and similarly, $du = u'(x) \: dx$, and therefore:

(3) $$\int u \: dv = uv - \int v \: du$$

Our general goal with integration by parts is to take a rather difficult integral, $\int u \: dv$, and form a simpler-to-evaluate integral, $\int v \: du$. Thus, we generally want to choose a function $u(x)$ that becomes "simpler" when differentiated. We will now look at some examples of integration by parts. Example 1 Evaluate the following integral: $\int xe^x \: dx$. Let $u = x$, and let $dv = e^x \: dx$. We must figure out what our $du$ is and what our $v$ is. We will summarize this information in the following table: $u = x$, $du = dx$, $v = e^x$, $dv = e^x \: dx$. We can now substitute these into our equality $\int u \: dv = uv - \int v \: du$. We thus get:

(4) $$\int xe^x \: dx = xe^x - \int e^x \: dx = xe^x - e^x + C$$

Example 2 Evaluate the following integral: $\int x \sin 2x \: dx$. Let $u = x$. It thus follows that $dv = \sin 2x \: dx$. Hence $du = dx$ and $v = -\frac{1}{2} \cos 2x$. We can thus apply the rule for integration by parts:

(5) $$\int x \sin 2x \: dx = -\frac{x}{2} \cos 2x + \frac{1}{2} \int \cos 2x \: dx = -\frac{x}{2} \cos 2x + \frac{1}{4} \sin 2x + C$$

Example 3 Evaluate the following integral: $\int e^x \cos x \: dx$. For this integral, let $u = e^x$ and let $dv = \cos x \: dx$. It thus follows that $du = e^x \: dx$ and $v = \sin x$. Now let's apply our integration by parts identity:

(6) $$\int e^x \cos x \: dx = e^x \sin x - \int e^x \sin x \: dx$$

We're now going to use integration by parts once again on the remaining integral. Let's keep $u = e^x$, but now let's introduce $p$ and let $dp = \sin x \: dx$. It follows that $du = e^x \: dx$ and $p = -\cos x$. Making the substitution and solving for the original integral, we obtain:

(7) $$\int e^x \cos x \: dx = e^x \sin x + e^x \cos x - \int e^x \cos x \: dx \quad \Rightarrow \quad \int e^x \cos x \: dx = \frac{e^x (\sin x + \cos x)}{2} + C$$

Example 4 Evaluate the following integral: $\int \tan ^{-1} x \: dx$. Let $u = \tan ^{-1} x$ and $dv = dx$. Therefore $du = \frac{1}{1 + x^2} \: dx$ and $v = x$. Thus it follows by the integration by parts identity that:

(8) $$\int \tan^{-1} x \: dx = x \tan^{-1} x - \int \frac{x}{1 + x^2} \: dx$$

We can now use substitution for the remaining integral. Let $p = 1 + x^2$. Then $dp = 2x \: dx$, so we can make the following substitution:

(9) $$\int \tan^{-1} x \: dx = x \tan^{-1} x - \frac{1}{2} \int \frac{dp}{p} = x \tan^{-1} x - \frac{1}{2} \ln (1 + x^2) + C$$
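The results of Examples 1 and 3 can be spot-checked numerically; the sketch below assumes the antiderivatives $(x-1)e^x$ for $\int xe^x \: dx$ and $\tfrac{1}{2}e^x(\sin x + \cos x)$ for $\int e^x \cos x \: dx$, and the tolerances are arbitrary.

```python
import math

# Check Example 1: integrate x * e^x on [0, 1] with the midpoint rule and
# compare against the claimed antiderivative F(x) = (x - 1) * e^x.
def F(x):
    return (x - 1) * math.exp(x)

n = 100_000
h = 1.0 / n
approx = sum((i + 0.5) * h * math.exp((i + 0.5) * h) for i in range(n)) * h
assert abs(approx - (F(1) - F(0))) < 1e-8

# Check Example 3: the derivative of G(x) = e^x (sin x + cos x) / 2
# should equal e^x * cos x (central finite difference).
def G(x):
    return math.exp(x) * (math.sin(x) + math.cos(x)) / 2

eps, x0 = 1e-6, 0.7
numeric_derivative = (G(x0 + eps) - G(x0 - eps)) / (2 * eps)
assert abs(numeric_derivative - math.exp(x0) * math.cos(x0)) < 1e-6
```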
Projection Operators Definition: For any vector $\vec{x} \in \mathbb{R}^2$, a projection operator $T: \mathbb{R}^2 \to \mathbb{R}^2$ projects every vector $\vec{x}$ onto some axis. For any vector $\vec{x} \in \mathbb{R}^3$, a projection operator projects every vector $\vec{x}$ onto some plane. Projection Transformations in 2-Space Let $\vec{x} \in \mathbb{R}^2$ such that $\vec{x} = (x, y)$. Recall that we can imagine a projection in $\mathbb{R}^2$ of a vector to be a "shadow" that the vector casts onto another vector, or in this case an axis. For example, consider the transformation that maps $\vec{x}$ onto the $x$-axis as illustrated: We note that the x-coordinate of our vector stays the same while the y-coordinate becomes zero. Thus, the following equations define the image under our transformation:

(1) $$w_1 = x + 0y \quad \quad w_2 = 0x + 0y$$

Thus, we obtain that our standard matrix is $A = \begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix}$ and in $w = Ax$ form:

(2) $$\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} = \begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$

Of course we could always project $\vec{x}$ onto the $y$-axis like the following diagram illustrates: In this case, we note that the x-coordinate of our vector becomes zero while the y-coordinate stays the same, and the following equations define our image:

(3) $$w_1 = 0x + 0y \quad \quad w_2 = 0x + y$$

Thus our standard matrix is $A = \begin{bmatrix} 0 & 0\\ 0 & 1 \end{bmatrix}$. Projection Transformations in 3-Space Let $\vec{x} \in \mathbb{R}^3$. We can orthogonally project $\vec{x}$ onto either the $xy$, $xz$ or $yz$ planes by mapping exactly one coordinate to zero. In the case above, suppose that we map $\vec{x}$ onto the $xy$-plane. It thus follows that the x and y coordinates stay the same while our z-coordinate becomes zero, resulting in the following equations defining our image:

(4) $$w_1 = x + 0y + 0z \quad \quad w_2 = 0x + y + 0z \quad \quad w_3 = 0x + 0y + 0z$$

Hence our standard matrix for this transformation is $A = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{bmatrix}$.
The following table describes other possible orthogonal projections onto other planes: Orthogonal projection onto the $xz$-plane: equations defining the image $w_1 = x + 0y + 0z$, $w_2 = 0x + 0y + 0z$, $w_3 = 0x + 0y + z$; standard matrix $\begin{bmatrix}1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1 \end{bmatrix}$. Orthogonal projection onto the $yz$-plane: equations defining the image $w_1 = 0x + 0y + 0z$, $w_2 = 0x + y + 0z$, $w_3 = 0x + 0y + z$; standard matrix $\begin{bmatrix}0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$.
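These matrices are easy to sanity-check numerically; a minimal sketch (NumPy assumed):

```python
import numpy as np

# Standard matrices for orthogonal projection onto coordinate planes.
P_xy = np.array([[1, 0, 0],
                 [0, 1, 0],
                 [0, 0, 0]])   # projection onto the xy-plane (z -> 0)
P_yz = np.array([[0, 0, 0],
                 [0, 1, 0],
                 [0, 0, 1]])   # projection onto the yz-plane (x -> 0)

x = np.array([3, 4, 5])
print(P_xy @ x)   # the z-coordinate is mapped to zero
print(P_yz @ x)   # the x-coordinate is mapped to zero
```

A characteristic property worth checking is idempotence: projecting twice is the same as projecting once, i.e. $P^2 = P$.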
I was trying to read this paper: http://iopscience.iop.org/article/10.1088/0370-1328/89/3/325/pdf I got stuck on some derivation just after equation (12): how did he arrive at $\Delta/R_Q = 1/4n$ and $s_0^2/R_Q^2 = 1/n$ from the quantization condition $\alpha \pi R_Q^2 = 2\pi n$ and the previous equations before that condition? I don't see it. Thanks.
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\psi$ nuclear modification factor in p-Pb collisions at $\sqrt{s_{NN}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary $\pi^{\pm}$, $K^{\pm}$, $p$ and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
J/ψ production and nuclear effects in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02 TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN = 5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ... Measurement of quarkonium production at forward rapidity in pp collisions at √s = 7 TeV (Springer, 2014-08) The inclusive production cross sections at forward rapidity of J/ψ, ψ(2S), Υ(1S) and Υ(2S) are measured in pp collisions at √s = 7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
Finding a Matrix's Inverse with Elementary Matrices Recall that an elementary matrix $E$ performs a single row operation on a matrix $A$ when multiplied together as a product $EA$. If $A$ is an invertible $n \times n$ matrix, then $A$ can be reduced to $I_n$ by a finite sequence of elementary row operations. We first take a finite set of elementary matrices $E_1, E_2, ..., E_k$ used to reduce $A$ to $I$:

(1) $$E_k \cdots E_2 E_1 A = I$$

If we take this equation and multiply the $E_i$'s by their inverses $E_i^{-1}$ successively on the left, we get that:

(2) $$A = E_1^{-1} E_2^{-1} \cdots E_k^{-1}$$

If we take the earlier formula and multiply the equation on the right by $A^{-1}$, it also follows that:

(3) $$A^{-1} = E_k \cdots E_2 E_1$$

We can apply these formulas to help us find $A$ or $A^{-1}$ whenever we need it. Using Elementary Matrices to Invert a Matrix Suppose that we have an invertible matrix $A$. If we append $I$ to $A$ in the manner $[A \mid I]$ and reduce $A$ to $I$, we will effectively turn $I$ into $A^{-1}$, that is:

(4) $$[A \mid I] \to [I \mid A^{-1}]$$

For example, consider the following matrix:

(5) $$A = \begin{bmatrix} 2 & -1 & 4 \\ 1 & 1 & 1 \\ 3 & -2 & 1 \end{bmatrix}$$

First let's append the identity matrix to $A$ such that we obtain $[A \mid I]$:

(6) $$\left [ \begin{array}{ccc|ccc} 2 & -1 & 4 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 1 & 0 \\ 3 & -2 & 1 & 0 & 0 & 1 \end{array} \right ]$$

We will proceed with row operations to reduce $A$ to $I$ (omitting details), which will result in turning $I$ into $A^{-1}$:

(7) $$\left [ \begin{array}{ccc|ccc} 1 & 0 & 0 & -\frac{3}{16} & \frac{7}{16} & \frac{5}{16} \\ 0 & 1 & 0 & -\frac{1}{8} & \frac{5}{8} & -\frac{1}{8} \\ 0 & 0 & 1 & \frac{5}{16} & -\frac{1}{16} & -\frac{3}{16} \end{array} \right ]$$

From this reduction we obtain that $A^{-1} = \begin{bmatrix} -\frac{3}{16} & \frac{7}{16} & \frac{5}{16} \\ -\frac{1}{8} & \frac{5}{8} & -\frac{1}{8}\\ \frac{5}{16} & -\frac{1}{16} & -\frac{3}{16} \end{bmatrix}$.
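As a numerical sanity check, the $[A \mid I]$ reduction can be scripted. The Gauss-Jordan routine below is a sketch (partial pivoting is added for stability, which the hand computation does not need), and $A$ is the matrix whose inverse $A^{-1}$ is displayed above.

```python
import numpy as np

def inverse_via_gauss_jordan(A):
    """Row-reduce the augmented matrix [A | I] to [I | A^-1]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        # partial pivoting: swap in the row with the largest pivot candidate
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                 # scale the pivot row to get a 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]  # clear the rest of the column
    return M[:, n:]                           # right half is now A^-1

A = np.array([[2.0, -1.0, 4.0],
              [1.0,  1.0, 1.0],
              [3.0, -2.0, 1.0]])
Ainv = inverse_via_gauss_jordan(A)
print(Ainv * 16)   # entries as sixteenths, matching the fractions above
```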
To show that $H$ is a normal subgroup of $G$, we prove that\[ghg^{-1}\in H\]for any $g\in G$ and $h\in H$. For any $g\in G$ and $h\in H$ we have\begin{align*}&ghg^{-1}\\&=g^2g^{-1}hg^{-1} &&\text{since $g=g^2g^{-1}$}\\&=g^2g^{-1}hg^{-1}hh^{-1} &&\text{since $e=hh^{-1}$}\\&=g^2(g^{-1}h)^2h^{-1}. \tag{*}\end{align*} It follows from the assumption that the elements $g^2$ and $(g^{-1}h)^2$ are in $H$. Since $h\in H$, the inverse $h^{-1}$ is also in $H$. Thus the expression in (*) is the product of elements in $H$, hence it is in $H$. Thus, we have proved that $ghg^{-1}\in H$ for all $g\in G$, $h\in H$. Therefore, the subgroup $H$ is a normal subgroup of $G$.
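The statement can also be brute-force checked on a small group; the sketch below verifies it for $G = S_3$ (the choice of group is just for illustration): every subgroup containing all the squares $g^2$ turns out to be normal.

```python
from itertools import permutations, combinations

def compose(p, q):
    """(p . q)(i) = p[q[i]], with permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))               # the group S3, order 6
squares = {compose(g, g) for g in G}           # the set {g^2 : g in G}

def is_subgroup(H):
    # in a finite group, closure under the operation suffices
    return all(compose(a, b) in H for a in H for b in H)

def is_normal(H):
    return all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)

found = []
for size in range(1, len(G) + 1):
    for subset in combinations(G, size):
        H = set(subset)
        if is_subgroup(H) and squares <= H:
            found.append(H)
            assert is_normal(H)                # the claim holds for S3

print(len(found))
```

For $S_3$ the squares generate the alternating group $A_3$, so the subgroups containing all squares are $A_3$ and $S_3$ itself, both of which are normal.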
It's not that hard. If you have just planar or angular light sources, you can think of them as one light source split into multiple chunks, and the only things to deal with are how to sample this multi-light and how to compute the PDF of the resulting samples. Picking probability First, you need to set up the picking probability $P(l)$ for each light source $l$. The picking probabilities can be any non-negative numbers; you just have to make sure that they are always non-zero whenever the contributions of the respective lights are non-zero. However, the closer the probabilities are to the actual (relative) light contributions, the better the overall performance of your Monte Carlo estimator will be. Contribution estimates may be: $1$: naive, uniform probability; good to start with; used by PBRT. $\mathit{power}$: better. $\mathit{power} / \mathit{distance}^2$: even better; closer light sources have greater contributions. …and other improvements: make it zero if the light is not visible at all, multiply by the cosine of the inclination angle, etc. Note that to get the light-picking probabilities, you need to normalise the contribution estimates by the sum over all lights: $$P(l)= \frac {ContrEstimate_l} {\sum_{a\in\mathrm{Lights}} {ContrEstimate_a}}$$ You could even use several different picking strategies if you find they work well for certain types of scenes, and combine them using the multiple importance sampling technique… Sampling When sampling the lights, you just select one light $l$ with the appropriate probability $P(l)$ and then sample the light surface/angle with its own sampling strategy (with PDF $p_l$). The resulting PDF $p$ will be: $$p(x)=P(l) \cdot p_l(x)$$ Evaluating PDF When computing the light-sampling PDF for a given direction (as needed for multiple importance sampling), you just find the light source in that direction, evaluate the PDF of the light's own strategy and, again, multiply it by the probability of picking this particular light.
Conclusion Now you have a sampling strategy which gives you PDF which can be used in the Monte Carlo estimators (e.g. $f(x)/p(x)$) and the resulting estimator will be unbiased.
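A concrete sketch of the picking step (the scene, the numbers and the light names here are made up for illustration; the per-light PDF $p_l$ is left as a parameter):

```python
# Hypothetical scene: each light has a power and a distance to the shading point.
lights = [
    {"name": "key",  "power": 60.0, "dist": 2.0},
    {"name": "fill", "power": 20.0, "dist": 1.0},
    {"name": "rim",  "power": 10.0, "dist": 4.0},
]

# Contribution estimate: power / distance^2 (the third heuristic above).
estimates = [l["power"] / l["dist"] ** 2 for l in lights]
total = sum(estimates)
pick_prob = [e / total for e in estimates]      # P(l), normalised to sum to 1

def sample_light(u):
    """Pick a light by inverting the CDF of the picking probabilities.
    u is a uniform random number in [0, 1)."""
    cdf = 0.0
    for light, p in zip(lights, pick_prob):
        cdf += p
        if u <= cdf:
            return light, p
    return lights[-1], pick_prob[-1]            # guard against rounding

def combined_pdf(light_index, p_light_strategy):
    """p(x) = P(l) * p_l(x): picking probability times the light's own PDF."""
    return pick_prob[light_index] * p_light_strategy
```

Dividing a sample's contribution by `combined_pdf(...)` is exactly the $f(x)/p(x)$ estimator from the conclusion; the CDF inversion here is the simplest picking scheme, and a real renderer would typically precompute it (or use an alias table) for many lights.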