I think that a good way to understand the Levi-Civita connection is to say that it is the Ehresmann connection in $TTM$ obtained from the linearization of the geodesic flow by a natural geometric construction. I described this construction in my answer to this MO question, but I'll do so again with some improvements. Dynamic construction. Let $c(t)$ be an orbit of the geodesic flow in $TM$, consider the vertical subspaces $V(t)$ in $TTM$ along $c(t)$, and bring them back to the tangent space of the tangent bundle over the point $c(0)$ by using the differential of the flow. You get a family of (Lagrangian) subspaces $l(t) := D\phi_{-t}(V(t))$ in the symplectic vector space $T_{c(0)}TM$. Now forget you ever had a geodesic flow: all that you need is the curve of subspaces. A bit of differential projective geometry---described below---shows that you also get a second curve $h(t)$ of (Lagrangian) subspaces in $T_{c(0)}TM$ that is transversal to $l(t)$. The subspace $h(0)$ is the horizontal subspace of the connection, and $T_{c(0)}TM = l(0) \oplus h(0)$ is the decomposition into vertical and horizontal subspaces. Projective construction. Now I'll describe as succinctly as possible the projective-geometric construction that underlies both the Levi-Civita connection and the Schwarzian derivative. For the details of what follows, see this paper. What's new in the description here is that I explicitly use the Springer resolution (which Duran and I used implicitly in the paper). First we need two remarks on the geometry of the Grassmannian $G_n(\mathbb{R}^{2n})$ of $n$-dimensional subspaces in $\mathbb{R}^{2n}$. 1. The tangent space of $G_n(\mathbb{R}^{2n})$ at a subspace $\ell$ is canonically identified with the space of linear maps from $\ell$ to $\mathbb{R}^{2n}/\ell$ or, equivalently, with the space $(\mathbb{R}^{2n}/\ell) \otimes \ell^*$.
Since $\mathbb{R}^{2n}/\ell$ and $\ell$ have the same dimension, we may distinguish a class of differentiable curves $\gamma$ on the Grassmannian by requiring that at each instant $t$ their velocities are invertible linear maps from $\gamma(t)$ to $\mathbb{R}^{2n}/\gamma(t)$. These curves are called fanning or regular. Using that the cotangent space of $G_n(\mathbb{R}^{2n})$ at a subspace $\ell$ is canonically isomorphic to $\ell \otimes (\mathbb{R}^{2n}/\ell)^*$, we can lift every fanning curve $\gamma(t)$ to a curve on the cotangent bundle of the Grassmannian by $t \mapsto (\dot{\gamma}(t))^{-1}$. 2. Consider the action of the linear group $GL(2n,\mathbb{R})$ on the Grassmannian $G_n(\mathbb{R}^{2n})$ and lift it to an action on its cotangent bundle. The moment map of this action takes values in the set of nilpotent matrices. Now consider a fanning curve $\gamma(t)$ on the Grassmannian $G_n(\mathbb{R}^{2n})$ and lift it to the curve $(\dot{\gamma}(t))^{-1}$ on its cotangent bundle. Use the moment map to obtain a curve $F(t)$ of nilpotent matrices. Note that everything we have done is $GL(2n,\mathbb{R})$-equivariant. Finally we come to the little miracle: the time derivative of $F(t)$ is a curve of reflections $\dot{F}(t)$ (i.e., $\dot{F}(t)^2 = I$) whose $-1$-eigenspace is the curve of subspaces $\gamma(t)$ and whose $1$-eigenspace defines a "horizontal curve" $h(t)$ equivariantly attached to $\gamma(t)$. This is the construction that yields the Levi-Civita connection (and what is behind the formalisms of Grifone and Foulon for connections of second-order ODEs on manifolds). Differentiate $F(t)$ a second time to find the Schwarzian derivative. Geometrically, it just describes how the curve $h(t)$ moves with respect to $\gamma(t)$. For comparison, recall that the curvature of a connection is obtained by differentiating (i.e., bracketing) horizontal vector fields and projecting onto the vertical bundle.
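To make the "little miracle" concrete, here is a numerical sanity check in the simplest case $G_1(\mathbb{R}^2)$, for the curve of lines $\gamma(t)=\mathrm{span}\{(\cos t,\sin t)\}$. Under the identifications above (using the Euclidean metric to realize $\mathbb{R}^2/\gamma(t)$ as the orthogonal complement $\mathrm{span}\{w\}$), the moment-map image of the lifted curve works out to $F(t)=vw^{\top}$ with $v=(\cos t,\sin t)$ and $w=(-\sin t,\cos t)$. This is a sketch, not part of the original answer; it checks numerically that $\dot F(t)$ is a reflection whose $-1$-eigenspace is $\gamma(t)$:

```python
import numpy as np

# Fanning curve gamma(t) = span{(cos t, sin t)} in G_1(R^2).
# Under the identifications in the text, the moment map sends the lifted
# curve (gamma_dot)^{-1} to the nilpotent matrix F(t) = v w^T, where v
# spans gamma(t) and w spans a complement identified with R^2/gamma(t).
def F(t):
    v = np.array([np.cos(t), np.sin(t)])
    w = np.array([-np.sin(t), np.cos(t)])
    return np.outer(v, w)           # F(t)^2 = 0: nilpotent, as promised

t, h = 0.3, 1e-5
Fdot = (F(t + h) - F(t - h)) / (2 * h)   # central finite difference

v = np.array([np.cos(t), np.sin(t)])
w = np.array([-np.sin(t), np.cos(t)])
print(np.allclose(Fdot @ Fdot, np.eye(2), atol=1e-8))  # reflection: Fdot^2 = I
print(np.allclose(Fdot @ v, -v, atol=1e-8))            # -1 eigenspace = gamma(t)
print(np.allclose(Fdot @ w, w, atol=1e-8))             # +1 eigenspace = h(t)
```

Analytically $\dot F(t)=ww^{\top}-vv^{\top}$ here, so the horizontal curve $h(t)$ is just the orthogonal complement of $\gamma(t)$, as one would hope in the flat model.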
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since $a$ and $b$ are real in $a + bi$. But you could define it that way and call it a "standard form", like $ax + by = c$ for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers $a + bi$ where $a$ and $b$ are integers are called Gaussian integers.
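Returning to the particle-motion question at the top of this block: the missing step is to factor $v(t)$ and solve $v(t) < 0$. A short symbolic sketch (assuming sympy is available):

```python
import sympy as sp

t = sp.symbols('t', real=True)
x = t**3 - 6*t**2 + 9*t + 11           # position x(t)
v = sp.diff(x, t)                      # velocity v(t) = 3t^2 - 12t + 9

# The particle moves left exactly where v(t) < 0.
moving_left = sp.solve_univariate_inequality(v < 0, t)
print(moving_left)
```

Since $v(t)=3(t-1)(t-3)$ is an upward-opening parabola with roots $1$ and $3$, it is negative exactly on $(1,3)$: the particle moves to the left for $1 < t < 3$.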
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to \mathbb{R}^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If $f^{-1}(y)$ is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again $O(n)$ acts transitively on $S^{n-1}$ with stabilizer at a point $O(n-1)$. For any transitive $G$-action on a set $X$ with stabilizer $H$, $G/H \cong X$ set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
So I am simulating stock prices with what I believe to be geometric Brownian motion using parameters from the usual Black-Scholes framework (please correct me if I am wrong) with the following formula: $$S_{t} = S_{0}e^{(r-\delta -\frac{1}{2}\sigma^{2})t +z\sigma \sqrt{t}} ,$$ where $S_t$ is the stock price at time $t$, $r$ is the risk-free rate, $\delta$ is the dividend rate, $\sigma$ is the volatility, and $z$ is a draw from the standard normal distribution. However, when I simulate the stock prices one year from now by plugging in $t=1$ vs. plugging in $t=1/12$ (and simulating 12 successive runs), I get drastically different ending prices. The stock prices simulated in a single step ($t=1$) have much higher variation than the stock prices simulated in 12 time steps. I am wondering if I am missing something from this equation. A somewhat related question (maybe too simple to start a new topic) is the following: I remember back in school that when simulating stock prices, one should use alpha, the real rate of return, as opposed to the risk-free rate in the equation (using $r$ in the simulation equation implies we're in the risk-neutral world?). (http://www.actuarialoutpost.com/actuarial_discussion_forum/showthread.php?t=216817) However, when I use this equation to simulate stock prices I am able to get option prices very close to the B-S-M theoretical prices. So my question is: why can't we simulate stock prices with alpha and discount at some other rate to price the same option? (Is it because alpha is unknown, or the other discount rate is unknown?) Thanks for reading through!
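One plausible source of the discrepancy (a sketch with made-up parameter values, not the poster's actual setup): if each of the 12 monthly draws is compounded from the previous price, the terminal distribution is identical in law to the single-step draw with $t=1$, because the log-increments add. If instead each monthly draw restarts from $S_0$, the variance comes out very different. A minimal numpy check that the two correct schemes agree:

```python
import numpy as np

rng = np.random.default_rng(0)
S0, r, delta, sigma, T = 100.0, 0.05, 0.0, 0.2, 1.0   # made-up parameters
n_paths, n_steps = 100_000, 12
dt = T / n_steps

# Single step: plug t = T directly into the closed-form solution.
z = rng.standard_normal(n_paths)
S_one = S0 * np.exp((r - delta - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# Twelve steps: apply the SAME formula over each increment dt, compounding
# from the previous price rather than restarting from S0.
S_multi = np.full(n_paths, S0)
for _ in range(n_steps):
    z = rng.standard_normal(n_paths)
    S_multi = S_multi * np.exp((r - delta - 0.5 * sigma**2) * dt
                               + sigma * np.sqrt(dt) * z)

# Both schemes sample the same lognormal law for S_T, so the sample
# moments agree up to Monte Carlo error.
print(S_one.mean(), S_multi.mean())
print(S_one.std(), S_multi.std())
```

If the multi-step version shows much lower variation, the likely bug is that each step used $S_0$ (or summed rather than compounded the returns).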
I'm trying to work through a self-made exercise, which may be ill formed as a question. Any general advice in dealing with these types of problems is also much appreciated! I'm looking at a quantum gate $U_f$ for a function $f$, that has the effect $$\sum_x \alpha_x \vert x\rangle\vert 0\rangle \mapsto \sum_x \alpha_x\vert x\rangle\vert f(x)\rangle. $$ This will in most cases be an entangled state: for instance, if $f(x) = x$, then I get what looks like a Bell state. I want to consider a case where the first register is already maximally entangled with a third party, Eve. One way to proceed is to write the first register as a mixed state which I obtain after tracing out Eve's part. The trouble now is, when we consider the action of the gate, that the gate entangles the two registers. I have no idea how to sort out the entanglement between Eve and the first register and the new entanglement between the first and second registers. Alternatively, if I don't trace out Eve's register and instead implement the gate $\mathbb 1\otimes U_f$, then I'm still not sure what the outcome is. Before the gate, I have $$\sum_x \vert x \rangle_E\vert x\rangle\vert 0\rangle. $$ (I have marked Eve's register for clarity.) After the gate, I could naively write $$\sum_x \vert x\rangle_E\vert x\rangle\vert f(x)\rangle, $$ but this looks dubious to me. Particularly, this looks like Eve is now entangled with the second register but that seems wrong. I'm not sure how entanglement monogamy fits in but I suspect my guess for the state isn't compatible with it. Can anyone clarify what's going on for me?
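A minimal numerical sketch for the qubit case $f(x)=x$ (so $U_f$ acts as a CNOT; dimensions are my choice, not from the question) suggests the naive expression is in fact correct: the three registers end up in the GHZ-type state $\sum_x\vert x\rangle_E\vert x\rangle\vert f(x)\rangle$, and the reduced state of Eve plus the second register is the classically correlated mixture $\frac 12(\vert 00\rangle\langle 00\vert+\vert 11\rangle\langle 11\vert)$, which is separable, so monogamy is not violated:

```python
import numpy as np

# Qubit registers ordered (Eve, first register, second register); f(x) = x,
# so U_f acts as a CNOT from the first register onto the second.
ket = lambda i: np.eye(2)[i]

# Initial state (|0>_E |0> |0> + |1>_E |1> |0>) / sqrt(2):
psi = (np.kron(np.kron(ket(0), ket(0)), ket(0)) +
       np.kron(np.kron(ket(1), ket(1)), ket(0))) / np.sqrt(2)

# Apply 1 (x) U_f, with U_f = CNOT (control: first register, target: second).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi_out = np.kron(np.eye(2), CNOT) @ psi   # GHZ state (|000> + |111>)/sqrt(2)

# Reduced state of (Eve, second register): trace out the first register.
rho = np.outer(psi_out, psi_out).reshape(2, 2, 2, 2, 2, 2)
rho_E2 = np.einsum('eabfac->ebfc', rho).reshape(4, 4)
print(rho_E2)   # diag(1/2, 0, 0, 1/2): classical correlation, no entanglement
```

So Eve ends up *correlated* with the second register, but not entangled with it; the three-party entanglement is of GHZ type, which is compatible with monogamy.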
Suppose that $\{\theta^1,\ldots,\theta^M\}$ is a $2\delta$-separated set contained in the space $\theta(\cP)$. Generate a r.v. $Z$ by the following procedure: Sample $J$ uniformly from the index set $[M]=\{1,\ldots,M\}$. Given $J=j$, sample $Z\sim \bbP_{\theta^j}$. Let $\bbQ$ denote the joint distribution of the pair $(Z,J)$ generated by this procedure. A testing function for this problem is a mapping $\psi:\cZ \rightarrow [M]$, and the associated probability of error is given by $\bbQ[\psi(Z)\neq J]$. (From estimation to testing) For any increasing function $\Phi$ and choice of $2\delta$-separated set, the minimax risk is lower bounded as$$\fM(\theta(\cP), \Phi\circ \rho)\ge \Phi(\delta)\inf_\psi\bbQ[\psi(Z)\neq J]\,,$$where the infimum ranges over test functions. For any $\bbP\in\cP$ with parameter $\theta = \theta(\bbP)$, we have$$\bbE[\Phi(\rho(\hat\theta,\theta))] \ge \Phi(\delta) \bbP[\Phi(\rho(\hat\theta,\theta))\ge \Phi(\delta)]\ge \Phi(\delta)\bbP[\rho(\hat\theta,\theta)\ge \delta]\,.$$Note that$$\sup_{\bbP\in\cP}\bbP[\rho(\hat\theta,\theta(\bbP)) \ge \delta] \ge \frac 1M \sum_{j=1}^M\bbP_{\theta^j}[\rho(\hat\theta, \theta^j)\ge \delta] = \bbQ[\rho(\hat\theta, \theta^J)\ge \delta]\,.$$Any estimator $\hat\theta$ can be used to define a test,$$\psi(Z) = \arg\min_{\ell\in[M]} \rho(\theta^\ell, \hat\theta)\,.$$Suppose that the true parameter is $\theta^j$: we then claim that the event $\{\rho(\theta^j,\hat\theta)<\delta\}$ ensures that the test is correct. Indeed, on this event the triangle inequality gives, for any $\ell\neq j$, $\rho(\theta^\ell,\hat\theta)\ge \rho(\theta^\ell,\theta^j)-\rho(\theta^j,\hat\theta) > 2\delta-\delta = \delta$, so $\theta^j$ is the unique minimizer. Thus, conditioned on $J=j$,$$\{\rho(\hat\theta,\theta^j) < \delta\}\subset \{\psi(Z)=j\}\,,$$which implies that$$\bbP_{\theta^j}[\rho(\hat\theta,\theta^j)\ge \delta] \ge \bbP_{\theta^j}[\psi(Z)\neq j]\,.$$Taking averages over the index $j$, we find that$$\bbQ[\rho(\hat\theta,\theta^J)\ge \delta] = \frac 1M\sum_{j=1}^M\bbP_{\theta^j}[\rho(\hat\theta,\theta^j)\ge \delta]\ge \bbQ[\psi(Z)\neq J]\,.$$Thus,$$\sup_{\bbP\in\cP}\bbE_\bbP[\Phi(\rho(\hat\theta,\theta))]\ge \Phi(\delta)\bbQ[\psi(Z)\neq J]\,.$$Finally, take
the infimum over all estimators $\hat\theta$ on the left-hand side, and the infimum over the induced set of tests on the right-hand side. Some divergence measures. Total variation distance: $\Vert \bbP-\bbQ\Vert_{\TV}:=\sup_{A\subseteq \calX}\vert \bbP(A)-\bbQ(A)\vert\,.$ Hellinger distance: for $n$-fold product measures,$$\frac 12H^2(\bbP^{1:n}\Vert \bbQ^{1:n})=1-\left(1-\frac 12H^2(\bbP_1\Vert\bbQ_1)\right)^n\le \frac n2H^2(\bbP_1\Vert \bbQ_1)\,;$$to see the last inequality, let $f(x)=(1-x)^n-(1-nx)$ and check that $f(x)\ge 0$ for $x\in [0,1]$. Binary testing. The simplest type of testing problem, known as a binary hypothesis test, involves only two distributions. In a binary testing problem with equally weighted hypotheses, we observe a random variable $Z$ drawn according to the mixture distribution $\bbQ_Z=\frac 12\bbP_0+\frac 12\bbP_1$. For a given decision rule $\psi:\cal Z\rightarrow \{0,1\}$, the associated probability of error is given by$$\bbQ[\psi(Z)\neq J]=\frac 12\bbP_0[\psi(Z)\neq 0]+\frac 12\bbP_1[\psi(Z)\neq 1]\,.$$If we take the infimum of this error probability over all decision rules, we obtain a quantity known as the Bayes risk for the problem. In the binary case, we have$$\inf_\psi \bbQ[\psi(Z)\neq J]=\frac 12\{1-\Vert \bbP_1-\bbP_0\Vert_{\TV}\}\,.$$ Note that there is a one-to-one correspondence between decision rules $\psi$ and measurable partitions $(A,A^c)$ of the space $\cal X$. In particular, any rule $\psi$ is uniquely determined by the set $A=\{x\in \cX\mid \psi(x)=1\}$. Thus, we have$$\sup_\psi\bbQ[\psi(Z)=J] = \sup_{A\subseteq \cX}\left\{\frac 12\bbP_1(A)+\frac 12\bbP_0(A^c)\right\}=\frac 12\sup_{A\subseteq \cX}\{\bbP_1(A)-\bbP_0(A)\} + \frac 12\,.$$Then the result follows from$$\sup_\psi \bbQ[\psi(Z)=J]=1-\inf_\psi\bbQ[\psi(Z)\neq J]\,.$$ Thus, for any pair of distributions $\bbP_0,\bbP_1\in\cP$ such that $\rho(\theta(\bbP_0),\theta(\bbP_1))\ge 2\delta$, we have the two-point Le Cam bound$$\fM(\theta(\cP);\Phi\circ\rho)\ge \frac{\Phi(\delta)}{2}\left\{1-\Vert \bbP_1-\bbP_0\Vert_{\TV}\right\}\,.$$ For a fixed variance $\sigma^2$, let $\bbP_\theta$ be the distribution of a $\cN(\theta,\sigma^2)$ variable; letting the mean $\theta$ vary over the real line defines the Gaussian location family $\{\bbP_\theta,\theta\in\bbR\}$.
Here we consider the problem of estimating $\theta$ under either the absolute error $\vert \hat\theta-\theta\vert$ or the squared error $(\hat\theta-\theta)^2$ using a collection $Z=(Y_1,\ldots,Y_n)$ of $n$ iid samples drawn from a $\cN(\theta,\sigma^2)$ distribution. Apply the two-point Le Cam bound \eqref{2point-Le-Cam} with the distributions $\bbP_0^n$ and $\bbP_\theta^n$, where $\theta=2\delta$. Note that$$\Vert \bbP_\theta^n -\bbP_0^n\Vert_{\TV}^2 \le \frac 14\{e^{n\theta^2/\sigma^2}-1\} = \frac 14\{e^{4n\delta^2/\sigma^2}-1\}\,.$$Setting $\delta = \frac 12\frac{\sigma}{\sqrt n}$ thus yields\begin{align}\inf_{\hat\theta}\sup_{\theta\in\bbR} \bbE_\theta[\vert\hat\theta -\theta\vert]\ge\frac{\delta}{2}\{1-\frac 12\sqrt{e-1}\}\ge \frac{\delta}{6} = \frac{1}{12}\frac{\sigma}{\sqrt n}\\\inf_{\hat\theta}\sup_{\theta\in\bbR} \bbE_\theta[(\hat\theta-\theta)^2]\ge \frac{\delta^2}{2}\{1-\frac 12\sqrt{e-1}\}\ge \frac{\delta^2}{6}=\frac{1}{24}\frac{\sigma^2}{n}\,.\end{align}For comparison, the sample mean $\tilde \theta_n$ satisfies the bounds$$\sup_{\theta\in\bbR}\bbE_\theta[\vert \tilde \theta_n-\theta\vert] = \sqrt{\frac{2}{\pi}}\frac{\sigma}{\sqrt n}\text{ and }\sup_{\theta\in\bbR}\bbE_\theta[(\tilde \theta_n-\theta)^2] = \frac{\sigma^2}{n}\,,$$so the lower bounds are sharp up to constants. Mean-squared error decaying as $n^{-1}$ is typical for parametric problems with a certain type of regularity. For other, non-regular problems, faster rates become possible. (Uniform location family) For each $\theta\in\bbR$, let $\bbU_\theta$ be the uniform distribution over the interval $[\theta,\theta+1]$. Given a pair $\theta,\theta'\in\bbR$, compute the Hellinger distance between $\bbU_\theta$ and $\bbU_{\theta'}$. By symmetry, we may consider only $\theta'>\theta$. If $\theta' > \theta + 1$, then $H^2(\bbU_\theta\Vert\bbU_{\theta'})=2$. If $\theta'\in (\theta,\theta+1]$, then $H^2(\bbU_\theta\Vert\bbU_{\theta'})=2\vert \theta'-\theta\vert$. Take $\vert\theta'-\theta\vert=2\delta:=\frac{1}{4n}$ (why?
to cancel the factor of $n$ in the next display), then$$\frac 12H^2(\bbU_\theta^n\Vert \bbU_{\theta'}^n)\le \frac n2\cdot 2\vert\theta'-\theta\vert = \frac 14\,.$$By \eqref{LeCamIneq}, we find that$$\Vert \bbU_\theta^n-\bbU_{\theta'}^n\Vert_{\TV}^2 \le H^2(\bbU_{\theta}^n\Vert\bbU_{\theta'}^n)\le \frac 12\,.$$From \eqref{2point-Le-Cam} with $\Phi(t)=t^2$, we have$$\inf_{\hat\theta}\sup_{\theta\in\bbR}\bbE_\theta[(\hat\theta -\theta)^2]\ge \frac{(1-1/\sqrt 2)}{128}n^{-2}\,.$$ Le Cam's method for nonparametric problems. Estimate a density at a point, say $x=0$, in which case $\theta(f):=f(0)$ is known as an evaluation functional. The modulus of continuity of the functional $\theta$ with respect to the Hellinger distance is given by$$\omega(\epsilon;\theta,\cF):=\sup\left\{\vert \theta(f)-\theta(g)\vert \,:\, H^2(f\Vert g)\le \epsilon^2,\ f,g\in\cF\right\}\,.$$ (Le Cam for functionals) For any increasing function $\Phi$ on the non-negative real line and any functional $\theta:\cF\rightarrow \bbR$, we have$$\inf_{\hat\theta}\sup_{f\in\cF}\bbE\left[ \Phi(\hat\theta-\theta(f)) \right]\ge \frac 14\Phi\left(\frac 12\omega(\frac{1}{2\sqrt n};\theta,\cF)\right)\,.$$ Let $\epsilon^2=1/4n$ and choose a pair $f,g$ that achieve the supremum defining $\omega(1/(2\sqrt n))$. Note that$$\Vert \bbP_f^n-\bbP^n_g\Vert_{\TV}^2 \le H^2(\bbP_f^n\Vert \bbP_g^n)\le nH^2(\bbP_f\Vert\bbP_g)\le \frac 14\,.$$Thus, Le Cam's bound \eqref{2point-Le-Cam} with $\delta = \frac 12\omega(\frac{1}{2\sqrt n})$ implies that$$\inf_{\hat \theta}\sup_{f\in\cF} \bbE\left[ \Phi(\hat\theta - \theta(f)) \right]\ge \frac 14\Phi\left(\frac 12\omega(\frac{1}{2\sqrt n})\right)\,.$$ Consider the densities on $[-\frac 12,\frac 12]$ that are bounded uniformly away from zero, and are Lipschitz with constant one. Our goal is to estimate the linear functional $\theta(f)=f(0)$. It suffices to lower bound $\omega(\frac{1}{2\sqrt n};\theta,\cF)$: choose a pair $f_0,g\in\cF$ with $H^2(f_0\Vert g)\le\frac{1}{4n}$, and then evaluate the difference $\vert \theta(f_0)-\theta(g)\vert$. Let $f_0\equiv 1$.
For a parameter $\delta\in(0,\frac 16]$, consider the function$$\phi(x) = \begin{cases}\delta -\vert x\vert & \text{for }\vert x\vert \le \delta\\\vert x-2\delta\vert -\delta & \text{for }x\in[\delta, 3\delta]\\0 & \text{otherwise.}\end{cases}$$By construction, the function $\phi$ is $1$-Lipschitz, uniformly bounded with $\Vert \phi\Vert_\infty = \delta\le 1/6$, and integrates to zero. Thus, the perturbed function $g=f_0+\phi$ is a density function belonging to our class, and by construction, $\vert \theta(f_0)-\theta(g)\vert=\delta$. To control the squared Hellinger distance, write$$\frac 12H^2(f_0\Vert g) = 1-\int_{-1/2}^{1/2}\sqrt{1+\phi(t)}\,dt\,.$$Define the function $\Psi(u)=\sqrt{1+u}$, so that $\Psi''(u)=-\frac 14(1+u)^{-3/2}$; thus $\vert \Psi''(u)\vert \le \frac 14$ for $u\ge 0$, and $\vert\Psi''\vert$ remains of this order on the range $\vert u\vert\le \frac 16$ of $\phi$. By a Taylor series expansion, and by the [Taylor inequality](http://mathworld.wolfram.com/TaylorsInequality.html),$$\frac 12H^2(f_0\Vert g)=\int_{-1/2}^{1/2}[\Psi(0)-\Psi(\phi(t))]\,dt \le \int_{-1/2}^{1/2}\left\{-\Psi'(0)\phi(t)+\frac 18\phi^2(t)\right\}dt\,.$$Observe that$$\int_{-1/2}^{1/2}\phi(t)\,dt = 0\quad\text{ and }\quad\int_{-1/2}^{1/2}\phi^2(t)\,dt = 4\int_0^\delta(\delta -x)^2\,dx = \frac 43\delta^3\,.$$Thus,$$H^2(f_0\Vert g) \le \frac 28\cdot \frac 43\delta^3 = \frac 13\delta^3\,.$$Consequently, setting $\delta^3 = \frac{3}{4n}$ ensures that $H^2(f_0\Vert g)\le \frac{1}{4n}$. Hence,$$\inf_{\hat\theta}\sup_{f\in\cF}\bbE\left[ (\hat\theta-f(0))^2\right]\ge \frac{1}{16}\omega^2(\frac{1}{2\sqrt n})\succsim n^{-2/3}\,.$$ Le Cam's convex hull method. Consider two subsets $\cP_0$ and $\cP_1$ of $\cP$ that are $2\delta$-separated, in the sense that $\rho(\theta(\bbP_0),\theta(\bbP_1))\ge 2\delta$ for all $\bbP_0\in\cP_0$ and $\bbP_1\in\cP_1$. For any such classes of distributions contained within $\cP$, any estimator $\hat\theta$ has worst-case risk at least$$\sup_{\bbP\in\cP}\bbE_\bbP[\rho(\hat\theta, \theta(\bbP))] \ge \frac{\delta}{2} \sup_{\substack{\bbP_0\in\conv(\cP_0)\\ \bbP_1\in\conv(\cP_1)}}\left\{1-\Vert \bbP_0-\bbP_1\Vert_{\TV}\right\}\,.$$ (Sharpened bounds for Gaussian location family) A key step was to upper bound the TV distance $\Vert
\bbP_\theta^n-\bbP_0^n\Vert_{\TV}$ between the $n$-fold product distributions based on the Gaussian models $\cN(\theta,\sigma^2)$ and $\cN(0,\sigma^2)$. Set $\theta = 2\delta$ and consider the two families $\cP_0=\{\bbP_0^n\}$ and $\cP_1=\{\bbP_\theta^n,\bbP_{-\theta}^n\}$. Note that the mixture distribution $\bar\bbP:=\frac 12\bbP_\theta^n+\frac 12\bbP_{-\theta}^n$ belongs to $\conv(\cP_1)$, and that$$\Vert \bar\bbP-\bbP_0^n\Vert^2_{\TV}\le \frac 14 \left\{e^{\frac 12(\frac{\sqrt n\theta}{\sigma})^4}-1\right\}=\frac 14\left\{e^{\frac 12(\frac{2\sqrt n\delta}{\sigma})^4}-1\right\}\,.$$Setting $\delta = \frac{\sigma t}{2\sqrt n}$ for some parameter $t > 0$ to be chosen, the convex hull Le Cam bound yields$$\min_{\hat\theta}\sup_{\theta\in\bbR}\bbE_\theta[\vert \hat\theta-\theta\vert] \ge \frac{\sigma}{4\sqrt n}\sup_{t > 0}\{t(1-\frac 12\sqrt{e^{t^4/2}-1})\}\ge \frac{3}{20}\frac{\sigma}{\sqrt n}\,.$$ Fano's method. The mutual information between the pair $(Z,J)$ is defined with the KL divergence as the underlying measure of distance. Our set-up: lower bounding the probability of error in an $M$-ary hypothesis testing problem, based on a family of distributions $\{\bbP_{\theta^1},\ldots,\bbP_{\theta^M}\}$. The mutual information can be written in terms of the component distributions $\{\bbP_{\theta^j},j\in[M]\}$ and the mixture distribution $\bar \bbQ:=\frac 1M\sum_{j=1}^M\bbP_{\theta^j}$ as$$I(Z;J)=\frac 1M\sum_{j=1}^M D(\bbP_{\theta^j}\Vert \bar\bbQ)\,.$$The Fano method is based on the following lower bound on the error probability in an $M$-ary testing problem, applicable when $J$ is uniformly distributed over the index set: Let $\{\theta^1,\ldots,\theta^M\}$ be a $2\delta$-separated set in the $\rho$ semi-metric on $\theta(\cP)$, suppose that $J$ is uniformly distributed over the index set $\{1,\ldots,M\}$, and $(Z\mid J=j)\sim \bbP_{\theta^j}$.
Then for any increasing function $\Phi: [0,\infty) \rightarrow [0,\infty)$, the minimax risk is lower bounded as$$\fM(\theta(\cP);\Phi\circ \rho)\ge \Phi(\delta)\left\{1-\frac{I(Z;J)+\log 2}{\log M}\right\}\,,$$where $I(Z;J)$ is the mutual information between $Z$ and $J$.
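Both testing-risk formulas above are easy to sanity-check numerically on a finite alphabet (a sketch with made-up distributions): the binary Bayes risk equals $\frac 12\{1-\Vert\bbP_1-\bbP_0\Vert_{\TV}\}$, and the $M$-ary maximum-likelihood error dominates the Fano lower bound.

```python
import numpy as np

# --- Binary case: inf_psi Q[psi(Z) != J] = (1 - TV)/2 on a finite alphabet.
# Two hypothetical distributions (illustrative values only):
P0 = np.array([0.5, 0.3, 0.2])
P1 = np.array([0.2, 0.2, 0.6])
tv = 0.5 * np.abs(P0 - P1).sum()                 # total variation distance
bayes_error = 0.5 * np.minimum(P0, P1).sum()     # ML rule is Bayes-optimal
print(bayes_error, 0.5 * (1 - tv))               # equal: 0.3 and 0.3

# --- M-ary case: Fano's lower bound on the error probability.
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.1, 0.7]])
M = P.shape[0]
Qbar = P.mean(axis=0)                            # mixture distribution
mutual_info = np.sum(P * np.log(P / Qbar)) / M   # I(Z;J) = avg KL(P_j || Qbar)
err = 1 - np.max(P, axis=0).sum() / M            # Bayes (ML) error probability
fano_lower = 1 - (mutual_info + np.log(2)) / np.log(M)
print(err, fano_lower)                           # err dominates the Fano bound
```

Under the uniform prior, maximum likelihood is the Bayes-optimal test in both cases, which is what makes the brute-force computation of the optimal error possible.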
Yes, it has a lot to do with mass. Since deuterium has a higher mass than protium, simple Bohr theory tells us that the deuterium 1s electron will have a smaller orbital radius than the 1s electron orbiting the protium nucleus (see "Note" below for more detail on this point). The smaller orbital radius for the deuterium electron translates into a shorter (and stronger) $\ce{C-D}$ bond. A shorter bond has less volume to spread the electron density (of the 1 electron contributed by $\ce{H}$ or $\ce{D}$) over, resulting in a higher electron density throughout the bond and, consequently, more electron density at the carbon end of the bond. Therefore, the shorter $\ce{C-D}$ bond will have more electron density around the carbon end of the bond than the longer $\ce{C-H}$ bond. The net effect is that the shorter bond with deuterium increases the electron density at carbon; that is, deuterium is inductively more electron donating than protium towards carbon. Similar arguments can be applied to tritium, and its even shorter $\ce{C-T}$ bond should be even more inductively electron donating towards carbon than deuterium. Note: Bohr Radius Detail Most introductory physics texts show the radius of the $n^\text{th}$ Bohr orbit to be given by $$r_{n} = {n^2\hbar^2\over Zk_\mathrm{c} e^2 m_\mathrm{e}}$$ where $Z$ is the atom's atomic number, $k_\mathrm{c}$ is Coulomb's constant, $e$ is the electron charge, and $m_\mathrm{e}$ is the mass of the electron. However, in this derivation it is assumed that the electron orbits the nucleus and the nucleus remains stationary. Given the mass difference between the electron and nucleus, this is generally a reasonable assumption. However, in reality the nucleus moves too.
It is relatively straightforward to remove this assumption and make the equation more accurate by replacing $m_\mathrm{e}$ with the electron's reduced mass, $\mu_\mathrm{e}$ $$\mu_\mathrm{e} = \frac{m_\mathrm{e}\times m_\text{nucleus}}{m_\mathrm{e} + m_\text{nucleus}}$$ Now the equation for the Bohr radius becomes $$r_{n} = {n^2\hbar^2\over Zk_\mathrm{c} e^2 \mu_\mathrm{e}}$$ Since the reduced mass of an electron orbiting a heavy nucleus is always larger than the reduced mass of an electron orbiting a lighter nucleus $$r_\text{heavy} \lt r_\text{light}$$ and consequently an electron will orbit closer to a deuterium nucleus than it will orbit a protium nucleus.
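The size of this reduced-mass effect is easy to quantify (a sketch; nuclear masses are rounded literature values in unified atomic mass units):

```python
# Reduced-mass correction to the Bohr radius: r_n is inversely proportional
# to the reduced mass, so the ratio of orbital radii is r_D / r_H = mu_H / mu_D.
m_e = 5.48579909e-4   # electron mass, u
m_p = 1.007276467     # proton (protium nucleus) mass, u
m_d = 2.013553213     # deuteron mass, u

def reduced_mass(m_nucleus):
    return m_e * m_nucleus / (m_e + m_nucleus)

ratio = reduced_mass(m_p) / reduced_mass(m_d)   # = r_D / r_H
print(ratio)   # ~0.99973: the deuterium 1s electron orbits about 0.03% closer
```

So within the Bohr picture the deuterium electron sits roughly 0.03% closer to the nucleus than the protium electron; a tiny shift, but in the direction the answer describes.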
Dielectrophoretic Separation How can you use an electric field to control the movement of electrically neutral particles? This may sound impossible, but in this blog entry, we will see that the phenomenon of dielectrophoresis (DEP) can do the trick. We will learn how DEP can be applied to particle separation and demonstrate a very easy-to-use biomedical simulation app that is created with the Application Builder and run with COMSOL Server™. Forces on a Particle in an Inhomogeneous Static Electric Field The dielectrophoretic effect shows up in both DC and AC fields. Let's first look at the DC case. Consider a dielectric particle immersed in a fluid, and assume that there is an external static (DC) electric field applied to the fluid-particle system. The particle will in this case always be pulled from a region of weak electric field to a region of strong electric field, provided the permittivity of the particle is higher than that of the surrounding fluid. If the permittivity of the particle is lower than that of the surrounding fluid, then the opposite is true: the particle is drawn to a region of weak electric field. These effects are known as positive dielectrophoresis (pDEP) and negative dielectrophoresis (nDEP), respectively. The pictures below illustrate these two cases with two important quantities visualized: the electric field and the Maxwell stress tensor (surface force density). An illustration of positive dielectrophoresis (pDEP), where the particle permittivity is higher than that of the surrounding fluid, \epsilon_p > \epsilon_f. An illustration of negative dielectrophoresis (nDEP), where the particle permittivity is lower than that of the surrounding fluid, \epsilon_p < \epsilon_f. The Maxwell stress tensor represents the local force field on the surface of the particle.
For this stress tensor to be representative of the forces acting on the particle, the fluid needs to be "simple" in that it shouldn't behave too strangely either mechanically or electrically. Assuming the fluid is simple, we can see from the above illustrations that the net force on the particle appears to be in opposite directions in the two cases of pDEP and nDEP. Integrating the surface forces will indeed show that this is the case. It turns out that if we shrink the particle and look at the infinitesimal case of a very small particle acting like a dipole in a fluid, then the net force is a function of the gradient of the square of the electric field. Why does the net force behave like this? To understand this, let's look at what happens at a point on the surface of the particle. At such a point, the magnitude of the electric surface force density, f, is a function of charge times electric field: (1) f \propto \rho E where \rho is the induced polarization charge density. (Let's ignore for the moment that some quantities are vectors and make a purely phenomenological argument by just looking at magnitudes and proportionality.) The induced polarization charges are themselves proportional to the electric field: (2) \rho \propto E Combining these two, we get: (3) f \propto E^2 But this is just the local surface force density at one point on the surface. In order to get a net force from all these surface force contributions at the various points on the surface, there needs to be a difference in force magnitude between one side of the particle and the other. This is why the net force, \bf{F}, is proportional to the gradient of the square of the electric field norm: (4) \bf{F} \propto \nabla |\bf{E}|^2 In the above derivation, we have taken some shortcuts. For example, what is the permittivity in this relationship? Is it that of the particle, that of the fluid, or maybe the difference of the two? What about the shape of the particle? Is there a shape factor? Let's now address some of these questions.
Force on a Spherical Particle In a more stringent derivation, we instead use the vector-valued relationship for the force on an electric dipole: (5) \bf{F} = (\bf{P} \cdot \nabla)\bf{E} where \bf{P} is the electric dipole moment of the particle. To get the force for different particles, we simply insert various expressions for the electric dipole moment. In this expression, we can also see that if the electric field is uniform, we get no force (since the particle is small, its dipole moment is considered a constant). For a spherical dielectric particle with a (small) radius r_p in an electric field, the dipole moment is: (6) \bf{P} = 4 \pi r_p^3 \epsilon_f k \bf{E} where k is a parameter that depends on the permittivity of the particle and the surrounding fluid. The factor 4 \pi r_p^3 can be seen as a shape factor. Combining these, we get: (7) \bf{F} = 2 \pi r_p^3 \epsilon_f k \nabla |\bf{E}|^2 This again shows the dependency on the gradient of the square of the magnitude of the electric field. Forces on a Particle in a Time-Varying Electric Field If the electric field is time-varying (AC), the situation is a bit more complicated. Let's also assume that there are losses that are represented by an electric conductivity, \sigma. The dielectrophoretic net force, \bf{F}, on a spherical particle turns out to be: (8) \bf{F} = 2 \pi r_p^3 \epsilon_f \mathrm{Re}(k) \nabla |\bf{E}_{\textrm{rms}}|^2 where (9) k = \frac{\epsilon_p^* - \epsilon_f^*}{\epsilon_p^* + 2 \epsilon_f^*} and (10) \epsilon^* = \epsilon - j \frac{\sigma}{2 \pi \nu} is the complex-valued permittivity. The subscripts p and f represent the particle and the fluid, respectively. The radius of the particle is r_p and \bf{E}_{\textrm{rms}} is the root-mean-square of the electric field. The frequency of the AC field is \nu. From this expression, we can get the force for the electrostatic case by setting \sigma = 0. (We cannot take the limit as the frequency goes to zero, since the conductivity has no meaning in electrostatics.) In the expression for the DEP force, we can see that the difference in permittivity between the fluid and the particle indeed plays an important role. If the sign of this difference switches, then the force direction is flipped.
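The frequency dependence of the factor k is easy to explore numerically. Below is a sketch (with illustrative material parameters, not values from the app) of the real part of k for a homogeneous sphere: at low frequency the conductivities dominate and, for these numbers, give pDEP, while at high frequency the permittivities dominate and give nDEP:

```python
import numpy as np

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def cm_factor(nu, eps_r_p, sigma_p, eps_r_f, sigma_f):
    """Complex Clausius-Mossotti factor k for a homogeneous sphere."""
    omega = 2 * np.pi * nu
    eps_p = eps_r_p * EPS0 - 1j * sigma_p / omega  # particle, complex permittivity
    eps_f = eps_r_f * EPS0 - 1j * sigma_f / omega  # fluid, complex permittivity
    return (eps_p - eps_f) / (eps_p + 2 * eps_f)

# Illustrative values: a conductive particle (sigma_p > sigma_f) that is
# less polarizable than the aqueous medium (eps_r_p < eps_r_f).
for nu in (1e3, 1e6, 1e9):
    k = cm_factor(nu, eps_r_p=50, sigma_p=0.2, eps_r_f=80, sigma_f=0.055)
    print(f"{nu:8.0e} Hz: Re(k) = {k.real:+.3f}")
```

The sign of Re(k), and hence the direction of the DEP force, flips with frequency; this frequency handle is exactly what DEP-based separation devices exploit.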
The factor k involving the difference and sum of permittivity values is known as the complex Clausius-Mossotti function and you can read more about it here. This function encodes the frequency dependency of the DEP force. If the particles are not spherical but, say, ellipsoidal, then you use another proportionality factor. There are also well-known DEP force expressions for the case where the particle has one or more thin outer shells with different permittivity values, such as in the case of biological cells. The simulation app presented below includes the permittivity of the cell membrane, which is represented as a shell. The settings window for the effective DEP permittivity of a dielectric shell. There may be other forces acting on the particles, such as fluid drag force, gravitation, Brownian motion force, and electrostatic force. The simulation app shown below includes force contributions from drag, Brownian motion, and DEP. In the Particle Tracing Module, a range of possible particle forces are available as built-in options and we don’t need to be bothered with typing in lengthy force expressions. The figure below shows the available forces in the Particle Tracing for Fluid Flow interface. The different particle force options in the Particle Tracing for Fluid Flow interface. Dielectrophoretic Separation of Particles Medical analysis and diagnostics on smartphones is about to undergo rapid growth. We can imagine that, in the future, a smartphone can work in conjunction with a piece of hardware that can sample and analyze blood. Let’s envision a case where this type of analysis can be divided into three steps: Extract blood using the hardware, which attaches directly to your smartphone, and compute mean platelet and red blood cell diameter. Compute the efficiency of separation of the red blood cells and platelets. This efficiency needs to be high in order to perform further diagnostics on the isolated red blood cells. 
Use the computed optimum separation conditions to isolate the red blood cells using the hardware attached to your smartphone. The COMSOL Multiphysics simulation app focuses on Step 2 of the overall analysis process above. By exploiting the fact that blood platelets are the smallest cells in blood and have different permittivity and conductivity than red blood cells, it is possible to use DEP for size-based fractionation of blood; in other words, to separate red blood cells from platelets. Red blood cells are the most common type of blood cell and the vertebrate organism’s principal means of delivering oxygen (O2) to the body tissues via the blood flow through the circulatory system. Platelets, also called thrombocytes, are blood cells whose function is to stop bleeding. Using the Application Builder, we created an app that demonstrates the continuous separation of platelets from red blood cells (RBCs) using the Dielectrophoretic Force feature available in the Particle Tracing for Fluid Flow interface. (The app also requires one of the following: the CFD Module, Microfluidics Module, or Subsurface Flow Module and either the MEMS Module or AC/DC Module.) The app is based on a lab-on-a-chip (LOC) device described in detail in a paper by N. Piacentini et al., “Separation of platelets from other blood cells in continuous-flow by dielectrophoresis field-flow-fractionation”, Biomicrofluidics, vol. 5, 034122, 2011. The device consists of two inlets, two outlets, and a separation region. In the separation region, there is an arrangement of electrodes of alternating polarity that controls the particle trajectories. The electrodes create the nonuniform electric field needed for utilizing the dielectrophoretic effect. The figure below shows the geometry of the model. The geometry used in the particle separation simulation app.
The inlet velocity for the lower inlet is significantly higher (853 μm/s) than for the upper inlet (154 μm/s) in order to focus all the injected particles toward the upper outlet. The app is built on a model that uses the following physics interfaces: Creeping Flow (Microfluidics Module) to model the fluid flow. Electric Currents (AC/DC or MEMS Module) to model the electric field in the microchannel. Particle Tracing for Fluid Flow (Particle Tracing Module) to compute the trajectories of RBCs and platelets under the influence of drag and dielectrophoretic forces and subjected to Brownian motion. Three studies are used in the underlying model: Study 1 solves for the steady-state fluid dynamics and the frequency-domain (AC) electric potential at a frequency of 100 kHz. Study 2 uses a Time Dependent study step, which utilizes the solution from Study 1 and estimates the particle trajectories without the dielectrophoretic force. In this study, all particles (platelets and RBCs) are focused to the same outlet. Study 3 is a second Time Dependent study that includes the effect of the dielectrophoretic force. You can download the model that the app is based on here. A Biomedical Simulation App To create the simulation app, we used the Application Builder, which is included in COMSOL Multiphysics® version 5.0 for the Windows® operating system. The figure below shows the app as it looks when first started. In this case, we have connected to a COMSOL Server™ installation in order to run the app in a standard web browser. The app lets the user enter quantities such as the frequency of the electric field and the applied voltage. The results include a scalar value for the fraction of red blood cells separated. In addition, three different visualizations are available in a tabbed window: the blood cell and platelet distribution, the electric potential, and the velocity field for the fluid flow.
The figures below show visualizations of the electric potential and the flow field. The app has three different solving options: computing just the flow field, computing just the separation using the existing flow field, or combining the two. A warning message is shown if there is not a clean separation. Increasing the applied voltage increases the magnitude of the DEP force. If the separation efficiency isn’t high enough, we can increase the voltage and click the Compute All button, since in this case both the fields and the particle trajectories need to be recomputed. We can control the value of the Clausius-Mossotti function in the DEP force expression by changing the frequency. It turns out that at the specified frequency of 100 kHz, only red blood cells exit the lower outlet. In this case, the fluid permittivity is higher than that of the particles, so both the platelets and the red blood cells experience a negative DEP force, but with different magnitudes. To get a successful overall design, we need to balance the DEP forces against the forces from fluid drag and Brownian motion. The figure below shows a simulation with input parameters that result in 100% success in separating out the red blood cells through the lower outlet. Further Reading To learn more about dielectrophoresis and its applications, click on one of the links listed below. Included in the list is a link to a video on the Application Builder, which also shows you how to deploy applications with COMSOL Server™. Model Gallery: Dielectrophoretic Particle Separation Video Gallery: How to Build and Run Simulation Apps with COMSOL Server™ (archived webinar) Wikipedia: Dielectrophoresis Wikipedia: Maxwell-Wagner-Sillars polarization Wikipedia: Clausius-Mossotti relation Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.
Successor Ordinal A successor ordinal, by definition, is an ordinal $\alpha$ that is equal to $\beta + 1$ for some ordinal $\beta$. The successor function, denoted $\beta + 1$, $\operatorname{suc} \beta$, or $\beta^+$, is defined on the ordinals by $\beta + 1 = \beta \cup \{ \beta \}$. Properties of the successor There are no ordinals strictly between $\beta$ and $\beta + 1$. The union $\bigcup \beta$ can be thought of as an inverse of the successor function, because $\bigcup ( \beta + 1 ) = \beta$. All limit ordinals are equal to their union.
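A concrete instance, using the von Neumann convention that each natural number is the set of all smaller ordinals:
$$\operatorname{suc} 2 = 2 \cup \{2\} = \{0, 1\} \cup \{2\} = \{0, 1, 2\} = 3, \qquad \bigcup 3 = 0 \cup 1 \cup 2 = \{0, 1\} = 2.$$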
I guess this is true, because log is a strictly increasing function, but how do I prove it formally? I tried something like: Let $f(n)$ and $g(n)$ be monotonically increasing functions, $c \in \mathbb{R}$ and $n_0 \in \mathbb{N}$, such that $0 \leq f(n) \leq c g(n)$, for all $n \geq n_0$. We must find $c'$ and $n_0'$ such that $$\log(f(n)) \in \{h(n) \mid 0 \leq h(n) \leq c' \log(g(n)), \forall n \geq n_0'\}\,.$$ I got as far as $$f(n) \leq c g(n) \implies \log(f(n)) \leq \log(c g(n)) = \log(c) + \log(g(n))\,.$$ Then I got stuck. Thanks in advance!
Test

1. 36% of 175 g: $36 \div 100 \times 175 = 63$ g.

2. $\frac {4} {9} - \frac {5} {12}$: multiply the left fraction by the right fraction's denominator and vice versa: $\frac {48} {108} - \frac {45} {108} = \frac{3}{108} = \frac{1}{36}$.

3. 150 L increased by 43%: $143 \div 100 \times 150 = 214.5$ L.

4. Highest common factor of 36 and 24. Factors of 36: 1, 2, 3, 4, 6, 9, 12, 18 and 36. Factors of 24: 1, 2, 3, 4, 6, 8, 12 and 24. The highest common factor is 12.

5. Write down the first 5 terms of a sequence with the formula $n^{th} = 6n - 5$: 1, 7, 13, 19 and 25.

6. For the sequence 2, 9, 16, 23, 30: a) the formula that generates it is $n^{th} = 7n - 5$; b) the value of the $40^{th}$ term is $7 \times 40 - 5 = 275$.

7. Find the value of $x$ in the following equations:
a) $3x-5=16$ || ($+5$) $3x=21$ || ($\div 3$) $x=7$
b) $2x+11=19$ || ($-11$) $2x=8$ || ($\div 2$) $x=4$
c) $3(x+11)=21$ || Claw method: $3x+33=21$ || ($-33$) $3x=-12$ || ($\div 3$) $x=-4$
d) $4x+3=x+18$ || ($-x$) $3x+3=18$ || ($-3$) $3x=15$ || ($\div 3$) $x=5$

8. Phil buys 4 bags of sweets and eats 15 single sweets. Jamie has 2 bags of sweets and finds 7 more down the side of the sofa. If they have the same number of sweets, how many sweets are in a bag? $4b - 15 = 2b + 7 \implies 2b = 22 \implies b = 11$ sweets per bag.

9. Solve the following equation: $2x+5(3-x)=4(x+6)$. $2x+15-5x=4x+24$, so $-3x+15=4x+24$ || ($+3x$) $15=7x+24$ || ($-24$) $7x=-9$ || ($\div 7$) $x = -\frac{9}{7}$.

10. For the patterns find a formula for posts and a formula for the wires, e.g. 1st section: 2 posts, 2 wires; 2nd section: 4 posts, 6 wires. Formula:
$E = - \frac{\Delta V}{d}$ is derived from the definition of electric potential difference $\Delta V$. Pay careful attention to the negative signs in the derivation, as they are probably the source of most errors by beginning students. Briefly, we imagine some distribution of fixed charged particles that produce a net electric field $\vec{E}$. Then we calculate the work $W$ required to move a test charge at constant velocity from point $a$ to point $b$ along any arbitrary path: $$W = \int_{a}^{b}\vec{F}\cdot d\vec{s},$$ where $\vec{F}$ is the force applied against the net force $\vec{F}_{E}$ due to the electric field, and $d\vec{s}$ is a differential displacement vector along the path from $a$ to $b$. We want the test particle to move with constant velocity, so $\vec{F} = -\vec{F}_{E}$. Therefore, the work done against the electric field is $$W = -\int_{a}^{b}\vec{F}_{E}\cdot d\vec{s}.$$ Next, we consider this work per unit charge: $$\frac{W}{q_{\textrm{unit}}} = \Delta V = -\int_{a}^{b}\vec{E}\cdot d\vec{s}.$$ The integral does not depend on the path between $a$ and $b$: this is not a property of line integrals in general, but holds because the source charges that generate $\vec{E}$ are fixed, so the electrostatic field is conservative and $\vec{E}$ does not change as we move the test charge. This means that for fixed $\vec{E}$, $V$ depends only on $a$ and $b$, which are just points in space. The next part requires you to draw a figure: a closed curve of any shape with points $a$ and $b$ on the curve. Mark a third point $c$ on the curve. The work done moving the test charge from point $c$ to point $a$ is $W_{c \to a}$, and from point $c$ to point $b$ is $W_{c \to b}$, so that $$W = -\int_{a}^{b}\vec{F}_{E}\cdot d\vec{s} = W_{c \to b} - W_{c \to a}.$$ ($W_{c \to c} = 0 = W_{c \to a} + W_{a \to b} + W_{b \to c} \implies W_{a \to b} = -W_{b \to c} - W_{c \to a}$. But $W_{b \to c} = -W_{c \to b}$, so $W_{a \to b} = W_{c \to b} - W_{c \to a}$.)
Now, $W_{c \to b}$ and $W_{c \to a}$ are just scalars whose values depend on the choice of point $c$. So choosing point $c$ is equivalent to choosing some scalar field $V$ whose value at point $c$ can be set to zero, and we may write $$\frac{W}{q_{\textrm{unit}}} = V(b) - V(a).$$ It follows that $$\int_{a}^{b}\vec{E}\cdot d\vec{s} = V(a) - V(b) = -\Delta V.$$ As a special case, consider a uniform electric field $\vec{E}$. The integral becomes $E\Delta s$, where $\Delta s$ is the displacement in the direction of $E$. If we let $\Delta s = d$, we get $$E = -\frac{\Delta V}{d}.$$
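The path independence used above is easy to check numerically. The following Python sketch (my own illustration, not part of the original answer) integrates $\int \vec{E}\cdot d\vec{s}$ for a unit point charge ($kq = 1$) along two different routes between the same endpoints and compares both with $V(a) - V(b) = 1/r_a - 1/r_b$:

```python
import numpy as np

def E(p):
    # electrostatic field of a unit point charge at the origin (k*q = 1):
    # E = r_hat / r^2 = r_vec / r^3
    r = np.linalg.norm(p)
    return p / r**3

def line_integral(a, b, n=20000):
    # midpoint-rule approximation of the integral of E . ds
    # along the straight segment from a to b
    a, b = np.asarray(a, float), np.asarray(b, float)
    ts = (np.arange(n) + 0.5) / n
    dp = (b - a) / n
    return sum(E(a + t*(b - a)) @ dp for t in ts)

# direct path a -> b versus a detour a -> c -> b
a, b, c = (1.0, 0.0), (0.0, 2.0), (2.0, 2.0)
direct = line_integral(a, b)
detour = line_integral(a, c) + line_integral(c, b)
# both should equal V(a) - V(b) = 1/1 - 1/2 = 0.5
```

Both routes agree with $1/r_a - 1/r_b = 0.5$ to several decimal places, confirming that the work depends only on the endpoints, as the derivation requires.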
Greetings helpful gentlemen and gentle women! The following is a problem I ran into during my internship. The background of my problem is biological/chemical in nature, so I’ll try to bother you as little as possible with the details. Thanks in advance for any help given. I have a point p0 on a 2-dimensional surface. The next point (p1) is at distance L1 from p0. The next point p2 is at distance L2 from p1. This continues, up to p28. I would like to find an equation to calculate the expected distance between the origin and each subsequent point (d1, d2, etc.), as a direct function of the number of steps (s) between the origin and that point. The step length L between each two subsequent points is assumed to be variable, but with a limited maximum value Lmax. L = x*Lmax, where x is a random (or at least poorly understood) variable between 0 and 1. My dear friend Pythagoras: [tex]A^2+B^2=C^2[/tex] Also: [tex]\sin\left(\alpha\right)^2+\cos\left(\alpha\right)^2=1[/tex] The distance d1 (between p0 and p1) is obviously trivial: it is L1. The angle between d1 and the line p1-p2 is called a2. d2 is then calculated as: [tex]d_2^2=(L_1+L_2\cos\left(a_2\right))^2+(L_2\sin\left(a_2\right))^2 = L_1^2+L_2^2+2 L_1 L_2\cos\left(a_2\right)[/tex] d3 is calculated in a very similar manner, with a3 the angle between d2 and the line p2-p3: [tex]d_3^2=(d_2+L_3\cos\left(a_3\right))^2+(L_3\sin\left(a_3\right))^2 = d_2^2+L_3^2+2 d_2 L_3\cos\left(a_3\right)[/tex] In general, it can be said that: [tex]d_s^2=d_{s-1}^2+L_s^2+2 d_{s-1} L_s\cos\left(a_s\right)[/tex] Now, this equation requires that I always calculate d(s-1) before I can calculate ds. I would like to find a direct equation, if that is at all possible. I am aware that for any single iteration of this process, one needs to know all previous steps, but I would say that there is some expected value if the process is repeated often enough. To establish this, I used Excel to simulate this process 12,000 times, and noted the average distance between the origin and ps after s steps.
What resulted was a clear correlation, with very little variability (due to the large number of iterations). This to me seems like a clear indicator that there is indeed some expected value for ds, but I can’t seem to nail down exactly how I could calculate it. Again, thanks in advance for any help provided. You would really help me out if you could point me in the right direction. Regards, DaanV
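For reference, a simulation like the one described in the post is easy to reproduce outside Excel. The Python sketch below is my own illustration; it assumes the step directions are independent and uniformly distributed and the step lengths uniform on (0, Lmax), which may not hold for the real biological data. Under those assumptions the cosine cross terms average to zero, so E[d_s^2] = s*E[L^2] = s*Lmax^2/3, i.e. the typical distance grows like the square root of s:

```python
import random, math

def walk_distance(steps, lmax=1.0):
    # one realization of the walk: each step has a random direction
    # and a random length uniform in (0, lmax)
    x = y = 0.0
    for _ in range(steps):
        angle = random.uniform(0.0, 2.0*math.pi)
        L = random.uniform(0.0, lmax)
        x += L*math.cos(angle)
        y += L*math.sin(angle)
    return math.hypot(x, y)

def mean_square_distance(steps, trials=20000, lmax=1.0):
    # Monte Carlo estimate of E[d_s^2]; theory gives steps*lmax**2/3
    return sum(walk_distance(steps, lmax)**2 for _ in range(trials)) / trials
```

So under these assumptions there is a direct formula for the root-mean-square distance, d_RMS = Lmax*sqrt(s/3); the mean distance E[d_s] is somewhat smaller (for large s it approaches sqrt(pi/4), about 0.89, times the RMS value).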
Equivalence of Definitions of Factorial Theorem The factorial of $n$ is defined inductively as: $n! = \begin{cases} 1 & : n = 0 \\ n \left({n - 1}\right)! & : n > 0 \end{cases}$ The factorial of $n$ is also defined as: $\displaystyle n! = \prod_{k \mathop = 1}^n k = 1 \times 2 \times \cdots \times \paren {n - 1} \times n$ where $\displaystyle \prod$ denotes product notation. The two definitions are equivalent. Proof The proof proceeds by induction. For all $n \in \Z_{\ge 0}$, let $P \left({n}\right)$ be the proposition: $\begin{cases} 1 & : n = 0 \\ n \left({n - 1}\right)! & : n > 0 \end{cases} \equiv \displaystyle \prod_{k \mathop = 1}^n k$ Basis for the Induction $P \left({0}\right)$ is the case: $\displaystyle \prod_{k \mathop = 1}^0 k = 1$ which holds by definition of vacuous product. Thus $P \left({0}\right)$ is seen to hold. This is the basis for the induction. Induction Hypothesis Now it needs to be shown that, if $P \left({r}\right)$ is true, where $r \ge 0$, then it logically follows that $P \left({r + 1}\right)$ is true. So this is the induction hypothesis: $\begin{cases} 1 & : r = 0 \\ r \left({r - 1}\right)! & : r > 0 \end{cases} \equiv \displaystyle \prod_{k \mathop = 1}^r k$ from which it is to be shown that: $\begin{cases} 1 & : r = 0 \\ \left({r + 1}\right) r! & : r > 0 \end{cases} \equiv \displaystyle \prod_{k \mathop = 1}^{r + 1} k$ Induction Step This is the induction step: $\displaystyle \prod_{k \mathop = 1}^{r + 1} k = \left({r + 1}\right) \prod_{k \mathop = 1}^r k = \left({r + 1}\right) r!$ where the last equality holds by the induction hypothesis. So $P \left({r}\right) \implies P \left({r + 1}\right)$ and the result follows by the Principle of Mathematical Induction. Therefore: $\begin{cases} 1 & : n = 0 \\ n \left({n - 1}\right)! & : n > 0 \end{cases} \equiv \displaystyle \prod_{k \mathop = 1}^n k$ $\blacksquare$
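The two definitions can also be checked against each other directly; a quick Python sketch (illustrative only, not part of the proof):

```python
from math import prod

def factorial_inductive(n):
    # inductive definition: 0! = 1, n! = n * (n - 1)!
    return 1 if n == 0 else n * factorial_inductive(n - 1)

def factorial_product(n):
    # product definition: n! = product of k for k = 1, ..., n
    # (for n = 0 this is the empty, i.e. vacuous, product, which is 1)
    return prod(range(1, n + 1))
```

Both agree for every n, e.g. 5! = 120 under either definition.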
The Nonparaxial Gaussian Beam Formula for Simulating Wave Optics In a previous blog post, we discussed the paraxial Gaussian beam formula. Today, we’ll talk about a more accurate formulation for Gaussian beams, available as of version 5.3a of the COMSOL® software. This formulation, based on a plane wave expansion, can handle nonparaxial Gaussian beams more accurately than the conventional paraxial formulation. Paraxiality of Gaussian Beams The well-known Gaussian beam formula is only valid for paraxial Gaussian beams. Paraxial means that the beam mainly propagates along the optical axis. There are several papers that discuss paraxiality in a quantitative sense (see Ref. 1). Roughly speaking, if the beam waist size is close to the wavelength, the beam converges to the focus at steeper angles, the paraxiality assumption breaks down, and the formulation is no longer accurate. To alleviate this problem and to provide you with a more general and accurate formulation for Gaussian beams, we introduced a nonparaxial Gaussian beam formulation. In the user interface, this is referred to as Plane wave expansion. Angular Spectrum of Plane Waves Let’s briefly review the paraxial Gaussian beam formula in 2D (for the sake of better visuals and understanding). We start from Maxwell’s equations assuming time-harmonic fields, from which we get the following Helmholtz equation for the out-of-plane electric field with the wavelength \lambda for our choice of polarization: where k=2 \pi/\lambda. The angular spectrum of plane waves is based on the following simple fact: an arbitrary field that satisfies the above Helmholtz equation can be expressed as the following plane wave expansion: where A(k_x,k_y) is an arbitrary function. The integration path is a circle of radius k for real k_x and k_y. (For complex k_x and k_y, the integration domain extends to a complex plane.) The function A(k_x,k_y) is called the angular spectrum function.
One can prove that this E_z satisfies the Helmholtz equation by direct substitution. Now that we know that this formulation always gives exact solutions to the Helmholtz equation, let’s try to understand it visually. From the constraint k_x^2+k_y^2=k^2, we can set k_x=k \cos \varphi and k_y=k \sin \varphi and rewrite the above equation as: The meaning of the above formula is that it constructs a wave as a sum, or integral, of many plane waves propagating in various directions, all with the same wave number k. This is shown in the following figure. Visualization of the angular spectrum of plane waves. When actually solving a problem using this formula, all you have to do is find the angular spectrum function A(\varphi) that satisfies the boundary conditions. By assuming that the profile of the transverse field (perpendicular to the propagating direction, i.e., the optical axis) is also a Gaussian shape (see Ref. 4), one can derive that A(\varphi) = \exp(-\varphi^2 / \varphi_0^2), where \varphi_0 is the spectrum width. By some more mathematical manipulation, we get a relationship between the spectrum width \varphi_0 and the beam waist radius w_0. For a slow Gaussian beam, the angular spectrum is narrow; for a fast (more focused) Gaussian beam, the angular spectrum is wider. A plane wave is the extreme case where the angular spectrum function is a delta function. This was a quick summary of the underlying theory for nonparaxial Gaussian beams. To recap what we have shown so far, let’s rewrite the formula once more by using polar coordinates, x=r \cos \theta, \ y = r \sin \theta: This is the formulation that Born and Wolf (Ref. 2) use in their book. The 3D formula is more complicated and looks different due to polarization, but the basic idea is the same, as seen in the references mentioned above. It can also look different depending on whether or not you consider evanescent waves.
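To make the plane wave expansion concrete, here is a small Python sketch (my own illustration, with made-up parameter values, not COMSOL code) that builds a 2D beam by superposing plane waves with a Gaussian angular spectrum A(\varphi) = \exp(-\varphi^2/\varphi_0^2). Because each plane wave in the discretized sum satisfies the Helmholtz equation exactly, so does the superposition:

```python
import numpy as np

wavelength = 1.0                       # made-up unit wavelength
k = 2*np.pi/wavelength                 # wave number
phi0 = 0.8                             # assumed angular spectrum width, radians
phis = np.linspace(-np.pi/2, np.pi/2, 801)
dphi = phis[1] - phis[0]
A = np.exp(-(phis/phi0)**2)            # Gaussian angular spectrum A(phi)

def Ez(x, y):
    # discretized integral over plane waves, all with the same wave
    # number k, propagating at angles phi to the optical (x) axis
    return np.sum(A*np.exp(1j*k*(x*np.cos(phis) + y*np.sin(phis))))*dphi
```

A finite-difference check of the Helmholtz operator at any sample point gives a residual at the discretization-error level, and the field amplitude peaks at the focus (the origin), as expected for a focused beam.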
The Plane Wave Expansion method used in the Wave Optics Module and the RF Module, although based on the angular spectrum theory, is adapted for numerical computations. Plane Wave Expansion: Settings and Results Let’s compare the new feature, Plane wave expansion, with the previously available feature, Paraxial approximation. The Settings window covering both methods is shown below. The Plane Wave Expansion feature settings. With the new feature, you have two options if the Automatic setting doesn’t give you a satisfactory approximation: Wave vector count Maximum transverse wave number The first option determines the number of discretization levels, depending on how finely you want to represent the Gaussian beam. The more plane waves, the finer the representation. The second option is related to the integral bound in the previous equation, i.e., -\pi/2 \le \varphi \le \pi/2. This bound can be at most \pi/2, for the smallest possible spot size, and smaller for slower beams, depending on how fast the Gaussian beam is. You need more steeply angled plane waves, with a larger transverse wave number, to represent faster (more focused) beams. The following results compare the two formulations for the case where the spot radius is \lambda/2, which is considerably nonparaxial. As in the previous blog post, the simulation is done with the Scattered Field formulation and the domain is surrounded by a perfectly matched layer (PML). This way, the scattered field represents the error from the exact Helmholtz solution. The left images below show the new feature, while the images on the right show the paraxial approximation. The top images show the norm of the computed Gaussian beam background field, ewfd.Ebz, while the bottom images show the scattered field norm, ewfd.relEz, which represents the error from the exact Helmholtz solution. Clearly, the error from the Helmholtz solution is greatly reduced in the nonparaxial method.
Concluding Remarks We have discussed the theory and results for an approximation method for nonparaxial Gaussian beams using the new plane wave expansion option. Remember that this formulation is extremely accurate, but it is still an approximation under certain assumptions. First, we have made an assumption about the field shape in the focal plane. Second, we assume that the evanescent field is zero. If you are interested in the field coupling to some nanostructure near the focal region of a fast Gaussian beam, you may need to calculate the evanescent field. Next Step Learn more about the formulations and features available for modeling optically large problems in the COMSOL® software by clicking the button below: Note: This functionality can also be found in the RF Module. References 1. P. Vaveliuk, “Limits of the paraxial approximation in laser beams”, Optics Letters, vol. 32, no. 8, 2007. 2. M. Born and E. Wolf, Principles of Optics, 7th ed., Cambridge University Press, 1999. 3. J. W. Goodman, Introduction to Fourier Optics. 4. G. P. Agrawal and M. Lax, “Free-space wave propagation beyond the paraxial approximation”, Phys. Rev. A, vol. 27, pp. 1693–1695, 1983.
SageMath Revision as of 16:51, 21 January 2016 SageMath (formerly Sage) is a program for numerical and symbolic mathematical computation that uses Python as its main language. It is meant to provide an alternative to commercial programs such as Maple, Matlab, and Mathematica. SageMath provides support for the following: Calculus: using Maxima and SymPy. Linear Algebra: using the GSL, SciPy and NumPy. Statistics: using R (through RPy) and SciPy. Graphs: using matplotlib. An interactive shell using IPython. Access to Python modules such as PIL, SQLAlchemy, etc. Installation The sagemath package contains the command-line version; a separate documentation package provides the HTML documentation and inline help from the command line. The sage-notebook package includes the browser-based notebook interface. The sagemath package has a number of optional dependencies for various features that will be disabled if the needed packages are missing. Usage SageMath mainly uses Python as a scripting language with a few modifications to make it better suited for mathematical computations. SageMath command-line SageMath can be started from the command-line: $ sage For information on the SageMath command-line see this page. Note, however, that it is not very comfortable for some uses such as plotting. When you try to plot something, for example: sage: plot(sin,(x,0,10)) SageMath opens a browser window with the Sage Notebook.
Sage Notebook A better-suited interface for advanced usage of SageMath is the Notebook. To start the Notebook server from the command-line, execute: $ sage -n The notebook will be accessible in the browser from http://localhost:8080 and will require you to log in. However, if you only run the server for personal use, and not across the internet, the login can be an annoyance. You can instead start the Notebook without requiring login, and have it automatically pop up in a browser, with the following command: $ sage -c "notebook(automatic_login=True)" Jupyter Notebook SageMath also provides a kernel for the Jupyter notebook. To use it, install the required packages, launch the notebook with the command $ jupyter notebook and choose "SageMath" in the drop-down "New..." menu. The SageMath Jupyter notebook supports LaTeX output via the %display latex command, and 3D plots if the jmol package is installed. Cantor Cantor is an application included in the KDE Edu Project. It acts as a front-end for various mathematical applications such as Maxima, SageMath, Octave, Scilab, etc. See the Cantor page on the Sage wiki for more information on how to use it with SageMath. Cantor is available in the official repositories, either as a standalone package or as part of the KDE application groups. Documentation For local documentation, one can compile it into multiple formats such as HTML or PDF. To build the whole SageMath reference, execute the following command (as root): # sage --docbuild reference html This builds the HTML documentation for the whole reference tree (it may take longer than an hour). An option is to build a smaller part of the documentation tree, but you would need to know which part you want. Until then, you might consider just browsing the online reference. For a list of documents, see sage --docbuild --documents, and for a list of supported formats, see sage --docbuild --formats.
Optional additions SageTeX If you have installed TeX Live on your system, you may be interested in using SageTeX, a package that makes the inclusion of SageMath code in LaTeX files possible. TeX Live is made aware of SageTeX automatically, so you can start using it straight away. As a simple example, here is how you include a Sage 2D plot in your TeX document (assuming you use pdflatex): include the sagetex package in the preamble of your document with the usual \usepackage{sagetex} create a sagesilent environment in which you insert your code: \begin{sagesilent} dob(x) = sqrt(x^2 - 1) / (x * arctan(sqrt(x^2 - 1))) dpr(x) = sqrt(x^2 - 1) / (x * log( x + sqrt(x^2 - 1))) p1 = plot(dob,(x, 1, 10), color='blue') p2 = plot(dpr,(x, 1, 10), color='red') ptot = p1 + p2 ptot.axes_labels(['$\\xi$','$\\frac{R_h}{\\max(a,b)}$']) \end{sagesilent} create the plot, e.g. inside a float environment: \begin{figure} \begin{center} \sageplot[width=\linewidth]{ptot} \end{center} \end{figure} compile your document with the following procedure: $ pdflatex <doc.tex> $ sage <doc.sage> $ pdflatex <doc.tex> Then you can have a look at your output document. The full documentation of SageTeX is available on CTAN. Install Sage package If you installed sagemath from the official repositories, it is not possible to install sage packages using the sage option sage -i packagename. Instead, you should install the required packages system-wide. For example, if you need jmol (for 3D plots): $ sudo pacman -S jmol An alternative would be to have a local installation of sagemath and to manage optional packages manually. Troubleshooting TeX Live does not recognize SageTeX If your TeX Live installation does not find the SageTeX package, you can try the following procedure (as root, or use a local folder): Copy the files to the texmf directory: # cp /opt/sage/local/share/texmf/tex/* /usr/share/texmf/tex/ Refresh TeX Live: # texhash /usr/share/texmf/ texhash: Updating /usr/share/texmf/.//ls-R... texhash: Done.
Starting Sage Notebook Server throws an ImportError The Sage Notebook Server is in an extra package. So, if you get an ImportError when launching % sage --notebook ┌────────────────────────────────────────────────────────────────────┐ │ Sage Version 6.4.1, Release Date: 2014-11-23 │ │ Type "notebook()" for the browser-based notebook interface. │ │ Type "help()" for help. │ └────────────────────────────────────────────────────────────────────┘ Please wait while the Sage Notebook server starts... Traceback (most recent call last): File "/usr/bin/sage-notebook", line 180, in <module> launcher(unknown) File "/usr/bin/sage-notebook", line 58, in __init__ from sagenb.notebook.notebook_object import notebook ImportError: No module named sagenb.notebook.notebook_object you most likely do not have the sage-notebook package installed. sage -i doesn't work If you have installed Sage from the official repositories, then you have to install your additional packages system-wide. See Install Sage package. 3D plot fails in notebook If you get the following error while trying to plot a 3D object: /usr/lib/python2.7/site-packages/sage/repl/rich_output/display_manager.py:570: RichReprWarning: Exception in _rich_repr_ while displaying object: Jmol failed to create file '/home/nicolas/.sage/temp/archimede/3188/dir_cCpcph/preview.png', see '/home/nicolas/.sage/temp/archimede/3188/tmp_JVpSqF.txt' for details RichReprWarning, Graphics3d Object then you are probably missing the jmol package. See Install Sage package to install it.
This question already has an answer here: Explanation: Simple Harmonic Motion (9 answers)

How can you check whether a given expression describes simple harmonic motion or not? And how do you calculate the angular frequency of the given equation?

If the equation describing the displacement, velocity, or acceleration contains just a single linear time-dependent sinusoidal term (perhaps with a phase offset; cosine is just sine with a phase shift), then the motion is simple harmonic. Examples of expressions of simple harmonic motion: $$a(t) = -\omega^2 A e^{-i\omega t}\\ v(t) = B \cos(\omega t)\\ x(t) = C \cos(3t+5)+ D \sin(3t + 1) + 1.23$$ (That last one has two terms, but they both have the same frequency. A bit of manipulation will turn it into a single term with a different amplitude and phase.) Examples of motion that is not simple harmonic (the first is still periodic, the second is not): $$x(t) = A\sin(\omega t) + B\sin(3\omega t)\\ x(t) = A\sin(\omega t^2)$$ For a system, if the acceleration $a$ is always proportional to the displacement $x$ from a fixed point, and the acceleration is directed towards that fixed point, then the motion is simple harmonic. In symbols this gives the relationship $a = - \omega^2 x$, where the constant is written as a square $\omega^2$ so that it is manifestly positive. $\omega$ is called the angular frequency and is related to the frequency $f$ and period $T$ of the motion.
$\omega = 2\pi f= \frac{2\pi}{T}$ The equation $a = \ddot x= - \omega^2 x$ is a second-order differential equation and has solutions of the form $x = A \sin \omega t + B \cos \omega t$ or $x = Ce^{i\omega t} + De^{-i\omega t}$, etc., where $A, \,B, \, C, \, D$ are constants which can be determined from the initial conditions. Another clue that a motion might be simple harmonic is that the frequency of the motion does not depend on the amplitude.
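The defining relation $a = -\omega^2 x$ can be checked numerically for a candidate expression. Below is a sketch (plain Python, illustrative values) that differentiates $x(t) = C\cos(3t+5) + D\sin(3t+1)$ twice by central differences and confirms $\ddot x \approx -\omega^2 x$ with $\omega = 3$:

```python
import math

def x(t, C=1.3, D=0.7):
    # candidate motion from the example above (constant offset dropped,
    # i.e. displacement measured from equilibrium); both terms have frequency 3
    return C * math.cos(3 * t + 5) + D * math.sin(3 * t + 1)

def second_derivative(f, t, h=1e-4):
    # central-difference approximation to the acceleration f''(t)
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

omega = 3.0
for t in [0.0, 0.5, 1.7, 4.2]:
    # SHM check: acceleration proportional to displacement, a = -omega^2 x
    assert abs(second_derivative(x, t) - (-omega**2) * x(t)) < 1e-3
```

A term like $A\sin(\omega t^2)$ would fail this check, since its second derivative is not a constant negative multiple of the displacement.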
It is proved that $\Pi^1_1$-indescribability in $\mathcal{P}_{\kappa}\lambda$ can be characterized by combinatorial properties without taking care of the cofinality of $\lambda$. We extend Carr's theorem, proving that the hypothesis that $\kappa$ is $2^{\lambda^{<\kappa}}$-Shelah is rather stronger than that $\kappa$ is $\lambda$-supercompact. We investigate the partition property of $\mathcal{P}_{\kappa}\lambda$. The main results of this paper are as follows: (1) If $\lambda$ is the least cardinal greater than $\kappa$ such that $\mathcal{P}_{\kappa}\lambda$ carries a $(\lambda^{\kappa}, 2)$-distributive normal ideal without the partition property, then $\lambda$ is $\Pi^1_n$-indescribable for all $n < \omega$ but not $\Pi^2_1$-indescribable. (2) If $\mathrm{cf}(\lambda) \geq \kappa$, then every ineffable subset of $\mathcal{P}_{\kappa}\lambda$ has the partition property. (3) If $\mathrm{cf}(\lambda) \geq \kappa$, then the completely ineffable ideal over $\mathcal{P}_{\kappa}\lambda$ has the partition property. Legal Knowledge Engineering is a new topic of investigation in Artificial Intelligence. This paper discusses, in a summarized way, some relevant problems related to this new area. Within Normative Law Theory, one question that arises naturally is that of contradiction, for example: articles conflicting with other articles inside the same code, codes conflicting with codes, codes conflicting with jurisprudence, and in general the treatment of conflicting propositions in Normative Law Theory. This paper suggests treating inconsistencies directly in Legal Knowledge Engineering; this engineering has as its underlying logic a paraconsistent annotated deontic logic. There are three main approaches in Legal Knowledge Engineering, based on: cases, rules, and logic. In this paper, we consider the approach based on logic, namely a paraconsistent annotated deontic logic. Based on this logic, a new proposal is established, called Paraconsistent Legal Knowledge Engineering.
For this proposal, a meta-interpreter is suggested to support the deontic operators as well as inconsistency, entitled in this paper Paralog D, which can be used as a basis to handle the issues discussed. In Volume I of his Systematic Theology, Paul Tillich says, 'Being precedes nonbeing in ontological validity, as the word "nonbeing" itself indicates'. He also says elsewhere, 'Being "embraces" itself and nonbeing', and 'Nonbeing is dependent on the being it negates. "Dependent" points first of all to the ontological priority of being over nonbeing'. Tillich makes these statements in connection with a tendency among some Christian thinkers to take God as Being itself. The same understanding of the relation of being and non-being can be discerned in major strands of Greek philosophy through the ideas of to on and me on. Although Greek philosophy and the Christian movement have different starting points in time, in geographical locale, and in conceptual orientation, Tillich's statements demonstrate the manner in which the two strands have, to a significant degree, merged, and his comments reflect a basic understanding of being and nonbeing in the West. In this paper we present an overview of Professor Newton C. A. da Costa's work in logic, emphasizing the main results obtained by him in the several areas of his research activity. The text furnishes a detailed bibliographic reference of his works, which are listed in the last section. The dynamics of complex systems is often hierarchically organized on different time scales. To understand the physics of such hierarchy, Brownian motion of a particle moving through a fluctuating medium with slowly varying temperature is studied here as an analytically tractable example, and a kinetic theory is formulated for describing the states of the particle. What is peculiar here is that the (inverse) temperature is treated as a dynamical variable.
A dynamical hierarchy is introduced in conformity with the adiabatic scheme. Then, a new analytical method is developed to show how the Fokker–Planck equation admits as a stationary solution the Maxwellian distribution modulated by the temperature fluctuations, the distribution of which turns out to be determined by the drift term. A careful comment is also made on so-called superstatistics. Wegner's method of flow equations offers a useful tool for diagonalizing a given Hamiltonian and is widely used in various branches of quantum physics. Here, generalizing this method, a condition is derived under which the corresponding flow of a quantum state becomes geodesic in a submanifold of the projective Hilbert space, independently of specific initial conditions. This implies the geometric optimality of the present method as an algorithm for generating stationary states. The result is illustrated by analyzing some physical examples. Jaśkowski [3] presented a new propositional calculus, labeled "discussive propositional calculus", to serve as an underlying basis for inconsistent but non-trivial theories. This system was later extended to lower and higher order predicate calculus. Jaśkowski's system of discussive or discursive propositional calculus can actually be extended to predicate calculus in at least two ways. We intend later to use this calculus as a basis for a discussive theory of sets. One way is that studied by Da Costa and Dubikajtis. Another is developed in this paper as a solution to a problem formulated by Da Costa. In this work we study a first-order discussive predicate calculus J∗∗. The paper consists of three parts. In the first part we introduce the calculus J∗∗ and, following Prof. D. Makinson's suggestion, we show that it is not identical with the predicate calculus [2] of Da Costa and Dubikajtis. An axiomatization of J∗∗ is presented.
In the second part, we introduce new discussive connectives and study some of their properties. We observe that the usual Kripke semantics can be adapted to the calculus J∗∗.
In the paper A Duality Web in 2+1 Dimensions and Condensed Matter Physics by Seiberg, Senthil, Wang, and Witten, they studied the particle-vortex dualities in $2+1$ dimensions. On page 20, section 3.1, they considered phase transitions of the $2+1$ dimensional theory $$\mathcal{L}=-\frac{1}{4e^{2}}f_{\mu\nu}f^{\mu\nu}+|D_{b}\phi|^{2}-\frac{1}{4e^{2}}\hat{f}_{\mu\nu}\hat{f}^{\mu\nu}+|D_{\hat{b}}\hat{\phi}|^{2}-V(|\phi|,|\hat{\phi}|)+\frac{1}{2\pi}\epsilon^{\mu\nu\rho}b_{\mu}\partial_{\nu}\hat{b}_{\rho},$$ where $(D_{b})_{\mu}=\partial_{\mu}+ib_{\mu}$, $(D_{\hat{b}})_{\mu}=\partial_{\mu}+i\hat{b}_{\mu}$, $f_{\mu\nu}=\partial_{\mu}b_{\nu}-\partial_{\nu}b_{\mu}$, and $\hat{f}_{\mu\nu}=\partial_{\mu}\hat{b}_{\nu}-\partial_{\nu}\hat{b}_{\mu}$. This theory has two gauge redundancies $U(1)_{b}$ and $U(1)_{\hat{b}}$: $$U(1)_{b}:\,\,\,b_{\mu}(x)\rightarrow b_{\mu}(x)-\partial_{\mu}\lambda(x),\quad \phi(x)\rightarrow e^{-i\lambda(x)}\phi(x)$$ $$U(1)_{\hat{b}}:\,\,\,\hat{b}_{\mu}(x)\rightarrow\hat{b}_{\mu}(x)-\partial_{\mu}\hat{\lambda}(x),\quad \hat{\phi}(x)\rightarrow e^{-i\hat{\lambda}(x)}\hat{\phi}(x)$$ (the BF coupling $b\wedge d\hat{b}$ is invariant up to a total derivative under the above gauge transformations). In addition, there are two generalized global symmetries (introduced by Gaiotto, Kapustin, Seiberg, and Willett) $U(1)_{f}$ and $U(1)_{\hat{f}}$ associated with the conservation of the topological currents $$j=\ast f,\quad\mathrm{and}\quad\hat{j}=\ast\hat{f},$$ where $f=db$ and $\hat{f}=d\hat{b}$ are the field strengths. The conservation of these two topological currents follows trivially from the Bianchi identity. On page 22, the authors studied the consequence of adding a Dirac monopole operator $\mathcal{M}_{\hat{b}}(x)$ of the gauge field $\hat{b}$ to the action.
To be more specific, such an operator would break the conservation of $\hat{j}=\ast d\hat{b}$, and insert a Dirac monopole at the point $x$, which results in $$d\ast\hat{j}=d\hat{f}=2\pi\delta(x)$$ In such a monopole configuration, the gauge field $\hat{b}$ is not globally defined, and the field strength $\hat{f}$ belongs to a non-trivial first Chern class. i.e. $$\int_{S^{2}}\frac{\hat{f}}{2\pi}=1.$$ The authors claimed that adding such a monopole operator into the Lagrangian explicitly breaks the generalized global symmetry $U(1)_{\hat{f}}$. I will explain such an explicit symmetry breaking in the following simpler example. Let's consider the free Maxwell theory in $2+1$ dimensions $$S[A]=-\frac{1}{2}\int F\wedge\ast F$$ where $F=dA$, and $A$ is a $U(1)$-gauge field. This theory has two generalized global symmetries $U(1)_{e}$ and $U(1)_{m}$ associated with the topological currents $$J_{e}=F,\quad\mathrm{and}\quad J_{m}=\ast F.$$ Their conservation follows directly from the EOM and the Bianchi identity $$d\ast J_{e}=d\ast F=0,\quad d\ast J_{m}=dF=0.$$ The Lagrangian can be converted into the dual photon description. First, one imposes the Bianchi identity by hand into the path integral $$\mathcal{Z}=\int\mathcal{D}F\int\mathcal{D}\sigma \exp\left\{i\int\left(-\frac{1}{2}F\wedge\ast F+\sigma dF\right)\right\}$$ where $\sigma$ is an auxiliary field, whose integral produces the Bianchi identity $dF=0$. Integrating out the gauge invariant variable $F$, one obtains the dual theory $$\mathcal{Z}=\int\mathcal{D}\sigma\exp\left\{i\int\frac{1}{2}d\sigma\wedge\ast d\sigma\right\}$$ This theory should be equivalent to the original one, and its only Abelian symmetry is the shift $$U(1):\sigma\rightarrow\sigma+\alpha$$ where $\alpha\in\mathbb{R}$. The corresponding Noether current is $$J=d\sigma.$$ This symmetry should be identified with the global symmetry $U(1)_{e}$ or $U(1)_{m}$, depending on which one of $F$ or $\ast F$ is dualized. 
The vacuum manifold of this theory can be identified with $\mathbb{R}$. Picking one of its vacua, the global symmetry is spontaneously broken. Next, one can add a Dirac monopole operator $\mathcal{M}(x)$ into the above theory. This can be achieved by imposing $$dF=2\pi\delta(x)$$ in the path integral. One has $$\mathcal{Z}=\int\mathcal{D}F\int\mathcal{D}\sigma \exp\left\{i\int\left(-\frac{1}{2}F\wedge\ast F+\sigma(x)(dF-2\pi\delta(x))\right)\right\}$$ Integrating out $F$, one obtains $$\mathcal{Z}=\int\mathcal{D}\sigma\exp\left\{i\int\left(\frac{1}{2}d\sigma\wedge\ast d\sigma-2\pi\sigma(x)\delta(x)\right)\right\}=\int\mathcal{D}\sigma e^{-2\pi i\sigma(0)}e^{\frac{i}{2}\int d\sigma\wedge\ast d\sigma}.$$ Therefore, one can define the monopole operator in the dual photon description by $$\mathcal{M}(x)=e^{-2\pi i\sigma(x)}$$ insert it into the path integral, and write $$\mathcal{Z}=\int\mathcal{D}\sigma\mathcal{M}(0)e^{\frac{i}{2}\int d\sigma\wedge\ast d\sigma}.$$ Under the global $U(1)$ transformation, one has $$\mathcal{M}(x)\rightarrow e^{-2\pi i\alpha}\mathcal{M}(x)$$ where $\alpha\in S^{1}$. Therefore the global $U(1)$ symmetry is broken to $\mathbb{Z}$. On the other hand, I found something strange in David Tong's Lecture Notes on Gauge Theory. In section 8.2, page 377, he claims that for the Abelian-Higgs model, $$S=\int d^{3}x\left(-\frac{1}{4e^{2}}F_{\mu\nu}F^{\mu\nu}+|D_{\mu}\phi|^{2}-m^{2}|\phi|^{2}-\lambda|\phi|^{4}\right)$$ where $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$, the topological symmetry associated with $J=\ast F$ is unbroken in the Higgs phase when the Dirac monopole is present. Can anybody help me understand why the generalized global symmetry in this case is unbroken even when the monopole is present?
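As a small numerical aside (my own check, not from the papers cited), the normalization $\int_{S^{2}}\hat{f}/2\pi=1$ for a unit Dirac monopole can be verified by integrating the standard monopole field strength $\hat{f} = \tfrac{1}{2}\sin\theta\, d\theta\wedge d\phi$ over the sphere:

```python
import math

# Midpoint-rule integration of f_hat = (1/2) sin(theta) dtheta ^ dphi
# over S^2; the phi integral is trivial and contributes a factor of 2*pi.
n = 2000
dtheta = math.pi / n
flux = sum(0.5 * math.sin((i + 0.5) * dtheta) * dtheta * 2 * math.pi
           for i in range(n))

chern = flux / (2 * math.pi)  # first Chern number; should come out close to 1
```

The total flux is $2\pi$, so the first Chern number is $1$, consistent with the quantization condition stated in the question.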
Let me tell you about a fascinating paradox arising in certain infinitary two-player games of perfect information. The paradox, namely, is that there are games for which our judgement of who has a winning strategy or not depends on whether we insist that the players play according to a deterministic computable procedure. In the space of computable play for these games, one player has a winning strategy, but in the full space of all legal play, the other player can ensure a win. The fundamental theorem of finite games, proved in 1913 by Zermelo, is the assertion that in every finite two-player game of perfect information (finite in the sense that every play of the game ends in finitely many moves) one of the players has a winning strategy. This is generalized to the case of open games, games where every win for one of the players occurs at a finite stage, by the Gale-Stewart theorem (1953), which asserts that in every open game, one of the players has a winning strategy. Both of these theorems are easily adapted to the case of games with draws, where the conclusion is that one of the players has a winning strategy or both players have draw-or-better strategies. Let us consider games with a computable game tree, so that we can compute whether or not a move is legal. Let us say that such a game is computably paradoxical if our judgement of who has a winning strategy depends on whether we restrict to computable play or not. So for example, perhaps one player has a winning strategy in the space of all legal play, but the other player has a computable strategy defeating all computable strategies of the opponent. Or perhaps one player has a draw-or-better strategy in the space of all play, but the other player has a computable strategy defeating computable play. Examples of paradoxical games occur in infinite chess.
I described such a paradoxical position in my paper Transfinite game values in infinite chess, giving a computable infinite chess position with the property that both players have drawing strategies in the space of all possible legal play, but in the space of computable play, white has a computable strategy defeating any particular computable strategy for black. For a related non-chess example, let $T$ be a computable subtree of $2^{<\omega}$ having no computable infinite branch, and consider the game in which black simply climbs in this tree as white watches, with black losing whenever he is trapped in a terminal node, but winning if he should climb infinitely. This game is open for white, since if white wins, this is known at a finite stage of play. In the space of all possible play, black has a winning strategy, which is simply to climb the tree along an infinite branch, which exists by König's lemma. But there is no computable strategy to find such a branch, by the assumption on the tree, and so when black plays computably, white will inevitably win. For another example, suppose that we have a computable linear order $\lhd$ on the natural numbers $\newcommand\N{\mathbb{N}}\N$, which is not a well-order, but which has no computable infinite descending sequence. It is a nice exercise in computable model theory to show that such an order exists. Consider the count-down game in this order, with white trying to build a descending sequence and black watching. In the space of all play, white can succeed and therefore has a winning strategy, but since there is no computable descending sequence, white can have no computable winning strategy, and so black will win every computable play.
There are several proofs of open determinacy (and see my MathOverflow post outlining four different proofs of the fundamental theorem of finite games), but one of my favorite proofs of open determinacy uses the concept of transfinite game values, assigning an ordinal to some of the positions in the game tree. Suppose we have an open game between Alice and Bob, where the game is open for Alice. The ordinal values we define for positions in the game tree will measure in a sense the distance Alice is away from winning. Namely, her already-won positions have value $0$, and if it is Alice’s turn to play from a position $p$, then the value of $p$ is $\alpha+1$, if $\alpha$ is minimal such that she can play to a position of value $\alpha$; if it is Bob’s turn to play from $p$, and all the positions to which he can play have value, then the value of $p$ is the supremum of these values. Some positions may be left without value, and we can think of those positions as having value $\infty$, larger than any ordinal. The thing to notice is that if a position has a value, then Alice can always make it go down, and Bob cannot make it go up. So the value-reducing strategy is a winning strategy for Alice, from any position with value, while the value-maintaining strategy is winning for Bob, from any position without a value (maintaining value $\infty$). So the game is determined, depending on whether the initial position has value or not. What is the computable analogue of the ordinal-game-value analysis in the computably paradoxical games? If a game is open for Alice and she has a computable strategy defeating all computable opposing strategies for Bob, but Bob has a non-computable winning strategy, then it cannot be that we can somehow assign computable ordinals to the positions for Alice and have her play the value-reducing strategy, since if those values were actual ordinals, then this would be a full honest winning strategy, even against non-computable play. 
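On a finite game tree the transfinite values reduce to ordinary natural numbers, and the recursion just described can be sketched directly. Here is a toy illustration in Python (the tree and position names are made up for the example, not taken from the post):

```python
import math

INF = math.inf  # stands for "no value": a position from which Bob can win

# A toy game tree, open for Alice. Each position maps to
# (player_to_move, list_of_children); [] marks a terminal node.
tree = {
    "root": ("Alice", ["a", "b"]),
    "a":    ("Bob",   ["a1", "a2"]),
    "b":    ("Bob",   ["b1"]),
    "a1":   ("Alice", []),   # Alice has already won here
    "a2":   ("Alice", ["a21"]),
    "a21":  ("Alice", []),   # Alice has already won here
    "b1":   ("Alice", []),   # terminal, but NOT a win for Alice
}
alice_wins = {"a1", "a21"}

def value(p):
    player, children = tree[p]
    if not children:
        return 0 if p in alice_wins else INF
    vals = [value(c) for c in children]
    if player == "Alice":
        # Alice's position gets value alpha+1, where alpha is the least
        # value she can play to
        return 1 + min(vals)
    # Bob to move: the supremum (here max) of the children's values
    return max(vals)

print(value("root"))  # prints 2
```

The root gets value 2: Alice plays to "a" (value 1), and whatever Bob does she reaches a won position in at most one more of her moves; playing to "b" instead would land her in a valueless (infinite-value) position.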
Nevertheless, I claim that the ordinal-game-value analysis does admit a computable analogue, in the following theorem. This came out of a recent discussion with Noah Schweber and Russell Miller during Noah's visit to the CUNY Graduate Center. Let us define that a computable open game is an open game whose game tree is computable, so that we can tell whether a given move is legal from a given position (this is a bit weaker than being able to compute the entire set of possible moves from a position, even when this is finite). And let us define that an effective ordinal is a computable relation $\lhd$ on $\N$ for which there is no computable infinite descending sequence. Every computable ordinal is also an effective ordinal, but as we mentioned earlier, there are non-well-ordered effective ordinals. Let us call them computable pseudo-ordinals. Theorem. The following are equivalent for any computable game, open for White. 1. White has a computable strategy defeating any computable play by Black. 2. There is an effective game-value assignment for White into an effective ordinal $\lhd$, giving the initial position a value. That is, there is a computable assignment of some positions of the game, including the first position, to values in the field of $\lhd$, such that from any valued position with White to play, she can play so as to reduce value, and with Black to play, he cannot increase the value. Proof. ($2\to 1$) Given the computable values into an effective ordinal, the value-reducing strategy for White is a computable strategy. If Black plays computably, then together they compute a descending sequence in the $\lhd$ order. Since there is no computable infinite descending sequence, it must be that the values hit zero and the game ends with a win for White. So White has a computable strategy defeating any computable play by Black. ($1\to 2$) Conversely, suppose that White has a computable strategy $\sigma$ defeating any computable play by Black.
Let $\tau$ be the subtree of the game tree arising by insisting that White follow the strategy $\sigma$, and view this as a tree on $\N$, a subtree of $\N^{<\omega}$. Imagine the tree growing downwards, and let $\lhd$ be the Kleene-Brouwer order on this tree, which is the lexical order on incompatible positions, and otherwise longer positions are lower. This is a computable linear order on the tree. Since $\sigma$ is computably winning for White, the open player, it follows that every computable descending sequence in $\tau$ eventually reaches a terminal node. From this, it follows that there is no computable infinite descending sequence with respect to $\lhd$, and so this is an effective ordinal. We may now map every node in $\tau$, which includes the initial node, to itself in the $\lhd$ order. This is a game-value assignment, since on White’s turn, the value goes down, and it doesn’t go up on Black’s turn. QED Corollary. A computable open game is computably paradoxical if and only if it admits an effective game value assignment for the open player, but only with computable pseudo-ordinals and not with computable ordinals. Proof. If there is an effective game value assignment for the open player, then the value-reducing strategy arising from that assignment is a computable strategy defeating any computable strategy for the opponent. Conversely, if the game is paradoxical, there can be no such ordinal-assignment where the values are actually well-ordered, or else that strategy would work against all play by the opponent. QED Let me make a few additional observations about these paradoxical games. Theorem. In any open game, if the closed player has a strategy defeating all computable opposing strategies, then in fact this is a winning strategy also against non-computable play. Proof. 
If the closed player has a strategy $\sigma$ defeating all computable strategies of the opponent, then in fact it defeats all strategies of the opponent, since any winning play by the open player against $\sigma$ wins in finitely many moves, and therefore there is a computable strategy giving rise to the same play. QED Corollary. If an open game is computably paradoxical, it must be the open player who wins in the space of computable play and the closed player who wins in the space of all play. Proof. The theorem shows that if the closed player wins in the space of computable play, then that player in fact wins in the space of all play. QED Corollary. There are no computably paradoxical clopen games. Proof. If the game is clopen, then both players are closed, but we just argued that any computable strategy for a closed player winning against all computable play is also winning against all play. QED
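The Kleene-Brouwer order used in the ($1\to 2$) direction can be made concrete: for finite sequences $s$ and $t$, we set $s \lhd t$ iff $s$ properly extends $t$, or $s$ has the smaller entry at the first coordinate where they disagree. A sketch in Python (my own illustration, not from the post):

```python
def kb_less(s, t):
    """Kleene-Brouwer order on finite sequences (tuples of naturals).

    s < t iff s properly extends t (longer positions are lower),
    or s is to the left of t at the first coordinate where they differ.
    """
    for a, b in zip(s, t):
        if a != b:
            return a < b          # lexical comparison at first disagreement
    return len(s) > len(t)        # compatible: the longer sequence is lower

# longer positions are lower:
assert kb_less((0, 1, 2), (0, 1))
# lexical order on incompatible positions:
assert kb_less((0, 0), (0, 1))
assert not kb_less((1,), (0, 5, 5))
```

A computable infinite descending sequence in this order would either keep extending positions or keep moving left; on the tree of play against $\sigma$, descending computably therefore forces eventual arrival at a terminal node, which is the key point of the proof.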
Learning Objectives Systems manipulate signals. There are a few simple systems which will perform simple functions upon signals. Examples include amplification (or attenuation), time reversal, delay, and differentiation/integration. Systems manipulate signals, creating output signals derived from their inputs. Why the following are categorized as "simple" will only become evident towards the end of the course. Sources Sources produce signals without having an input. We like to think of these as having controllable parameters, like amplitude and frequency. Examples would be oscillators that produce periodic signals like sinusoids and square waves, and noise generators that yield signals with erratic waveforms (more about noise subsequently). Simply writing an expression for the signals they produce specifies sources. A sine wave generator might be specified by: \[y(t) = A\sin (2\pi f_{0}t)u(t)\] The above equation says that the source was turned on at \(t = 0\) to produce a sinusoid of amplitude \(A\) and frequency \(f_0\). Amplifiers An amplifier multiplies its input by a constant known as the amplifier gain. \[y(t) = Gx(t)\] Fig. 2.6.1 An amplifier The gain can be positive or negative (if negative, we would say that the amplifier inverts its input) and its magnitude can be greater than one or less than one. If less than one, the amplifier actually attenuates. A real-world example of an amplifier is your home stereo. You control the gain by turning the volume control. Delay A system serves as a time delay when the output signal equals the input signal at an earlier time. \[y(t) = x(t-\tau )\] Fig. 2.6.2 A delay Here, \(\tau\) is the delay. The way to understand this system is to focus on the time origin: the output at time \(t = \tau\) equals the input at time \(t = 0\). Thus, if the delay is positive, the output emerges later than the input, and plotting the output amounts to shifting the input plot to the right. The delay can be negative, in which case we say the system advances its input.
Such systems are difficult to build (they would have to produce signal values derived from what the input will be), but we will have occasion to advance signals in time. Time Reversal Here, the output signal equals the input signal flipped about the time origin. \[y(t) = x(-t)\] Fig. 2.6.3 A time reversal system Again, such systems are difficult to build, but the notion of time reversal occurs frequently in communications systems. Exercise \(\PageIndex{1}\) Mentioned earlier was the issue of whether the ordering of systems mattered. In other words, if we have two systems in cascade, does the output depend on which comes first? Determine if the ordering matters for the cascade of an amplifier and a delay, and for the cascade of a time-reversal system and a delay. Solution In the first case, order does not matter; in the second it does. Case 1: Amplify then delay, or delay then amplify: \[y(t) = Gx(t-\tau )\] The way we apply the gain and the delay gives the same result either way. Case 2: Time-reverse then delay: \[y(t) = x\left ( - (t-\tau )\right ) = x(-t+\tau )\] Delay then time-reverse: \[y(t) = x\left ( (-t)-\tau \right )\] Derivative Systems and Integrators Systems that perform calculus-like operations on their inputs can produce waveforms significantly different from those present in the input. Derivative systems operate in a straightforward way: a first-derivative system would have the input-output relationship \[y(t) = \frac{\mathrm{d} }{\mathrm{d} t}x(t)\] Integral systems have the complication that the integral's limits must be defined. It is a signal-theory convention that the elementary integral operation have a lower limit of \(-\infty\), and that the value of all signals at \(t = -\infty\) equals zero. A simple integrator would have the input-output relation: \[y(t) = \int_{-\infty }^{t}x(\alpha )d\alpha\] Linear Systems Linear systems are a class of systems rather than having a specific input-output relation.
Linear systems form the foundation of system theory and are the most important class of systems in communications. They have the property that when the input is expressed as a weighted sum of component signals, the output equals the same weighted sum of the outputs produced by each component. When S(•) is linear, \[S\left ( G_{1}x_{1}(t)+ G_{2}x_{2}(t) \right ) = G_{1}S(x_{1}(t))+ G_{2}S(x_{2}(t))\] for all choices of signals and gains. This general input-output relation property can be manipulated to indicate specific properties shared by all linear systems. \[S(Gx(t)) = GS(x(t))\] The colloquialism summarizing this property is "Double the input, you double the output." Note that this property is consistent with alternate ways of expressing gain changes: since 2x(t) also equals x(t)+x(t), the linear system definition provides the same output no matter which of these is used to express a given signal. \[S(0) = 0\] If the input is identically zero for all time, the output of a linear system must be zero. This property follows from the simple derivation: \[S(0) = S(x(t)-x(t)) = S(x(t)) - S(x(t)) = 0\] Just why linear systems are so important is related not only to their properties, which are divulged throughout this course, but also to the fact that they lend themselves to relatively simple mathematical analysis. Said another way, "They're the only systems we thoroughly understand!" We can find the output of any linear system to a complicated input by decomposing the input into simple signals. The equation above says that when a system is linear, its output to a decomposed input is the sum of the outputs to each input. For example, if \[x(t) = e^{-t} + \sin (2\pi f_{0}t)\] then the output S(x(t)) of any linear system equals \[y(t) = S(e^{-t}) + S(\sin (2\pi f_{0}t))\] Time-Invariant Systems Systems that don't change their input-output relation with time are said to be time-invariant.
The mathematical way of stating this property is to use the signal delay concept described in Simple Systems above. \[(y(t) = S(x(t))) \Rightarrow (y(t-\tau ) = S(x(t-\tau )))\] If you delay (or advance) the input, the output is similarly delayed (advanced). Thus, a time-invariant system responds to an input you may supply tomorrow the same way it responds to the same input applied today; today's output is merely delayed to occur tomorrow. The collection of linear, time-invariant systems is the most thoroughly understood class of systems. Much of the signal processing and system theory discussed here concentrates on such systems. For example, electric circuits are, for the most part, linear and time-invariant. Nonlinear ones abound, but characterizing them so that you can predict their behavior for any input remains an unsolved problem.
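These definitions are easy to check on sampled signals. The sketch below (plain Python, my own illustration, not from the text) verifies that an amplifier and a delay commute, that a delay and a time-reversal do not, and that the amplifier is linear while a squaring system is not:

```python
import math

G, tau = 2.0, 1.5

def amplify(x):
    return lambda t: G * x(t)      # y(t) = G x(t)

def delay(x):
    return lambda t: x(t - tau)    # y(t) = x(t - tau)

def time_reverse(x):
    return lambda t: x(-t)         # y(t) = x(-t)

def x(t):
    return t**3 - 2 * t            # an arbitrary test signal

def x2(t):
    return math.sin(t)             # a second test signal

ts = [-2.0, -0.5, 0.0, 1.0, 3.0]

# amplifier and delay commute: both orders give G x(t - tau)
assert all(amplify(delay(x))(t) == delay(amplify(x))(t) for t in ts)

# delay and time-reversal do not: x(-t + tau) vs x(-t - tau)
assert any(delay(time_reverse(x))(t) != time_reverse(delay(x))(t) for t in ts)

# linearity: S(G1 x1 + G2 x2) = G1 S(x1) + G2 S(x2) holds for the amplifier...
combo = lambda t: 3 * x(t) + 5 * x2(t)
assert all(abs(amplify(combo)(t) - (3 * amplify(x)(t) + 5 * amplify(x2)(t))) < 1e-12
           for t in ts)

# ...but fails for the nonlinear squarer y(t) = x(t)^2
square = lambda sig: (lambda t: sig(t) ** 2)
assert any(square(combo)(t) != 3 * square(x)(t) + 5 * square(x2)(t) for t in ts)
```

The two mismatched compositions reproduce the exercise solution: time-reverse-then-delay gives $x(-t+\tau)$, while delay-then-time-reverse gives $x(-t-\tau)$.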
Please use this identifier to cite or link to this item: http://hdl.handle.net/2445/125167 Title: Linearització conforme de punts fixos el·líptics [Conformal linearization of elliptic fixed points] Author: Camps Pallarès, Joan Maria Director/Tutor: Fagella Rabionet, Núria Keywords: Funcions holomorfes (Holomorphic functions); Treballs de fi de grau (Bachelor's theses); Sistemes dinàmics diferenciables (Differentiable dynamical systems); Teoria del punt fix (Fixed point theory); Funcions analítiques (Analytic functions) Issue Date: 27-Jun-2018 Abstract: [en] The study of the dynamics of holomorphic functions near a fixed point has led to numerous works since the end of the nineteenth century, and the Siegel linearization problem plays an important role in this branch of the theory of dynamical systems in one complex variable. The natural way of studying the dynamics of a system near a fixed point is to find a local change of coordinates that represents the system in a simpler way. If $f$ is a holomorphic function with a fixed point $z_{0} = f(z_{0})$ and multiplier $\lambda = f'(z_{0}) \in S^{1}$, $\lambda = e^{2\pi i \alpha}$ for an irrational number $\alpha$, we say that $f$ is linearizable if it is locally conjugate to the linear system $g(z) = \lambda z$. Siegel's problem then consists in describing completely the family of numbers $\alpha$ for which every local system $f$ with multiplier $\lambda$ is linearizable. The contributions of H. Cremer and, especially, of C.L. Siegel to the problem represent a big step in understanding its trickiness, as well as the important role that the arithmetical nature of $\alpha$ plays in it. The techniques introduced by J.C. Yoccoz in his resolution of Siegel's problem, at the end of the past century, have inspired other results that help in understanding the dynamics of $f$ in the non-linearizable case, which is still not fully understood today.
Note: Treballs Finals de Grau de Matemàtiques, Facultat de Matemàtiques, Universitat de Barcelona, Any: 2018, Director: Núria Fagella Rabionet URI: http://hdl.handle.net/2445/125167 Appears in Collections: Treballs Finals de Grau (TFG) - Matemàtiques
The de Broglie wavelength is the wavelength, \(\lambda\), associated with an object and is related to its momentum and mass. Introduction In 1923, Louis de Broglie, a French physicist, proposed a hypothesis to explain the theory of the atomic structure. Through a series of substitutions, de Broglie hypothesized that particles can exhibit the properties of waves. Within a few years, de Broglie's hypothesis was tested by scientists shooting electrons and rays of light through slits. What scientists discovered was that the electron stream acted the same way as light, proving de Broglie correct. Deriving the de Broglie Wavelength De Broglie derived his equation using well-established theories through the following series of substitutions: De Broglie first used Einstein's famous equation relating matter and energy: \[ E = mc^2 \label{0}\] with \(E\) = energy, \(m\) = mass, \(c\) = speed of light. He then used Planck's theory, which states that every quantum of a wave has a discrete amount of energy given by Planck's equation: \[ E= h \nu \label{1}\] with \(E\) = energy, \(h\) = Planck's constant (\(6.62607 \times 10^{-34}\; J\, s\)), \(\nu\) = frequency. Since de Broglie believed particles and waves have the same traits, he hypothesized that the two energies would be equal: \[ mc^2 = h\nu \label{2}\] Because real particles do not travel at the speed of light, de Broglie substituted the velocity (\(v\)) for the speed of light (\(c\)): \[ mv^2 = h\nu \label{3}\] Using the wave relation \(\nu = v/\lambda\), de Broglie substituted \( v/\lambda\) for \(\nu\) and arrived at the final expression relating wavelength, mass, and speed.
\[ mv^2 = \dfrac{hv}{\lambda} \label{4}\] Hence \[ \lambda = \dfrac{hv}{mv^2} = \dfrac{h}{mv} \label{5} \] A majority of wave-particle duality problems are simple plug-and-chug via Equation \ref{5}, with some attention to canceling out units. Example \(\PageIndex{1}\) Find the de Broglie wavelength for an electron moving at the speed of \(5.0 \times 10^6\; m/s\) (mass of an electron is \(9.1 \times 10^{-31}\; kg\)). SOLUTION \[ \lambda = \dfrac{h}{p}= \dfrac{h}{mv} =\dfrac{6.63 \times 10^{-34}\; J \cdot s}{(9.1 \times 10^{-31} \; kg)(5.0 \times 10^6\, m/s)}= 1.46 \times 10^{-10}\;m\] Although de Broglie was credited for his hypothesis, he had no actual experimental evidence for his conjecture. In 1927, Clinton J. Davisson and Lester H. Germer shot electrons onto a nickel crystal. What they saw was the diffraction of the electrons, similar to the diffraction of waves such as x-rays by crystals. In the same year, an English physicist, George P. Thomson, fired electrons towards thin metal foil, obtaining the same results as Davisson and Germer.
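The worked example above can be reproduced in a couple of lines (a sketch of mine, using the constants quoted in the example):

```python
# de Broglie wavelength lambda = h / (m v) for the electron in Example 1
h = 6.626e-34      # Planck's constant, J*s
m_e = 9.1e-31      # electron mass, kg
v = 5.0e6          # speed, m/s

lam = h / (m_e * v)
print(f"{lam:.2e} m")  # 1.46e-10 m, i.e. about 0.146 nm
```

The result, about 1.46 Å, is comparable to atomic spacings in a crystal, which is exactly why Davisson and Germer could observe electron diffraction from nickel.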
I propose a small modification of the parametrization for the torus that addresses issues with conformality. Try F[t_, u_, r_] := {Cos[t] (r + Cos[u + Sin[u]/r]), Sin[t] (r + Cos[u + Sin[u]/r]), Sin[u + Sin[u]/r]} instead. Next, we wish to choose suitable values for $m, n$ for a given $r$ such that the mapping of the regular hexagonal tiling preserves angles as much as possible. We see that this requires us to choose $m, n$ such that $$\frac{\sqrt{3}}{2} \frac{n}{m} = r.$$ As we also require $n$ to be even (or else the tiling does not fit properly on the torus), we can let $n = 2k$ and this gives us $k \sqrt{3} = rm$; thus for a given $r$ we should try to choose $k, m$ as the nearest integers satisfying this equation. This gives us a very nearly angle-preserving tiling. For example, with $r = 2 \sqrt{3}$, we can choose $m = 11$, $n = 44$ to get something that looks like this: Notice how much more regular the hexagons are throughout the torus--the "inner" ones are not squashed, and the outer ones are not stretched. Addendum. So, the above seems to work reasonably well for large $r$, but when $r = 1 + \epsilon$ for small $\epsilon$, it doesn't work because the mapping I chose is not truly conformal. I found the relevant information here. This suggests that the correct form of $f$ should be F[t_, u_, r_] := {Cos[t], Sin[t], Sin[# u]/#} #^2/(r - Cos[# u]) &[Sqrt[r^2 - 1]] And whereas $t$ is still plotted on the same interval, we need to plot $u$ on $\left(-\frac{\pi}{\sqrt{r^2-1}}, \frac{\pi}{\sqrt{r^2-1}}\right)$. So we modify the plotting command as well: P[r_, m_, n_] := Graphics3D[Polygon /@ Table[F[4 Pi/(3 n) (Cos[Pi k/3] + i 3/2), 2 Pi/(Sqrt[3 (r^2 - 1)] m) (Sin[Pi k/3] + (j + i/2) Sqrt[3]), r], {i, n}, {j, m}, {k, 6}], Boxed -> False] And now the selection of $m, n$ based on $r$ is also more complicated. $n = 2m \sqrt{\frac{r^2 - 1}{3}}$ seems to give good results. Here is a picture for $r = 1.1$, $m = 30$, $n = 20$: This solution calculates exact coordinates. 
However, for 3D-printing, machine precision is usually enough, and affords a significant speedup. We can force machine arithmetic by adding dots after some of the constants (e.g. 2 Pi to 2. Pi). We can also achieve a 3× speedup by only calculating the location of each vertex once, and using GraphicsComplex to share the locations with each hexagon. (This is how 3D formats like .stl work internally. If you need regular polygon objects to process further, just use Normal to eliminate GraphicsComplex.) P2[r_, m_, n_] := Graphics3D[ GraphicsComplex[ Flatten[Table[ F[2. Pi (i + k/3.)/n, Pi (1. + i + 2 j)/m/Sqrt[r^2 - 1.], r // N], {j, m}, {i, n}, {k, {-1, +1}}], 2], Polygon[Join @@ Table[Mod[(j - 1) (2 n) + {1, 2, 3 + If[i == n, n (n - 2), 0]}~ Join~({2, 1, If[i == 1, n (2 - n), 0]} + 2 n) + 2 (i - 1), 2 n m, 1], {i, n}, {j, m}]]], Boxed -> False] The code is almost the same as before, except that we now only need to generate two new coordinates for each cell, so Cos[Pi k/3] only takes on two values and Sin[Pi k/3] only takes on one value, allowing the arithmetic to be simplified considerably. We don't need to change F; it's already extremely fast due to the two-stage calculation it does to avoid recomputing the expensive square root multiple times. We can do a timing and memory usage comparison of the two versions: ByteCount[P2[2, 50, 100]] // Timing (* {0.343750, 1440448} *) ByteCount[P[2, 50, 100]] // Timing (* {5.921875, 60849648} *) The numerical version is around 20 times faster and gives a result 40 times smaller. It's actually now fast enough to quickly make a nice table of tori with different parameters: GraphicsGrid[ ParallelTable[ With[{n = 2 Round[m Sqrt[(r^2 - 1)/3]]}, Show[P2[r, m, n], PlotLabel -> {r, m, n}]], {r, {1.1, 1.5, 2, 3, 5}}, {m, {6, 10, 15, 20, 30, 50}}], ImageSize -> Full]
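As a quick cross-check of the conformal parametrization F from the addendum, here is a Python port (mine, mirroring the Mathematica definition): the image point should always lie on the standard torus with major radius $r$ and minor radius $1$, i.e. satisfy $(\sqrt{x^2+y^2}-r)^2 + z^2 = 1$, which follows from the algebraic identity $(rc-1)^2 + (r^2-1)(1-c^2) = (r-c)^2$ with $c = \cos(\beta u)$, $\beta = \sqrt{r^2-1}$.

```python
import numpy as np

# Python port of F[t, u, r]: the point
#   (cos t, sin t, sin(b u)/b) * b^2 / (r - cos(b u)),  b = sqrt(r^2 - 1),
# should lie on the torus (sqrt(x^2+y^2) - r)^2 + z^2 = 1.

def F(t, u, r):
    b = np.sqrt(r**2 - 1.0)
    s = b**2 / (r - np.cos(b * u))  # r > 1, so r - cos(b u) > 0
    return np.array([np.cos(t) * s, np.sin(t) * s, np.sin(b * u) / b * s])

r = 1.1
b = np.sqrt(r**2 - 1.0)
for t in np.linspace(0.0, 2 * np.pi, 7):
    for u in np.linspace(-np.pi / b, np.pi / b, 7):
        x, y, z = F(t, u, r)
        residual = (np.hypot(x, y) - r) ** 2 + z**2 - 1.0
        assert abs(residual) < 1e-9
```

So the reparametrization changes how the $(t, u)$ grid is laid out on the surface (to make the map conformal) without changing the surface itself.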
N. Barton, A. E. Caicedo, G. Fuchs, J. D. Hamkins, and J. Reitz, “Inner-model reflection principles,” ArXiv e-prints, 2017. (manuscript under review) @ARTICLE{BartonCaicedoFuchsHamkinsReitz:Inner-model-reflection-principles, author = {Neil Barton and Andr\'es Eduardo Caicedo and Gunter Fuchs and Joel David Hamkins and Jonas Reitz}, title = {Inner-model reflection principles}, journal = {ArXiv e-prints}, year = {2017}, note = {manuscript under review}, eprint = {1708.06669}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://jdh.hamkins.org/inner-model-reflection-principles}, } Abstract. We introduce and consider the inner-model reflection principle, which asserts that whenever a statement $\varphi(a)$ in the first-order language of set theory is true in the set-theoretic universe $V$, then it is also true in a proper inner model $W\subsetneq V$. A stronger principle, the ground-model reflection principle, asserts that any such $\varphi(a)$ true in $V$ is also true in some nontrivial ground model of the universe with respect to set forcing. These principles each express a form of width reflection in contrast to the usual height reflection of the Lévy-Montague reflection theorem. They are each equiconsistent with ZFC and indeed $\Pi_2$-conservative over ZFC, being forceable by class forcing while preserving any desired rank-initial segment of the universe. Furthermore, the inner-model reflection principle is a consequence of the existence of sufficient large cardinals, and lightface formulations of the reflection principles follow from the maximality principle MP and from the inner-model hypothesis IMH.
Every set theorist is familiar with the classical Lévy-Montague reflection principle, which explains how truth in the full set-theoretic universe $V$ reflects down to truth in various rank-initial segments $V_\theta$ of the cumulative hierarchy. Thus, the Lévy-Montague reflection principle is a form of height-reflection, in that truth in $V$ is reflected vertically downwards to truth in some $V_\theta$. In this brief article, in contrast, we should like to introduce and consider a form of width-reflection, namely, reflection to nontrivial inner models. Specifically, we shall consider the following reflection principles. Definition. The inner-model reflection principle asserts that if a statement $\varphi(a)$ in the first-order language of set theory is true in the set-theoretic universe $V$, then there is a proper inner model $W$, a transitive class model of ZF containing all ordinals, with $a\in W\subsetneq V$ in which $\varphi(a)$ is true. The ground-model reflection principle asserts that if $\varphi(a)$ is true in $V$, then there is a nontrivial ground model $W\subsetneq V$ with $a\in W$ and $W\models\varphi(a)$. Variations of the principles arise by insisting on inner models of a particular type, such as ground models for a particular type of forcing, or by restricting the class of parameters or formulas that enter into the scheme. The lightface forms of the principles, in particular, make their assertion only for sentences, so that if $\sigma$ is a sentence true in $V$, then $\sigma$ is true in some proper inner model or ground $W$, respectively. We explain how to force the principles, how to separate them, how they follow from various large cardinal assumptions, and how they follow from the maximality principle and from the inner model hypothesis. Kindly proceed to the article (pdf available at the arxiv) for more. N. Barton, A. E. Caicedo, G. Fuchs, J. D. Hamkins, and J. Reitz, “Inner-model reflection principles,” ArXiv e-prints, 2017.
(manuscript under review) This article grew out of an exchange held by the authors on math.stackexchange in response to an inquiry posted by the first author concerning the nature of width-reflection in comparison to height-reflection: What is the consistency strength of width reflection?
Rendezvous with Geostationary Destinations Geostationary orbits are nontrivial to maintain. They are perturbed by non-round-earth forces represented by the J_{22} term in the spherical harmonics of the gravity field, forming attractors at 75 degrees east (above a point slightly east of the Maldives and south of India) and 105 degrees west (west of the Galapagos, directly south of Denver, Colorado). Evading these attractors can take as much as 1.715 m/s of delta V per year. The North-South Perturbation The north/south perturbations by the moon and sun are much larger. For the following analysis, I will approximate both the orbit of the Earth and moon as circles. I also assume that R_S >> R_M >> R_G and approximately equal tidal forces at the near and far sides of the orbit. A more exact analysis suitable for precision rendezvous would numerically compute the actual multibody elliptical orbits, the spherical harmonic gravity terms for Earth and Moon, gravitational contributions from Jupiter and Venus, and optical perturbations. For now, we are looking for an approximation of the maximum \Delta V , but do not forget that the launch, position, and velocity for a vehicle in a mature launch loop system will be timed and optically measured to nanoseconds and micrometers, and continuously measured and controlled to this precision throughout the transfer orbit.

R_G = 4.216e4 km : geostationary orbit radius
\mu_M = 4.903e3 km^3/s^2 : moon's standard gravitational parameter
R_M = 3.844e5 km : moon/earth semimajor orbit radius
\mu_S = 1.327e11 km^3/s^2 : sun's standard gravitational parameter
R_S = 1.496e8 km : earth/sun semimajor orbit radius

The moon's orbit takes it above and below the equatorial plane, which causes out-of-plane tidal forces. The Moon's orbital plane is inclined 5.145 degrees from the ecliptic plane (the plane of the earth's orbit), and precesses over a nodal period of 18.6 years.
The earth's axial tilt is 23°26'21".4119 or 23.439°, so the moon's orbital plane is inclined between 18.294° and 28.584° with respect to the equatorial plane, varying sinusoidally between one and the other over the nodal period. According to Boden (in Larsen and Wertz), counteracting these tidal forces on a geostationary object requires a \Delta V budget of as much as 102.67 ~ cos ~ \alpha ~ sin ~ \alpha ~ m/s per year ( ~ = ~ 51.335 ~ sin ~ 2 \alpha ), where \alpha is the angle between the equatorial (geostationary orbit) plane and the Moon's orbital plane. Boden claims the worst case is 36.93 m/s per year, which implies an \alpha of 23.00°. Actually, the worst case \alpha is 28.584°, so using the first "102.67..." formula and its double-angle "51.335..." equivalent, the worst case annual \Delta V budget is more like 43.13 m/s per year. Let's see if we can derive that 102.67 m/s/y number. Over the course of a month, the moon moves north of the equatorial plane, through it, then south of the equatorial plane, and back. When the moon is on the equatorial plane, there is no perturbation north or south, though there are tidal forces radially on an object in geostationary orbit. The in-plane tidal acceleration is approximately 2 ~ R_G ~ \mu_M ~ sin ~ \theta ~ / ~ {R_M}^3 , where \theta is the orbital angle difference between the orbiting object and the moon. As the moon moves around its orbit, it moves north or south. This puts a northerly component of R_M ~ sin ~ \omega ~ sin ~ \alpha on the moon's position, where \omega is the argument of latitude, the angle around the moon's orbit from the ascending node relative to the equatorial plane. We can approximate the out-of-plane tidal acceleration as 2 ~ R_G ~ \mu_M ~ sin ~ \theta ~ sin ~ \omega ~ sin ~ \alpha ~ / ~ {R_M}^3 MORE LATER References Larsen and Wertz (1999). Space Mission Analysis and Design, Kluwer, 3rd Ed., Page 157 (Chapter Author Daryl G. Boden, PhD, USNA). (?) Soop, E. M. (1994). Handbook of Geostationary Orbits. Springer
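The numbers quoted above are easy to check against Boden's formula. A short sketch (mine), evaluating \Delta V / year = 51.335 sin 2\alpha at both inclination angles:

```python
import math

# Worst-case annual north/south stationkeeping budget from Boden's formula:
# dV/year = 102.67 cos(alpha) sin(alpha) = 51.335 sin(2 alpha), in m/s,
# where alpha is the angle between the equatorial plane and the moon's
# orbital plane.

def dv_per_year(alpha_deg):
    return 51.335 * math.sin(math.radians(2 * alpha_deg))

print(round(dv_per_year(23.000), 1))  # ~36.9, matching Boden's 36.93 m/s/y
print(round(dv_per_year(28.584), 1))  # ~43.1 m/s/y at the true worst-case alpha
```

This confirms both readings: 36.93 m/s/y corresponds to \alpha = 23.00°, while the actual worst-case inclination of 28.584° pushes the budget to about 43.1 m/s/y.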
Your question seems not well-posed to me. User reuns seems to interpret it as: Let $K$ be one fixed finite extension of $\Bbb Q_p$. If $f: K \rightarrow K$ is continuous, is there a continuous map $\hat{f}: \Bbb C_p \rightarrow K$ such that $\hat{f}_{\vert K} = f$? and in his answer simultaneously restricts to the special case $\Bbb Q_p$ for the domain of $f$, but generalises to an arbitrary topological space $X$ for the codomain of $f$ and $\hat{f}$. Both the restriction and the generalisation are harmless though, and indeed this boils down to the question whether there is a continuous map $\phi: \Bbb C_p\rightarrow K$ (or in the special case $\rightarrow \Bbb Q_p$) with $\phi_{\vert K} = id_K$, which I think he indeed constructs. So the answer to that interpretation of the question is yes. I however interpret the question differently, namely as: For each finite extension $K\vert \Bbb Q_p$ (contained in $\Bbb C_p$), let a continuous map $f_K : K \rightarrow K$ be given. Then is there a continuous map $F: \Bbb C_p \rightarrow \Bbb C_p$ such that $F_{\vert K} = f_K$ for all $K$ as above? The answer to this is no, for two reasons. 1. Quite obviously the given maps $f_K$ need to be compatible in the sense that for every $K_1, K_2$, we need $f_{K_1 \vert (K_1 \cap K_2)} = f_{K_2 \vert (K_1 \cap K_2)}$ (or something similar). 2. Now if the condition in 1. is satisfied, then indeed the maps do define a map $f$ on $\overline{\Bbb Q_p} = \bigcup_{K\vert \Bbb Q_p \text{ finite}} K$, and maybe that is what you have in mind when you write that product in the OP. I am not sure if that map necessarily is continuous on $\overline{\Bbb Q_p}$; however, even if it is continuous, I think that such a map does not necessarily extend to $\Bbb C_p$. A counterexample is: Let $c \in \Bbb C_p \setminus \overline{\Bbb Q_p}$ such that there exists a sequence $(x_n)_n$ with $x_n \to c$ and $v_p(x_n-c) \in \Bbb Z$ for all but finitely many $n$.
Then for all $K$ as above, set $f_K(x) = \phi (\dfrac{1}{x-c})$, where $\phi: \Bbb C_p \rightarrow \Bbb Q_p$ is the map constructed in reuns' answer. While the $f_K$ give a map $f$ on $\overline{\Bbb Q_p}$ as above, we have $\lim_{n\to \infty} v_p(f(x_n)) = -\infty$, which means $f$ cannot be extended to $c$.
Given solely the first $n$ moments $m_1,\dots,m_n$ of a random variable $X\in\mathbb{R}$, I was wondering whether there exists a direct methodology to approximate $X$ with a Gaussian mixture? The method of moments can always be used; I assume its properties for Gaussian mixtures have been studied but I don't know any references. Let's have a look at the mixture of two Gaussians $\mathcal N(\mu_1, \sigma_1^2)$ and $\mathcal N(\mu_2, \sigma_2^2)$ in proportions $p$, $1-p$. We have five parameters to estimate so we will use the first five moments. The moment generating function of this mixture is $$p \exp\left(\mu_1 t + {1\over 2} \sigma_1^2 t^2\right) + (1-p) \exp\left(\mu_2 t + {1\over 2} \sigma_2^2 t^2\right)$$ which gives the first five moments as $$\begin{aligned} m_1 &= p \mu_1 + (1-p) \mu_2 \\ m_2 &= p (\mu_1^2 + \sigma_1^2) + (1-p)(\mu_2^2 + \sigma_2^2) \\ m_3 &= p (\mu_1^3 + 3 \mu_1 \sigma_1^2) + (1-p)(\mu_2^3 + 3 \mu_2 \sigma_2^2)\\ m_4 &= p (\mu_1^4 + 6 \mu_1^2 \sigma_1^2 + 3\sigma_1^4) + (1-p)(\mu_2^4 + 6 \mu_2^2 \sigma_2^2 + 3\sigma_2^4)\\ m_5 &= p (\mu_1^5 + 10 \mu_1^3 \sigma_1^2 + 15 \mu_1 \sigma_1^4) + (1-p)(\mu_2^5 + 10 \mu_2^3 \sigma_2^2 + 15 \mu_2 \sigma_2^4) \end{aligned}$$ The difficulty is to solve these equations in $p$, $\mu_1$, $\sigma_1^2$, $\mu_2$, $\sigma_2^2$ for given moments $m_1,\dots,m_5$. This is not easy! We need an iterative algorithm here.
There is surely something clever to do here but I'll use brute force, with a quasi-Newton method to minimize the norm of the difference (note that s1 and s2 below are variances, i.e. $\sigma_1^2$ and $\sigma_2^2$):

f <- function(m, p, mu1, s1, mu2, s2) {
  # theoretical first five moments of each component
  mm1 <- c(mu1, mu1^2 + s1, 3*mu1*s1 + mu1^3,
           3*s1^2 + 6*s1*mu1^2 + mu1^4, 15*mu1*s1^2 + 10*s1*mu1^3 + mu1^5)
  mm2 <- c(mu2, mu2^2 + s2, 3*mu2*s2 + mu2^3,
           3*s2^2 + 6*s2*mu2^2 + mu2^4, 15*mu2*s2^2 + 10*s2*mu2^3 + mu2^5)
  mm <- p*mm1 + (1-p)*mm2
  sum((m - mm)^2)
}
set.seed(1)
x <- c(rnorm(100, 0, 1), rnorm(200, 4, 0.5))
m <- c(mean(x), mean(x^2), mean(x^3), mean(x^4), mean(x^5))
r <- optim(c(0.5, 0, 1, 4, 0.25),
           function(x) f(m, x[1], x[2], x[3], x[4], x[5]),
           method="BFGS")$par

Let's see:

hist(x, freq=FALSE, breaks=20)
t <- seq(-3, 6, length=501)
lines(t, r[1]*dnorm(t, mean=r[2], sd=sqrt(r[3])) +
         (1-r[1])*dnorm(t, mean=r[4], sd=sqrt(r[5])), col="red")

This does not look very good. I am pretty sure maximum likelihood behaves better. Moreover it is easy to implement with an EM algorithm. I don't think this deserves more investigations.
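As a sanity check on the closed-form moment expressions (a sketch of mine, in Python rather than R), one can compare the standard Gaussian raw moments, e.g. $E[X^4] = \mu^4 + 6\mu^2\sigma^2 + 3\sigma^4$, mixed with weights $p$ and $1-p$, against raw moments of the mixture computed by direct numerical quadrature:

```python
import numpy as np

# Raw moments m1..m5 of the mixture p*N(mu1, v1) + (1-p)*N(mu2, v2),
# where v = sigma^2, from the standard Gaussian raw-moment formulas.
def mixture_moments(p, mu1, v1, mu2, v2):
    def raw(mu, v):
        return np.array([
            mu,
            mu**2 + v,
            mu**3 + 3 * mu * v,
            mu**4 + 6 * mu**2 * v + 3 * v**2,
            mu**5 + 10 * mu**3 * v + 15 * mu * v**2,
        ])
    return p * raw(mu1, v1) + (1 - p) * raw(mu2, v2)

# Cross-check: integrate x^k times the mixture density on a fine grid.
def quadrature_moments(p, mu1, v1, mu2, v2):
    x = np.linspace(-30.0, 30.0, 600001)
    pdf = (p * np.exp(-(x - mu1) ** 2 / (2 * v1)) / np.sqrt(2 * np.pi * v1)
           + (1 - p) * np.exp(-(x - mu2) ** 2 / (2 * v2)) / np.sqrt(2 * np.pi * v2))
    dx = x[1] - x[0]
    return np.array([np.sum(x ** k * pdf) * dx for k in range(1, 6)])

closed = mixture_moments(1 / 3, 0.0, 1.0, 4.0, 0.25)
numeric = quadrature_moments(1 / 3, 0.0, 1.0, 4.0, 0.25)
assert np.allclose(closed, numeric, rtol=1e-8)
```

The mixing weight 1/3 mirrors the 100-vs-200-sample simulation above; the agreement confirms the five moment equations being fed to the optimizer.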
Unitary constraints on charged pion photoproduction at large p⊥ Abstract Around $\theta_{\pi}=90^\circ$, the coupling to the $\rho^0 N$ channel leads to a good accounting of the charged pion exclusive photoproduction cross section in the energy range $3 < E_\gamma < 10$ GeV, where experimental data exist. Starting from a Regge pole approach that successfully describes vector meson production, the singular part of the corresponding box diagrams (where the intermediate vector meson-baryon pair propagates on-shell) is evaluated without any further assumptions (unitarity). Such a treatment provides an explanation of the $s^{-7}$ scaling of the cross section. Furthermore, elastic rescattering of the charged pion improves the basic Regge pole model at forward and backward angles. Author: Laget, Jean-Marc Publication Date: 2010-01 Research Org.: Thomas Jefferson National Accelerator Facility, Newport News, VA (United States) Sponsoring Org.: USDOE Office of Science (SC), Nuclear Physics (NP) (SC-26) OSTI Identifier: 982215 Report Number(s): JLAB-THY-09-1112; DOE/OR/23177-1060 Journal ID: ISSN 0370-2693 Grant/Contract Number: AC05-06OR23177 Resource Type: Accepted Manuscript Journal Name: Physics Letters. Section B Additional Journal Information: Journal Volume: 685; Journal Issue: 2-3 Publisher: Elsevier Country of Publication: United States Language: English Subject: 72 PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Citation: Laget, Jean-Marc. Unitary constraints on charged pion photoproduction at large p⊥. Physics Letters B 685 (2010) 2-3. doi:10.1016/j.physletb.2010.01.052.
We should find the Cauchy principal value integral of the form $$ I=\oint \frac{dz}{(z-z_1)(z-z_2)}~, $$ where both roots $z_1$ and $z_2$ lie on the contour path. My answer is: $$ I=a \left(-\oint \frac{dz}{z-z_1}+\oint \frac{dz}{z-z_2}\right)=a(-i\pi+i\pi)=0~, $$ where $a=1/(z_2-z_1)$. However, in a book, they do not find it to be zero, though they do not explain why. Is there any case for which I should find this integral to be "not zero"? In the specific example of the book, the poles are $z_1=e^{-ik}$, $z_2=e^{ik}$, and the contour is the unit circle. First: putting $\;f(z):=\frac1{(z-z_1)(z-z_2)}\;$ , we obtain (since both poles are simple assuming $\;z_1\neq z_2\;$ ) $$\text{Res}_{z=z_i}(f)=\lim_{z\to z_i}\frac{z-z_i}{(z-z_1)(z-z_2)}=\begin{cases}&\;\;\;\;\;\;\;\;\;\frac1{z_1-z_2}&,\;\;i=1\\{}\\&-\frac1{z_1-z_2}=\frac1{z_2-z_1}&,\;\;i=2\end{cases}$$ Now using the lemma, and its corollary, in the most upvoted answer here, we get that the integral indeed equals zero.
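The residue argument can be corroborated numerically (a sketch of mine, not from the book): parametrize the unit circle as $z = e^{i\theta}$ and compute the principal value by cutting out symmetric windows around the two poles at $\theta = \pm k$; the $1/(\theta \mp k)$ parts of the integrand cancel across each symmetric cut, leaving a value that tends to the PV as the windows shrink.

```python
import numpy as np

# PV of dz / ((z - z1)(z - z2)) over the unit circle, z1 = e^{-ik}, z2 = e^{ik}.
# Symmetric exclusion windows of half-width eps around theta = -k and +k.

k = 1.0
z1, z2 = np.exp(-1j * k), np.exp(1j * k)
eps = 1e-3

def integrand(theta):
    z = np.exp(1j * theta)
    return 1j * z / ((z - z1) * (z - z2))  # dz = i e^{i theta} d theta

def trapezoid(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

total = 0.0 + 0.0j
for a, b in [(-np.pi, -k - eps), (-k + eps, k - eps), (k + eps, np.pi)]:
    theta = np.linspace(a, b, 200001)
    total += trapezoid(integrand(theta), theta)

print(abs(total))  # close to zero, as the cancelling half-residues predict
```

The computed value is within $O(\varepsilon)$ of zero, consistent with the half-residues $\pm i\pi a$ cancelling exactly.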
This is a question in Treves. Suppose $a>1$. (i) Show that for all $(\tau, \xi) \in \mathbb R^{n+1}$, $|(\tau-ia)^2 - |\xi|^2| \ge (\tau^2+|\xi|^2+a^2)^{1/2}$. (ii) Show that if $k>n/2$, then $\iint [(\tau-ia)^2-|\xi|^2]^{-1}(1+a^2+(\tau-ia)^2+|\xi|^2)^{-k}\,d\tau\, d\xi$ is bounded by a constant independent of $a>1$. I've figured out the first question, and I guess one has to use (i) to prove (ii), but I really don't have an idea of how to handle such a double integral. Everything seems messy to me, and I don't know where to start; this really embarrassed me. Can anyone give me some ideas? I'll really appreciate it.
Prove or disprove the following statement: There exists a function whose Maclaurin series converges at only one point. The best solution was submitted by Kook, Yun Bum (국윤범, Dept. of Mathematical Sciences, Class of 2015). Congratulations! Here is his solution of problem 2017-21. Alternative solutions were submitted by 민찬홍 (Chung-Ang University High School, 3rd year, +3), 채지석 (Dept. of Mathematical Sciences, Class of 2016, +3), 최대범 (Dept. of Mathematical Sciences, Class of 2016, +3), 하석민 (Class of 2017, +3), Huy Tung Nguyen (Dept. of Mathematical Sciences, Class of 2016, +3). Four incorrect solutions were submitted, mostly due to misunderstanding.

Let \(p\), \(q\), \(r\) be positive integers such that \(p,q\ge r\). Ada and Betty independently read all source code of their programming project. Ada found \(p\) bugs and Betty found \(q\) bugs, including \(r\) bugs that Ada also found. What is the expected number of remaining bugs that neither Ada nor Betty found?

Determine whether or not the following infinite series converges. \[ \sum_{n=0}^{\infty} \frac{ 1 }{2^{2n}} \binom{2n}{n}.\] The best solution was submitted by Lee, Bonwoo (이본우, Class of 2017). Congratulations! Here is his solution of problem 2017-20. Alternative solutions were submitted by 고성훈 (+3), 국윤범 (Dept. of Mathematical Sciences, Class of 2015, +3), 길현준 (Incheon Science High School, 2nd year, +3), 김태균 (Dept. of Mathematical Sciences, Class of 2016, +3), 민찬홍 (Chung-Ang University High School, 3rd year, +3), 유찬진 (Dept. of Mathematical Sciences, Class of 2015, +3), 이원웅 (Dept. of Mathematics, Konkuk University, Class of 2014, +3), 이준협 (Hana High School, +3), 채지석 (Class of 2016, +3), 최대범 (Dept. of Mathematical Sciences, Class of 2016, +3), 하석민 (Class of 2017, +3), Huy Tung Nguyen (Dept. of Mathematical Sciences, Class of 2016, +3), 이준성 (Sangmun High School, 1st year, +3), 정경훈 (Dept. of Computer Science and Engineering, Seoul National University, +3), Mirali Ahmadili & Saba Dzmanashvili (Class of 2017, +3).

For an integer \( p \), define \[ f_p(n) = \sum_{k=1}^n k^p. \] Prove that \[ \frac{1}{2} \sum_{n=1}^{\infty} \frac{f_{-1}(n)}{f_3(n)} + 2\sum_{n=1}^{\infty} \frac{f_{-1}(n)}{f_1(n)} = \sum_{n=1}^{\infty} \frac{(f_{-1}(n))^2}{f_1(n)}. \] The best solution was submitted by Kim, Taegyun (김태균, Dept. of Mathematical Sciences, Class of 2016). Congratulations! Here is his solution of problem 2017-19.
Alternative solutions were submitted by 국윤범 (Dept. of Mathematical Sciences, Class of 2015, +3), 길현준 (Incheon Science High School, 2nd year, +3), 민찬홍 (Chung-Ang University High School, 3rd year, +3), 유찬진 (Dept. of Mathematical Sciences, Class of 2015, +3), 이본우 (Class of 2017, +3), 채지석 (Class of 2016, +3), 최대범 (Dept. of Mathematical Sciences, Class of 2016, +3), Huy Tung Nguyen (Dept. of Mathematical Sciences, Class of 2016, +3), 이재우 (Hamyang High School, 2nd year, +2), 하석민 (Class of 2017, +2), Saba Dzmanashvili & Mirali Ahmadili (Class of 2017, +2).

Prove or disprove the following statement: There exists a function whose Maclaurin series converges at only one point.

Determine whether or not the following infinite series converges. \[ \sum_{n=0}^{\infty} \frac{ 1 }{2^{2n}} \binom{2n}{n}.\]

Suppose that \(f\) is differentiable and \[ \lim_{x\to\infty} (f(x)+f'(x))=2.\] What is \( \lim_{x\to\infty} f(x)\)? The best solution was submitted by You, Chanjin (유찬진, Dept. of Mathematical Sciences, Class of 2015). Congratulations! Here is his solution of problem 2017-18. Alternative solutions were submitted by 국윤범 (Dept. of Mathematical Sciences, Class of 2015, +3), 김태균 (Dept. of Mathematical Sciences, Class of 2016, +3), 민찬홍 (Chung-Ang University High School, 3rd year, +3), 이본우 (Class of 2017, +3), 장기정 (Dept. of Mathematical Sciences, Class of 2014, +3, alternative solution), 채지석 (Class of 2016, +3), 최대범 (Dept. of Mathematical Sciences, Class of 2016, +3), 하석민 (Class of 2017, +3), Huy Tung Nguyen (Dept. of Mathematical Sciences, Class of 2016, +3), 윤준기 (School of Electrical Engineering, Class of 2014, +2). One incorrect solution was received.

For an integer \( n \geq 3 \), evaluate \[ \inf \left\{ \sum_{i=1}^n \frac{x_i^2}{(1-x_i)^2} \right\}, \] where the infimum is taken over all \( n \)-tuples of real numbers \( x_1, x_2, \dots, x_n \neq 1 \) satisfying \( x_1 x_2 \dots x_n = 1 \). The best solution was submitted by Choi, Daebeom (최대범, Dept. of Mathematical Sciences, Class of 2016). Congratulations! Here is his solution of problem 2017-17. Alternative solutions were submitted by 국윤범 (Dept. of Mathematical Sciences, Class of 2015, +3), 김태균 (Dept. of Mathematical Sciences, Class of 2016, +3), 장기정 (Dept. of Mathematical Sciences, Class of 2014, +3), Huy Tung Nguyen (Dept. of Mathematical Sciences, Class of 2016, +3), 김기택 (Dept. of Mathematical Sciences, Class of 2015, +2), 유찬진 (Dept. of Mathematical Sciences, Class of 2015, +2), 윤준기 (School of Electrical Engineering, Class of 2014, +2), 이본우 (Class of 2017, +2). One incorrect solution was received.
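As a numerical aside (not among the submitted solutions), the divergence of the binomial series \( \sum_n \binom{2n}{n}/4^n \) can be seen from a closed form for its partial sums, \( S_N = (2N+1)\binom{2N}{N}4^{-N} \approx 2\sqrt{N/\pi} \), provable by induction:

```python
# Partial sums of sum_{n>=0} C(2n,n)/4^n, using the recurrence
# c_n = c_{n-1} * (2n-1)/(2n) to avoid overflow.  The closed form
# S_N = (2N+1) c_N is checked at every step; S_N grows without bound.
from math import pi, sqrt

c = 1.0   # c_n = C(2n,n)/4^n
S = 0.0
for n in range(0, 1001):
    if n > 0:
        c *= (2 * n - 1) / (2 * n)
    S += c
    # closed form for the partial sum (induction: S_n - S_{n-1} = c_n)
    assert abs(S - (2 * n + 1) * c) < 1e-9 * S

print(S)                      # partial sum at N = 1000, still growing
print(2 * sqrt(1000 / pi))    # asymptotic estimate ~ 2 sqrt(N/pi)
```

Since the terms behave like \(1/\sqrt{\pi n}\), the series diverges by comparison with the harmonic-type series \( \sum 1/\sqrt{n} \).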
For an integer \( p \), define \[ f_p(n) = \sum_{k=1}^n k^p. \] Prove that \[ \frac{1}{2} \sum_{n=1}^{\infty} \frac{f_{-1}(n)}{f_3(n)} + 2\sum_{n=1}^{\infty} \frac{f_{-1}(n)}{f_1(n)} = \sum_{n=1}^{\infty} \frac{(f_{-1}(n))^2}{f_1(n)}. \]
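As a numerical aside (not the submitted solution), the identity above can be sanity-checked: \( f_{-1}(n) = H_n \) is the \(n\)-th harmonic number, \( f_1(n) = n(n+1)/2 \), and by Nicomachus' identity \( f_3(n) = f_1(n)^2 \). The partial sums of both sides approach the same limit:

```python
# Numerical check of (1/2) sum H_n/f_3 + 2 sum H_n/f_1 = sum H_n^2/f_1,
# with f_{-1}(n) = H_n, f_1(n) = n(n+1)/2, f_3(n) = f_1(n)^2.
import numpy as np

N = 200_000
n = np.arange(1, N + 1, dtype=float)
H = np.cumsum(1.0 / n)            # harmonic numbers H_n
f1 = n * (n + 1) / 2.0
f3 = f1 ** 2                      # Nicomachus: sum of cubes = (sum)^2

lhs = 0.5 * np.sum(H / f3) + 2.0 * np.sum(H / f1)
rhs = np.sum(H ** 2 / f1)
print(lhs, rhs)                   # both close to 6*zeta(3) = 7.2123...
```

Both sides converge (slowly, with tails of order \( \log N / N \)) to \( 6\zeta(3) \approx 7.2123 \).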
Joint work with Øystein Linnebo, University of Oslo. J. D. Hamkins and Ø. Linnebo, “The modal logic of set-theoretic potentialism and the potentialist maximality principles,” to appear in Review of Symbolic Logic, 2018. @ARTICLE{HamkinsLinnebo:Modal-logic-of-set-theoretic-potentialism, author = {Hamkins, Joel David and Linnebo, \O{}ystein}, title = {The modal logic of set-theoretic potentialism and the potentialist maximality principles}, journal = {to appear in Review of Symbolic Logic}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {}, abstract = {}, keywords = {to-appear}, source = {}, eprint = {1708.01644}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1zC}, doi = {}, } Abstract. We analyze the precise modal commitments of several natural varieties of set-theoretic potentialism, using tools we develop for a general model-theoretic account of potentialism, building on those of Hamkins, Leibman and Löwe (Structural connections between a forcing class and its modal logic), including the use of buttons, switches, dials and ratchets. Among the potentialist conceptions we consider are: rank potentialism (true in all larger $V_\beta$); Grothendieck-Zermelo potentialism (true in all larger $V_\kappa$ for inaccessible cardinals $\kappa$); transitive-set potentialism (true in all larger transitive sets); forcing potentialism (true in all forcing extensions); countable-transitive-model potentialism (true in all larger countable transitive models of ZFC); countable-model potentialism (true in all larger countable models of ZFC); and others. In each case, we identify lower bounds for the modal validities, which are generally either S4.2 or S4.3, and an upper bound of S5, proving in each case that these bounds are optimal. The validity of S5 in a world is a potentialist maximality principle, an interesting set-theoretic principle of its own.
The results can be viewed as providing an analysis of the modal commitments of the various set-theoretic multiverse conceptions corresponding to each potentialist account. Set-theoretic potentialism is the view in the philosophy of mathematics that the universe of set theory is never fully completed, but rather unfolds gradually as parts of it increasingly come into existence or become accessible to us. On this view, the outer reaches of the set-theoretic universe have merely potential rather than actual existence, in the sense that one can imagine “forming” or discovering always more sets from that realm, as many as desired, but the task is never completed. For example, height potentialism is the view that the universe is never fully completed with respect to height: new ordinals come into existence as the known part of the universe grows ever taller. Width potentialism holds that the universe may grow outwards, as with forcing, so that already existing sets can potentially gain new subsets in a larger universe. One commonly held view amongst set theorists is height potentialism combined with width actualism, whereby the universe grows only upward rather than outward, and so at any moment the part of the universe currently known to us is a rank initial segment $V_\alpha$ of the potential yet-to-be-revealed higher parts of the universe. Such a perspective might even be attractive to a Platonistically inclined large-cardinal set theorist, who wants to hold that there are many large cardinals, but who also is willing at any moment to upgrade to a taller universe with even larger large cardinals than had previously been mentioned. Meanwhile, the width-potentialist height-actualist view may be attractive for those who wish to hold a potentialist account of forcing over the set-theoretic universe $V$. On the height-and-width-potentialist view, one views the universe as growing with respect to both height and width. 
A set-theoretic monist, in contrast, with an ontology having only a single fully existing universe, will be an actualist with respect to both width and height. The second author has described various potentialist views in previous work. Although we are motivated by the case of set-theoretic potentialism, the potentialist idea itself is far more general, and can be carried out in a general model-theoretic context. For example, the potentialist account of arithmetic is deeply connected with the classical debates surrounding potential as opposed to actual infinity, and indeed, perhaps it is in those classical debates where one finds the origin of potentialism. More generally, one can provide a potentialist account of truth in the context of essentially any kind of structure in any language or theory. Our project here is to analyze and understand more precisely the modal commitments of various set-theoretic potentialist views. After developing a general model-theoretic account of the semantics of potentialism and providing tools for establishing both lower and upper bounds on the modal validities for various kinds of potentialist contexts, we shall use those tools to settle exactly the propositional modal validities for several natural kinds of set-theoretic height and width potentialism. Here is a summary account of the modal logics for various flavors of set-theoretic potentialism. In each case, the indicated lower and upper bounds are realized in particular worlds, usually in the strongest possible way that is consistent with the stated inclusions, although in some cases, this is proved only under additional mild technical hypotheses. Indeed, some of the potentialist accounts are only undertaken with additional set-theoretic assumptions going beyond ZFC. 
For example, the Grothendieck-Zermelo account of potentialism is interesting mainly under the assumption that there is a proper class of inaccessible cardinals, and countable-transitive-model potentialism is more robust under the assumption that every real is an element of a countable transitive model of set theory, which can be thought of as a mild large-cardinal assumption. The upper bound of S5, when it is realized, constitutes a potentialist maximality principle, for in such a case, any statement that could possibly become actually true in such a way that it remains actually true as the universe unfolds, is already actually true. We identify necessary and sufficient conditions for each of the concepts of potentialism for a world to fulfill this potentialist maximality principle. For example, in rank-potentialism, a world $V_\kappa$ satisfies S5 with respect to the language of set theory with arbitrary parameters if and only if $\kappa$ is $\Sigma_3$-correct. And it satisfies S5 with respect to the potentialist language of set theory with parameters if and only if it is $\Sigma_n$-correct for every $n$. Similar results hold for each of the potentialist concepts. Finally, let me mention the strong affinities between set-theoretic potentialism and set-theoretic pluralism, particularly with the various set-theoretic multiverse conceptions currently in the literature. Potentialists may regard themselves mainly as providing an account of truth ultimately for a single universe, gradually revealed, the limit of their potentialist system. Nevertheless, the universe fragments of their potentialist account can often naturally be taken as universes in their own right, connected by the potentialist modalities, and in this way, every potentialist system can be viewed as a multiverse.
Indeed, the potentialist systems we analyze in this article—including rank potentialism, forcing potentialism, generic-multiverse potentialism, countable-transitive-model potentialism, countable-model potentialism—each align with corresponding natural multiverse conceptions. Because of this, we take the results of this article as providing not only an analysis of the modal commitments of set-theoretic potentialism, but also an analysis of the modal commitments of various particular set-theoretic multiverse conceptions. Indeed, one might say that it is possible (ahem), in another world, for this article to have been entitled, “The modal logic of various set-theoretic multiverse conceptions.” For more, please follow the link to the arXiv, where you can find the full article.
Rendezvous with Geostationary Destinations

Geostationary orbits are nontrivial to maintain. They are perturbed by non-round-earth forces, represented by the J_{22} term in the spherical harmonics of the gravity field, which form attractors at 75 degrees east (above a point slightly east of the Maldives and south of India) and 105 degrees west (west of the Galapagos, directly south of Denver, Colorado). Evading these attractors can take as much as 1.715 m/s of \Delta V per year.

The North-South Perturbation

The north/south perturbations by the moon and sun are much larger. For the following analysis, I will approximate the orbits of both the Earth and the moon as circles. I also assume that R_S \gg R_M \gg R_G and approximately equal tidal forces at the near and far sides of the orbit. A more exact analysis suitable for precision rendezvous would numerically compute the actual multibody elliptical orbits, the spherical harmonic gravity terms for the Earth and Moon, gravitational contributions from Jupiter and Venus, and optical perturbations. For now, we are looking for an approximation of the maximum \Delta V , but do not forget that the launch, position, and velocity for a vehicle in a mature launch loop system will be timed and optically measured to nanoseconds and micrometers, and continuously measured and controlled to this precision throughout the transfer orbit.

R_G = 4.216e4 km, geostationary orbit radius
\mu_M = 4.903e3 km³/s², moon's standard gravitational parameter
R_M = 3.844e5 km, moon/earth orbit semimajor axis
\mu_S = 1.327e11 km³/s², sun's standard gravitational parameter
R_S = 1.496e8 km, earth/sun orbit semimajor axis

Lunar North-South Perturbation

The moon's orbit takes it above and below the equatorial plane, which causes out-of-plane tidal forces. The Moon's orbital plane is inclined 5.145 degrees from the ecliptic plane (the plane of the earth's orbit), and precesses over a nodal period of 18.6 years.
The earth's axial tilt is 23°26′21.4119″ or 23.439°, so the moon's orbital plane is inclined between 18.294° and 28.584° with respect to the equatorial plane, varying sinusoidally between one and the other over the nodal period. According to Boden (in Larson and Wertz), counteracting these tidal forces on a geostationary object requires a \Delta V budget of as much as 102.67 ~ cos ~ \alpha ~ sin ~ \alpha ~ m/s per year ( ~ \equiv ~ 51.335 ~ sin ~ 2 \alpha ~ ), where \alpha is the angle between the equatorial (geostationary orbit) plane and the Moon's orbital plane. Boden claims the worst case is 36.93 m/s per year, which implies an \alpha of 23.00°. Actually, the worst case \alpha is 28.584°, so using the "102.67..." formula and its double-angle "51.335..." equivalent, the worst case annual \Delta V budget is more like 43.13 m/s per year. Let's see if we can derive that 102.67 m/s/y number. Over the course of a month, the moon moves north of the equatorial plane, through it, then south of the equatorial plane, and back. When the moon is on the equatorial plane, there is no perturbation north or south, though there are tidal forces radially on an object in geostationary orbit. The in-plane tidal acceleration is approximately 2 ~ R_G ~ \mu_M ~ sin ~ \theta ~ / ~ {R_M}^3 , where \theta is the orbital angle difference between the orbiting object and the moon. As the moon moves around its orbit, it moves north or south. This puts a northerly component of R_M ~ sin ~ \omega on the moon's position, where \omega is the argument of latitude, the angle around the moon's orbit from the ascending node relative to the equatorial plane. We can approximate the tidal acceleration as 2 ~ R_G ~ \mu_M ~ sin ~ \theta ~ sin ~ \omega ~ / ~ {R_M}^3

MORE LATER

References

Larson and Wertz (1999). Space Mission Analysis and Design, Kluwer, 3rd Ed., Page 157 (Chapter Author Daryl G. Boden, PhD, USNA). (?)
Soop, E. M. (1994). Handbook of Geostationary Orbits. Springer
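The numbers above are easy to check. Taking Boden's coefficient of 102.67 m/s per year as given, a short script reproduces the inclination extremes and both the quoted and the corrected worst-case budgets:

```python
# Inclination range of the Moon's orbit relative to the equator, and
# Boden's annual station-keeping budget dV = 102.67 cos(a) sin(a) m/s/yr.
from math import radians, sin, cos

tilt, incl = 23.439, 5.145                 # axial tilt, lunar inclination (deg)
alpha_min, alpha_max = tilt - incl, tilt + incl
print(alpha_min, alpha_max)                # 18.294, 28.584 degrees

def dv(alpha_deg):
    """Annual north-south station-keeping budget, m/s per year."""
    a = radians(alpha_deg)
    return 102.67 * cos(a) * sin(a)        # = 51.335 sin(2a)

print(dv(23.00))       # ~36.93, Boden's quoted worst case
print(dv(alpha_max))   # ~43.13, worst case at maximum inclination
```

The quoted 36.93 m/s/yr corresponds to evaluating the formula at the mean tilt of about 23°, not at the 28.584° maximum.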
Estimates for the best asymmetric approximations of asymmetric classes of functions Abstract Asymptotically sharp estimates are obtained for the best $(\alpha, \beta)$-approximations of the classes $W^r_{1; \gamma, \delta}$ with natural $r$ by algebraic polynomials in the mean. English version (Springer): Ukrainian Mathematical Journal 63 (2011), no. 6, pp. 927-939. Citation Example: Motornyi V. P., Pas'ko A. N. Estimates for the best asymmetric approximations of asymmetric classes of functions // Ukr. Mat. Zh. - 2011. - 63, № 6. - pp. 798-808.
So in RBC and Ramsey-derived utility functions, the following form of utility is common: $$u(c,l) = c^{1-\sigma}(1 + \omega(l))$$ where $\omega(l)$ is an arbitrary function of labor $l$ chosen so that the utility satisfies $u_c>0$, $u_l <0$, $u_{ll} \leq 0$ and $u_{cc} < 0$, and $c$ is consumption. In the Mankiw/Rotemberg/Summers paper Intertemporal Substitution in Macroeconomics (link: http://scholar.harvard.edu/files/mankiw/files/intertemporal_substitution.pdf), the following utility function is used to test the RBC model: $$u(c,l) = \frac{1}{1-\gamma}\left[\frac{c^{1-\alpha} - 1}{1-\alpha} + d\frac{l^{1-\beta} - 1}{1-\beta}\right]^{1-\gamma}$$ Ignoring multiplicative and additive constants, the special case $$u(c,l) = \frac{c^{1-\alpha}}{1-\alpha} - \frac{l^{1+\beta}}{1+\beta}$$ can be considered. Now, it seems that this third utility function must be a special case of the first functional form, but I cannot see how this is possible.
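Whatever its relation to the multiplicative form, the additively separable special case does satisfy the sign conditions listed at the top. An illustrative symbolic check (my own, not from the paper), assuming $\alpha, \beta > 0$:

```python
# Symbolic verification that u(c,l) = c^(1-a)/(1-a) - l^(1+b)/(1+b)
# satisfies u_c > 0, u_cc < 0, u_l < 0, u_ll <= 0 for a, b, c, l > 0.
import sympy as sp

c, l, alpha, beta = sp.symbols('c l alpha beta', positive=True)
u = c**(1 - alpha) / (1 - alpha) - l**(1 + beta) / (1 + beta)

u_c  = sp.simplify(sp.diff(u, c))      # c**(-alpha)            > 0
u_cc = sp.simplify(sp.diff(u, c, 2))   # -alpha*c**(-alpha-1)   < 0
u_l  = sp.simplify(sp.diff(u, l))      # -l**beta               < 0
u_ll = sp.simplify(sp.diff(u, l, 2))   # -beta*l**(beta-1)      <= 0

print(u_c, u_cc, u_l, u_ll)
```

Each derivative reduces to a single signed power of a positive variable, so the sign conditions hold wherever $\alpha, \beta, c, l > 0$ (with $\alpha \neq 1$).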
Learning Outcomes Calculate and interpret the correlation coefficient The correlation coefficient, r, tells us about the strength and direction of the linear relationship between x and y. However, the reliability of the linear model also depends on how many observed data points are in the sample. We need to look at both the value of the correlation coefficient r and the sample size n, together. We perform a hypothesis test of the “significance of the correlation coefficient” to decide whether the linear relationship in the sample data is strong enough to use to model the relationship in the population. The sample data are used to compute r, the correlation coefficient for the sample. If we had data for the entire population, we could find the population correlation coefficient. But because we only have sample data, we cannot calculate the population correlation coefficient. The sample correlation coefficient, r, is our estimate of the unknown population correlation coefficient. The symbol for the population correlation coefficient is ρ, the Greek letter “rho.” ρ = population correlation coefficient (unknown); r = sample correlation coefficient (known; calculated from sample data). The hypothesis test lets us decide whether the value of the population correlation coefficient ρ is “close to zero” or “significantly different from zero.” We decide this based on the sample correlation coefficient r and the sample size n. If the test concludes that the correlation coefficient is significantly different from zero, we say that the correlation coefficient is “significant.” Conclusion: There is sufficient evidence to conclude that there is a significant linear relationship between x and y because the correlation coefficient is significantly different from zero. What the conclusion means: There is a significant linear relationship between x and y. We can use the regression line to model the linear relationship between x and y in the population.
If the test concludes that the correlation coefficient is not significantly different from zero (it is close to zero), we say that the correlation coefficient is “not significant.” Conclusion: “There is insufficient evidence to conclude that there is a significant linear relationship between x and y because the correlation coefficient is not significantly different from zero.” What the conclusion means: There is not a significant linear relationship between x and y. Therefore, we CANNOT use the regression line to model a linear relationship between x and y in the population. Note If r is significant and the scatter plot shows a linear trend, the line can be used to predict the value of y for values of x that are within the domain of observed x values. If r is not significant OR if the scatter plot does not show a linear trend, the line should not be used for prediction. If r is significant and if the scatter plot shows a linear trend, the line may NOT be appropriate or reliable for prediction OUTSIDE the domain of observed x values in the data. Performing the Hypothesis Test Null Hypothesis: H0: ρ = 0 Alternate Hypothesis: Ha: ρ ≠ 0 What the Hypotheses Mean in Words Null Hypothesis H0: The population correlation coefficient IS NOT significantly different from zero. There IS NOT a significant linear relationship (correlation) between x and y in the population. Alternate Hypothesis Ha: The population correlation coefficient IS significantly DIFFERENT FROM zero. There IS A SIGNIFICANT LINEAR RELATIONSHIP (correlation) between x and y in the population. Drawing a Conclusion There are two methods of making the decision. The two methods are equivalent and give the same result. Method 1: Using the p-value Method 2: Using a table of critical values In this chapter of this textbook, we will always use a significance level of 5%, α = 0.05. Note Using the p-value method, you could choose any appropriate significance level you want; you are not limited to using α = 0.05.
But the table of critical values provided in this textbook assumes that we are using a significance level of 5%, α = 0.05. (If we wanted to use a different significance level than 5% with the critical value method, we would need different tables of critical values that are not provided in this textbook.) Method 1: Using a p-value to make a decision To calculate the p-value using LinRegTTEST: On the LinRegTTEST input screen, on the line prompt for β or ρ, highlight “≠ 0” The output screen shows the p-value on the line that reads “p =”. (Most computer statistical software can calculate the p-value.) If the p-value is less than the significance level (α = 0.05) Decision: Reject the null hypothesis. Conclusion: “There is sufficient evidence to conclude that there is a significant linear relationship between x and y because the correlation coefficient is significantly different from zero.” If the p-value is NOT less than the significance level (α = 0.05) Decision: DO NOT REJECT the null hypothesis. Conclusion: “There is insufficient evidence to conclude that there is a significant linear relationship between x and y because the correlation coefficient is NOT significantly different from zero.” Calculation Notes: You will use technology to calculate the p-value. The following describes the calculations to compute the test statistic and the p-value: The p-value is calculated using a t-distribution with n − 2 degrees of freedom. The formula for the test statistic is [latex]\displaystyle{t}=\frac{{{r}\sqrt{{{n}-{2}}}}}{\sqrt{{{1}-{r}^{{2}}}}}[/latex]. The value of the test statistic, t, is shown in the computer or calculator output along with the p-value. The test statistic t has the same sign as the correlation coefficient r. The p-value is the combined area in both tails. An alternative way to calculate the p-value (p) given by LinRegTTest is the command 2*tcdf(abs(t),10^99, n-2) in 2nd DISTR.
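The same calculation can be done outside the TI calculator. An illustrative sketch with scipy, using the r = 0.801, n = 10 values from the example in the critical-values discussion:

```python
# Test statistic t = r*sqrt(n-2)/sqrt(1-r^2) and two-tailed p-value
# from a t-distribution with n-2 degrees of freedom.
from math import sqrt
from scipy import stats

r, n = 0.801, 10
df = n - 2
t = r * sqrt(df) / sqrt(1 - r**2)
p = 2 * stats.t.sf(abs(t), df)   # combined area in both tails

print(t, p)   # t ~ 3.78, p ~ 0.005, well below alpha = 0.05
```

Since p < 0.05, we reject the null hypothesis: this r is significant, matching the critical-value conclusion for the same data.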
Method 2: Using a table of Critical Values to make a decision The 95% Critical Values of the Sample Correlation Coefficient Table can be used to give you a good idea of whether the computed value of r is significant or not. Compare r to the appropriate critical value in the table. If r is not between the positive and negative critical values, then the correlation coefficient is significant. If r is significant, then you may want to use the line for prediction. Example Suppose you computed r = 0.801 using n = 10 data points. df = n − 2 = 10 − 2 = 8. The critical values associated with df = 8 are −0.632 and +0.632. If r < negative critical value or r > positive critical value, then r is significant. Since r = 0.801 and 0.801 > 0.632, r is significant and the line may be used for prediction. If you view this example on a number line, it will help you. r is not significant between −0.632 and +0.632. r = 0.801 > +0.632. Therefore, r is significant. Try it For a given line of best fit, you computed that r = 0.6501 using n = 12 data points and the critical value is 0.576. Can the line be used for prediction? Why or why not? If the scatter plot looks linear then, yes, the line can be used for prediction, because r > the positive critical value. Example Suppose you computed r = −0.624 with 14 data points. df = 14 − 2 = 12. The critical values are −0.532 and 0.532. Since −0.624 < −0.532, r is significant and the line can be used for prediction. r = −0.624 < −0.532. Therefore, r is significant. Try it For a given line of best fit, you compute that r = 0.5204 using n = 9 data points, and the critical value is 0.666. Can the line be used for prediction? Why or why not? No, the line cannot be used for prediction, because r < the positive critical value. Example Suppose you computed r = 0.776 and n = 6. df = 6 − 2 = 4. The critical values are −0.811 and 0.811. Since −0.811 < 0.776 < 0.811, r is not significant, and the line should not be used for prediction.
−0.811 < r = 0.776 < 0.811. Therefore, r is not significant. Try it For a given line of best fit, you compute that r = −0.7204 using n = 8 data points, and the critical value is 0.707. Can the line be used for prediction? Why or why not? Yes, the line can be used for prediction, because r < the negative critical value. Example Suppose you computed the following correlation coefficients. Using the table at the end of the chapter, determine if r is significant and whether the line of best fit associated with each r can be used to predict a y value. If it helps, draw a number line. r = −0.567 and the sample size, n, is 19. The df = n − 2 = 17. The critical value is −0.456. −0.567 < −0.456, so r is significant. r = 0.708 and the sample size, n, is nine. The df = n − 2 = 7. The critical value is 0.666. 0.708 > 0.666, so r is significant. r = 0.134 and the sample size, n, is 14. The df = 14 − 2 = 12. The critical value is 0.532. 0.134 is between −0.532 and 0.532, so r is not significant. r = 0 and the sample size, n, is five. No matter what the dfs are, r = 0 is between the two critical values, so r is not significant. Try it For a given line of best fit, you compute that r = 0 using n = 100 data points. Can the line be used for prediction? Why or why not? No, the line cannot be used for prediction no matter what the sample size is. Assumptions in Testing the Significance of the Correlation Coefficient Testing the significance of the correlation coefficient requires that certain assumptions about the data are satisfied. The premise of this test is that the data are a sample of observed points taken from a larger population. We have not examined the entire population because it is not possible or feasible to do so. We are examining the sample to draw a conclusion about whether the linear relationship that we see between x and y in the sample data provides strong enough evidence that we can conclude that there is a linear relationship between x and y in the population.
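The table entries used in these examples can themselves be recomputed from the t-distribution: solving t = r√(n − 2)/√(1 − r²) for r at the critical t gives r_crit = t_crit/√(t_crit² + df). A sketch with scipy:

```python
# Reproduce the 95% critical values of the sample correlation coefficient
# from the t-distribution: r_crit = t_crit / sqrt(t_crit^2 + df).
from math import sqrt
from scipy import stats

def r_crit(n, alpha=0.05):
    df = n - 2
    t = stats.t.ppf(1 - alpha / 2, df)   # two-tailed critical t
    return t / sqrt(t**2 + df)

for n in (10, 14, 19):
    print(n, round(r_crit(n), 3))   # 0.632, 0.532, 0.456
```

These match the table values quoted above for df = 8, 12, and 17, confirming that the p-value method and the critical-value method are equivalent.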
The regression line equation that we calculate from the sample data gives the best-fit line for our particular sample. We want to use this best-fit line for the sample as an estimate of the best-fit line for the population. Examining the scatterplot and testing the significance of the correlation coefficient helps us determine if it is appropriate to do this. The assumptions underlying the test of significance are: There is a linear relationship in the population that models the average value of y for varying values of x. In other words, the expected value of y for each particular value of x lies on a straight line in the population. (We do not know the equation for the line for the population. Our regression line from the sample is our best estimate of this line in the population.) The y values for any particular x value are normally distributed about the line. This implies that there are more y values scattered closer to the line than are scattered farther away. Assumption (1) implies that these normal distributions are centered on the line: the means of these normal distributions of y values lie on the line. The standard deviations of the population y values about the line are equal for each value of x. In other words, each of these normal distributions of y values has the same shape and spread about the line. The residual errors are mutually independent (no pattern). The data are produced from a well-designed, random sample or randomized experiment. The y values for each x value are normally distributed about the line with the same standard deviation. For each x value, the mean of the y values lies on the regression line. More y values lie near the line than are scattered further away from the line. Concept Review Linear regression is a procedure for fitting a straight line of the form [latex]\displaystyle\hat{{y}}={a}+{b}{x}[/latex] to data.
The conditions for regression are: Linear: In the population, there is a linear relationship that models the average value of y for different values of x. Independent: The residuals are assumed to be independent. Normal: The y values are distributed normally for any value of x. Equal variance: The standard deviation of the y values is equal for each x value. Random: The data are produced from a well-designed random sample or randomized experiment. The slope b and intercept a of the least-squares line estimate the slope β and intercept α of the population (true) regression line. To estimate the population standard deviation of y, σ, use the standard deviation of the residuals, s. [latex]\displaystyle{s}=\sqrt{{\frac{{{S}{S}{E}}}{{{n}-{2}}}}}[/latex] The variable ρ (rho) is the population correlation coefficient. To test the null hypothesis H0: ρ = hypothesized value, use a linear regression t-test. The most common null hypothesis is H0: ρ = 0, which indicates there is no linear relationship between x and y in the population. The TI-83, 83+, 84, and 84+ calculator function LinRegTTest can perform this test (STATS TESTS LinRegTTest). Formula Review Least Squares Line or Line of Best Fit: [latex]\displaystyle\hat{{y}}={a}+{b}{x}[/latex] where a = y-intercept, b = slope Standard deviation of the residuals: [latex]\displaystyle{s}=\sqrt{{\frac{{{S}{S}{E}}}{{{n}-{2}}}}}[/latex] where SSE = sum of squared errors and n = the number of data points
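The decision rule in Method 2 is easy to automate. Below is a minimal Python sketch (the function names and the sample data are ours, not part of the text): it computes the sample correlation coefficient r from paired data and applies the |r| > critical-value test. The critical values themselves still come from the 95% Critical Values table.

```python
import math

def pearson_r(xs, ys):
    """Sample correlation coefficient r for paired data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def is_significant(r, critical_value):
    """r is significant when it lies outside [-critical, +critical]."""
    return abs(r) > critical_value

# Re-checking the worked examples above:
print(is_significant(0.801, 0.632))   # n = 10, df = 8: True
print(is_significant(0.776, 0.811))   # n = 6,  df = 4: False
print(is_significant(-0.624, 0.532))  # n = 14, df = 12: True
```

For a full hypothesis test (rather than a table lookup), the same r feeds the linear regression t-test that LinRegTTest performs.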
Subramanian, CR and Furer, Martin and Madhavan, Veni CE (1998) Algorithms for Coloring Semi-random Graphs. In: Random Structures and Algorithms, 13 (2). pp. 125-158. Abstract The graph coloring problem is to color a given graph with the minimum number of colors. This problem is known to be NP-hard even if we are only aiming at approximate solutions. On the other hand, the best known approximation algorithms require $n^\delta$ ($\delta>0$) colors even for n-vertex graphs of bounded chromatic number, e.g., k-colorable graphs for fixed k. The situation changes dramatically if we look at the average performance of an algorithm rather than its worst case performance. A k-colorable graph drawn from certain classes of distributions can be k-colored almost surely in polynomial time. It is also possible to k-color such random graphs in polynomial average time. In this paper, we present polynomial time algorithms for k-coloring graphs drawn from the semirandom model. In this model, the graph is supplied by an adversary, each of whose decisions regarding inclusion of edges is reversed with some probability p. In terms of randomness, this model lies between the worst case model and the usual random model where each edge is chosen with equal probability. We present polynomial time algorithms of two different types. The first type of algorithms always run in polynomial time and succeed almost surely. Blum and Spencer [J. Algorithms, 19, 204-234, 1995] have also obtained such algorithms independently, but our results are based on different proof techniques which are interesting in their own right. The second type of algorithms always succeed and have polynomial running time on the average. Such algorithms are more useful and more difficult to obtain than the first type of algorithms.
Our algorithms work for semirandom graphs drawn from a wide range of distributions, for $p \ge n^{-\alpha(k)+\epsilon}$, where $\alpha(k) = \frac{2k}{(k-1)(k+2)}$ and $\epsilon$ is a positive constant. Item Type: Journal Article Additional Information: The copyright belongs to John Wiley & Sons, Inc. Keywords: graph coloring; random graphs; semirandom graphs; polynomial average time algorithms; probabilistic analysis; complexity classes Department/Centre: Division of Electrical Sciences > Computer Science & Automation Depositing User: Sandhya Jagirdar Date Deposited: 09 Jan 2006 Last Modified: 19 Sep 2010 04:22 URI: http://eprints.iisc.ac.in/id/eprint/4967
Rendezvous with Geostationary Destinations Geostationary orbits are nontrivial to maintain. They are perturbed by non-round-earth forces, represented by the J_{22} term in the spherical harmonics of the gravity field, forming attractors at 75 degrees east (above a point slightly east of the Maldives and south of India) and 105 degrees west (west of the Galapagos, directly south of Denver, Colorado). Evading these attractors can take as much as 1.715 m/s delta V per year. The North-South Perturbation The north-south perturbations by the moon and sun are much larger. For the following analysis, I will approximate both the orbit of the Earth and moon as circles. I also assume that R_S \gg R_M \gg R_G and approximately equal tidal forces at the near and far sides of the orbit. A more exact analysis suitable for precision rendezvous would numerically compute the actual multibody elliptical orbits, the spherical harmonic gravity terms for Earth and Moon, gravitational contributions from Jupiter and Venus, and optical perturbations. For now, we are looking for an approximation of the maximum \Delta V , but do not forget that the launch, position, and velocity for a vehicle in a mature launch loop system will be timed and optically measured to nanoseconds and micrometers, and continuously measured and controlled to this precision throughout the transfer orbit. R_G 4.216e4 km geostationary orbit radius \mu_M 4.903e3 km³/s² moon's standard gravitational parameter R_M 3.844e5 km moon/earth semimajor orbit radius \mu_S 1.327e11 km³/s² sun's standard gravitational parameter R_S 1.496e8 km earth/sun semimajor orbit radius Lunar North-South Perturbation The moon's orbit takes it above and below the equatorial plane, which causes out-of-plane tidal forces. The Moon's orbital plane is inclined 5.145 degrees from the ecliptic plane (the plane of the earth's orbit), and precesses over a nodal period of 18.6 years.
The earth's axial tilt is 23°26'21" or 23.439°, so the moon's orbital plane is inclined between 18.294° and 28.584° with respect to the equatorial plane, varying sinusoidally between one and the other over the nodal period. According to Boden (in Larsen and Wertz), counteracting these tidal forces on a geostationary object requires a \Delta V budget of as much as 102.67 ~ cos ~ \alpha ~ sin ~ \alpha ~ m/s per year ( ~ \equiv ~ 51.335 ~ sin ~ 2 \alpha ), where \alpha is the angle between the equatorial (geostationary orbit) plane and the Moon's orbital plane. Boden claims the worst case is 36.93 m/s per year, which implies an \alpha of 23.00°. Actually, the worst case \alpha is 28.584°, so using the first "102.67..." formula and its double-angle "51.335..." equivalent, the worst case annual \Delta V budget is more like 43.13 m/s per year. Let's see if we can derive that 102.67 m/s/y number. Over the course of a month, the moon moves north of the equatorial plane, through it, then south of the equatorial plane, and back. When the moon is on the equatorial plane, there is no perturbation north or south, though there are tidal forces radially on an object in geostationary orbit. The in-plane tidal acceleration is approximately 2 ~ R_G ~ \mu_M ~ sin ~ \theta ~ / {R_M}^3 , where \theta is the orbital angle difference between the orbiting object and the moon. As the moon moves around its orbit, it moves north or south. This puts a northerly component of R_M ~ sin ~ \omega on the moon's position, where \omega is the angle around the moon's orbit measured from the ascending node on the equatorial plane (the argument of latitude). We can approximate the tidal acceleration as 2 ~ R_G ~ \mu_M ~ sin ~ \theta ~ sin ~ \omega ~ / {R_M}^3 MORE LATER References Larsen and Wertz (1999). Space Mission Analysis and Design, Kluwer, 3rd Ed., Page 157 (Chapter Author Daryl G. Boden, PhD, USNA). (?) Soop, E. M. (1994). Handbook of Geostationary Orbits. Springer
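The double-angle form of Boden's estimate is easy to evaluate. A small Python sketch (the function name is ours; the 51.335 m/s/year coefficient is the one quoted above) reproduces both the 36.93 m/s figure at \alpha = 23° and the roughly 43.1 m/s worst case at \alpha = 28.584°:

```python
import math

def lunisolar_dv_per_year(alpha_deg, coeff=51.335):
    """Annual delta-V budget (m/s) to counter the out-of-plane tidal
    perturbation: 102.67 cos(a) sin(a) = 51.335 sin(2a) m/s per year."""
    a = math.radians(alpha_deg)
    return coeff * math.sin(2.0 * a)

print(lunisolar_dv_per_year(23.0))    # ≈ 36.93 m/s/year (Boden's quoted figure)
print(lunisolar_dv_per_year(28.584))  # ≈ 43.13 m/s/year (worst-case alpha)
```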
I want it to be stable near $f(0) = 1$. Is there a nice function that does this already, like maybe a hyperbolic trig function or something like expm1, or should I just check if $x$ is near zero and then use a polynomial approximation? Consider the Bernoulli numbers, defined by the recursive formula: $$B_0=1$$ $$\sum_{k<n} {n\choose k }B_k=0\text{ ; } n\geq 2$$ This gives the sequence: $$\{B_n\}_{n\in \Bbb N}=\left\{ 1,-\frac 1 2,\frac 1 6 ,0,-\frac 1 {30},0,\dots\right\}$$ Its generating function is $$ \sum_{n=0}^\infty B_n \frac{x^n}{n!}=\frac{x}{e^x-1}$$ Its first few terms are $$1-\frac x 2 +\frac {x^2}{12}-\frac{x^4}{720}+\cdots$$ The numbers' denominators grow pretty fast, so you should have no problem with convergence: in fact, the function is essentially $=-x$ for large negative $x$ and $=0$ for large positive $x$, so a few terms should suffice for the "near origin" approximation. If you don't want to use the expm1() function for some reason, one possibility, detailed in Higham's book, is to let $u=\exp\,x$ and then compute $\log\,u/(u-1)$. The trick is attributed to Velvel Kahan. You mention using hyperbolic functions. You might try $$ \frac{x}{\exp(x)-1}=\frac{x/2}{\exp(x/2)\sinh(x/2)} $$ This loses no precision if the $\sinh$ is computed to full precision by the underlying system. Note that $\mathrm{expm1}(x)=2\exp(x/2)\sinh(x/2)$. Example: 15 digit calculations $x=.00001415926535897932$: $$ \begin{align} \frac{x}{\exp(x)-1} &=\frac{.00001415926535897932}{1.000014159365602-1}\\ &=\color{#C00000}{.9999929203}73447 \end{align} $$ $$ \begin{align} \frac{x/2}{\exp(x/2)\sinh(x/2)} &=\frac{.00000707963267948966}{1.000005000012500\cdot.00000707963267954879}\\ &=\color{#C00000}{.999992920384028} \end{align} $$ If your system provides expm1(x), its authors should have worried about errors and stability at least as well as you can. It is a better question if that is not available.
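Both stable forms above are one-liners in practice. Here is a Python sketch (the names g and g_sinh are ours); either avoids the catastrophic cancellation in exp(x) - 1 near the origin:

```python
import math

def g(x):
    """x / (exp(x) - 1), computed stably with expm1."""
    if x == 0.0:
        return 1.0
    return x / math.expm1(x)

def g_sinh(x):
    """Same function via x/(exp(x)-1) = (x/2) / (exp(x/2) * sinh(x/2))."""
    if x == 0.0:
        return 1.0
    h = x / 2.0
    return h / (math.exp(h) * math.sinh(h))

x = 1.415926535897932e-5
print(g(x))       # ≈ 0.999992920384028, matching the 15-digit result above
print(g_sinh(x))  # agrees to full double precision
```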
Wolfram Alpha gives $1-\frac {x}2+\frac {x^2}{12}$ for the second order series around $0$, so you could check if $x$ is close to zero and use that. You can construct a Lagrange interpolation polynomial of degree $n$, for $n$ a positive integer. You should first extract values of $f (x) = \frac {x} {e^x - 1}$ at any $n + 1$ points you like, say, $x_0, x_1, \cdots, x_n$. Then $$L (x) := \sum_{0 \leqslant k \leqslant n} f (x_k) \prod_{0 \leqslant j \leqslant n} \frac {x - x_j} {x_k - x_j},$$ where $j \neq k$, is the Lagrange polynomial that interpolates $f (x)$ at the points $$(x_0, f (x_0)), (x_1, f (x_1)), \cdots, (x_n, f (x_n)).$$ But even if the Lagrange polynomial is a very good approximation of the function it interpolates, its analytic behaviour at turning points may radically differ from that of the original function. That is why you may want to interpolate $f (x)$ with a polynomial such that not only its values at the given points agree with those of the function, but also the values of its derivatives at the given points agree with those of $f (x)$. Such is the Taylor polynomial, i.e., the Taylor series truncated at any given place (in our case, after the $(n + 1)$st term): $$\sum_{0 \leqslant k \leqslant n} \frac {f^{(k)}(a)}{k!} \, (x-a)^{k}.$$ My suggestion may look rather naive to the people here, but interpolation polynomials have long been used classically in computer algebra systems.
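As an illustration, here is a short Python sketch of the Lagrange construction above (the choice of nodes is ours, purely for demonstration), applied to $f(x) = x/(e^x-1)$:

```python
import math

def f(x):
    # x / (e^x - 1), with the removable singularity at 0 filled in
    return x / math.expm1(x) if x != 0.0 else 1.0

def lagrange(points):
    """Return the Lagrange interpolant through the given (x_k, y_k) pairs."""
    def L(x):
        total = 0.0
        for k, (xk, yk) in enumerate(points):
            term = yk
            for j, (xj, _) in enumerate(points):
                if j != k:
                    term *= (x - xj) / (xk - xj)
            total += term
        return total
    return L

nodes = [-1.0, -0.5, 0.0, 0.5, 1.0]   # n + 1 = 5 sample points near the origin
p = lagrange([(x, f(x)) for x in nodes])
print(abs(p(0.25) - f(0.25)))         # small interpolation error between nodes
```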
I have the following family of polynomials $$ (p-1)(1-x^{2(n+1)}) - x(1-x^{2n}) = 0 $$ where $p\in\mathbb{R}$. By construction this polynomial has roots on the unit circle, and this is reflected in the fact that it is anti-palindromic. I can divide out two roots, $x=1,-1$, and get the polynomial $$ (p-1)x^{2n} - x^{2n-1} + ... -x + (p-1) =0, $$ which is palindromic. What's interesting about these is that for every root $\alpha$ there is a corresponding root $\frac{1}{\alpha}$, and generically you can reduce this type of polynomial to something of the form $$ x^m g(x+\frac{1}{x}) $$ for some value of $m$, where $g$ is a polynomial of lower order (for this case, $m=n$ and $g(x+\frac{1}{x}) = 2T_n (x + \frac{1}{x}) - \frac{2}{p-1}T_{n-1} (x+\frac{1}{x}) + ... + 1$, where $T_n$ is the $n$th Chebyshev polynomial of the first kind), which also carries the information about the roots. My question is how I can determine which, if any, roots of my original polynomial are roots of unity, i.e. of the form $e^{i\frac{r}{s}\pi}, r,s\in\mathbb{Z}$, for general values of $p$. There are special values of $p$ for which this equation is monic with all coefficients of unit modulus and the roots are all roots of unity, namely $p=0,1,2$. I expect that the roots depend continuously on the parameter $p$, with $p=1$ a singular point, but given that the polynomial is not monic for all other values of $p$, is it reasonable to expect that the roots are no longer simple roots of unity in these regions?
Let $G$ be a Lie group and $\Phi: G \times M \rightarrow M$ a group action on a symplectic manifold $M$. Furthermore, $x$ is a solution of the Hamilton equation $\dot{x}(t) = X_H(x(t))$ and for any $t$ there is a $g(t) \in G_x:=\{g \in G; Ad^*_g(x)=x\}$ (here $x \in \mathfrak{g}^*$) such that $x(t) = \Phi(g(t),d(t)).$ Now this $d:I \rightarrow M$ is another path in $M$ that is not necessarily Hamiltonian, but can be transformed into one under the group action, where the group elements can be taken from $G_x.$ In this situation I found that $d$ satisfies the modified Hamilton equation $$\dot{d}(t) + Z_{\zeta(t)}(d(t))= X_H(d(t)),$$ where $Z_{\zeta(t)}(p) := \frac{d}{ds}|_{s=0} \Phi(e^{s \zeta(t)},p)$ and $\zeta(t):=dL_{g(t)^{-1}}\dot{g}(t).$ So $d$ satisfies a Hamiltonian equation with an additional term. Does anybody know how this result can be derived? It should not be too hard, as we already have a strong relationship between $x$ and $d$, but I don't really see how to obtain the modified Hamilton equation shown above.
Euclid's proof is typically regarded as very weak, because the prime density it guarantees is extremely thin. My question is how far the most obvious generalizations of Euclid's algorithm (for definiteness, algorithm E2) partially fix this problem. In particular, does E2 produce a smooth monotone $\pi(n)$ which asymptotically grows at least like $\log(N)$? The algorithms which to me are the obvious generalizations: E1. Given a set P of primes, partition it into two parts A and B, multiply the primes in A and the primes in B, and either add or subtract the two products. These $2^{|P|}$ numbers are all coprime to all the numbers in P. E2. Given any partition of P into K disjoint subsets $A_1, \ldots , A_K$, generate $E = \sum_{1\le k\le K} {\pm (k)} \prod_{p \in P-A_k} p $, where ${\pm(k)}$ is a sign for each part. In words: multiply all the primes in the complement of each part and add the results with arbitrary signs. These are all coprime to P too. E3. Given any list of exponents $e(p,k)$ for each prime p and $k\in \{1,\dots,K\}$, where for each fixed p exactly one of the e(p,k) is zero, the broadest reasonable generalization is to generate coprime numbers with multiplicity: $E = \sum_{1\le k\le K} {\pm (k)} \prod_{p\in P} p^{e(p,k)} $ Algorithm E3 produces an infinite list of coprimes, so consider E2. Starting with the set {2}, and applying algorithm E2, you get a very dense list of new primes, at least for a while.
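Here is a minimal Python sketch of algorithm E2 as stated above (the function name e2 and the example partitions are ours): it forms the signed sum of products over complements, and you can verify directly that the result is coprime to every prime in P.

```python
from math import gcd, prod

def e2(partition, signs):
    """Algorithm E2: for a partition A_1, ..., A_K of a prime set P and
    signs +/-1, return sum_k signs[k] * prod(p for p in P - A_k).
    Each prime of P divides every term except exactly one, so the
    result is coprime to every prime in P."""
    P = [p for part in partition for p in part]
    return sum(s * prod(p for p in P if p not in A)
               for s, A in zip(signs, partition))

# Singleton partition of {2, 3, 5} with all + signs: 15 + 10 + 6 = 31
n = e2([[2], [3], [5]], [1, 1, 1])
print(n, all(gcd(n, p) == 1 for p in (2, 3, 5)))  # 31 True
```

With K = 2 this reduces to E1: for example e2([[2, 3], [5, 7]], [1, 1]) gives 35 + 6 = 41.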
I am asking the weakest question I can think of, namely whether E2 produces a better than 1/x density, and a better than logarithmic growth for the prime counting function, when iterated from an initial segment of primes: Question: Does algorithm E2 produce enough new coprimes to prove that $\pi(x) > c\log(x)$? Heuristics The reason I am asking for such weak growth is because of the following (very bad, but perhaps essentially accurate) heuristic: let $\pi(N)$ be the prime counting function, and partition the primes less than $N$ into singleton subsets, one prime each. There are about $2^{\pi(N)}$ E2 coprimes of this form, and I will only consider these. I will assume that they are all distinct, and prime, since when they factor, they make more primes. So, calling P(N) the product of all primes less than N, $\pi(P) \gtrsim 2^{\pi(N)}$. Since $P = (2^{\pi(N)}) ^{\overline{\log_2 p}}$, where $\overline{\log_2 p}$ is the mean of the base two logarithm of all primes less than N, and assuming that the average log is $\log_2(N)$ (which overestimates $\pi(N)$), one finds $\pi((2^{\pi(N)})^{\log_2 N} ) \approx 2^{\pi(N)}$ which, calling the big number $2^{\pi(N)}$ by the name A, says that $\pi(A^{\log_2(N)}) \approx A$ and, noting that N is about log(A) up to what matters, one finds that $\pi(A^{\log_2(\log_2(A))}) \approx A$ so that the heuristic lower bound for $\pi(x)$ is asymptotically approximately the inverse function of $x^{\log_2(\log_2(x))}$, i.e. eventually approximately logarithmic. (The motivation for this question is an answer to an unrelated question, "What are some proofs of Godel's Theorem which are essentially different from the original proof?") Embarrassing Omission LATER EDIT: I thought E2 was the interesting algorithm, but the set of numbers generated by E3: is closed under products; is closed under adding multiples of P, where P is the product of all the primes in the original set.
So the only way it can fail to generate the full multiplicative group mod P is if by some miracle it stays in a multiplicative subgroup, and that's ridiculous, but ridiculous is not a proof. Given this, the restriction on multiplicity that yields E2 starts to look completely artificial. At least this easier observation would completely answer the original nebulous motivating question: Euclid's proof, when turned into the most general equivalent algorithm, does generate all the primes, by finding the next primes for any given collection of primes (but one needs an exponent bound to make the algorithm effective). So, I guess a much better question would be the following conjecture: Q2 (conjecture): E3, starting with an initial set of primes, gives all the coprimes.
Function (mathematics) In mathematics, a function is a prescription that assigns to every object of one set an object of another (or the same) set. In many cases the objects are numbers. A function may be seen as producing an output (the assigned object) when given an input (an object from the first set). So a function is like a process. Each input x that is in the set X of inputs is paired with one output y in the set Y of outputs. The set X of inputs is called the domain and the set Y of possible outputs is called the codomain. Then it is said that y is a function of x, and we write y = f(x). f is the name of the function and one writes [math]f:X \to Y [/math] (function from X to Y) to represent the three parts of the function: the domain, the codomain and the pairing process. An example of a function is the factorial: [math]Factorial:N \to N[/math]. One gives a natural number [math]x[/math] as the input and gets a natural number [math]y[/math] as the output with the property that [math]y=x![/math]. The idea of a function has been set up to cover all sorts of possibilities. It is not necessary that the pairing is given by an equation. The main idea is that inputs and outputs are paired up somehow, even if the process is very complicated or not obvious. Metaphors Tables The inputs and outputs can be put in a table like the picture; this is easy if there is not too much data.
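The factorial example and the table metaphor can both be written as a short program (this sketch is ours, not part of the article): the table lists the pairs directly, and the process computes them.

```python
# The factorial function Factorial : N -> N pairs each input x with y = x!
def factorial(x):
    y = 1
    for k in range(2, x + 1):
        y *= k
    return y

# A function with only a few inputs can be given literally by its table of pairs.
table = {1: 1, 2: 2, 3: 6, 4: 24}

# The process and the table describe the same pairing on the table's inputs.
print(all(factorial(x) == y for x, y in table.items()))  # True
```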
Graphs In the picture it can be seen that both 2 and 3 have been paired with c; this is allowed. It is not allowed in the other direction: 2 could not output both c and d, since each input can only have one output. All of the outputs [math]f(x)[/math] (c and d in the picture) are usually called the image set of [math]f[/math], and the image set can be all of the codomain or not. For a subset A of the domain, the set of its outputs is written f(A). If the inputs and outputs have an order, it is easy to plot them on a graph. History In 1748 Leonhard Euler gave: "A function of a variable quantity is an analytic expression composed in any way whatsoever of the variable quantity and numbers or constant quantities." and then in 1755: "If some quantities so depend on other quantities that if the latter are changed the former undergoes change, then the former quantities are called functions of the latter. This definition applies rather widely and includes all ways in which one quantity could be determined by other. If, therefore, x denotes a variable quantity, then all quantities which depend upon x in any way, or are determined by it, are called functions of x." which is very modern. Usually, Dirichlet is credited with the version used in schools until the second half of the 20th century: "y is a function of a variable x, defined on the interval a < x < b, if to every value of the variable x in this interval there corresponds a definite value of the variable y. Also, it is irrelevant in what way this correspondence is established." In 1939, Bourbaki generalized the Dirichlet definition and gave a set-theoretic version of the definition as a correspondence between inputs and outputs; this was used in schools from about 1960.
Finally, in 1970, Bourbaki gave the modern definition as a triple [math]f = (X,Y,F)[/math], with [math]F\subseteq X\times Y[/math] and [math](x,f(x))\in F[/math] (i.e. [math]f:X \to Y[/math] and [math] F=\{(x,f(x))|x\in X,f(x)\in Y\}[/math]). Types of functions Elementary functions. Inverse functions.
Contents There are very few documents on the Web, and few books, that explain the Perlin noise method in an intuitive way; finding information on how to compute Perlin noise derivatives (especially in an analytical way) is even harder. For those of you who don't know what these derivatives are and why they are useful, let's first go through a quick introduction on derivatives. A Quick Introduction to (Partial) Derivatives Derivatives of any function (whether it is a one-, two- or three-dimensional function) are very useful. But before we give an example, let's first review what they are. If you create an image of a 2D noise and apply some sort of regular grid on top of that image, then we may want to know by how much the noise function varies along the x and y directions at each point of the grid (figure 1). Maybe this idea sounds familiar already? Remember that in the previous chapter, we used the result of a 2D noise function to displace a mesh. But let's get back to what we are trying to achieve here: how do we know the rate of change of our 2D noise function along the x- or y-axis? A very simple solution to this problem consists of taking the value of the noise at the point where you want to compute this variation (let's call this point \(Gn_x\)), the value of the noise at the point a step further to the right from \(Gn_x\) (let's call this second point \(Gn_{x+1}\)), and then subtracting the first value from the second. Example: if at the grid position \(Gn_{11}\) the noise is equal to 0.1 and at the grid position \(Gn_{12}\) the noise value is equal to 0.7, then we can say that the noise has varied from \(Gn_{11}\) to \(Gn_{12}\) (along the x-axis) by 0.6 (figure 1).
In equation form, we would write: Technically it is best to normalize this difference so that we get consistent results regardless of the distance that separates two points on the grid (if the results are normalized, measurements made with different grid spacings can then be compared to each other). To normalize this result, we just need to divide the difference by the distance between \(Gn_{11}\) and \(Gn_{12}\). So if the distance between two points on the grid is 2 for example, then we would need to write:$$\Delta_x Gn_{11} = {\dfrac{Gn_{12} - Gn_{11}}{2}}.$$ In mathematics this technique is called a forward difference. Forward because we take the point one step ahead and subtract the value at the current point from the value at that next point. Mathematically we can formalize this concept with the following equation:$$f'(x) = \lim_{h \to 0}{\dfrac{f(x+h) - f(x)}{h}}.$$ This means that we can compute the derivative of the function \(f(x)\) using the forward difference technique that we just introduced, but the value of this derivative becomes more and more accurate as the distance between the two points becomes smaller (in theory \(h\) tends toward 0). When the spacing is large you get some sort of very crude value for what the derivative is at a given point \(x\), but as the spacing becomes small this approximation improves. In the case of our noise image, the spacing of the grid is pretty large, so in fact you would get a much better approximation of the variation of the noise function at each point on the grid if the grid spacing was smaller (figure 2). This concept is more easily understood with a 1D example. Figure 3 shows the profile of a one-dimensional function. Let's assume that we now want to know by how much this function varies within the proximity of P.
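The forward-difference formula above can be tried directly in code. In this Python sketch (the function name and the test function are ours), you can watch the approximation of \(f'(x)\) improve as \(h\) shrinks:

```python
def forward_difference(f, x, h):
    """Normalized forward difference (f(x+h) - f(x)) / h, approximating f'(x)."""
    return (f(x + h) - f(x)) / h

square = lambda t: t * t   # f(t) = t^2, whose exact derivative is f'(t) = 2t
for h in (1.0, 0.1, 0.01):
    # ((1 + h)^2 - 1) / h = 2 + h, so here the error is exactly h
    print(h, forward_difference(square, 1.0, h))
```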
Using the principle of forward differencing, we can take a point further down along the x-axis, such as for example \(x_1\), compute the value of the function at that point and then subtract \(f(x)\) from \(f(x_1)\). Note that when we trace a line from \(f(x)\) to \(f(x_1)\), that line is approximately tangent to the function at \(x\). That's because, in fact, the derivative of a one-dimensional function gives the slope of the line tangent to the function where the function's derivative is being computed. So now that we know how to geometrically interpret the derivative of a function, you can easily see that if we take a point further away than \(x_1\), such as for example \(x_2\), then the line between \(x\) and \(x_2\) is not "as tangent" to \(x\) as the line \(x\)-\(x_1\). Conclusion: if you use a forward difference to compute the derivative of a function, then the smaller the distance between \(x\) and \(x+h\) the better. What have we learned so far? We learned that the derivative \(f'(x)\) of a one-dimensional function \(f(x)\) can be interpreted as the slope of the line tangent to the function \(f(x)\) at \(x\). We also learned that we could use a technique called forward difference (the general method is called finite difference) to compute an "approximation" of that slope, and also that this approximation gets better as \(h\) in the forward difference equation gets smaller. There is something really important to understand in order to make sense of how we will be using derivatives (actually partial derivatives) later in this chapter. So far, we explained that the derivative of a function can be interpreted as the slope of the function \(f(x)\) at any value of \(x\). The way we trace the tangent at \(x\) (where we computed the derivative of the function \(f(x)\)) is by simply drawing the line \(y = mx\) at the point on the function where we computed the derivative. The value \(m\) here is of course the slope we computed.
Why is this important? It's important because when \(x=1\) then \(y=m\). This means that, using this observation, we can say that the 2D vector tangent to the curve at the point where we evaluated the derivative is equal to Vec2f(1, m). Do you agree? Of course you then need to normalize this vector, but nonetheless note how this vector is tangent to the curve at the point where we evaluated the function's derivative (and that's what we want you to remember). It is important you understand this idea (which is illustrated in figure 4). You can see the 3D Perlin noise function as two 2D functions perpendicular to each other at the point where the derivative is computed. So you will have a 1D function to compute the derivative of the 2D function in the xy plane if you wish, and another 1D function to compute the derivative of the 2D noise function in the yz plane, as shown in figure 5. Now, if we apply the technique we just learned to compute the tangent of each one of these functions, note that the two obtained tangents are perpendicular to each other. But more interestingly, by taking the cross product of these two vectors you get a vector which is in fact perpendicular to the plane tangent to the surface at the point where the 2D noise function derivative was originally computed (as shown again in figure 5). This vector is the normal of our function at P. Hopefully by now you start to get it. These derivatives are going to be useful to compute the normals of our mesh displaced by a 3D or 2D noise function. This is simple and elegant. Now one remark and one question. Remark: note that we don't really compute the full derivative of the noise function here. We sort of cheat by computing a derivative of the function along the x-axis and then another derivative along the z-axis. What's interesting when we do that is that only one of the quantities varies.
For example, when we compute the derivative of the 3D noise function along the x-axis, the value we get doesn't change because of a variation of the noise along the z-axis (since we evaluate the noise function in the xy plane, i.e., there is no variation in z). In mathematics, when you have a function of several variables but you compute its derivative with respect to one of its variables only, with the others held constant, then we say that we compute the function's partial derivatives. Let's take an example: if you have the function $$f(x,y) = x^2 + xy + y^2,$$ then if you wish to compute the derivative of the function while holding \(y\) constant you get:$${\dfrac{\partial f(x,y)}{\partial x}} = 2x + y,$$ which you can read as the partial derivative of the function \(f(x,y)\) with respect to \(x\). In other words, we ignore all the terms in which \(x\) doesn't show up, such as \(y^2\) in our example (including any constant term), then compute the derivative with respect to \(x\) of each term in which \(x\) shows up. For example, if in one of the terms we have an \(x^2y\), then we replace it in the partial derivative by \(2xy\). If we have \(xy\), then we replace the term in the partial derivative by \(y\). Simple? Question: how do we compute these partial derivatives then? One method consists of using the forward difference technique (which is why we learned about it at the beginning of this chapter). If we take the example of our displaced mesh, then we can compute the derivative along the x-axis at the vertex \(V_{x,z}\) by subtracting the noise value at the vertex \(V_{x,z}\) from the noise value at the vertex \(V_{x+1,z}\) (figure 6). In other words, we can write:$$\partial Nx = N_{Vx+1,z} - N_{Vx,z}.$$ Where \(\partial Nx\) is the partial derivative of the noise function along the x-axis. Note that at this point in time, this is a real value, not a vector.
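The forward-difference idea carries over to partial derivatives directly: step in one variable while holding the other constant. A small Python sketch (the names are ours) checks the numeric partial of \(f(x,y) = x^2 + xy + y^2\) against the analytic result \(2x + y\):

```python
def f(x, y):
    return x * x + x * y + y * y

def partial_x(f, x, y, h=1e-6):
    """Forward difference in x only; y is held constant."""
    return (f(x + h, y) - f(x, y)) / h

x, y = 2.0, 3.0
print(partial_x(f, x, y))  # ≈ 7.0, the analytic value 2x + y
```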
We can similarly compute the partial derivative of the noise function along the z-axis:$$\partial Nz = N_{V_{x,z+1}} - N_{V_{x,z}}.$$ The question now is: how do we transform this real value (in other words, a float) into a vector (the vector tangent to the noise function at the point where the derivative is computed, along the x- and z-axis)? Well, this is simple. Remember that when we compute the partial derivative along the x-axis we work in the xy plane. Thus the z-coordinate of the vector tangent to the noise function along the x-axis is necessarily 0:$$T_x = \{?, ?, 0\}.$$ To compute the other two coordinates, you need to look at figure 4 again, where we explained that to compute the tangent to the function in a plane, you need to set the x-coordinate to 1 and the y-coordinate of the vector to the function's partial derivative value (if you want to compute the tangent in the yz plane then you need to set the z-coordinate of the tangent to 1, the y-coordinate of the vector to the function's partial derivative with respect to z, and the x-coordinate of the tangent vector to 0). Finally we have:$$ \begin{array}{l} T_x = \{1, \partial Nx, 0\}\\ T_z = \{0, \partial Nz, 1\}. \end{array} $$ Finally, to compute the normal of the vertex, all you need to do now is compute the cross product of these two vectors (the result of the cross product will be correct even if the two input vectors are not normalized: the resulting vector will be perpendicular to the two input vectors, though it might not be normalized itself):$$Normal_{V_{x,z}} = T_z \times T_x.$$ This technique works great, but it has drawbacks. First, what happens when we want to compute the derivatives for the vertices at the edges of the grid? Well, you can't. We also explained (figure 3) that the smaller the space between two samples when we compute the derivative using a forward difference, the more accurate the result. In other words, the larger the space between the vertices, the less accurate the computation of the partial derivatives.
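Putting the two tangents and the cross product together (a minimal Python sketch; for a flat surface both partial derivatives are 0 and the normal points straight up):

```python
def cross(a, b):
    # standard 3D cross product
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normal_from_partials(dndx, dndz):
    # Tangents along x and z, built from the partial derivatives
    tx = (1.0, dndx, 0.0)
    tz = (0.0, dndz, 1.0)
    # Normal = Tz x Tx, which works out to (-dndx, 1, -dndz)
    return cross(tz, tx)

flat_normal = normal_from_partials(0.0, 0.0)   # points straight up
bumpy_normal = normal_from_partials(0.5, 0.25)
```
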
Later on in this chapter we will show the difference between the partial derivatives computed with a forward difference and the analytical solution that we are now going to study. Analytical Partial Derivatives of the Perlin Noise Function So there is a better way of computing these partial derivatives. This technique only relies on maths and provides an "accurate" solution (in the mathematical sense of the term). A quick reminder: the partial derivative of the following function: $$f(x,y) = x^2 + xy + y^2$$ with respect to \(x\) is:$${\dfrac{\partial f(x,y)}{\partial x}} = 2x + y.$$ Now let's rewrite the noise function a little. Let's first replace all the dot products with letters (as shown below): For the sake of the exercise, let's recall that the parameters \(u\), \(v\) and \(w\) are computed as follows (we use the smoothstep function): Let's now write the different interpolations of the Perlin noise function on one single line (see the first chapter): Let's now replace the call to the lerp(a, b, t) function with its actual code (a(1-t)+bt) and expand: And then finally regroup the terms as follows: As you can see (and as expected) this is a function of three variables: \(u\), \(v\) and \(w\). If we apply the technique we learned to compute the partial derivative of a function with respect to one of its variables, we need to remove all the terms that do not contain the variable in question, and then replace the variable by its derivative in the remaining terms. For example, if we wish to compute the noise function's partial derivative with respect to \(u\) we get: Similarly, the partial derivatives with respect to \(v\) and \(w\) are: The remaining question is: what are the derivatives \(u'\), \(v'\) and \(w'\)?
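The key fact used here — that the chain of lerps is linear in each of \(u\), \(v\) and \(w\) once the corner values are fixed — can be checked numerically. A small Python sketch (the eight corner values are made-up numbers standing in for the gradient dot products, and the chain-rule factor \(u'\) is left out at this stage):

```python
def lerp(lo, hi, t):
    return lo * (1 - t) + hi * t

def trilerp(c, u, v, w):
    # c = (a, b, cc, d, e, f, g, h): the eight corner values
    a, b, cc, d, e, f, g, h = c
    return lerp(lerp(lerp(a, b, u), lerp(cc, d, u), v),
                lerp(lerp(e, f, u), lerp(g, h, u), v), w)

def dtrilerp_du(c, u, v, w):
    # The nested lerps are linear in u, so the derivative with respect
    # to u is the same bilinear blend applied to the corner differences.
    a, b, cc, d, e, f, g, h = c
    return lerp(lerp(b - a, d - cc, v), lerp(f - e, h - g, v), w)

corners = (0.3, -0.2, 0.8, 0.1, -0.5, 0.4, 0.0, 0.9)  # made-up corner values
u, v, w = 0.3, 0.6, 0.2
step = 1e-7
fd = (trilerp(corners, u + step, v, w) - trilerp(corners, u, v, w)) / step
an = dtrilerp_du(corners, u, v, w)
```
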
Well, simple: \(u\), \(v\) and \(w\) are computed as follows:$$ \begin{array}{l} u = 3{t_x}^2 - 2{t_x}^3\\ v = 3{t_y}^2 - 2{t_y}^3\\ w = 3{t_z}^2 - 2{t_z}^3\\ \end{array} $$ Thus the derivatives of these functions are:$$ \begin{array}{l} u' = 6t_x - 6{t_x}^2\\ v' = 6t_y - 6{t_y}^2\\ w' = 6t_z - 6{t_z}^2\\ \end{array} $$ Et voila! All you need to do now is compute these derivatives, and then construct the vectors tangent to the point where the function is evaluated using the technique we provided above. Here is a modified version of our eval function that evaluates the 3D noise function and its partial derivatives at a given location: Analytical Solution vs Forward Difference We can now compare two versions of the displaced mesh, one using the geometric solution (forward difference) to compute the vertex normals of the mesh, and one using the analytical solution. The code to compute either one of these solutions is as follows: Here is an image of these two meshes with their associated vertex normals (displayed on the right): Note that the vertex normals are not defined at the edges of the mesh whose normals were computed using the forward difference method (geometric solution). Note also that the directions of the vertex normals are significantly different between the two meshes (even though their shape is the same). This obviously causes the shading of the two meshes to be noticeably different as well (remember that normals are used in shading).
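A quick Python check that the derivative of the smoothstep really is \(6t - 6t^2\), by comparing it against a forward difference:

```python
def smoothstep(t):
    # u = 3t^2 - 2t^3
    return 3 * t * t - 2 * t * t * t

def smoothstep_deriv(t):
    # u' = 6t - 6t^2
    return 6 * t - 6 * t * t

t = 0.37
h = 1e-7
fd = (smoothstep(t + h) - smoothstep(t)) / h  # forward-difference estimate
an = smoothstep_deriv(t)                       # analytic value
```
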
Rendezvous with Geostationary Destinations Geostationary orbits are nontrivial to maintain. They are perturbed by non-round-earth forces represented by the \(J_{22}\) term in the spherical harmonics of the gravity field, forming attractors at 75 degrees east (above a point slightly east of the Maldives and south of India) and 105 degrees west (west of the Galapagos, directly south of Denver, Colorado). Evading these attractors can take as much as 1.715 m/s of delta V per year. The North-South Perturbation The north/south perturbations by the moon and sun are much larger. For the following analysis, I will approximate both the orbit of the Earth and the moon as circles. I also assume that \(R_S \gg R_M \gg R_G\) and approximately equal tidal forces at the near and far sides of the orbit. A more exact analysis suitable for precision rendezvous would numerically compute the actual multibody elliptical orbits, the spherical harmonic gravity terms for Earth and Moon, gravitational contributions from Jupiter and Venus, and optical perturbations. For now, we are looking for an approximation of maximum \(\Delta V\), but do not forget that the launch, position, and velocity for a vehicle in a mature launch loop system will be timed and optically measured to nanoseconds and micrometers, and continuously measured and controlled to this precision throughout the transfer orbit. \(R_G\) = 4.216e4 km, geostationary orbit radius; \(\mu_M\) = 4.903e3 km³/s², moon's standard gravitational parameter; \(R_M\) = 3.844e5 km, moon/earth semimajor orbit radius; \(\mu_S\) = 1.327e11 km³/s², sun's standard gravitational parameter; \(R_S\) = 1.496e8 km, earth/sun semimajor orbit radius. Lunar North-South Perturbation The moon's orbit takes it above and below the equatorial plane, which causes out-of-plane tidal forces. The Moon's orbital plane is inclined 5.145 degrees from the ecliptic plane (the plane of the earth's orbit), and precesses over a nodal period of 18.6 years.
The earth's axial tilt is 23°26'21" or 23.439°, so the moon's orbital plane is inclined between 18.294° and 28.584° with respect to the equatorial plane, varying sinusoidally between one and the other over the nodal period. According to Boden (in Larson and Wertz), counteracting these tidal forces on a geostationary object requires a \(\Delta V\) budget of as much as \(102.67 \cos\alpha \sin\alpha\) m/s per year (\(\equiv 51.335 \sin 2\alpha\)), where \(\alpha\) is the angle between the equatorial (geostationary orbit) plane and the Moon's orbital plane. Boden claims the worst case is 36.93 m/s per year, which implies an \(\alpha\) of 23.00°. Actually, the worst case \(\alpha\) is 28.584°, so using the first "102.67..." formula or its double-angle "51.335..." equivalent, the worst case annual \(\Delta V\) budget is more like 43.13 m/s per year. Let's see if we can derive that 102.67 m/s/y number. Over the course of a month, the moon moves north of the equatorial plane, through it, then south of the equatorial plane, and back. When the moon is on the equatorial plane, there is no perturbation north or south, though there are tidal forces radially on an object in geostationary orbit. The in-plane tidal acceleration is approximately \(2 R_G \mu_M \sin\theta / {R_M}^3\), where \(\theta\) is the orbital angle difference between the orbiting object and the moon. As the moon moves around its orbit, it moves north or south. This puts a northerly component of \(R_M \sin\omega\) on the moon's position, where \(\omega\) is the angle around the moon's orbit from the ascending node relative to the equatorial plane. We can approximate the out-of-plane tidal acceleration as \(2 R_G \mu_M \sin\theta \sin\omega / {R_M}^3\) MORE LATER References Larson and Wertz (1999). Space Mission Analysis and Design, Kluwer, 3rd Ed., Page 157 (Chapter Author Daryl G. Boden, PhD, USNA). Soop, E. M. (1994). Handbook of Geostationary Orbits. Springer
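As a quick sanity check on those numbers, here is a small Python sketch of Boden's expression (the coefficient 102.67 m/s per year is taken from the text above; everything else is just trigonometry):

```python
import math

def delta_v_per_year(alpha_deg):
    # Boden's expression: 102.67 cos(alpha) sin(alpha) m/s per year,
    # equivalently 51.335 sin(2 alpha)
    a = math.radians(alpha_deg)
    return 102.67 * math.cos(a) * math.sin(a)

boden_worst = delta_v_per_year(23.0)    # the 36.93 m/s/yr figure Boden quotes
true_worst = delta_v_per_year(28.584)   # ~43.1 m/s/yr at maximum inclination
```
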
Antenna Position (revision as of 22:07, 2 November 2016) Obtaining u,v,w From An Antenna Array A synthesis imaging radio instrument consists of a number of radio elements (radio dishes, dipoles, or other collectors of radio emission), which represent measurement points in u,v,w space. We need to describe how to convert an array of dishes on the ground to a set of points in u,v,w space.
E, N, U coordinates to x, y, z The first step is to determine a consistent coordinate system. Antenna positions are typically measured in units such as meters along the ground. We will use a right-handed coordinate system of East, North, and Up, (E, N, U). These coordinates are relative to the local horizon, however, and will change depending on where we are on the spherical Earth. It is convenient in astronomy to use a coordinate system aligned with the Earth's rotational axis, for which we will use coordinates (x, y, z) as shown in Figure 1. Conversion from (E, N, U) to (x, y, z) is done via a simple rotation matrix, which yields the relations between the two coordinate sets. Baselines and Spatial Frequencies Note that the baselines are differences of coordinates, i.e. for the baseline between two antennas we have a vector of coordinate differences. This vector difference in positions can point in any direction in space, but the part of the baseline that matters in calculating u,v,w is the component perpendicular to the phase center direction (labeled in Figure 2). Let us express the phase center direction as a unit vector \(\vec{s_o} = (h_o, \delta_o)\), where \(h_o\) is the hour angle (relative to the local meridian) and \(\delta_o\) is the declination (relative to the celestial equator). Recall that the spatial frequencies u,v,w are just distances expressed in wavelength units, so we can get the u,v,w coordinates from the baseline expressed in wavelength units via a coordinate transformation. How baseline errors can contribute to the error in phase The geometric phase difference at the phase center (the w term in (1)) is given by the projection of the baseline onto the phase center direction. We can see what can affect the geometric phase by taking the differential of this expression, where we use the relation between right ascension and hour angle (\(h = \mathrm{LST} - \alpha\), so \(\Delta h = -\Delta\alpha\)). Equation (2) shows how baseline errors and source position errors (\(\Delta\alpha\), \(\Delta\delta\)) will affect the error in group delay (or yield an error in phase). Note that a clock error is equivalent to a source position error in hour angle.
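For concreteness, here is a minimal Python sketch of the (x, y, z) to (u, v, w) transformation. Only the first row of the matrix survives in the page markup, so the remaining rows are filled in from the standard synthesis-imaging convention and should be checked against the original figure:

```python
import math

def xyz_to_uvw(b_xyz, h0, dec0, wavelength):
    """Rotate a baseline vector (x, y, z), in meters, into (u, v, w),
    in wavelengths, for a phase center at hour angle h0 and declination
    dec0 (both in radians). Standard synthesis-imaging rotation matrix."""
    x, y, z = b_xyz
    u = (math.sin(h0) * x + math.cos(h0) * y) / wavelength
    v = (-math.sin(dec0) * math.cos(h0) * x
         + math.sin(dec0) * math.sin(h0) * y
         + math.cos(dec0) * z) / wavelength
    w = (math.cos(dec0) * math.cos(h0) * x
         - math.cos(dec0) * math.sin(h0) * y
         + math.sin(dec0) * z) / wavelength
    return u, v, w

# Example baseline (values are arbitrary): 100 m E-W-ish components,
# observed at h0 = 0.3 rad, dec0 = 0.7 rad, at a 6 cm wavelength.
u, v, w = xyz_to_uvw((100.0, 200.0, 300.0), 0.3, 0.7, 0.06)
```

Because the matrix is a pure rotation, the length of the baseline (in wavelengths) is preserved, which is a handy sanity check.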
If we have a source whose position is known, we can use Equation (2) to find the location of the antennas (this is called baseline determination). The error in antenna position is largely independent of the baseline lengths. For example, say that we can measure phase to within 1 degree at 5 GHz (\(\lambda\) = 6 cm). Then we can measure u, v, and w to a precision of order (1/360) × 6 cm ~ 1/60 cm, even though the baselines are 5000 km or more (VLBI). The time of day and location of the antennas must be known to relatively high accuracy -- both are needed for determining the geometric delay. A clock error of 1 s, or a baseline error of a few cm, will cause a serious phase shift of the source over, say, 10 minutes. At OVRO, using a GPS clock and measuring baselines with cosmic source calibration, we get a time accuracy of << 1 ms, and baseline errors of about 3 mm. Therefore, these effects are not serious over a short time interval, but may still be problematic over 8 hours. This is one reason that we do phase calibration observations every ~ 2 hours.
I'll write $\longrightarrow$ for some standard iterations and $\implies$ for some possibly non-standard iterations. The relaxed Collatz conjecture is that for all $x$, $x \implies 1$. I'll call counterexamples to the conjecture escapees. First of all, since it's a good example of the notation, a lemma: for all $a$, $4 a \implies 9 a + 1$. Proof: $4a \implies 12 a + 1 \longrightarrow 36a + 4 \longrightarrow 18a + 2 \longrightarrow 9 a + 1$. $\square$ Now here is what I will try to show: Theorem: If $x$ is the smallest escapee, then $4x \implies 1$. Proof: By elementary considerations, $x \equiv 3 \pmod{4}$. If $x \equiv 2 \pmod{3}$, then $\frac{2 x - 1}{3} \longrightarrow 2 x \longrightarrow x$. So in this case, $2 x$ can't be an escapee, and neither can $4 x$. The next case is $x \equiv 0 \pmod{3}$. Write $x = 12k+3$. Using the lemma, $4x \implies 27k+7$. Observe that if $k$ is odd, then $9k+2 \longrightarrow 27k+7$ and $9k+2 < x$. So if $x \equiv 15 \pmod{24}$, we also have that $4x \implies 1$. It's also possible that $x \equiv 3 \pmod{24}$. In that case, we have $x = 24 j + 3$ and $x \longrightarrow 27j+4$. This means $j$ cannot be even. So we drill down again, with $x = 48i + 27 \longrightarrow 81i+47$. This means $i$ cannot be odd. But also, $4x = 4(48i+27) \implies 81i + 46$, so if $i$ is even, the next step brings us under $x$. This covers all the $x \equiv 0 \pmod{3}$ cases. So far, our result is that if $x \implies 4x$, then $x \equiv 7 \pmod{12}$. Now I'll handle that case. Suppose $x \implies 4x$ and $x = 24k + 7$. Then we have: $4x = 4(24k+7) \implies 9(24k+7)+1 = 216k+64 \longrightarrow 108k+32 \longrightarrow 54k+16$. But also, $18k+5 \longrightarrow 54k+16$. This means $18k+5$ is an escapee too, but it can't be, since it's less than $x$. So the hypothesis is false and we have $x = 24k + 19$ instead. Now, consider: $4x = 4(24k+19) \implies 216k + 172 \longrightarrow 108k + 86 \longrightarrow 54k + 43 \longrightarrow 162k + 130 \longrightarrow 81k+65$.
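The lemma's chain is easy to machine-check. A minimal Python sketch (the first step applies $3x+1$ to an even number, which is exactly the "possibly non-standard" part of $\implies$):

```python
def odd_step(x):
    # the 3x + 1 step; "relaxed" iterations may apply it even to evens
    return 3 * x + 1

def half(x):
    assert x % 2 == 0
    return x // 2

def lemma_chain(a):
    # 4a ==> 12a + 1 --> 36a + 4 --> 18a + 2 --> 9a + 1
    x = 4 * a
    x = odd_step(x)   # non-standard: 3x + 1 applied to an even number
    x = odd_step(x)   # 12a + 1 is odd, so this is a standard step
    x = half(x)
    x = half(x)
    return x

# Check the lemma over a range of values of a.
ok = all(lemma_chain(a) == 9 * a + 1 for a in range(1, 1000))
```
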
By similar reasoning, $k \not\equiv 3 \pmod{4}$, or else we can continue the chain two more steps to get an escapee less than $x$. Finally, $x = 24k+19 \longrightarrow 72k+58 \longrightarrow 36k + 29 \longrightarrow 108k + 88 \longrightarrow 54k + 44 \longrightarrow 27k + 22$ shows that $k$ can't be even, and substituting $k = 4 j + 1$: $27k + 22 = 108j + 49 \longrightarrow 324j + 148 \longrightarrow 162j + 74 \longrightarrow 81j + 37 < x$ completes the proof, since that covers all the cases. $\square$ Corollary: if the Collatz conjecture is false and the smallest counterexample leads back to itself, then that number is not a counterexample to the relaxed Collatz conjecture. In other words, if $y$ is the smallest number satisfying $\neg(y \longrightarrow 1)$, then $y \longrightarrow y$ implies $y \implies 1$. This is a weaker version of what I originally thought I had proved. Corollary: If $4x$ is an escapee, then there is an escapee smaller than $x$. This suggests to me a recursive descent strategy where we attempt to show $z \implies 4k < 4x$. But it's not clear if number-crunching will get us anywhere further.
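The final chain can also be machine-checked; a small Python sketch under the substitution $k = 4j+1$, so $x = 24k+19 = 96j+43$ and $27k+22 = 108j+49$:

```python
def standard_step(x):
    # one standard Collatz step
    return x // 2 if x % 2 == 0 else 3 * x + 1

def check_final_case(j):
    # x = 96j + 43; the text reaches 27k + 22 = 108j + 49, then descends
    x = 96 * j + 43
    y = 108 * j + 49          # = 27k + 22 with k = 4j + 1
    y = standard_step(y)      # 108j + 49 is odd -> 324j + 148
    y = standard_step(y)      # -> 162j + 74
    y = standard_step(y)      # -> 81j + 37
    return y == 81 * j + 37 and y < x

# The chain lands strictly below x for every j in this range.
ok = all(check_final_case(j) for j in range(1000))
```
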
This situation poses a simple mathematical problem, while the physical situation occurs in cases where a specific flow rate is required with a given pressure ratio (range) (this problem was considered by some to be somewhat complicated). The specific flow rate can be converted to an entrance Mach number, and this simplifies the problem. Thus, the problem is reduced to finding, for a given entrance Mach number, \(M_1\), and a given pressure ratio, the flow parameters, like the exit Mach number, \(M_2\). The procedure is based on the fact that the entrance star pressure ratio can be calculated using \(M_1\). Thus, using the pressure ratio to calculate the star exit pressure ratio provides the exit Mach number, \(M_2\). An example of such an issue is the following example, which also combines the "Naughty professor'' problems.

Example 11.21 Calculate the exit Mach number for \(P_2/P_1 = 0.4\) and entrance Mach number \(M_1 = 0.25\).

Solution 11.21 The star pressure ratio can be obtained from a table or Potto-GDC as

Fanno Flow Input: \(M_1\), k = 1.4

\(M_1\) | \(\dfrac{4\,f\,L}{D}\) | \(\dfrac{P}{P^{\star}}\) | \(\dfrac{P_0}{P_0^{\star}}\) | \(\dfrac{\rho}{\rho^{\star}}\) | \(\dfrac{U}{U^{\star}}\) | \(\dfrac{T}{T^{\star}}\)
0.2500 | 8.4834 | 4.3546 | 2.4027 | 3.6742 | 0.27217 | 1.1852

And the star pressure ratio can be calculated at the exit as follows: \begin{align*} {P_2 \over P^{*} } = {{P_2 \over P_1 } {P_1 \over P^{*} } } = 0.4 \times 4.3546 = 1.74184 \end{align*} And the corresponding exit Mach number for this pressure ratio reads

Fanno Flow Input: \(\dfrac{P}{P^{\star}}\), k = 1.4

\(M_1\) | \(\dfrac{4\,f\,L}{D}\) | \(\dfrac{P}{P^{\star}}\) | \(\dfrac{P_0}{P_0^{\star}}\) | \(\dfrac{\rho}{\rho^{\star}}\) | \(\dfrac{U}{U^{\star}}\) | \(\dfrac{T}{T^{\star}}\)
0.60694 | 0.46408 | 1.7418 | 1.1801 | 1.5585 | 0.64165 | 1.1177

To show off a bit, the Potto-GDC can carry out these calculations in one click as

Fanno Flow Input: \(M_1\) and \(\dfrac{P_2}{P_1}\), k = 1.4

\(M_1\) | \(M_2\) | \(\dfrac{4\,f\,L}{D}\) | \(\dfrac{P_2}{P_1}\)
0.250 | 0.60693 | 8.0193 | 0.400

As can be seen in Figure 11.38, the dominating parameter is \(\dfrac{4\,f\,L}{D}\). The results are very similar for isothermal flow. The only difference is in small dimensionless friction, \(\dfrac{4\,f\,L}{D}\). Contributors Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license.
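As a cross-check of the tabulated values, here is a small Python sketch using the standard Fanno-flow relation \(P/P^{\star} = (1/M)\sqrt{(k+1)/(2+(k-1)M^2)}\), with a bisection to invert it on the subsonic branch (the relation is the standard textbook one, not taken from this page):

```python
import math

def p_over_pstar(M, k=1.4):
    # Fanno flow: P/P* = (1/M) * sqrt((k + 1) / (2 + (k - 1) * M**2))
    return (1.0 / M) * math.sqrt((k + 1.0) / (2.0 + (k - 1.0) * M * M))

def mach_from_p_ratio(target, k=1.4):
    # P/P* decreases monotonically with M on the subsonic branch (0, 1],
    # so a simple bisection recovers M from the ratio.
    lo, hi = 1e-6, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if p_over_pstar(mid, k) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p1_star = p_over_pstar(0.25)        # matches the tabulated 4.3546
p2_star = 0.4 * p1_star             # = P_2 / P* = 1.74184
m2 = mach_from_p_ratio(p2_star)     # matches the tabulated exit Mach number
```
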
Notre Dame Journal of Formal Logic Notre Dame J. Formal Logic Volume 46, Number 1 (2005), 19-50. Definable Types Over Banach Spaces Abstract We study connections between asymptotic structure in a Banach space and model theoretic properties of the space. We show that, in an asymptotic sense, a sequence $(x_n)$ in a Banach space X generates copies of one of the classical sequence spaces $\ell_p$ or $c_0$ inside X (almost isometrically) if and only if the quantifier-free types approximated by $(x_n)$ inside X are quantifier-free definable. More precisely, if $(x_n)$ is a bounded sequence in X such that no normalized sequence of blocks of $(x_n)$ converges, then the following two conditions are equivalent. (1) There exists a sequence $(y_n)$ of blocks of $(x_n)$ such that for every finite dimensional subspace E of X, every quantifier-free type over $E +\overline{\rm span}\{y_n\mid n\in \mathbb{N}\}$ is quantifier-free definable. (2) One of the following two conditions holds: (a) there exists $1\le p< \infty$ such that for every $\epsilon>0$ and every finite dimensional subspace E of X there exists a sequence of blocks of $(x_n)$ which is $(1+\epsilon)$-equivalent over E to the standard unit basis of $\ell_p$; (b) for every $\epsilon>0$ and every finite dimensional subspace E of X there exists a sequence of blocks of $(x_n)$ which is $(1+\epsilon)$-equivalent over E to the standard unit basis of $c_0$. Several byproducts of the proof are analyzed. Article information Source Notre Dame J. Formal Logic, Volume 46, Number 1 (2005), 19-50. Dates First available in Project Euclid: 31 January 2005 Permanent link to this document https://projecteuclid.org/euclid.ndjfl/1107220672 Digital Object Identifier doi:10.1305/ndjfl/1107220672 Mathematical Reviews number (MathSciNet) MR2131545 Zentralblatt MATH identifier 1082.46010 Citation Iovino, José. Definable Types Over Banach Spaces. Notre Dame J. Formal Logic 46 (2005), no. 1, 19--50. doi:10.1305/ndjfl/1107220672.
If we have two sets $A$ and $B$ that contain the same number of elements (the same cardinal number, or cardinality), then these two sets are equivalent. The order of listing is not important, e.g. $$\{a, b, c\} = \{b, a, c\}.$$ Nevertheless, equivalent sets are not necessarily equal. Two or more sets are equal if every element of each set is also a member of every other set. Therefore, every pair of equal sets is also equivalent. Sets that have the same number of elements, but not the same elements, are equivalent but not equal. We arrive at the conclusion that equal sets are always equivalent, but equivalent sets are not always equal, and that the two sets shown in the example above are equal. For comparison, every prime number greater than $2$ is odd, but not every odd number greater than $2$ is prime. Below is an example of equivalent but not equal sets: $$\{a, b, c\} \leftrightarrow \{1, 2, 3\}.$$ Sometimes the equivalence operation $\leftrightarrow$ is replaced with $\sim$, but I will use the former. The above expression is also known as an equivalence relation, and since these sets are equivalent, but not equal, they are said to show a one-to-one correspondence. $$\underbrace{n(A) = n(B)\Rightarrow A\leftrightarrow B, \qquad A\operatorname*{\longleftrightarrow}^{\text{strictly}} B \Leftrightarrow \exists x\,(x\in A \ \land\ x\notin B).}_{\text{Equivalence Axiom.}}$$ Thus, if we have three sets $A =\{3, 4\}$, $B =\{4, 3\}\cup \varnothing$ and $C = \{4, 3\}\cup\{\varnothing\}$, then we first note that all the sets are equivalent except $C$, because it has three elements whilst the sets $A$ and $B$ have only two: $$A\leftrightarrow B\not\leftrightarrow C.$$ Since $A$ and $B$ contain the same elements, they are equal: $$A = B.$$ The order of listing does not matter, so $\{3, 4\} = \{4, 3\}$ and the above equation remains valid, and these sets both show a one-to-one correspondence. If confused about the union of empty sets, go here.
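The three-set example can be checked directly in Python (using frozenset() to stand in for $\varnothing$ as an element, since Python sets may only contain hashable members):

```python
A = {3, 4}
B = {4, 3} | set()          # union with the empty set adds nothing
C = {4, 3} | {frozenset()}  # union with {empty set} adds it as an element

# Equal sets: same elements (order of listing is irrelevant).
equal_ab = (A == B)                 # True
# Equivalent sets: same cardinality.
equiv_ab = (len(A) == len(B))       # True
equiv_ac = (len(A) == len(C))       # False: C has three elements
```
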
Analyzing the Viscous and Thermal Damping of a MEMS Micromirror Micromirrors have two key benefits: low power consumption and low manufacturing costs. For this reason, many industries use micromirrors for a wide range of MEMS applications. To save time and money when designing micromirrors, engineers can accurately account for thermal and viscous damping and analyze device performance via the COMSOL Multiphysics® software. The Many Applications of Micromirrors Picture a micromirror as a single string on a guitar. The string is so light and thin that when you pluck it, the surrounding air dampens the string’s motion, bringing it to a standstill. Micromirrors have a wide variety of potential applications. For instance, these mirrors can be used to control optic elements, an ability that makes them useful in the microscopy and fiber optics fields. Micromirrors are found in scanners, heads-up displays, medical imaging, and more. Additionally, MEMS systems sometimes use integrated scanning micromirror systems for consumer and telecommunications applications. When developing a micromirror actuator system, engineers need to account for its dynamic vibrating behavior and damping, both of which greatly affect the operation of the device. Simulation provides a way to analyze these factors and accurately predict system performance in a timely and cost-efficient manner. To perform an advanced MEMS analysis, you can combine features in the Structural Mechanics Module and Acoustics Module, two add-on products to the COMSOL Multiphysics simulation platform. Let’s take a look at frequency-domain (time-harmonic) and transient analyses of a vibrating micromirror. Performing a Frequency-Domain Analysis of a Vibrating Micromirror We model an idealized system that consists of a vibrating silicon micromirror — which is 0.5 by 0.5 mm with a thickness of 1 μm — surrounded by air. 
A key parameter in this model is the penetration depth; i.e., the thickness of the viscous and thermal boundary layers. In these layers, energy dissipates via viscous drag and thermal conduction. The thickness of the viscous and thermal layers is characterized by the penetration depth scales \delta_\textrm{v} = \sqrt{\mu / (\pi f \rho)} and \delta_\textrm{t} = \sqrt{\kappa / (\pi f \rho C_\textrm{p})} = \delta_\textrm{v} / \sqrt{\textrm{Pr}}, where f is the frequency, \rho is the fluid density, \mu is the dynamic viscosity, \kappa is the coefficient of thermal conduction, C_\textrm{p} is the heat capacity at constant pressure, and \textrm{Pr} is the nondimensional Prandtl number. For air, when the system is excited at a frequency of 10 kHz (which is typical for this model), the viscous and thermal scales are 22 µm and 18 µm, respectively. These are comparable to the geometric scales, like the mirror thickness, meaning that thermal and viscous losses must be included. Moreover, in real systems, the mirrors may be located near surfaces or in close proximity to each other, creating narrow regions where the damping effects are accentuated. The frequency-domain analysis provides insight into the frequency response of the system, including the location of the resonance frequencies, Q-factor of the resonance, and damping of the system. The micromirror model geometry, showing the symmetry plane, fixed constraint, and torquing force components. In this example, we use three separate interfaces: The Shell interface to model the solid micromirror, available in the Structural Mechanics Module The Thermoviscous Acoustics, Frequency Domain interface to model the air domain around the mirror, available in the Acoustics Module The Pressure Acoustics, Frequency Domain interface to truncate the computational domain, available in the Acoustics Module By modeling the detailed thermoviscous acoustics and using the Thermoviscous Acoustics, Frequency Domain interface, we can explicitly include thermal and viscous damping while solving the full linearized Navier-Stokes, continuity, and energy equations.
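As a rough check of the quoted viscous scale, here is a small Python sketch using the standard viscous penetration depth \delta_\textrm{v} = \sqrt{\mu / (\pi f \rho)} (the air properties below are typical textbook values, not taken from the article):

```python
import math

# Approximate properties of air near room temperature
rho = 1.2      # density, kg/m^3
mu = 1.81e-5   # dynamic viscosity, Pa*s
f = 10e3       # excitation frequency, Hz

# Viscous penetration depth: delta_v = sqrt(mu / (pi * f * rho))
delta_v = math.sqrt(mu / (math.pi * f * rho))   # ~2.2e-5 m, i.e. ~22 um
```
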
In doing so, we accomplish one of the main goals for this model: accurately calculating the damping experienced by the mirror. To set up and combine the three interfaces, we use the Acoustics-Thermoviscous Acoustics Boundary and Thermoviscous Acoustics-Structure Boundary multiphysics couplings. We then solve the model using a frequency-domain sweep and an eigenfrequency study. These analyses enable us to study the resonance frequency of the mirror under a torquing load in the frequency domain.

Results of the Frequency-Domain Analysis

Let’s take a look at the displacement of the micromirror at a frequency of 10 kHz when exposed to the torquing force. In this scenario, the displacement mainly occurs at the edges of the device. To view displacement in a different way, we also plot the response at the tip of the micromirror over a range of frequencies.

Micromirror displacement at 10 kHz for phase 0 (left) and the absolute value of the z-component of the displacement field at the micromirror tip (right).

Next, let’s view the acoustic temperature variations (left image below) and acoustic pressure distribution (right image below) around the micromirror for a frequency of 11 kHz. As we can see, the maximum and minimum temperature fluctuations occur opposite to one another and there is an antisymmetric pressure distribution. The temperature fluctuations are closely related to the pressure fluctuations through the equation of state. Note that the temperature fluctuations fall to zero at the surface of the mirror, where an isothermal condition is applied. The temperature gradient near the surface gives rise to the thermal losses.

Temperature fluctuation field within the thermoviscous acoustics domain (left) and the pressure isosurfaces (right).

The two animations below show a dynamic extension of the frequency-domain data using the time-harmonic nature of the solution.
Both animations depict the mirror movement in a highly exaggerated manner, with the first one showing the instantaneous velocity magnitude in a cross section and the second showing the acoustic temperature fluctuations. These results indicate that there are high-velocity regions close to the edge of the micromirror. We determine the extent of this region into the air via the scale of the viscous boundary layer (viscous penetration depth). We can also identify the thermal boundary layer, or thermal penetration depth, using the same method.

Animation of the time-harmonic variation in the local velocity.

Animation of the time-harmonic variation in the acoustic temperature fluctuations.

When the problem is formulated in the frequency domain, eigenmodes and eigenfrequencies can also be identified. From the eigenfrequency study (also performed in the model), we can determine the vibrating modes, shown in the animation below (only half the mirror is shown, as symmetry applies). Our results show that the fundamental mode is around 10.5 kHz, with higher modes at 13.1 kHz and 39.5 kHz. The complex value of the eigenfrequency is related to the Q-factor of the resonance and thus the damping. (This relationship is discussed in detail in the Vibrating Micromirror model documentation.)

Animation of the first three vibrating modes of the micromirror.

Transient Analysis of Viscous and Thermal Damping in a Micromirror

As of version 5.3a of the COMSOL® software, a different take on this example solves for the transient behavior of the micromirror. Using the same geometry, we extend the frequency-domain analysis into a transient analysis. To achieve this, we swap the frequency-domain interfaces with their corresponding transient interfaces and adjust the settings of the transient solver. In the simulation, the micromirror is actuated for a short time and exhibits damped vibrations.
The resulting model includes some of the most advanced air and gas damping mechanisms that COMSOL Multiphysics has to offer. For instance, the Thermoviscous Acoustics, Transient interface captures the full details of the viscous and thermal damping of the micromirror by the surrounding air. In addition, by coupling the transient perfectly matched layer capabilities of pressure acoustics to the thermoviscous acoustics domain, we can create efficient nonreflecting boundary conditions (NRBCs) for this model in the time domain.

Results of the Transient Analysis

Let’s start with the displacement results. The 3D results (left image below) visualize the displacement of the micromirror and the pressure distribution at a given time. We also generate a plot (right image below) to illustrate the damped vibrations caused by thermal and viscous losses. The green curve represents the undamped response of the micromirror when the surrounding air is not coupled to the mirror movement. The time-domain simulations make it possible to study transients of the system, like the decay time, and the response of the system to an anharmonic forcing.

Micromirror displacement and pressure distribution (left) and the transient evolution of the mirror displacement (right).

We can also examine the acoustic temperature variations surrounding the micromirror. The isothermal condition at the micromirror surface produces an acoustic thermal boundary layer. As with the frequency-domain example, the highest and lowest temperatures are located opposite to one another. In addition, by calculating the acoustic velocity variations around the micromirror, we see that a no-slip condition at the micromirror surface results in a viscous boundary layer.

Acoustic temperature variations (left) as well as acoustic velocity variations for the x-component (center) and z-component (right).
Next Steps

These examples demonstrate that we can analyze micromirrors using advanced modeling features available in the Acoustics Module in combination with the Structural Mechanics Module. For more details on modeling micromirrors, check out the tutorials below.
A regular hexagon inscribed in a circle has an area of $$54\cdot 3^{\frac{1}{3}}\ \text{sq. in.}$$ Prove that the circumference of the circle is $$25\pi.$$

Hint: The area of a regular $n$-gon inscribed in a circle of radius $R$ is: $$ A_n=\frac n2 R^2\sin\frac{2\pi}n. $$ Can you take it from here?

Warning: the answer will appear to be different from $25\pi$ (in), since the latter value is wrong.
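Taking the stated area at face value, a quick numeric sketch with the hint's formula confirms the warning: the resulting circumference is nowhere near $25\pi$.

```python
import math

A = 54 * 3 ** (1 / 3)   # the stated hexagon area, used exactly as given
n = 6

# Invert A_n = (n/2) * R^2 * sin(2*pi/n) for the circumradius R
R = math.sqrt(2 * A / (n * math.sin(2 * math.pi / n)))
circumference = 2 * math.pi * R

# circumference is about 34.4, while 25*pi is about 78.5
```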
Differentiable Function is Continuous

Theorem

Let $f$ be a real function defined on an interval $I$, and let $x_0 \in I$ be such that $f$ is differentiable at $x_0$. Then $f$ is continuous at $x_0$.

Proof

By hypothesis, $f' \left({x_0}\right)$ exists. For $x \ne x_0$ we have:

$$f \left({x}\right) - f \left({x_0}\right) = \frac {f \left({x}\right) - f \left({x_0}\right)} {x - x_0} \cdot \left({x - x_0}\right) \to f' \left({x_0}\right) \cdot 0 = 0 \quad \text{as } x \to x_0.$$

Thus $f \left({x}\right) \to f \left({x_0}\right)$ as $x \to x_0$, or in other words:

$\displaystyle \lim_{x \to x_0} f \left({x}\right) = f \left({x_0}\right)$

The result follows by definition of continuous. $\blacksquare$
Is there an injective lattice homomorphism $\varphi: \text{Top}(\kappa)\to \text{Top}^{T_1}(\kappa)$?

The answer is Yes, there is such an embedding. I will argue that if $\kappa$ is an infinite cardinal, then there is a complete lattice embedding $\varphi: \text{Top}(\kappa)\to \text{Top}^{T_1}(\kappa\times\kappa)$. This is enough to answer the question, for the following reason. Any bijection $\beta:\kappa\times\kappa\to\kappa$ induces a lattice isomorphism $\overline{\beta}: \text{Top}(\kappa\times\kappa)\to \text{Top}(\kappa)$ which maps the cofinite topology on $\kappa\times\kappa$ to the cofinite topology on $\kappa$. A topology is $T_1$ iff it contains the cofinite topology, so $\overline{\beta}$ restricts to a lattice isomorphism from $\text{Top}^{T_1}(\kappa\times\kappa)$ to $\text{Top}^{T_1}(\kappa)$. Thus, any (complete) lattice embedding $\text{Top}(\kappa)\to \text{Top}^{T_1}(\kappa\times\kappa)$ can be altered to a (complete) lattice embedding $\text{Top}(\kappa)\to \text{Top}^{T_1}(\kappa)$ by composing with such a $\overline{\beta}$.

If $U\subseteq \kappa$, then by a cofinite extension of $U$ I mean a subset $X\subseteq U\times \kappa$ where, for each $u\in U$, the set $\{\lambda<\kappa\;|\;(u,\lambda)\in X\}$ is cofinite in $\kappa$. To make sure this is clear, let me explain this a second way using the projection maps $\pi_1, \pi_2\colon \kappa\times\kappa\to\kappa$. $X$ is a cofinite extension of $U$ if (i) $\pi_1(X)=U$, and (ii) for every $u\in U$ we have $\pi_2((\{u\}\times\kappa)\cap X)$ is cofinite in $\kappa$.

If $\tau$ is a topology, let $\widehat{\tau}$ be the collection of all cofinite extensions of sets in $\tau$. I claim that

I. For any topology $\tau$ on $\kappa$, $\widehat{\tau}$ is a $T_1$ topology on $\kappa\times\kappa$.

II. The map $\tau\mapsto \widehat{\tau}$ is a complete lattice embedding of $\text{Top}(\kappa)$ into $\text{Top}^{T_1}(\kappa\times\kappa)$.

These are not hard to prove and they establish the result.
In the following justifications, if $X$ is a cofinite extension of $U$, then I may refer to the fibers of $X$, by which I mean fibers of $X$ under the first projection $\pi_1$. If $x\in\pi_1(X)$, then the fiber of $X$ over $x$ is $(\{x\}\times\kappa)\cap X$, which is a subset of $\kappa\times\kappa$. (So, a cofinite extension of $U\subseteq \kappa$ is a subset of $U\times \kappa$ with cofinite fibers.)

Sketch of proof of I.

(Least and largest subsets) The least and largest subsets $\emptyset$ and $\kappa\times\kappa$ of the set $\kappa\times\kappa$ are cofinite extensions of the least and largest subsets $\emptyset$ and $\kappa$ of $\kappa$.

(Finite intersection) If $X, Y\in \widehat{\tau}$, then they are cofinite extensions of some $\pi_1(X)=U, \pi_1(Y)=V\in\tau$. Then $X\cap Y$ is a cofinite extension of $U\cap V\in\tau$, so $X\cap Y\in\widehat{\tau}$.

(Arbitrary union) If $X_i\in\widehat{\tau}$, then they are cofinite extensions of some $U_i\in\tau$. Then $\cup X_i$ is a cofinite extension of $\cup U_i\in\tau$, so $\cup X_i\in\widehat{\tau}$.

($T_1$) Every cofinite subset of $\kappa\times\kappa$ is a cofinite extension of $\kappa$, so any topology of the form $\widehat{\tau}$ on $\kappa\times\kappa$ contains all cofinite sets. This means that any such topology is $T_1$.

Sketch of proof of II.

Given topologies $\tau_i$ on $\kappa$ we must argue that

(Inj) the map $\tau\mapsto \widehat{\tau}$ is injective,

(M) $\widehat{\bigcap \tau_i}=\bigcap\widehat{\tau_i}$, and

(J) $\widehat{\bigvee \tau_i}=\bigvee\widehat{\tau_i}$.

The map $\tau\mapsto \widehat{\tau}$ is easily seen to be order-preserving (and 1-1), so I focus on the claims

(M)' $\widehat{\bigcap \tau_i}\supseteq\bigcap\widehat{\tau_i}$, and

(J)' $\widehat{\bigvee \tau_i}\subseteq\bigvee\widehat{\tau_i}$.

Let's start with (M)'. Choose a set $X\in \bigcap\widehat{\tau_i}$ and let $U=\pi_1(X)$. Then $U\in\tau_i$ for every $i$, so $U\in\bigcap \tau_i$, and $X$ is a cofinite extension of $U$, so $X\in \widehat{\bigcap\tau_i}$. Now (J)'.
Suppose that $X\in \widehat{\bigvee\tau_i}$. Then $X$ is a cofinite extension of some set in $\bigvee\tau_i$, and a typical such set has the form $\bigcup_i (U_{i1}\cap \cdots\cap U_{ik_i})$ where $U_{ij}\in\tau_j$. It will now suffice for us to show that $X$ can also be represented in the form $\bigcup_i (\overline{U}_{i1}\cap \cdots\cap \overline{U}_{ik_i})$ where $\overline{U}_{ij}$ is a cofinite extension of some set in some $\tau_j$. Of course, we will choose $\overline{U}_{ij}$ to be a cofinite extension of the set $U_{ij}\in\tau_j$, but we must explain how to choose the fibers of $\overline{U}_{ij}$. If some $x\in U_{ij}$ also belongs to $\pi_1(X)$, then choose the fiber over $x$ in $\overline{U}_{ij}$ so that it agrees with the fiber over $x$ in $X$, which must be cofinite in $\kappa$. For any other $x\in U_{ij}$ it doesn't matter how you choose the fiber over $x$ in $\overline{U}_{ij}$ except that it must be cofinite. (To be specific, choose this fiber to be all of $\kappa$.)

We have now chosen $\overline{U}_{ij}\in\widehat{\tau_j}$ so that $\bigcup_i (\overline{U}_{i1}\cap \cdots\cap \overline{U}_{ik_i})$ has the same first projection and the same fibers as $X$, hence it equals $X$. This represents $X$ as an element of $\bigvee \widehat{\tau_i}$.
Definition:Multiplication/Natural Numbers

Definition

Let $\N$ be the natural numbers. Multiplication on $\N$ is the basic operation $\times$ everyone is familiar with. For example:

$3 \times 4 = 12$

$13 \times 7 = 91$

The same holds for any construction of $\N$ in an ambient theory. Let $+$ denote addition. Then:

$\forall m, n \in \N: \begin{cases} m \times 0 & = 0 \\ m \times \left({n + 1}\right) & = m \times n + m \end{cases}$

This operation is called multiplication. Equivalently, multiplication can be defined as:

$\forall m, n \in \N: m \times n := +^n m$

where $+^n m$ denotes the $n$th power of $m$ under $+$.
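The recursive definition translates directly into code; here is a minimal sketch that reproduces the examples above:

```python
def mult(m, n):
    """Multiplication on the natural numbers, defined recursively:
    m * 0 = 0  and  m * (n + 1) = m * n + m."""
    if n == 0:
        return 0
    return mult(m, n - 1) + m
```

Unfolding the recursion shows why this is the same as the $+^n m$ formulation: `mult(m, n)` is $m$ added to itself $n$ times.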
Let me tell you about a fascinating paradox arising in certain infinitary two-player games of perfect information. The paradox, namely, is that there are games for which our judgement of who has a winning strategy or not depends on whether we insist that the players play according to a deterministic computable procedure. In the space of computable play for these games, one player has a winning strategy, but in the full space of all legal play, the other player can ensure a win.

The fundamental theorem of finite games, proved in 1913 by Zermelo, is the assertion that in every finite two-player game of perfect information — finite in the sense that every play of the game ends in finitely many moves — one of the players has a winning strategy. This is generalized to the case of open games, games where every win for one of the players occurs at a finite stage, by the Gale-Stewart theorem (1953), which asserts that in every open game, one of the players has a winning strategy. Both of these theorems are easily adapted to the case of games with draws, where the conclusion is that one of the players has a winning strategy or both players have draw-or-better strategies.

Let us consider games with a computable game tree, so that we can compute whether or not a move is legal. Let us say that such a game is computably paradoxical if our judgement of who has a winning strategy depends on whether we restrict to computable play or not. So for example, perhaps one player has a winning strategy in the space of all legal play, but the other player has a computable strategy defeating all computable strategies of the opponent. Or perhaps one player has a draw-or-better strategy in the space of all play, but the other player has a computable strategy defeating computable play. Examples of paradoxical games occur in infinite chess.
I described such a paradoxical position in my paper Transfinite game values in infinite chess by giving a computable infinite chess position with the property that both players had drawing strategies in the space of all possible legal play, but in the space of computable play, white had a computable strategy defeating any particular computable strategy for black.

For a related non-chess example, let $T$ be a computable subtree of $2^{<\omega}$ having no computable infinite branch, and consider the game in which black simply climbs in this tree as white watches, with black losing whenever he is trapped in a terminal node, but winning if he should climb infinitely. This game is open for white, since if white wins, this is known at a finite stage of play. In the space of all possible play, Black has a winning strategy, which is simply to climb the tree along an infinite branch, which exists by König’s lemma. But there is no computable strategy to find such a branch, by the assumption on the tree, and so when black plays computably, white will inevitably win.

For another example, suppose that we have a computable linear order $\lhd$ on the natural numbers $\newcommand\N{\mathbb{N}}\N$, which is not a well order, but which has no computable infinite descending sequence. It is a nice exercise in computable model theory to show that such an order exists. Consider the count-down game in this order, with white trying to build a descending sequence and black watching. In the space of all play, white can succeed and therefore has a winning strategy, but since there is no computable descending sequence, white can have no computable winning strategy, and so black will win every computable play.
There are several proofs of open determinacy (and see my MathOverflow post outlining four different proofs of the fundamental theorem of finite games), but one of my favorite proofs of open determinacy uses the concept of transfinite game values, assigning an ordinal to some of the positions in the game tree. Suppose we have an open game between Alice and Bob, where the game is open for Alice. The ordinal values we define for positions in the game tree will measure in a sense the distance Alice is away from winning. Namely, her already-won positions have value $0$, and if it is Alice’s turn to play from a position $p$, then the value of $p$ is $\alpha+1$, if $\alpha$ is minimal such that she can play to a position of value $\alpha$; if it is Bob’s turn to play from $p$, and all the positions to which he can play have value, then the value of $p$ is the supremum of these values. Some positions may be left without value, and we can think of those positions as having value $\infty$, larger than any ordinal. The thing to notice is that if a position has a value, then Alice can always make it go down, and Bob cannot make it go up. So the value-reducing strategy is a winning strategy for Alice, from any position with value, while the value-maintaining strategy is winning for Bob, from any position without a value (maintaining value $\infty$). So the game is determined, depending on whether the initial position has value or not. What is the computable analogue of the ordinal-game-value analysis in the computably paradoxical games? If a game is open for Alice and she has a computable strategy defeating all computable opposing strategies for Bob, but Bob has a non-computable winning strategy, then it cannot be that we can somehow assign computable ordinals to the positions for Alice and have her play the value-reducing strategy, since if those values were actual ordinals, then this would be a full honest winning strategy, even against non-computable play. 
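As a finite surrogate of the value recursion (my own illustrative sketch, with natural numbers standing in for the ordinals and a sentinel standing in for $\infty$), the definition can be written out directly:

```python
INF = float("inf")  # stands in for "no value": Alice cannot force the win

def game_value(pos):
    """Value of a position in a game that is open for Alice.

    pos is "A" (Alice has already won), "B" (play ended without an
    Alice win), or a pair (player_to_move, list_of_child_positions).
    """
    if pos == "A":
        return 0
    if pos == "B":
        return INF
    player, children = pos
    vals = [game_value(c) for c in children]
    if player == "Alice":
        v = min(vals)                # Alice plays so as to reduce the value
        return v + 1 if v < INF else INF
    return max(vals)                 # Bob cannot raise the value: supremum

# Example: Alice forces a win within two of her moves from this root.
root = ("Alice", [("Bob", [("Alice", ["A"]), ("Alice", ["A", "B"])]), "B"])
```

In the infinite setting the values are ordinals and the supremum at Bob's nodes can be transfinite, but the shape of the recursion is exactly this.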
Nevertheless, I claim that the ordinal-game-value analysis does admit a computable analogue, in the following theorem. This came out of a discussion I had recently with Noah Schweber and Russell Miller during Noah's recent visit to the CUNY Graduate Center.

Let us define that a computable open game is an open game whose game tree is computable, so that we can tell whether a given move is legal from a given position (this is a bit weaker than being able to compute the entire set of possible moves from a position, even when this is finite). And let us define that an effective ordinal is a computable linear order $\lhd$ on $\N$ for which there is no computable infinite descending sequence. Every computable ordinal is also an effective ordinal, but as we mentioned earlier, there are non-well-ordered effective ordinals. Let us call them computable pseudo-ordinals.

Theorem. The following are equivalent for any computable game, open for White.

1. White has a computable strategy defeating any computable play by Black.

2. There is an effective game-value assignment for White into an effective ordinal $\lhd$, giving the initial position a value. That is, there is a computable assignment of some positions of the game, including the first position, to values in the field of $\lhd$, such that from any valued position with White to play, she can play so as to reduce value, and with Black to play, he cannot increase the value.

Proof. ($2\to 1$) Given the computable values into an effective ordinal, the value-reducing strategy for White is a computable strategy. If Black plays computably, then together they compute a descending sequence in the $\lhd$ order. Since there is no computable infinite descending sequence, it must be that the values hit zero and the game ends with a win for White. So White has a computable strategy defeating any computable play by Black.

($1\to 2$) Conversely, suppose that White has a computable strategy $\sigma$ defeating any computable play by Black.
Let $\tau$ be the subtree of the game tree arising by insisting that White follow the strategy $\sigma$, and view this as a tree on $\N$, a subtree of $\N^{<\omega}$. Imagine the tree growing downwards, and let $\lhd$ be the Kleene-Brouwer order on this tree, which is the lexical order on incompatible positions, and otherwise longer positions are lower. This is a computable linear order on the tree. Since $\sigma$ is computably winning for White, the open player, it follows that every computable descending sequence in $\tau$ eventually reaches a terminal node. From this, it follows that there is no computable infinite descending sequence with respect to $\lhd$, and so this is an effective ordinal. We may now map every node in $\tau$, which includes the initial node, to itself in the $\lhd$ order. This is a game-value assignment, since on White’s turn, the value goes down, and it doesn’t go up on Black’s turn. QED Corollary. A computable open game is computably paradoxical if and only if it admits an effective game value assignment for the open player, but only with computable pseudo-ordinals and not with computable ordinals. Proof. If there is an effective game value assignment for the open player, then the value-reducing strategy arising from that assignment is a computable strategy defeating any computable strategy for the opponent. Conversely, if the game is paradoxical, there can be no such ordinal-assignment where the values are actually well-ordered, or else that strategy would work against all play by the opponent. QED Let me make a few additional observations about these paradoxical games. Theorem. In any open game, if the closed player has a strategy defeating all computable opposing strategies, then in fact this is a winning strategy also against non-computable play. Proof. 
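The Kleene-Brouwer order used in this step is easy to state concretely. Here is a sketch of the comparison for finite sequences of naturals (positions in a tree on $\N$), where "less" means earlier in the order:

```python
def kb_less(s, t):
    """Kleene-Brouwer order on finite sequences: s comes before t iff
    s properly extends t (longer positions are lower), or s has the
    smaller entry at the first coordinate where the two differ."""
    for a, b in zip(s, t):
        if a != b:
            return a < b        # lexical comparison at first disagreement
    return len(s) > len(t)      # otherwise, the longer sequence is lower
```

On a well-founded tree this linearization is a well-order; on an ill-founded tree it is not, which is exactly what makes the pseudo-ordinal phenomenon possible.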
If the closed player has a strategy $\sigma$ defeating all computable strategies of the opponent, then in fact it defeats all strategies of the opponent, since any winning play by the open player against $\sigma$ wins in finitely many moves, and therefore there is a computable strategy giving rise to the same play. QED Corollary. If an open game is computably paradoxical, it must be the open player who wins in the space of computable play and the closed player who wins in the space of all play. Proof. The theorem shows that if the closed player wins in the space of computable play, then that player in fact wins in the space of all play. QED Corollary. There are no computably paradoxical clopen games. Proof. If the game is clopen, then both players are closed, but we just argued that any computable strategy for a closed player winning against all computable play is also winning against all play. QED
Compute $$\int_0^{\infty} \frac{e^{-ax}}{1+x^2}\,dx,\;\; a>0$$ using complex analysis. I have tried, but with no luck.

Edit: The answers are great, but let's keep this question open until someone maybe finds a way to solve this integral using complex analysis.

As I already mentioned in the comments, I did not achieve the result through complex analysis. Instead I used the following real-analysis approach. Let $$f(a)=\int_0^{\infty} \frac{e^{-ax}}{1+x^2}\,dx,\;\; a>0.$$ Then, $$f''(a)=\int_0^{\infty} \frac{x^2 e^{-ax}}{1+x^2}\, dx.$$ It follows that $$f''(a)+f(a)=\int_0^{\infty} e^{-ax}\, dx.$$ Evaluate the integral: $$f''(a)+f(a)=\frac{1}{a}.$$ A general solution to this differential equation is the sum of a complementary solution and a particular solution. Find the complementary solution by solving the equation $$f''(a)+f(a)=0.$$ It can be derived that the complementary solution is given by $$f_\text{c}(a)=C_{1}\cdot\cos(a)+C_{2}\cdot\sin(a).$$ Find the particular solution to $$f''(a)+f(a)=\frac{1}{a}$$ by variation of parameters.
List the basis solutions in $f_{\text{c}}(a)$: $$f_{b,1}(a)=\cos(a),\qquad f_{b,2}(a)=\sin(a).$$ Determine the Wronskian of $f_{b,1}(a)$ and $f_{b,2}(a)$: $$W(a)= \begin{vmatrix} \cos(a) & \sin(a) \\ \frac{d}{da}\cos(a) & \frac{d}{da}\sin(a) \\ \end{vmatrix} = \begin{vmatrix} \cos(a) & \sin(a) \\ -\sin(a) & \cos(a) \\ \end{vmatrix} =\cos^2(a)+\sin^2(a)=1.$$ Let $$g(a)=\frac{1}{a},\\ h_{1}(a)=-\int \frac{g(a)\cdot f_{b,2}(a)}{W(a)}\ da,\\ h_{2}(a)=\int \frac{g(a)\cdot f_{b,1}(a)}{W(a)}\ da.$$ The particular solution will be given by $$f_{\text{p}}(a)=h_{1}(a)\cdot f_{b,1}(a)+h_{2}(a)\cdot f_{b,2}(a).$$ Determine $h_{1}(a)$ and $h_{2}(a)$: $$h_{1}(a)=-\int \frac{\sin(a)}{a}\ da=-\text{Si}(a),\qquad h_{2}(a)=\int \frac{\cos(a)}{a}\ da=\text{Ci}(a).$$ The particular solution is given by $$f_{\text{p}}(a)=\text{Ci}(a)\cdot\sin(a)-\text{Si}(a)\cdot \cos(a).$$ A general solution is the sum of the complementary solution and the particular solution: $$f(a)=C_{1}\cdot\cos(a)+C_{2}\cdot\sin(a)+\text{Ci}(a)\cdot\sin(a)-\text{Si}(a)\cdot \cos(a).$$ From the above expression for $f(a)$ it can be derived that $\lim \limits_{a \to 0} f(a)=C_{1}$, since $\text{Si}(0)=0$ and $\text{Ci}(a)\sin(a)\to 0$ as $a\to 0$. From the integral form of $f(a)$ it can be derived that $\lim \limits_{a \to 0} f(a)=\frac{\pi}{2}$. Therefore, $C_{1}=\frac{\pi}{2}$. As $a \to \infty$, we have $\text{Si}(a)\to\frac{\pi}{2}$ and $\text{Ci}(a)\to 0$, so the expression for $f(a)$ reduces to $C_{2}\cdot\sin(a)$ plus terms that vanish. From the integral form of $f(a)$ it can be derived that $\lim \limits_{a \to \infty} f(a)=0$. Therefore, $C_{2}=0$.
Thus, an expression in closed form for your integral is given by $$f(a)=\frac{\pi}{2}\cdot\cos(a)+\text{Ci}(a)\cdot\sin(a)-\text{Si}(a)\cdot \cos(a),\;\; a>0.$$

Not sure how you evaluate this integral numerically, but if you are happy with a special function, take the exponential integral defined by $$ E_1(z)=\int_z^{\infty}\frac{e^{-t}}{t}\,dt, \qquad \left|\operatorname{Arg}(z)\right|<\pi. $$ Then, complexifying, \begin{align*} \int_0^{\infty} \frac{e^{-az}}{(z-i)(z+i)}\,dz &=-\frac{i}{2}\int_0^{\infty}\frac{e^{-az}}{z-i}\,dz+ \frac{i}{2}\int_{0}^{\infty}\frac{e^{-az}}{z+i}\,dz \\ &= -\frac{i}{2}\,e^{-ia}\int_{-ia}^{\infty}\frac{e^{-u}}{u}\,du + \frac{i}{2}\,e^{ia}\int_{ia}^{\infty}\frac{e^{-u}}{u}\,du \\ &= \frac{i}{2} \left( e^{ia}E_1(ia) - e^{-ia}E_1(-ia) \right). \end{align*}
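The closed form is easy to verify numerically; a quick sketch with SciPy, whose `sici` returns the pair $(\text{Si}(a), \text{Ci}(a))$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

def closed_form(a):
    """pi/2 * cos(a) + Ci(a)*sin(a) - Si(a)*cos(a)."""
    si, ci = sici(a)
    return np.pi / 2 * np.cos(a) + ci * np.sin(a) - si * np.cos(a)

a = 1.7  # arbitrary positive test value
numeric, _ = quad(lambda x: np.exp(-a * x) / (1 + x**2), 0, np.inf)
```

The quadrature result and the closed form agree to high precision for any positive `a` tried.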
Summary: Wigner's friend seems to lead to certainty in two complementary contexts.

This is probably pretty dumb, but I was just thinking about Wigner's friend and wondering about the two contexts involved. The basic set up I'm wondering about is as follows: The friend does a spin measurement in the ##\left\{|\uparrow_z\rangle, |\downarrow_z\rangle\right\}## basis, i.e. of ##S_z##, at time ##t_1##. And let's say the particle is undisturbed after that. For experiments outside the lab, Wigner considers the lab to be in the state: $$\frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle + |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right)$$ He then considers a measurement of the observable ##\mathcal{X}## which has eigenvectors: $$\left\{\frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle + |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right), \frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle - |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right)\right\}$$ with eigenvalues ##\{1,-1\}## respectively.

At time ##t_2## the friend flips a coin, and either he does a measurement of ##S_z## or Wigner does a measurement of ##\mathcal{X}##. If the friend does a measurement of ##S_z##, he knows for a fact he will get whatever result he originally got. However, he also knows Wigner will obtain the ##1## outcome with certainty. Yet ##\left[S_{z},\mathcal{X}\right] \neq 0##. Thus the friend seems to be predicting with certainty observables belonging to two separate contexts, which is not supposed to be possible in the quantum formalism. What am I missing?
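The commutation claim is easy to sanity-check numerically. The following is my own toy sketch restricted to the two-dimensional subspace spanned by the two branch states above (the full lab Hilbert space is of course much larger):

```python
import numpy as np

# Basis: {|L_up, D_up, up>, |L_down, D_down, down>}
Sz = np.diag([1.0, -1.0])          # S_z outcomes (in units of hbar/2)
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])         # Wigner's observable: eigenvectors
                                   # (1,1)/sqrt(2) and (1,-1)/sqrt(2)

psi = np.array([1.0, 1.0]) / np.sqrt(2)  # the state Wigner assigns to the lab

comm = Sz @ X - X @ Sz             # nonzero: the two observables are incompatible
expect_X = psi @ X @ psi           # Wigner predicts the +1 outcome with certainty
```

In this restricted picture the tension in the question is visible directly: `psi` is an eigenstate of `X` but not of `Sz`, and the two operators fail to commute.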
If I have two functions in a convolution like $$X*Y=1$$ $$X*Z=1$$ then it means (trivially) $Y=Z$. Is this correct, or are there subtleties in the convolution theorem where $Y=Z$ isn't always true?

Here is a counterexample. The functions $X,Y,Z$ below satisfy $X * Y = X * Z = 1$ but $Y \not= Z$: $$ X(t) := 1, \forall \,t \in \mathbb{R},$$ $$ Y(t) := \begin{cases} \frac{1}{C_Y} e^{-\frac{1}{1-t^2}} \quad \text{if $-1 \leq t\leq 1$}\\ 0 \quad \text{otherwise,} \end{cases} $$ where $C_Y$ is a (normalization) constant such that the integral of $Y$ over $[-1,1]$ equals 1, and $$ Z(t) := \begin{cases} \frac{1}{C_Z} e^{-\frac{1}{2^2-t^2}} \quad \text{if $-2 \leq t\leq 2$}\\ 0 \quad \text{otherwise,} \end{cases} $$ where, similar to $C_Y$, $C_Z$ is a (normalization) constant such that the integral of $Z$ over $[-2,2]$ equals 1. Those are concrete functions as an explicit example. Any normalized (integral equals 1) function $f$ that has compact support will satisfy $X * f = 1$. From there you can choose any $Y$ and $Z$ you want as long as they are different.

Good question. Here we can use the concept of the Laplace transform: $$X(s) Y(s) = X(s) Z(s)$$ $$Y(s) = Z(s)$$ We know that the Laplace transforms of two different signals can be the same, but then their regions of convergence (ROCs) will definitely differ. Here is an example: $$e^{-at} u(t) \rightleftharpoons \frac{1}{s+a}$$ $$-e^{-at} u(-t) \rightleftharpoons \frac{1}{s+a}$$ Hence it is not necessary that $Y = Z$.
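The counterexample can be checked numerically; in this sketch the normalization constants $C_Y$ and $C_Z$ are computed by quadrature rather than in closed form:

```python
import numpy as np
from scipy.integrate import quad

def bump(width):
    """Smooth bump supported on [-width, width], normalized to integral 1."""
    raw = lambda t: np.exp(-1.0 / (width**2 - t**2)) if abs(t) < width else 0.0
    c, _ = quad(raw, -width, width)   # the normalization constant (C_Y or C_Z)
    return lambda t: raw(t) / c

Y, Z = bump(1.0), bump(2.0)

# X(t) = 1 everywhere, so (X*Y)(t) = integral of Y = 1 for every t; same for Z.
conv_at_Y = quad(lambda s: 1.0 * Y(s), -1.0, 1.0)[0]
conv_at_Z = quad(lambda s: 1.0 * Z(s), -2.0, 2.0)[0]
```

Both convolutions come out to 1 at every point, even though `Y` and `Z` are visibly different functions.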
The general trick to calculating such odds is that the probability of rolling a result that matches some criterion equals the number of possible matching rolls, divided by the total number of possible rolls. (By "roll", here, we mean a sequence of numbers obtained by rolling a certain number (e.g. 6) of a certain kind of fair dice (e.g. d6) in sequence. The important feature here is that each such roll, by itself, is equally likely, which is why the simple formula above works. If the rolls were not all equally likely, we'd have to resort to more complicated maths.)

For 6d6, the total number of possible rolls is \$6^6\$ = 46,656. (More generally, for NdX, the total number of possible rolls is \$X^N\$.) Next, we just need to figure out in how many ways we can roll each of the results we're interested in.

Straights

For example, let's look at straights first. A straight on 6d6 obviously consists of the numbers 1, 2, 3, 4, 5 and 6, in any order. How many ways are there to order them? Well, imagine that we have six dice, each showing one of the numbers from 1 to 6, and six positions marked 1 to 6 on the table that we want to put the dice in. For the first position, we can choose any of the dice, so we have 6 choices there; for the second position, we only have five dice left, so the number of possible choices we can make for the second die is 5, giving us a total of 6 × 5 = 30 possible choices for the first two dice. Continuing in this manner, we find that the total number of different orders in which we can set down the six distinct dice is \$ 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 720 \$. (Mathematicians have a specific name and a notation for such products, because they come up pretty often in math: they call them factorials, and write them by putting an exclamation point after the upper limit, as in 6! = 720.) Thus, the probability of rolling a straight on 6d6 is 6! out of \$6^6\$, or \$720 \div 46,656 \approx 0.0154 = 1.54\%\$.
(For straights shorter than 6 dice, things get more complicated; see the computer results below.)

\$n\$ of a kind

What about \$n\$ of a kind? Well, it's pretty obvious that there are exactly six ways to roll 6 of a kind — either all 1, all 2, all 3, all 4, all 5 or all 6. Thus, the probability of rolling six of a kind is \$ 6 \div 6^6 \approx 0.00013 = 0.013\%\$. This is just about the rarest kind of combination you can get.

5 of a kind

For five of a kind, we clearly have six choices for the number that occurs five times, and five choices for the single mismatched roll (or vice versa; it really doesn't matter which way you count them, since the result is the same), for a total of \$6 \times 5 = 30\$ possibilities. But since we're considering ordered die rolls (which we must do, to ensure that every roll is equally likely), we also have six choices for the position of the mismatched die in the sequence, giving us a total of \$30 \times 6 = 180\$ ways to roll 5-of-a-kind on 6d6, and thus a probability of \$180 \div 6^6 \approx 0.00386 = 0.386\%\$.

4 of a kind

How about four of a kind? Again, we have six choices for the matched dice, but now there are more possibilities for the mismatched ones. We could consider the cases where the two mismatched dice are the same or different separately, but that quickly gets a bit complicated. The easy way here is to first assign the two mismatched dice into specific positions in the sequence; we can put the first one in any of 6 positions, and the second in any of the remaining 5, for a total of 30 choices — but, since we haven't yet assigned values for those dice, they're identical, and so we need to divide by 2 to avoid counting identical positions twice (because putting the first mismatched die in position 1, and the second in position 2, gives the same result as putting the first in position 2 and the second in position 1), giving us 15 ways to place the mismatched dice into the sequence of 6 rolls.
Having done that, we just need to pick arbitrary values for those two die rolls; they can be identical, but neither of them can equal the four matched dice, so we have \$5 \times 5 = 25\$ choices total here. Putting this together with the 6 choices for the matched dice and the 15 ways of picking the positions of the mismatched dice, we get \$6 \times 15 \times 25 = 2,250\$ ways of rolling 4-of-a-kind on 6d6, with a probability of \$2,250 \div 6^6 \approx 0.0482 = 4.82\%\$, or slightly under one in 20, a lot more likely than 5-of-a-kind.

3 of a kind

We could do the same thing for three-of-a-kind, but that gets even more complicated, mainly because it's now also possible to roll two different sets of three in a single 6d6 roll. Counting the possible combinations, in a similar manner as above, isn't really difficult as such, but it does get tedious and error-prone.

...and so on

Fortunately, we can cheat and use a computer! Since there are only about 47 thousand possible 6d6 rolls, a computer can loop through all of them in a fraction of a second, and count how many times the most common die occurs in each of them.
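As a first small example (my own spot-check, not part of the original answer), the hand counts above — 180 rolls with five of a kind, 2,250 with four of a kind — can be verified by brute force:

```python
from itertools import product
from collections import Counter

# Count ordered 6d6 rolls whose most frequent face appears exactly 5 times,
# and those where it appears exactly 4 times.
five_kind = four_kind = 0
for roll in product(range(1, 7), repeat=6):
    most = Counter(roll).most_common(1)[0][1]
    if most == 5:
        five_kind += 1
    elif most == 4:
        four_kind += 1

print(five_kind, four_kind)  # 180 2250
```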
We can also do the same for straights, counting the longest sequence of consecutive dice rolled. Using the dice_pool() helper function (which enumerates all possible sorted outcomes of rolling NdX dice and their respective probabilities) from this answer, here's a simple Python program to calculate the probabilities of various groups and straights:

```python
# generate all possible sorted NdD rolls and their probabilities
# see http://en.wikipedia.org/wiki/Multinomial_distribution for the math
factorial = [1.0]

def dice_pool(n, d):
    for i in range(len(factorial), n+1):
        factorial.append(factorial[i-1] * i)
    nom = factorial[n] / float(d)**n
    for roll, den in _dice_pool(n, d):
        yield roll, nom / den

def _dice_pool(n, d):
    if d > 1:
        for i in range(0, n+1):
            pair = (d, i)
            for roll, den in _dice_pool(n-i, d-1):
                yield roll + (pair,), den * factorial[i]
    else:
        yield ((d, n),), factorial[n]

# the actual calculation and output code starts here
groups = {}
straights = {}

for roll, prob in dice_pool(6, 6):
    # find largest n-of-a-kind:
    largest = max(count for num, count in roll)
    if largest not in groups:
        groups[largest] = 0.0
    groups[largest] += prob

    # find longest straight:
    longest = length = 0
    for num, count in roll:
        if count > 0:
            length += 1
        else:
            length = 0
        if longest < length:
            longest = length
    if longest not in straights:
        straights[longest] = 0.0
    straights[longest] += prob

# print out results
for n in groups:
    print("max %d of a kind: %9.6f%%" % (n, 100*groups[n]))
for n in straights:
    print("max %d in a row: %9.6f%%" % (n, 100*straights[n]))
```

And here's the output:

```
max 1 of a kind:  1.543210%
max 2 of a kind: 61.728395%
max 3 of a kind: 31.507202%
max 4 of a kind:  4.822531%
max 5 of a kind:  0.385802%
max 6 of a kind:  0.012860%
max 1 in a row:   5.971365%
max 2 in a row:  34.615055%
max 3 in a row:  32.407407%
max 4 in a row:  17.746914%
max 5 in a row:   7.716049%
max 6 in a row:   1.543210%
```

Note that this output doesn't distinguish e.g.
two or three pairs from a single pair, or a triple and a pair from just a triple. If you know some Python, it would not be difficult to modify the program to check for those as well. Also note that it's actually really hard to get no more than one die of each kind (since that actually requires rolling a perfect straight), and also pretty hard to get no more than one in a row (although still a lot easier than getting six of a kind, since e.g. rolling 1,1,3,3,5,5 also counts). Three in a row is also only slightly less likely than two in a row (although some of the rolls counted as three in a row by the program actually include both), but larger groups and straights show the expected downward trend in probability as the group size increases.
A rooted binary tree is a type of graph that is particularly of interest in some areas of computer science. A typical rooted binary tree is shown in figure 3.5.1. The root is the topmost vertex. The vertices below a vertex and connected to it by an edge are the children of the vertex. It is a binary tree because all vertices have 0, 1, or 2 children. How many different rooted binary trees are there with $n$ vertices? Let us denote this number by $C_n$; these are the Catalan numbers. For convenience, we allow a rooted binary tree to be empty, and let $C_0=1$. Then it is easy to see that $C_1=1$ and $C_2=2$, and not hard to see that $C_3=5$. Notice that any rooted binary tree on at least one vertex can be viewed as two (possibly empty) binary trees joined into a new tree by introducing a new root vertex and making the children of this root the two roots of the original trees; see figure 3.5.2. (To make the empty tree a child of the new vertex, simply do nothing, that is, omit the corresponding child.) Thus, to make all possible binary trees with $n$ vertices, we start with a root vertex, and then for its two children insert rooted binary trees on $k$ and $l$ vertices, with $k+l=n-1$, for all possible choices of the smaller trees. Now we can write $$ C_n=\sum_{i=0}^{n-1} C_iC_{n-i-1}. $$ For example, since we know that $C_0=C_1=1$ and $C_2=2$, $$ C_3 = C_0C_2 + C_1C_1+C_2C_0 = 1\cdot2 + 1\cdot1 + 2\cdot1 = 5, $$ as mentioned above. Once we know the trees on 0, 1, and 2 vertices, we can combine them in all possible ways to list the trees on 3 vertices, as shown in figure 3.5.3. Note that the first two trees have no left child, since the only tree on 0 vertices is empty, and likewise the last two have no right child. Now we use a generating function to find a formula for $C_n$. Let $f=\sum_{i=0}^\infty C_ix^i$.
Now consider $f^2$: the coefficient of the term $x^n$ in the expansion of $f^2$ is $\sum_{i=0}^{n} C_iC_{n-i}$, corresponding to all possible ways to multiply terms of $f$ to get an $x^n$ term: $$ C_0\cdot C_nx^n + C_1x\cdot C_{n-1}x^{n-1} + C_2x^2\cdot C_{n-2}x^{n-2} +\cdots+C_nx^n\cdot C_0. $$ Now we recognize this as precisely the sum that gives $C_{n+1}$, so $f^2 = \sum_{n=0}^\infty C_{n+1}x^n$. If we multiply this by $x$ and add 1 (which is $C_0$) we get exactly $f$ again, that is, $xf^2+1=f$ or $xf^2-f+1=0$; here 0 is the zero function, that is, $xf^2-f+1$ is 0 for all $x$. Using the quadratic formula, $$ f={1\pm\sqrt{1-4x}\over 2x}, $$ as long as $x\not=0$. It is not hard to see that as $x$ approaches 0, $$ {1+\sqrt{1-4x}\over 2x} $$ goes to infinity while $$ {1-\sqrt{1-4x}\over 2x} $$ goes to 1. Since we know $f(0)=C_0=1$, this is the $f$ we want. Now by Newton's Binomial Theorem 3.1.1, we can expand $$ \sqrt{1-4x} = (1+(-4x))^{1/2} =\sum_{n=0}^\infty {1/2\choose n}(-4x)^n. $$ Then $$ {1-\sqrt{1-4x}\over 2x} = \sum_{n=1}^\infty -{1\over 2}{1/2\choose n}(-4)^nx^{n-1} = \sum_{n=0}^\infty -{1\over 2}{1/2\choose n+1}(-4)^{n+1}x^n. $$ Expanding the binomial coefficient $1/2\choose n+1$ and reorganizing the expression, we discover that $$ C_n = -{1\over 2}{1/2\choose n+1}(-4)^{n+1} = {1\over n+1}{2n\choose n}. $$ In exercise 7 in section 1.2, we saw that the number of properly matched sequences of parentheses of length $2n$ is ${2n\choose n}-{2n\choose n+1}$, and called this $C_n$. It is not difficult to see that $$ {2n\choose n}-{2n\choose n+1}={1\over n+1}{2n\choose n}, $$ so the formulas are in agreement. Temporarily let $A_n$ be the number of properly matched sequences of parentheses of length $2n$, so from the exercise we know $A_n={2n\choose n}-{2n\choose n+1}$.
It is possible to see directly that $A_0=A_1=1$ and that the numbers $A_n$ satisfy the same recurrence relation as do the $C_n$, which implies that $A_n=C_n$, without manipulating the generating function. There are many counting problems whose answers turn out to be the Catalan numbers. Enumerative Combinatorics: Volume 2, by Richard Stanley, contains a large number of examples.

Exercises 3.5

Ex 3.5.1 Show that $${2n\choose n}-{2n\choose n+1}={1\over n+1}{2n\choose n}.$$

Ex 3.5.2 Find a simple expression $f(n)$ so that $C_{n+1}=f(n)C_n$. Use this to compute $C_1,\ldots,C_6$ from $C_0$.

Ex 3.5.3 Show that if $A_n$ is the number of properly matched sequences of parentheses of length $2n$, then $$A_n=\sum_{i=0}^{n-1} A_iA_{n-i-1}.$$ Do this in the same style that we used for the number of rooted binary trees: Given all the sequences of shorter length, explain how to combine them to produce the sequences of length $2n$, in such a way that the sum clearly counts the number of sequences. Hint: Prove the following lemma: If $s$ is a properly matched sequence of parentheses of length $2n$, $s$ may be written uniquely in the form $(s_1)s_2$, where $s_1$ and $s_2$ are properly matched sequences of parentheses whose lengths add to $2n-2$. For example, $(())()= ([()])[()]$ and $()(())=([\,])[(())]$, with the sequences $s_1$ and $s_2$ indicated by $[\,]$. Note that $s_1$ and $s_2$ are allowed to be empty sequences, with length 0.

Ex 3.5.4 Consider a "staircase" as shown below. A path from $A$ to $B$ consists of a sequence of edges starting at $A$, ending at $B$, and proceeding only up or right; all paths are of length 6. One such path is indicated by arrows. The staircase shown is a "$3 \times 3$" staircase. How many paths are there in an $n\times n$ staircase?

Ex 3.5.5 A convex polygon with $n\ge3$ sides can be divided into triangles by inserting $n-3$ non-intersecting diagonals. In how many different ways can this be done? The possibilities for $n=5$ are shown.
Ex 3.5.6 A partition of a set $S$ is a collection of non-empty subsets $A_i\subseteq S$, $1\le i\le k$ (the parts of the partition), such that $\bigcup_{i=1}^k A_i=S$ and for every $i\not=j$, $A_i\cap A_j=\emptyset$. For example, one partition of $\{1,2,3,4,5\}$ is $\{\{1,3\}$, $\{4\}$, $\{2,5\}\}$. Suppose the integers $1,2,\ldots,n$ are arranged on a circle, in order around the circle. A partition of $\{1,2,\ldots,n\}$ is a non-crossing partition if it satisfies this additional property: If $w$ and $x$ are in some part $A_i$, and $y$ and $z$ are in a different part $A_j$, then the line joining $w$ to $x$ does not cross the line joining $y$ to $z$. The partition above, $\{1,3\}$, $\{4\}$, $\{2,5\}$, is not a non-crossing partition, as the line 1–3 crosses the line 2–5. Find the number of non-crossing partitions of $\{1,2,\ldots,n\}$. Recall from section 1.4 that the Bell numbers count all of the partitions of $\{1,2,\ldots,n\}$. Hence, this exercise gives us a lower bound on the total number of partitions.

Ex 3.5.7 Consider a set of $2n$ people sitting around a table. In how many ways can we arrange for each person to shake hands with another person at the table such that no two handshakes cross?
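The recurrence $C_n=\sum_{i=0}^{n-1} C_iC_{n-i-1}$ and the closed form $C_n={1\over n+1}{2n\choose n}$ derived above are easy to check against each other numerically; here is a short sketch in Python:

```python
from math import comb

def catalan_recurrence(n_max):
    """Compute C_0..C_{n_max} via C_n = sum_i C_i * C_{n-i-1}."""
    C = [1]  # C_0 = 1 (the empty tree)
    for n in range(1, n_max + 1):
        C.append(sum(C[i] * C[n - i - 1] for i in range(n)))
    return C

C = catalan_recurrence(10)
closed = [comb(2*n, n) // (n + 1) for n in range(11)]
print(C)            # [1, 1, 2, 5, 14, 42, ...]
print(C == closed)  # True
```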
Bubble tower solutions of slightly supercritical elliptic equations and application in symmetric domains

1. Laboratoire d’Analyse et de Mathématiques Appliquées, CNRS UMR 8050, Département de Mathématiques, Université Paris XII-Val de Marne, 61 avenue du Général de Gaulle, 94010 Créteil Cedex, France
2. Department of Mathematics, East China Normal University, 200062 Shanghai, China

We consider the problem $$\Delta u + |u|^{p-1}u + \varepsilon^{1/2} f = 0 \quad \text{in } \Omega, \qquad u = \varepsilon^{1/2} g \quad \text{on } \partial\Omega$$ in a bounded smooth domain $\Omega \subset \mathbb{R}^N$ $(N\geq 3)$, when the exponent $p$ is supercritical and close enough to $\frac{N+2}{N-2}$. As $p\rightarrow \frac{N+2}{N-2}$, the solutions have multiple blow up at finitely many points which are the critical points of a function whose definition involves Green's function. As applications, we give some existence results, in particular when $\Omega$ is a symmetric domain perforated with a small hole and when $f=0$ and $g=0$.

Mathematics Subject Classification: 35J60, 35J2.

Citation: Yuxin Ge, Ruihua Jing, Feng Zhou. Bubble tower solutions of slightly supercritical elliptic equations and application in symmetric domains. Discrete & Continuous Dynamical Systems - A, 2007, 17 (4) : 751-770. doi: 10.3934/dcds.2007.17.751
Problem: There was a typo in the original statement. I fixed it now!! Two runners start running (from the same point) in opposite directions along a circular path of radius $100\ m$ at a speed of $5\ m/s$. At what rate is the shortest distance between these two runners growing after $10$ seconds?

Here is my attempt: Assume that both runners start running from the point $A$. After $10$ seconds, the first runner reaches the point $B$ and the second runner reaches the point $C$. Since the speed of both runners is the same, $5\ m/s$, the arc $AB$ is equal to the arc $AC$ (actually the length of the arc is $AB=AC=D=5\cdot 10=50$ meters). Let's call $d$ the length of the shortest distance between $B$ and $C$ (which is the segment $[BC]$). Let's call $\theta$ the angle between $[OA]$ and $[OB]$ where $O$ is the center of the circle (i.e., $\theta$ is the angle swept when the first runner runs from $A$ to $B$). By using the law of cosines on the triangle $OBC$, we get: $d^2=100^2+100^2-2(100)(100)\cos(2\theta)$ .....................(**) By differentiating the above equation with respect to $t$, we get: $d \cdot \frac{\partial d }{\partial t}=2\cdot10^4 \cdot \sin(2\theta) \cdot \frac{\partial \theta }{\partial t}$ .................(1) Let's call $D$ the distance traversed along the path by each runner and $\theta$ the angle traversed (in radians). We have: $D=100\cdot \theta$. Then $\frac{\partial D }{\partial t}=5\ m/s= 100 \frac{\partial \theta}{\partial t}$ $\Rightarrow$ $\frac{\partial \theta }{\partial t}= 0.05\ rad/sec$ ................(2) When $t=10\ sec$, $D=50$ meters. Using the equation $D=100\,\theta$, we get $\theta= 50/100=0.5$ radians .............(3) By replacing the value of $\theta$ in $(**)$, we get $d=100\sqrt{2-2\cos(1)}$ ...........(4) By plugging $(2)$, $(3)$ and $(4)$ into $(1)$, we get $\frac{\partial d}{\partial t}=\frac{10\sin(1)}{\sqrt{2-2\cos(1)}}$.
Please let me know if there is any mistake in my proof, and whether there is an easier way to do this problem. Thanks!
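As a quick numerical sanity check of the derivation (my own sketch, not part of the original post), one can compare a finite-difference estimate of $d'(10)$ with the closed-form answer:

```python
import math

def d(t):
    """Chord length between the runners at time t (radius 100, each at 5 m/s)."""
    theta = 0.05 * t  # angle swept by one runner, in radians
    return math.sqrt(100**2 + 100**2 - 2*100*100*math.cos(2*theta))

# Central finite-difference estimate of d'(10)
h = 1e-6
numeric = (d(10 + h) - d(10 - h)) / (2*h)

# Closed-form answer from the post: 10 sin(1) / sqrt(2 - 2 cos(1))
analytic = 10*math.sin(1) / math.sqrt(2 - 2*math.cos(1))

print(numeric, analytic)  # both ≈ 8.7758 m/s
```

The two values agree, which supports the computation above.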
I often see people having trouble with the rule of product in combinatorics. I also often see that people who claim not to have trouble with it, and who try to explain it to the aforementioned clueless people, just end up saying "it's just a formula, don't worry about where it came from and why we use it, just memorize it!", which is not a good tactic at all. I will say I myself don't have a deep understanding of it, and at the end of this answer I will tell you it is just a formula, but I hope I can help you understand the formula a bit more. Recall from (likely) grade school where you made tree diagrams to express combinatorics problems visually. Take the following simplified question: How many 'words' can you arrange out of the letters in the set $\{x,y,z\}$? Our tree diagram will look like this: Now if we count each step in our tree, we see in the first step of our tree (far left), we select one of three letters. Each is a root for its own subtree. That's $3$ trees started. So we have: $$(3\ trees).$$Now in the second step (middle), we select one of the letters we have left that isn't the original we started with, leaving us with $2$ new choices per tree. So that is: $$(3\ trees) \times (2\ choices) = 6 \ total.$$We now consider the final step where we choose the remaining letter. However, this step doesn't further split our tree's branches per se, it just extends them. So we get: $$(3\ trees) \times (2\ choices) \times (1\ more\ choice) = 6 \ total,$$ which is identical to the formula we had previously memorized, that is: $$3! = 3 \times 2 \times 1 = 6.$$ Now suppose we change around the question a little. Suppose this new condition: After choosing $z$, choose $z$ to be an element from the set $\{a,b,c\}$. (Imagine this choice is like choosing a different configuration of AUE, but in this case instead of $3!$ choices for the configuration, we have only $3$.)
Our tree diagram looks like this: which is quite similar to before (notice we just appended the new choice onto the end, since even if we put it directly after every $z$, it would have the same number of branches — try this yourself if you want!) but with an additional step. As with the other steps, we just multiplied the number of choices we already had by the number of new choices, like so: $$(3\ trees) \times (2\ choices) \times (1\ more\ choice) \times (3\ choices\ for\ z)= 18\ total.$$ So perhaps now you have a better understanding of why we use the rule of product in the case of your question. As you can tell, it still is sort of a formula you have to know. When you have $\alpha$ ways of doing one task and $\beta$ ways of doing another, you then have $\alpha \times \beta$ ways of doing both. For the case of addition, note very well that it is more associated with the word "or" than with the word "and". So if you want to calculate the number of ways to do $\alpha$ or $\beta$, then you will have to think about using addition, but even then it isn't so straightforward sometimes (google the inclusion-exclusion principle, among other things, for more information on it).
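The two counts in the tree diagrams above can be reproduced by direct enumeration (a quick sketch of my own, not part of the original answer):

```python
from itertools import permutations, product

letters = ['x', 'y', 'z']

# Rule of product for the orderings: 3 * 2 * 1 = 6 'words'
words = list(permutations(letters))
print(len(words))  # 6

# One extra independent choice (pick an element of {a, b, c}): 6 * 3 = 18
extended = list(product(words, 'abc'))
print(len(extended))  # 18
```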
Why not look at Stasheff's original paper? He does give a point-set model (where $K_{n+2}$ is a compact convex semialgebraic subset of $\mathbb{R}^n$) and describes explicitly the substitution maps $\text{sub}_i: K_m \times K_n \to K_{m+n-1}$ which are collectively tantamount to the operad structure. James Dillon Stasheff, Homotopy Associativity of H-Spaces I, Transactions of the American Mathematical Society Vol. 108, No. 2 (Aug., 1963), pp. 275-292. There are other descriptions which realize the associahedra as linear polytopes (which I find more pleasant); the name of the late J.-L. Loday sticks prominently in my mind in this regard. See here for instance. Edit: The universal property description of the associahedral operad (or Stasheff operad; I've used both terms), mentioned in my comment under the question, might not be well-known; it might be good to describe intuitively what is going on there. I should add that I have no idea where this observation might appear in the literature, but that I would also have a hard time believing that the observation is originally due to me. :-) See this $n$-Category Café post for some related background. The statement is that the Stasheff operad is initial among non-permutative non-unital operads $M$ such that each $M_n$, for $n \geq 2$, carries a basepoint and an $I$-module map $I \times M_n \to M_n$ (for the multiplicative monoid $I = [0, 1]$) that contracts $M_n$ to that basepoint. It should be noted that the correct definition of non-unital operad is one that uses operations $\text{sub}_i: M_m \times M_n \to M_{m+n-1}$ for $1 \leq i \leq m$, not a less expressive definition which merely involves a structure $M \circ M \to M$ where $\circ$ is the well-known substitution product on graded spaces (see Tom Leinster's book for details on such substitution products, monoids of which are unital operads).
In the Stasheff model, we could take the barycenter of each associahedron as its basepoint $p$; the contracting homotopy is given by $x \mapsto t x + (1-t)p$, taking advantage of convexity for the Stasheff model. In the initial such structure, there is no operad identity, so $M_1$ is empty, and so is $M_0$. For $M_2$, all we know is that it has a basepoint $m_2$ (so $M_2$ is the cone on an empty space). For $M_3$, all we know is that there are points $l = \text{sub}_1(m_2, m_2)$ (think $(xy)z$) and $r = \text{sub}_2(m_2, m_2)$ (think $x(yz)$) plus a contracting homotopy to a basepoint $m_3$, so $M_3$ is a cone on the space $\{l, r\}$. This cone is of course an interval. Continuing up the inductive ladder, $M_4$ is obtained by taking the cone of a space obtained by pasting together five spaces corresponding to the images $\text{sub}_1(M_2 \times M_3)$, $\text{sub}_2(M_2 \times M_3)$, $\text{sub}_1(M_3 \times M_2)$, $\text{sub}_2(M_3 \times M_2)$, $\text{sub}_3(M_3 \times M_2)$; the pasting is in accordance with generalized operadic associativity. The pasting gives the boundary of a pentagon; the cone itself is the famous Stasheff pentagon. And so on.
Variable Importance in Random Forests can suffer from severe overfitting

Predictive vs. interpretational overfitting

There appears to be broad consensus that random forests rarely suffer from the "overfitting" which plagues many other models. (We define overfitting as choosing a model flexibility which is too high for the data generating process at hand, resulting in non-optimal performance on an independent test set.) By averaging many (hundreds of) separately grown deep trees, each of which inevitably overfits the data, one often achieves a favorable balance in the bias-variance tradeoff. For similar reasons, the need for careful parameter tuning also seems less essential than in other models. This post does not attempt to contribute to this long standing discussion (see e.g. https://stats.stackexchange.com/questions/66543/random-forest-is-overfitting) but points out that random forests' immunity to overfitting is restricted to the predictions only, and not to the default variable importance measure! We assume the reader is familiar with the basic construction of random forests, which are averages of large numbers of individually grown regression/classification trees. The random nature stems from both "row and column subsampling": each tree is based on a random subset of the observations, and each split is based on a random subset of candidate variables. The tuning parameter mtry (which for popular software implementations has the default \(\lfloor p/3 \rfloor\) for regression and \(\sqrt{p}\) for classification trees) can have profound effects on prediction quality as well as on the variable importance measures outlined below. At the heart of the random forest library is the CART algorithm, which chooses the split for each node such that the maximum reduction in overall node impurity is achieved.
Due to the bootstrap row sampling, \(36.8\%\) of the observations are (on average) not used for an individual tree; those "out of bag" (OOB) samples can serve as a validation set to estimate the test error, e.g.: \[ E\left( Y - \hat{Y}\right)^2 \approx OOB_{MSE} = \frac{1}{n} \sum_{i=1}^n{\left( y_i - \overline{\hat{y}}_{i, OOB}\right)^2} \] where \(\overline{\hat{y}}_{i, OOB}\) is the average prediction for the \(i\)th observation from those trees for which this observation was OOB.

Variable Importance

The default method to compute variable importance is the mean decrease in impurity (or Gini importance) mechanism: at each split in each tree, the improvement in the split criterion is the importance measure attributed to the splitting variable, and it is accumulated over all the trees in the forest separately for each variable. Note that this measure is quite like the \(R^2\) in regression on the training set. The widely used alternative, permutation importance for short, is defined as follows: \[ \mbox{VI} = OOB_{MSE, perm} - OOB_{MSE} \]

Gini importance can be highly misleading

We use the well known Titanic data set to illustrate the perils of putting too much faith into the Gini importance, which is based entirely on training data (not on OOB samples) and makes no attempt to discount impurity decreases in deep trees that are pretty much frivolous and will not survive in a validation set. In the following model we include PassengerId as a feature along with the more reasonable Age, Sex and Pclass:

```r
randomForest(Survived ~ Age + Sex + Pclass + PassengerId,
             data = titanic_train[!naRows, ],
             ntree = 200, importance = TRUE, mtry = 2)
```

The figure below shows both measures of variable importance and, surprisingly, PassengerId turns out to be ranked number 2 for the Gini importance (right panel). This unexpected result is robust to random shuffling of the ID.
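The permutation-importance definition above is simple enough to sketch in a few lines of plain Python. The dataset and "model" below are invented toy objects for illustration (this is not the randomForest implementation, and it uses a plain in-sample error rather than true OOB error):

```python
import random

random.seed(0)

# Toy data: y depends on feature 0 only; feature 1 is pure noise.
n = 1000
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
y = [2.0 * row[0] + random.gauss(0, 0.1) for row in X]

def model(row):
    """A 'fitted' model that (correctly) uses only the first feature."""
    return 2.0 * row[0]

def mse(X, y):
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

base = mse(X, y)

def permutation_importance(j):
    """VI_j = MSE after shuffling column j, minus the baseline MSE."""
    col = [row[j] for row in X]
    random.shuffle(col)
    Xp = [row[:j] + [v] + row[j+1:] for row, v in zip(X, col)]
    return mse(Xp, y) - base

print(permutation_importance(0))  # large: feature 0 matters
print(permutation_importance(1))  # 0.0: the model never uses feature 1
```

Shuffling an informative column degrades the error a lot; shuffling a column the model ignores changes nothing, which is exactly why permutation importance is hard to fool with irrelevant features.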
The permutation based importance (left panel) is not fooled by the irrelevant ID feature. This is maybe not unexpected, as the IDs should bear no predictive power for the out-of-bag samples.

Noise Feature

Let us go one step further and add a Gaussian noise feature, which we call PassengerWeight:

```r
titanic_train$PassengerWeight = rnorm(nrow(titanic_train), 70, 20)
rf4 = randomForest(Survived ~ Age + Sex + Pclass + PassengerId + PassengerWeight,
                   data = titanic_train[!naRows, ],
                   ntree = 200, importance = TRUE, mtry = 2)
```

Again, the blatant "overfitting" of the Gini variable importance is troubling, whereas the permutation based importance (left panel) is not fooled by the irrelevant features. (Encouragingly, the importance measures for ID and weight are even negative!) In the remainder we investigate whether other libraries suffer from similar spurious variable importance measures.

h2o library

Unfortunately, the h2o random forest implementation does not offer permutation importance:

Coding passenger ID as integer is bad enough:

Coding passenger ID as factor makes matters worse:

Let's look at a single tree from the forest:

If we scramble ID, does it hold up?
partykit

Conditional inference trees are not being fooled by ID:

And the variable importance in cforest is indeed unbiased.

python's sklearn

Unfortunately, like h2o, the python random forest implementation offers only Gini importance, but this insightful post offers a solution:

Gradient Boosting

Boosting is highly robust against frivolous columns:

```r
mdlGBM = gbm(Survived ~ Age + Sex + Pclass + PassengerId + PassengerWeight,
             data = titanic_train, n.trees = 300, shrinkage = 0.01,
             distribution = "gaussian")
```

Conclusion

Sadly, this post is 12 years behind: it has been known for a while now that the Gini importance tends to inflate the importance of continuous or high-cardinality categorical variables:

the variable importance measures of Breiman's original Random Forest method … are not reliable in situations where potential predictor variables vary in their scale of measurement or their number of categories.

Single Trees

I am still struggling with the extent of the overfitting. It is hard to believe that passenger ID could be chosen as a split point early in the tree building process, given the other informative variables! Let us inspect a single tree:

```
## rowname left daughter right daughter   split var split point status prediction
## 1     1             2              3      Pclass         2.5      1       <NA>
## 2     2             4              5      Pclass         1.5      1       <NA>
## 3     3             6              7 PassengerId        10.0      1       <NA>
## 4     4             8              9         Sex         1.5      1       <NA>
## 5     5            10             11         Sex         1.5      1       <NA>
## 6     6            12             13 PassengerId         2.5      1       <NA>
```

This tree splits on passenger ID at the second level!! Let us dig deeper. The help page states

For numerical predictors, data with values of the variable less than or equal to the splitting point go to the left daughter node.

So we have the 3rd class passengers on the right branch.
Compare subsequent splits on (i) sex, (ii) Pclass and (iii) passengerID, starting with a parent node Gini impurity of 0.184.

Splitting on sex yields a Gini impurity of 0.159:

```
     1   2
0   72 303
1   71  50
```

Splitting on passengerID yields a Gini impurity of 0.183:

```
  FALSE TRUE
0     2  373
1     3  118
```

And how could passenger ID accrue more importance than sex?
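The quoted impurity numbers can be reproduced directly from the two contingency tables. Note that they match the per-node convention $p(1-p)$, i.e. half the usual two-class Gini index $2p(1-p)$; the helper names below are my own:

```python
def half_gini(pos, neg):
    """p*(1-p) impurity for a binary node (half of the usual Gini index)."""
    n = pos + neg
    p = pos / n
    return p * (1 - p)

def weighted_split(groups):
    """Size-weighted impurity after a split; groups = [(pos, neg), ...]."""
    total = sum(p + q for p, q in groups)
    return sum((p + q) / total * half_gini(p, q) for p, q in groups)

# Parent node: 121 survived (71+50), 375 died (72+303), from the sex table.
parent = half_gini(71 + 50, 72 + 303)

# Split on sex: (survived, died) = (71, 72) and (50, 303)
sex = weighted_split([(71, 72), (50, 303)])

# Split on passenger ID: (survived, died) = (3, 2) and (118, 373)
pid = weighted_split([(3, 2), (118, 373)])

print(round(parent, 3), round(sex, 3), round(pid, 3))  # 0.184 0.159 0.183
```

So the ID split barely reduces impurity at this node, which makes its high accumulated Gini importance all the more suspicious.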
I recently stumbled across the following statement: If $G$ and $H$ are groups and $\phi : H \rightarrow G$ and $\psi : G \rightarrow H$ are homomorphisms with $\psi \circ \phi = Id_{H}$, then $G \approx H \oplus \ker \psi$. I believe I have a proof, but my proof relies on $G$ being Abelian. In the context I encountered this, every group under consideration was Abelian, but this statement was given without specifying that either group was Abelian. My question is whether the Abelian condition is necessary. If not, can someone provide a proof of this more general fact, and if so, can someone provide a counter-example? If I'm correct that $G$ being Abelian is necessary, then does anyone have a cleaner proof than the nastiness I've provided below? A quick sketch of my proof is included below. Note that I'm a little loose with notation using $+$ for both the operations in $G$ and $H$ and I haven't written out all the details. I hope this is still clear enough to get the idea: Consider the map $f : H \oplus \ker \psi \rightarrow G$ defined by $f(\left<h, k\right>) = \phi(h) + k$. I claim $f$ is an isomorphism. First, if $\left<h_1, k_1\right>$ and $\left< h_2, k_2 \right>$ are elements of $H \oplus \ker \psi$, then $f(\left<h_1, k_1\right> + \left< h_2, k_2 \right>) = f(\left< h_1 + h_2, k_1 + k_2\right>) = \phi(h_1 + h_2) + k_1 + k_2 = \phi(h_1) + \phi(h_2) + k_1 + k_2$. If $G$ is Abelian, we can continue: $\phi(h_1) + \phi(h_2) + k_1 + k_2 = \phi(h_1) + k_1 + \phi(h_2) + k_2 = f(\left<h_1, k_1\right>) + f(\left<h_2, k_2\right>)$, so $f$ is a homomorphism. Further, $f$ is injective since $\phi(h) + k = 0$ implies $\phi(h) = -k$, so $\psi(\phi(h)) = \psi(-k) = 0 = h$ using that $\psi \circ \phi = Id_{H}$ and $k \in \ker \psi$. Finally, $f$ is surjective by the First Isomorphism Theorem (without writing out the details, the FIT tells us that there is a one-to-one correspondence between cosets of $\ker \psi$ and elements of $\psi(G) = H$.
Using this we can show that $\phi(H)$ contains one representative from each coset of $\ker \psi$ in $G$ and the result follows). Thanks for any help!
In problem set 2 of Rubinstein's Microeconomics (by the way, is there a comparably nicely written book on macroeconomics?) there is the following question: Let $\succ_n$ be the preference relations defined on $\mathbb{R}^2_+$ by the utility $x_1^n + x_2^n$. Let the preference relation $\succ$ be defined by the utility $\max\{x_1, x_2\}$. Show that $\succ_n$ converges to $\succ$. Preference relations are said to converge if for $a \succ b$ we have that $a \succ_m b$ for sufficiently large $m$. I am indeed able to prove that. Now w.l.o.g. $x_1 = \max\{x_1, x_2\}$ and $y_1 = \max\{y_1, y_2\}$. Assuming that $x_1 = y_1$ we have $x \sim y$. But the only case when $x \sim_m y$ is when also $x_2 = y_2$. In other cases we would have either $x \succ_m y$ or $y \succ_m x$ for all $m$ (depending on $x_2$ and $y_2$, the "smaller components". These are actually lexicographic preferences!) Is there a reason that the convergence of preference relations is defined in this way, ignoring such subtleties? In my opinion we should have $\succeq = \lim_{n \to \infty} \succeq_n$ iff, for all $a$ and $b$, $a \succeq b$ holds exactly when $a \succeq_n b$ holds for all sufficiently large $n$.
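The tie-breaking subtlety described above is easy to see numerically. With the made-up points below, $x$ and $y$ are indifferent under $\max\{x_1,x_2\}$, yet every $\succ_n$ strictly prefers $y$, because the smaller component decides:

```python
# x ~ y under the limit utility u(z) = max(z1, z2), since the max coordinates agree
x, y = (1.0, 0.3), (1.0, 0.6)
assert max(x) == max(y)            # indifferent in the limit preference

# ... yet u_n(z) = z1^n + z2^n strictly prefers y for every n,
# because the tie is broken by the smaller ("lexicographic") component
for n in range(1, 60):
    assert y[0]**n + y[1]**n > x[0]**n + x[1]**n
print("y is strictly preferred by every succ_n checked")
```

So the stated notion of convergence only constrains strict comparisons $a \succ b$, and says nothing about how indifferences of the limit relation behave along the sequence.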
How can I plot $$\sum_{n=1}^{400}\frac1{n^3\sin^2(n)}$$ without using the Table function? I am looking for a plot similar to the discrete plot in Mathematica. I am new to this software; I was able to complete the job with Maple and MATLAB but can't figure out how to do so in Mathematica. Thank you for any help! $\textbf{MAPLE CODE:}$ L:=ListTools:-PartialSums([seq(1/((n^3)*(sin(n)^2)),n=1..400)]): plots:-listplot(L,style=point); $\textbf{MAPLE OUTPUT:}$ $\textbf{MATLAB OUTPUT:}$
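For reference, the same partial sums can be computed in plain Python, mirroring the Maple code above (a sketch, not the Mathematica answer the question asks for; the commented plotting lines assume matplotlib is available):

```python
import math

# Partial sums of sum_{n=1}^{400} 1/(n^3 sin^2 n), as in the Maple code above
partial_sums = []
total = 0.0
for n in range(1, 401):
    total += 1.0 / (n**3 * math.sin(n)**2)
    partial_sums.append(total)

print(len(partial_sums), partial_sums[-1])

# To get the discrete plot (assumes matplotlib is installed):
# import matplotlib.pyplot as plt
# plt.plot(range(1, 401), partial_sums, ".")
# plt.show()
```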
Let $G$ be a complex affine reductive algebraic group, $B\subseteq G$ a Borel with maximal torus $T$ and unipotent radical $U$. Let $w\in\operatorname N_G(T)$ be a representative of the longest Weyl element. I am wondering whether the big open Bruhat cell $BwB\subseteq G$ is a principal open set, i.e. whether there is a regular function $f\in\mathbb C[G]$ with $BwB=\{ g\in G\mid f(g)\ne 0 \}$. This is true for $G=\operatorname{GL}_n(\mathbb C)$, because the open Bruhat cell is the set of all invertible matrices with nonvanishing principal minors; in this case $f$ would be the product of those. A little more generally, this is true when $\mathbb C[G]$ is factorial: the complement of any affine open subset in a noetherian, normal and separated scheme is pure of codimension one. Since algebraic groups are smooth and the open cell is isomorphic to the affine variety $B\times U$, its complement is pure of codimension one. Each codimension one subvariety of $G$ will be the vanishing set of a single regular function because $\mathbb C[G]$ is a UFD. Hence, the product of these functions will cut out the complement of the open cell set-theoretically. I do not see how I would go about proving the statement in the general case, though - and I am not sure if it is correct at all.
I have this very simple ODE-constrained optimization problem: $h(x,x',p,t) = x'-A(p)x-b(p) = 0$, the constraint; $g(x(0)) = x_0$, the initial condition, with no parameters involved; $F = \int (x-x_{obs})^2 dt$, the objective functional. According to the adjoint method, I need to: Integrate the constraint equation: $$x'=A(p)x+b(p)$$ Integrate the adjoint equation and reverse $\lambda$ in $t$: $$\lambda'= A(p)^T \lambda-(x-x_{obs})$$ Calculate $\frac{dF}{dp}$: $$\frac{dF}{dp} = \int \lambda^T \frac{\partial h}{\partial p} dt,$$ since $$\frac{\partial F}{\partial p} = \frac{\partial g}{\partial p}=0.$$ But for parameters that only show up in the $b(p)$ term, the derivatives from the adjoint method are inconsistent with derivatives estimated directly, while the derivatives for the other parameters seem OK. I'm thinking that this inconsistency may be due to the fact that the parameters in $b(p)$ don't affect the calculation of $\lambda$ directly, namely they don't show up in the adjoint equation. But there is also the possibility that I did something wrong in the coding. Anybody have any similar experience? Thanks!
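For what it's worth, parameters that enter only through $b(p)$ do produce correct adjoint gradients, so the symptom usually points to a sign or time-indexing bug. Below is a self-contained sketch (not the poster's code, and with one particular sign convention: $\lambda(T)=0$, $\lambda' = 2(x-x_{obs}) - A^T\lambda$ integrated backward, $dF/dp = -\int \lambda^T\,\partial b/\partial p\,dt$) that checks the adjoint gradient against central finite differences for $b(p)=p$; all values of $A$, $x_0$ and the observations are made up:

```python
import numpy as np

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
x0 = np.array([1.0, 0.0])
T, N = 2.0, 4000
dt = T / N
ts = np.linspace(0.0, T, N + 1)
x_obs = np.vstack([np.cos(ts), np.sin(ts)]).T   # synthetic observations

def simulate(p):
    # forward Euler for x' = A x + b(p), with b(p) = p (so db/dp = I)
    x = np.zeros((N + 1, 2))
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (A @ x[k] + p)
    return x

def objective(p):
    x = simulate(p)
    return np.sum((x - x_obs) ** 2) * dt

def adjoint_gradient(p):
    x = simulate(p)
    lam = np.zeros((N + 1, 2))                  # terminal condition lambda(T) = 0
    for k in range(N, 0, -1):                   # integrate adjoint backward in time
        lam[k - 1] = lam[k] - dt * (2 * (x[k] - x_obs[k]) - A.T @ lam[k])
    return -np.sum(lam, axis=0) * dt            # dF/dp with db/dp = I

p = np.array([0.3, -0.7])
g_adj = adjoint_gradient(p)
h = 1e-5
g_fd = np.array([(objective(p + h * e) - objective(p - h * e)) / (2 * h)
                 for e in np.eye(2)])
print(g_adj, g_fd)   # the two should agree up to discretization error
```

If the two disagree only in the $b(p)$ components, the likely culprits are the sign of $\partial h/\partial p = -\partial b/\partial p$ or forgetting to reverse $\lambda$ before forming the integral.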
First, you're taking the question backwards. Inertia is the default position. You shouldn't be looking for reasons not to switch, but for reasons to switch. If there are no strong reasons to switch, nobody will switch. Security is not a reason. Between SHA-2 and SHA-3, there is no reason to believe that one is more secure than the other. It isn't like when ... Edit: I have made some tests and I found something weird. See at the end. Initial answer: At least the Koblitz curves (K-163, K-233... in NIST terminology) cannot have been specially "cooked", since the whole process is quite transparent: Begin with a binary field $GF(2^m)$. For every $m$ there is only one such field (you can have several representations, ... Your question is at least partially answered in FIPS 186-3 itself… Appendix A describes how to start with a seed and use an iterative process involving SHA-1 until a valid elliptic curve is found. Appendix D contains the NIST recommended curves and includes the seed used to generate each one according to the procedure in Appendix A. So to believe that NSA ... Is this number specified anywhere? It was formally specified in this RFC as the 1536-bit MODP group (although its use predates that RFC). However, from what I've seen, the 2048-bit MODP group from that same document is actually more popular. Why was this particular number picked? Well, it's a safe prime; in addition, the leading 64 bits and the ... When NIST introduced SHA-0 in 1993, they – for the first time – switched their naming convention from MD-n to SHA-n. Actually, MD-n was not NIST's naming convention; it was RSA Security's (a private company) naming convention. Before SHA (which was the original name; SHA-0 is retroactive terminology given to distinguish the original proposal from what was ... I do worry, but not for the resistance of SHA-3; I worry for its acceptance. Technically, what NIST wants to do is sound.
They do want to somehow "break" a traditional rule, which is that a hash function with an output of $n$ bits ought to resist collisions with strength $2^{n/2}$, and preimages (first and second) with strength $2^n$. Instead, NIST wants harmonized ... I'd say that the whole argument hinges around a "secret attack" that possibly the NSA may know of, enabling them to break some instances of elliptic curves that the rest of the world considers as safe, because the secret attack is, well, secret. This leads to the only possible answer to your question: since secret attacks are secret, they are not known to ... I would characterize the service as similar to a trusted time-stamping service. Except they do not do the time-stamping, but just provide the "key". This allows a user to decide what to do with it, such as using it as a private key to sign something, or an HMAC key, proving the signature is "not older" than the timestamp. If the signature is published to a ... With any $n$-bit hash it is possible to: find preimages with work $2^n$ on classical computers and $2^{n/2}$ using quantum computers; find collisions with work $2^{n/2}$ on classical computers and $2^{n/3}$ using quantum computers. I want to emphasize that these are generic attacks that always work, no matter which concrete hash function is used. Grover's ... No they did not; the internals and security levels have not been changed from the draft Keccak submission, only the padding rule has changed. The padding change is the only difference; this allows future tree hashing modes as well as the current SHAKE outputs to generate different digests given the same security parameters and message inputs. Up to 4 ... The standard in question was the Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG), standardized in NIST Special Publication 800-90. In this case, it was not a protocol, but instead a random number generator.
It wasn't exactly "broken"; instead, it was proven that there existed a "master key", if you will, that would allow someone to ... 512 bits (rounded down from the 664 bits or 200 digits in the patent) was recommended from its conception in 1974 and throughout the 1980s. Indeed, 463 bits was considered sufficient in the mid-1990s for the RSA-140 challenge. Whether key strengths as low as 100 digits (330 bits) were ever used in the early 1980s embedded systems is unclear, but probable ... I'm not aware of any official NIST policy on the matter, so I can only make educated guesses. I guess new algorithms have sprung up and are already in place. ChaCha20 is used in TLS 1.2 and 1.3. For hash functions, neither SHA-2 nor SHA-3 depends on AES in any way. The sponge function in Keccak (SHA-3) can also be used as a symmetric cipher (Ketje, ... Plenty of ciphers come out of the USA from government research or selection competitions. AES and DES are examples. Indeed, the US is known for some crypto-related competitions that were/are open to anyone, and they surely do ample government research related to cryptology, but you need to distinguish between "they selected it" and "they ... Since this question is asking about opinions, it's hard to give the correct answer (alternatively, all possible answers are correct, because they're an opinion). However, my opinion: I believe that there are several aspects contributing to it: Most application designers (that is, the people who use crypto to actually solve a problem) generally don't ... There does appear to be some confusion with point 1. The confusion probably stems from the fact that Keccak has an output size number and a capacity. Output size has little to no effect on security strength. Capacity is what really determines the security strength. So when the post says NIST will only standardize two security levels it is correct (as far as ...
The security level of an elliptic curve group is approximately $\log_2{0.886\sqrt{2^n}}$. You can use this to approximate the security level of an $n$-bit key, e.g.: $\log_2{0.886\sqrt{2^{571}}} = 285.32537860389294$. The real computation (at least for curves over a finite field defined by a prime $p$) is $\log_2{\sqrt{\pi/4}\sqrt{\ell}}$, where $\ell$ is the ... Please check https://tools.ietf.org/search/rfc4492 - especially the "Appendix A. Equivalent Curves (Informative)" part. For example: NIST P-256 is referred to as secp256r1 and prime256v1. Different names, but they are all the same. The answer is yes, non-US ciphers exist and are in fact very popular. Actually, some who are looking for alternatives opt for non-NSA/NIST ciphers, for instance Salsa/ChaCha from DJB (who is a US citizen). A lot of ciphers have been developed in the EU and Japan. China definitely has developed ciphers for its own use, just like many other countries. But long ... This closure is a rather stupid thing, because the Web site is not closed: indeed, there still is a machine, somewhere, which responds to HTTP requests and returns the "we are closed" page. It would have cost zero effort, and zero extra money, to simply let the Web site run and keep on serving PDF files. For crypto development, this means that until the US ... If the NSA knew a sufficiently large weak class of elliptic curves, it is possible for them to have chosen weak curves and have them standardized. As far as I can tell, there is no hint about any sufficiently large class of curves being weak. Regarding choosing the curves: It would have been better if NIST had used an "obvious" string as the seed, e.g. "... The origin is set theory and not programming languages.
In the context of cryptography, I could describe a set that is $$x_1 \parallel x_2 \parallel \dots \parallel x_n$$ as a concatenation of the series described by $$\parallel_{i=1}^n x_i.$$ Furthermore, it's worth noting that + to a mathematician would suggest that it is commutative, which might not ... The answer is, yes, you can get FIPS certification even if you don't implement every approved cryptographic primitive, or if you don't implement every possible option of those primitives. When you undergo FIPS testing, they ask you to fill out an "information form" that asks for the details of what cryptography you claim to implement. This includes ... The pqRSA proposal technically complies with the NIST rules for the competition, and, as all governmental organizations, NIST tends to be a stickler for rules. Now of course it's a sort of joke (whether it is a good one, or whether it was taken a bit too far, is a matter of taste). From a pure cryptographic point of view, it might be useful as an illustration ... This has been basically asked already: Should we trust the NIST recommended ECC parameters? History: Once it was found that the NSA allegedly had inserted a backdoor into a cryptographic standard, people started thinking about what standard it was. The most common guess is that Dual EC DRBG is the backdoored standard. However, some amount of (possibly justified) ... Both ideas are safe if the hash function behaves like a random oracle and has a large enough output (in particular for "idea 1": with a function which outputs $n$ bits at a time, it is expected that the state will enter a cycle of length about $2^{n/2}$, after about $2^{n/2}$ steps, so if you want "128-bit security", you will need $n = 256$ or more). However, we do not ... As an Iranian Cryptology student in one of the most well-known Iranian universities, called Sharif University of Technology, I want to add this to the answers. There doesn't seem to be any National Standard Cipher here in Iran.
But it doesn't mean that there shouldn't be any classified cipher being used by the military or the revolutionary guards. As I am ... Reading the CHES'13 presentation by John Kelsey does make things clearer. Basically, the whole thing (with the output lengths and capacities) seems to come down to the fact that NIST wants to standardize two versions of the underlying sponge function, SHAKE256 and SHAKE512, with respective capacities of 256 and 512 bits, and then define the actual SHA-3 hash ... [source of information: my interpretation of multiple hallway chats I've had with DJB and Tanja Lange at conferences] The actual NIST PQC submission was for two reasons: A joke. Evidence 1: DJB yelling from the back of the room "How much RAM does the NIST benchmarking machine have??" Dustin Moody replying "Dan, we're not benchmarking pqRSA!". Evidence 2: DJB ... Some languages like PL/I and Oracle Database SQL indeed use || for string concatenation. One reason is maybe that + might be confusing when talking about fundamental cryptography, since there is a lot of math involved. The mathematical notation for 'OR' is the wedge $\lor$, and the exclusive 'OR', better known as 'XOR', is a circled plus $\oplus$....
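The elliptic-curve security-level estimate quoted earlier in this section, $\log_2(0.886\sqrt{2^n})$, is easy to check numerically (a sketch; the helper name is mine):

```python
import math

# Approximate security level of an n-bit elliptic curve group:
# log2(0.886 * sqrt(2^n)) = n/2 + log2(0.886)
def ec_security_bits(n: int) -> float:
    return math.log2(0.886 * math.sqrt(2.0 ** n))

print(ec_security_bits(571))  # ~285.3253786..., matching the figure quoted above
```

Since $\log_2(0.886) \approx -0.175$, the rule of thumb is simply "half the key size, minus a fraction of a bit".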
Introduction The College Board releases a list of topics on the AP Physics Exam. In this study guide, I will be going over each one, including a potential problem that could come up. Electrostatics Charge and Coulomb's Law Possible Problem Three charges in a line. What's the force on the first one? When doing this, keep in mind that force is a vector, so focus on the direction. Electric Field and Electric Potential Gauss's Law Other times, it is seen as $\int E \cdot da = \dfrac{q_{enc}}{\epsilon_0}$, but the first expression is sufficient for the AP. Fields and Potentials of Other Charge Distributions I am fairly sure this is referring to a uniform electric field. This means instead of extending radially, the field is created by two parallel plates, and it is uniform inside. Conductors, Capacitors, Dielectrics Electrostatics with Conductors A conductor is a material that allows the free flow of electrons. Electrostatic equilibrium means the charges are stable. It is important to know that a conductor in electrostatic equilibrium has no electric field inside. This follows from the definition of equilibrium: if there were an electric field inside, the free charges would feel a force and move, but then the conductor would not be in electrostatic equilibrium. Capacitors Capacitors store charge. There are three main types
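The Coulomb's-law drill at the top of this guide ("three charges in a line") can be sketched numerically. The charge values and positions below are made-up illustrations, not from the guide; the sign of the result carries the direction, as the guide emphasizes:

```python
K = 8.99e9  # Coulomb constant, N*m^2/C^2

# (charge in C, position in m) along a line -- hypothetical values
charges = [(1e-6, 0.0), (-2e-6, 0.1), (3e-6, 0.3)]

q1, x1 = charges[0]
F = 0.0  # signed net force on the first charge, positive = +x direction
for q, x in charges[1:]:
    # Coulomb's law with direction folded in: F_x = K q1 q (x1 - x) / |x1 - x|^3
    F += K * q1 * q * (x1 - x) / abs(x1 - x) ** 3
print(F)
```

Here the nearby negative charge attracts the first charge in the $+x$ direction while the far positive charge repels it in $-x$; the signed sum settles which wins.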
Here are two interesting questions involving derivatives: 1. Suppose two different functions have the same derivative; what can you say about the relationship between the two functions? 2. Suppose you drive a car from one toll booth on a toll road to another toll booth at an average speed of 70 miles per hour. What can be concluded about your actual speed during the trip? In particular, did you exceed the 65 mile per hour speed limit? While these sound very different, it turns out that the two problems are very closely related. We know that "speed'' is really the derivative by a different name; let's start by translating the second question into something that may be easier to visualize. Suppose that the function $f(t)$ gives the position of your car on the toll road at time $t$. Your change in position between one toll booth and the next is given by $\ds f(t_1)-f(t_0)$, assuming that at time $\ds t_0$ you were at the first booth and at time $\ds t_1$ you arrived at the second booth. Your average speed for the trip is $\ds (f(t_1)-f(t_0))/(t_1-t_0)$. If we think about the graph of $f(t)$, the average speed is the slope of the line that connects the two points $\ds (t_0,f(t_0))$ and $\ds (t_1,f(t_1))$. Your speed at any particular time $t$ between $\ds t_0$ and $\ds t_1$ is $f'(t)$, the slope of the curve. Now question (2) becomes a question about slope. In particular, if the slope between endpoints is 70, what can be said of the slopes at points between the endpoints? As a general rule, when faced with a new problem it is often a good idea to examine one or more simplified versions of the problem, in the hope that this will lead to an understanding of the original problem. In this case, the problem in its "slope'' form is somewhat easier to simplify than the original, but equivalent, problem. Here is a special instance of the problem. Suppose that $\ds f(t_0)=f(t_1)$.
Then the two endpoints have the same height and the slope of the line connecting the endpoints is zero. What can we say about the slope between the endpoints? It shouldn't take much experimentation before you are convinced of the truth of this statement: Somewhere between $\ds t_0$ and $\ds t_1$ the slope is exactly zero, that is, somewhere between $\ds t_0$ and $\ds t_1$ the slope is equal to the slope of the line between the endpoints. This suggests that perhaps the same is true even if the endpoints are at different heights, and again a bit of experimentation will probably convince you that this is so. But we can do better than "experimentation''—we can prove that this is so. We start with the simplified version: Theorem 6.5.1 (Rolle's Theorem) Suppose that $f(x)$ has a derivative on the interval $(a,b)$, is continuous on the interval $[a,b]$, and $f(a)=f(b)$. Then at some value $c\in (a,b)$, $f'(c)=0$. Proof. We know that $f(x)$ has a maximum and minimum value on $[a,b]$ (because it is continuous), and we also know that the maximum and minimum must occur at an endpoint, at a point at which the derivative is zero, or at a point where the derivative is undefined. Since the derivative is never undefined, that possibility is removed. If the maximum or minimum occurs at a point $c$, other than an endpoint, where $f'(c)=0$, then we have found the point we seek. Otherwise, the maximum and minimum both occur at an endpoint, and since the endpoints have the same height, the maximum and minimum are the same. This means that $f(x)=f(a)=f(b)$ at every $x \in [a,b]$, so the function is a horizontal line, and it has derivative zero everywhere in $(a,b)$. Then we may choose any $c$ at all to get $f'(c)=0$. Perhaps remarkably, this special case is all we need to prove the more general one as well. Theorem 6.5.2 (Mean Value Theorem) Suppose that $f(x)$ has a derivative on the interval $(a,b)$ and is continuous on the interval $[a,b]$. Then at some value $c\in (a,b)$, $\ds f'(c)={f(b)-f(a)\over b-a}$. Proof. Let $\ds m={f(b)-f(a)\over b-a}$, and consider a new function $g(x)=f(x) - m(x-a)-f(a)$. We know that $g(x)$ has a derivative everywhere, since $g'(x)=f'(x)-m$.
We can compute $g(a)=f(a)- m(a-a)-f(a) =0$ and $$\eqalign{ g(b)=f(b)-m(b-a)-f(a)&=f(b)-{f(b)-f(a)\over b-a}(b-a)-f(a)\cr &=f(b)-(f(b)-f(a))-f(a)=0.\cr }$$ So the height of $g(x)$ is the same at both endpoints. This means, by Rolle's Theorem, that at some $c$, $g'(c)=0$. But we know that $g'(c)=f'(c)-m$, so $$0=f'(c)-m=f'(c)-{f(b)-f(a)\over b-a},$$ which turns into $$f'(c)={f(b)-f(a)\over b-a},$$ exactly what we want. Returning to the original formulation of question (2), we see that if $f(t)$ gives the position of your car at time $t$, then the Mean Value Theorem says that at some time $c$, $f'(c)=70$, that is, at some time you must have been traveling at exactly your average speed for the trip, and that indeed you exceeded the speed limit. Now let's return to question (1). Suppose, for example, that two functions are known to have derivative equal to 5 everywhere, $f'(x)=g'(x)=5$. It is easy to find such functions: $5x$, $5x+47$, $5x-132$, etc. Are there other, more complicated, examples? No—the only functions that work are the "obvious'' ones, namely, $5x$ plus some constant. How can we see that this is true? Although "5'' is a very simple derivative, let's look at an even simpler one. Suppose that $f'(x)=g'(x)=0$. Again we can find examples: $f(x)=0$, $f(x)=47$, $f(x)=-511$ all have $f'(x)=0$. Are there non-constant functions $f$ with derivative 0? No, and here's why: Suppose that $f(x)$ is not a constant function. This means that there are two points on the function with different heights, say $f(a)\not=f(b)$. The Mean Value Theorem tells us that at some point $c$, $f'(c)=(f(b)-f(a))/(b-a)\not=0$. So any non-constant function does not have a derivative that is zero everywhere; this is the same as saying that the only functions with zero derivative are the constant functions. Let's go back to the slightly less easy example: suppose that $f'(x)=g'(x)=5$. Then $(f(x)-g(x))' = f'(x)-g'(x) = 5 -5 =0$. 
So using what we discovered in the previous paragraph, we know that $f(x)-g(x)=k$, for some constant $k$. So any two functions with derivative 5 must differ by a constant; since $5x$ is known to work, the only other examples must look like $5x+k$. Now we can extend this to more complicated functions, without any extra work. Suppose that $f'(x)=g'(x)$. Then as before $(f(x)-g(x))' = f'(x)-g'(x) =0$, so $f(x)-g(x)=k$. Again this means that if we find just a single function $g(x)$ with a certain derivative, then every other function with the same derivative must be of the form $g(x)+k$. Example 6.5.3 Describe all functions that have derivative $5x-3$. It's easy to find one: $\ds g(x)=(5/2)x^2-3x$ has $g'(x)=5x-3$. The only other functions with the same derivative are therefore of the form $\ds f(x)=(5/2)x^2-3x+k$. Alternately, though not obviously, you might have first noticed that $\ds g(x)=(5/2)x^2-3x+47$ has $g'(x)=5x-3$. Then every other function with the same derivative must have the form $\ds f(x)=(5/2)x^2-3x+47+k$. This looks different, but it really isn't. The functions of the form $\ds f(x)=(5/2)x^2-3x+k$ are exactly the same as the ones of the form $\ds f(x)=(5/2)x^2-3x+47+k$. For example, $\ds (5/2)x^2-3x+10$ is the same as $\ds (5/2)x^2-3x+47+(-37)$, and the first is of the first form while the second has the second form. This is worth calling a theorem: Theorem 6.5.4 If $f'(x)=g'(x)$ for every $x\in (a,b)$, then for some constant $k$, $f(x)=g(x)+k$ on the interval $(a,b)$. Example 6.5.5 Describe all functions with derivative $\sin x$. One such function is $-\cos x$, so all such functions have the form $-\cos x+k$. 
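Example 6.5.3 can be spot-checked numerically: every member of the family $(5/2)x^2-3x+k$ has derivative $5x-3$, and any two members differ by a constant. A small sketch using central differences:

```python
def f(x, k):
    # a member of the family (5/2)x^2 - 3x + k from Example 6.5.3
    return 2.5 * x**2 - 3.0 * x + k

def deriv(x, k, h=1e-6):
    # central-difference approximation to f'(x); exact for quadratics
    # up to floating-point rounding
    return (f(x + h, k) - f(x - h, k)) / (2 * h)

for x in [-2.0, 0.0, 1.5, 3.0]:
    assert abs(deriv(x, 0.0) - (5 * x - 3)) < 1e-6    # derivative is 5x - 3
    assert abs(deriv(x, 47.0) - (5 * x - 3)) < 1e-6   # the constant k drops out
    # two members of the family differ by the constant 47 at every x
    assert abs((f(x, 47.0) - f(x, 0.0)) - 47.0) < 1e-9
```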
Exercises 6.5

Ex 6.5.1 Let $\ds f(x) = x^2$. Find a value $c\in (-1,2)$ so that $f'(c)$ equals the slope between the endpoints of $f(x)$ on $[-1,2]$. (answer)

Ex 6.5.2 Verify that $f(x) = x/(x+2)$ satisfies the hypotheses of the Mean Value Theorem on the interval $[1,4]$ and then find all of the values, $c$, that satisfy the conclusion of the theorem. (answer)

Ex 6.5.3 Verify that $f(x) = 3x/(x+7)$ satisfies the hypotheses of the Mean Value Theorem on the interval $[-2,6]$ and then find all of the values, $c$, that satisfy the conclusion of the theorem.

Ex 6.5.4 Let $f(x) = \tan x$. Show that $f(\pi) = f(2\pi)=0$ but there is no number $c\in (\pi,2\pi)$ such that $f'(c) = 0$. Why does this not contradict Rolle's theorem?

Ex 6.5.5 Let $\ds f(x) = (x-3)^{-2}$. Show that there is no value $c\in (1,4)$ such that $f'(c) = (f(4)-f(1))/(4-1)$. Why is this not a contradiction of the Mean Value Theorem?

Ex 6.5.6 Describe all functions with derivative $\ds x^2+47x-5$. (answer)

Ex 6.5.7 Describe all functions with derivative $\sin(2x)$. (answer)

Ex 6.5.8 Show that the equation $\ds 6x^4 -7x+1 =0$ does not have more than two distinct real roots.

Ex 6.5.9 Let $f$ be differentiable on $\R$. Suppose that $f'(x) \neq 0$ for every $x$. Prove that $f$ has at most one real root.

Ex 6.5.10 Prove that for all real $x$ and $y$, $|\cos x -\cos y | \leq |x-y|$. State and prove an analogous result involving sine.

Ex 6.5.11 Show that $\ds \sqrt{1+x} \le 1 +(x/2)$ if $-1< x< 1$.
This is the third in a series of posts I've made recently concerning what I call the universal algorithm, which is a program that can in principle compute any function, if only you run it in the right universe. Earlier, I had presented a few elementary proofs of this surprising theorem: see Every function can be computable! and A program that accepts exactly any desired finite set, in the right universe. $\newcommand\PA{\text{PA}}$ Those arguments established the universal algorithm, but they fell short of proving Woodin's interesting strengthening of the theorem, which explains how the universal algorithm can be extended from any arithmetic universe to a larger one, in such a way as to extend the given enumerated sequence in any desired manner. Woodin emphasized how his theorem raises various philosophical issues about the absoluteness, or rather the non-absoluteness, of finiteness, which I find extremely interesting. Woodin's proof, however, is a little more involved than the simple arguments I provided for the universal algorithm alone. Please see the paper Blanck, R., and Enayat, A., Marginalia on a theorem of Woodin, The Journal of Symbolic Logic, 82(1), 359-374, 2017. doi:10.1017/jsl.2016.8 for a further discussion of Woodin's argument and related results. What I've recently discovered, however, is that in fact one can prove Woodin's stronger version of the theorem using only the method of the elementary argument. This variation also allows one to drop the countability requirement on the models, as was done by Blanck and Enayat. My thinking on this argument was greatly influenced by a comment of Vadim Kosoy on my original post. It will be convenient to adopt an enumeration model of Turing computability, by which we view a Turing machine program as providing a means to computably enumerate a list of numbers. We start the program running, and it generates a list of numbers, possibly finite, possibly infinite, possibly empty, possibly with repetition.
This way of using Turing machines is fully Turing equivalent to the usual way, if one simply imagines enumerating input/output pairs so as to code any given computable partial function. Theorem. (Woodin) There is a Turing machine program $e$ with the following properties. 1. $\PA$ proves that $e$ enumerates a finite sequence of numbers. 2. For any finite sequence $s$, there is a model $M$ of $\PA$ in which program $e$ enumerates exactly $s$. 3. For any model $M$ in which $e$ enumerates a (possibly nonstandard) sequence $s$ and any $t\in M$ extending $s$, there is an end-extension $N$ of $M$ in which $e$ enumerates exactly $t$. It is statement (3) that makes this theorem stronger than merely the universal algorithm that I mentioned in my earlier posts, and it is the statement that I find particularly invites philosophical speculation on the provisional nature of finiteness. After all, if in one universe the program $e$ enumerates a finite sequence $s$, then for any $t$ extending $s$ — we might imagine having painted some new pattern $t$ on top of $s$ — there is a taller universe in which $e$ enumerates exactly $t$. So we need only wait long enough (into the next universe), and then our program $e$ will enumerate exactly the sequence $t$ we had desired. Proof. This is the new elementary proof. Let's begin by recalling the earlier proof of the universal algorithm, for statements (1) and (2) only. Namely, let $e$ be the program that undertakes a systematic exhaustive search through all proofs from $\PA$ for a proof of a statement of the form, "program $e$ does not enumerate exactly the sequence $s$," where $s$ is an explicitly listed finite sequence of numbers. Upon finding such a proof (the first such proof found), it proceeds to enumerate exactly the numbers appearing in $s$. Thus, at bottom, the program $e$ is a petulant child: it searches for a proof that it shouldn't behave in a certain way, and then proceeds at once to behave in exactly the forbidden manner.
(The reader may notice an apparent circularity in the definition of program $e$, since we referred to $e$ when defining $e$. But this is no problem at all, and it is a standard technique in computability theory to use the Kleene recursion theorem to show that this kind of definition is completely fine. Namely, we really define a program $f(e)$ that performs that task, asking about $e$, and then by the recursion theorem, there is a program $e$ such that $e$ and $f(e)$ compute the same function, provably so. And so for this fixed-point program $e$, it is searching for proofs about itself.) It is clear that the program $e$ will enumerate a finite list of numbers only, since either it never finds the sought-after proof, in which case it enumerates nothing, and the empty sequence is finite, or else it does find a proof, in which case it enumerates exactly the finitely many numbers explicitly appearing in the statement that was proved. So $\PA$ proves that in any case $e$ enumerates a finite list. Further, if $\PA$ is consistent, then you will not be able to refute any particular finite sequence being enumerated by $e$, because if you could, then (for the smallest such instance) the program $e$ would in fact enumerate exactly those numbers, and this would be provable, contradicting $\text{Con}(\PA)$. Precisely because you cannot refute that statement, it follows that the theory $\PA$ plus the assertion that $e$ enumerates exactly $s$ is consistent, for any particular $s$. So there is a model $M$ of $\PA$ in which program $e$ enumerates exactly $s$. This establishes statements (1) and (2) for this program. Let me now modify the program in order to achieve the key third property. Note that the program described above definitely does not have property (3), since once a nonempty sequence $s$ is enumerated, then the program is essentially finished, and so running it in a taller universe $N$ will not affect the sequence it enumerates. 
To achieve (3), therefore, we modify the program by allowing it to add more to the sequence. Specifically, for the new modified version of the program $e$, we start as before by searching for a proof in $\PA$ that the list enumerated by $e$ is not exactly some explicitly listed finite sequence $s$. When this proof is found, then $e$ immediately enumerates the numbers appearing in $s$. Next, it inspects the proof that it had found. Since the proof used only finitely many $\PA$ axioms, it is therefore a proof from a certain fragment $\PA_n$, the $\Sigma_n$ fragment of $\PA$. Now, the algorithm $e$ continues by searching for a proof in a strictly smaller fragment that program $e$ does not enumerate exactly some explicitly listed sequence $t$ properly extending the sequence of numbers already enumerated. When such a proof is found, it then immediately enumerates (the rest of) those numbers. And now simply iterate this, looking for new proofs in still-smaller fragments of $\PA$ that a still-longer extension is not the sequence enumerated by $e$. Succinctly: the program $e$ searches for a proof, in a strictly smaller fragment of $\PA$ each time, that $e$ does not enumerate exactly a certain explicitly listed sequence $s$ extending whatever has been already enumerated so far, and when found, it enumerates those new elements, and repeats. We can still prove in $\PA$ that $e$ enumerates a finite sequence, since the fragment of $\PA$ that is used each time is going down, and $\PA$ proves that this can happen only finitely often. So statement (1) holds.
Again, you cannot refute that any particular finite sequence $s$ is the sequence enumerated by $e$, since if you could do this, then in the standard model, the program would eventually find such a proof, and then perhaps another and another, until ultimately, it would find some last proof that $e$ does not enumerate exactly some finite sequence $t$, at which time the program will have enumerated exactly $t$ and never afterward add to it. So that proof would have proved a false statement. This is a contradiction, since that proof is standard. So again, precisely because you cannot refute these statements, it follows that it is consistent with $\PA$ that program $e$ enumerates exactly $s$, for any particular finite $s$. So statement (2) holds. Finally, for statement (3), suppose that $M$ is a model of $\PA$ in which $e$ enumerates exactly some finite sequence $s$. If $s$ is the empty sequence, then $M$ thinks that there is no proof in $\PA$ that $e$ does not enumerate exactly $t$, for any particular $t$. And so it thinks the theory $\PA+$ “$e$ enumerates exactly $t$” is consistent. So in $M$ we may build the Henkin model $N$ of this theory, which is an end-extension of $M$ in which $e$ enumerates exactly $t$, as desired. If alternatively $s$ was nonempty, then it was enumerated by $e$ in $M$ because in $M$ there was ultimately a proof in some fragment $\PA_n$ that it should not do so, but it never found a corresponding proof about an extension of $s$ in any strictly smaller fragment of $\PA$. So $M$ has a proof from $\PA_n$ that $e$ does not enumerate exactly $s$, even though it did. Notice that $n$ must be nonstandard, because $M$ has a definable $\Sigma_k$-truth predicate for every standard $k$, and using this predicate, $M$ can see that every $\PA_k$-provable statement must be true. 
Precisely because the model $M$ lacked the proofs from the strictly smaller fragment $\PA_{n-1}$, it follows that for any particular finite $t$ extending $s$ in $M$, the model thinks that the theory $T=\PA_{n-1}+$ “$e$ enumerates exactly $t$” is consistent. Since $n$ is nonstandard, this theory includes all the actual $\PA$ axioms. In $M$ we can build the Henkin model $N$ of this theory, which will be an end-extension of $M$ in which $\PA$ holds and program $e$ enumerates exactly $t$, as desired for statement (3). QED

Corollary. Let $e$ be the universal algorithm program of the theorem. Then:

1. For any infinite sequence $S:\mathbb{N}\to\mathbb{N}$, there is a model $M$ of $\PA$ in which program $e$ enumerates a (nonstandard finite) sequence starting with $S$.

2. If $M$ is any model of $\PA$ in which program $e$ enumerates some (possibly nonstandard) finite sequence $s$, and $S$ is any $M$-definable infinite sequence extending $s$, then there is an end-extension of $M$ in which $e$ enumerates a sequence starting with $S$.

Proof. (1) Fix $S:\mathbb{N}\to\mathbb{N}$. By a simple compactness argument, there is a model $M$ of true arithmetic in which the sequence $S$ is the standard part of some coded nonstandard finite sequence $t$. By the main theorem, there is some end-extension $M^+$ of $M$ in which $e$ enumerates $t$, which extends $S$, as desired. (2) If $e$ enumerates $s$ in $M$, a model of $\PA$, and $S$ is an $M$-infinite sequence definable in $M$, then by a compactness argument inside $M$, we can build a model $M'$ inside $M$ in which $S$ is coded by an element, and then apply the main theorem to find a further end-extension $M^+$ in which $e$ enumerates that element, and hence enumerates an extension of $S$. QED
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling @heather well, there's a spectrum so, there's things like New Journal of Physics and Physical Review X which are the open-access branches of existing academic-society publishers As far as the intensity of a single photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density $$u=\frac{\hbar\omega}{V}$$ is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di... Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin."
— tparker, 3 mins ago > A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service” for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty > for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals. @BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(\sqrt g\, g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(\sqrt g\, g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work... @BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions. Alternative Plan: 1. Read Vol 1 of Hormander. 2.
Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley. I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea. @EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results... Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry. Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town... @EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
Outline Introduction Derivation Example Conclusion References Introduction Hello! My name is Ryan Johnson! You might be wondering what a slecture is! A slecture is a student lecture that gives a brief overview about a particular topic! In this slecture, I will discuss the relationship between an original signal, x(t), and a sampling of that original signal, x_s(t). We will also take a look at how this relationship translates to the frequency domain, (X(f) & X_s(f)). Derivation F = Fourier transform $ \begin{align} comb_T(x(t)) = x(t) \times p_T(t) = x_s(t) \end{align} $ The comb of a signal is equal to the signal multiplied by an impulse train, which is equal to the sampled signal. Essentially, the comb is grabbing points on the graph x(t) at a set interval, T. $ \begin{align} X_s(f) &= F(x_s(t)) = F(comb_T(x(t)))\\ &= F(x(t)p_T(t))\\ &= X(f) * F(p_T(t))\\ \end{align} $ Multiplication in time is equal to convolution in frequency. $ \begin{align} X_s(f)&= X(f)*\frac{1}{T}\sum_{n = -\infty}^\infty \delta(f-\frac{n}{T})\\ \end{align} $ Definition of the Fourier transform of an impulse train. $ \begin{align} &= \frac{1}{T}X(f)*p_\frac{1}{T}(f)\\ &= \frac{1}{T}rep_\frac{1}{T}X(f)\\ \end{align} $ The result is a repetition of the Fourier transformed signal. Conclusion X_s(f) is a rep of X(f) in the frequency domain with amplitude of 1/T and period of 1/T. Questions If you have any questions, comments, etc. please post them on this page. References [1] Mireille Boutin, "ECE 438 Digital Signal Processing with Applications," Purdue University. October 6, 2014. Back to ECE438 slectures, Fall 2014
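The comb/rep relation derived in the slecture above can be sanity-checked in a purely discrete setting. The sketch below (plain Python with a direct DFT, and an arbitrary length-8 signal chosen for illustration) zeroes out the odd-indexed samples, i.e. applies a discrete comb with T = 2, and checks that the spectrum is replicated with period N/2 and amplitude 1/2:

```python
import cmath

# Discrete analogue of comb/rep: sampling in time (zeroing odd samples)
# replicates the spectrum: X_s[k] = (1/2) * (X[k] + X[(k - N/2) mod N]).

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

N = 8
x = [0.9, 2.0, -1.0, 0.5, 1.5, -0.3, 0.0, 1.1]       # arbitrary test signal
xs = [x[n] if n % 2 == 0 else 0.0 for n in range(N)]  # comb with T = 2

X, Xs = dft(x), dft(xs)
rep = [0.5 * (X[k] + X[(k - N // 2) % N]) for k in range(N)]
assert all(abs(a - b) < 1e-9 for a, b in zip(Xs, rep))
```

This is the discrete counterpart of $X_s(f)=\frac{1}{T}\,rep_{1/T}X(f)$ with $T=2$.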
The following equations are taken from Ravenna, Walsh: "Optimal Monetary Policy with Unemployment and Sticky Prices" (2011). (i)$$\frac{Z_{t}}{\mu_{t}} = w_t + \frac{\kappa}{q_{t}} - (1-\rho)E_{t}(\frac{1}{R_{t}})(\frac{\kappa}{q_{t+1}}) $$ (ii) $$ w_{t} = w^u + (\frac{b_t}{1-b_t})(\frac{\kappa}{q_t}) - (1-\rho)E_t(\frac{1}{R_t})(1-p_{t+1})(\frac{b_{t+1}}{1-b_{t+1}})(\frac{\kappa}{q_{t+1}}) $$ $ \phi $... wage replacement rate $b_t$... surplus bargaining share of a worker $Z_t$... exogenous productivity shock ($\bar{Z}=1$) $\mu_t$... retail price mark-up $\kappa$... cost of posting a vacancy $\rho$... share of matches $N_{t-1}$ that loses its jobs in t $q_t$... probability that a firm will fill its vacancy $\lambda_t$... marginal utility of consumption $\frac{1}{R_t} \equiv \beta(\frac{\lambda_{t+1}}{\lambda_t})$ $p_{t+1} \equiv \frac{m_t}{u_t}$... job finding probability with $m_t$ matches and $u_t$ unemployed ones $\beta$... discount factor $w^u = \phi w $ Given the last identity, (i) and (ii) can be used to jointly solve for $\kappa$ and $w$. At least this is suggested in the paper by the authors.
(ii) $w_t = \phi w_t + (\frac{b_t}{1-b_t})(\frac{\kappa}{q_t}) - (1-\rho)E_t(\frac{1}{R_t})(1-p_{t+1})(\frac{b_{t+1}}{1-b_{t+1}})(\frac{\kappa}{q_{t+1}})$ <=> $w_t = \frac{(\frac{b_t}{1-b_t})(\frac{\kappa}{q_t}) - (1-\rho)E_t(\frac{1}{R_t})(1-p_{t+1})(\frac{b_{t+1}}{1-b_{t+1}})(\frac{\kappa}{q_{t+1}})}{1-\phi}$ Put into (i) to eliminate $w_t$: $\frac{Z_{t}}{\mu_{t}} = \frac{(\frac{b_t}{1-b_t})(\frac{\kappa}{q_t}) - (1-\rho)E_t(\frac{1}{R_t})(1-p_{t+1})(\frac{b_{t+1}}{1-b_{t+1}})(\frac{\kappa}{q_{t+1}})}{1-\phi} +\frac{\kappa}{q_{t}} - (1-\rho)E_{t}(\frac{1}{R_{t}})(\frac{\kappa}{q_{t+1}}) $ <=> $\frac{Z_{t}}{\mu_{t}}(1-\phi) = (\frac{b_t}{1-b_t})(\frac{\kappa}{q_t}) - (1-\rho)E_t(\frac{1}{R_t})(1-p_{t+1})(\frac{b_{t+1}}{1-b_{t+1}})(\frac{\kappa}{q_{t+1}}) +\frac{\kappa}{q_{t}}(1-\phi) - (1-\rho)E_{t}(\frac{1}{R_{t}})(\frac{\kappa}{q_{t+1}})(1-\phi) $ Rearrange and solve for $\kappa$: $\kappa = \frac{Z_{t} (1-\phi)}{\mu_{t} \big((\frac{b_t}{1-b_t})(\frac{1}{q_t}) - (1-\rho)E_t(\frac{1}{R_t})(1-p_{t+1})(\frac{b_{t+1}}{1-b_{t+1}})(\frac{1}{q_{t+1}}) +\frac{1}{q_{t}}(1-\phi) - (1-\rho)E_{t}(\frac{1}{R_{t}})(\frac{1}{q_{t+1}})(1-\phi)\big)} $ <=> $\kappa = \frac{Z_{t} (1-\phi)}{\mu_{t} \big(\big(\frac{1}{q_t}\big)\big[(\frac{b_t}{1-b_t})+(1-\phi)\big] - (1-\rho)\big(\frac{1}{q_{t+1}}\big)E_t\big(\frac{1}{R_t}\big)\big[(1-p_{t+1})\big(\frac{b_{t+1}}{1-b_{t+1}}\big)+(1-\phi)\big]\big)}$ Evaluating this formula around the steady state yields $\bar{\kappa} = \frac{(1-\phi)}{\bar{\mu} \big(\big(\frac{1}{\bar{Q}}\big)\big[(\frac{b}{1-b})+(1-\phi)\big] - (1-\rho)\big(\frac{1}{\bar{Q}}\big)\beta\big[(1-\frac{\bar{M}}{\bar{U}})\big(\frac{b}{1-b}\big)+(1-\phi)\big]\big)}$ Since all these parameters and steady-state values are well defined, I am able to calculate the steady-state value of $\kappa$ and hence the one of $w_t$ as well. I wanted to simulate the related model in dynare.
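As an internal consistency check of the derivation above, one can verify numerically that the closed-form steady-state $\bar\kappa$ solves the steady-state versions of (i) and (ii). The parameter values below are purely illustrative assumptions, not the paper's calibration: we compute $\bar\kappa$ from the formula, back out $\bar w$ from steady-state (i), and confirm that steady-state (ii) then holds.

```python
# Numerical sanity check of the closed-form steady-state kappa, under
# ASSUMED illustrative parameter values (not Ravenna & Walsh's calibration).

phi, b, rho, beta = 0.5, 0.5, 0.1, 0.99
mu, Q, p, Z = 1.1, 0.7, 0.6, 1.0   # steady-state mark-up, q-bar, job-finding prob

B = b / (1 - b)
# denominator of the closed form, expanded:
denom = (mu / Q) * (B * (1 - (1 - rho) * beta * (1 - p))
                    + (1 - phi) * (1 - (1 - rho) * beta))
kappa = (1 - phi) * Z / denom

# steady-state (i):  Z/mu = w + (kappa/Q)(1 - (1-rho)*beta)
w = Z / mu - (kappa / Q) * (1 - (1 - rho) * beta)

# steady-state (ii): w(1-phi) = B (kappa/Q) [1 - (1-rho)*beta*(1-p)]
lhs = w * (1 - phi)
rhs = B * (kappa / Q) * (1 - (1 - rho) * beta * (1 - p))
assert kappa > 0 and abs(lhs - rhs) < 1e-10
```

This only checks that the algebra is self-consistent; it does not settle how the authors' matrix formulation relates to it.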
Having a look at the dynare code provided by the authors, I noticed that they made use of a totally different approach to calculate the steady state value of $\kappa$: $AAk = \left[{\begin{array}{cc} 1-\phi(1-b) & -\phi b(1-\rho)\beta \bar{\theta}\\ 1-b & (1-\beta(1-\rho))(\frac{1}{\bar{Q}})+ b\beta(1-\rho)\bar{\theta} \end{array} } \right]$ $BBk = \left[ {\begin{array}{cc} \frac{\phi b}{\bar{\mu}} \\ \frac{(1-b)}{\bar{\mu}} \end{array} } \right] $ $CCk = AAk^{-1}BBk$ $\bar{\kappa}=CCk(2)$ Where $\theta_t=\frac{v_t}{u_t}$ measures the labor market tightness (the ratio of vacancies to job seekers). I would like to know whether my derivation of the steady-state value for $\kappa$ makes sense, and how the authors of the paper derive theirs. I have read the appendix to the paper as well as the dynare code, but to be honest I do not have an idea of how they calculate $\kappa$. The results of my calculation and the one of Ravenna and Walsh differ significantly.
NP by definition is the class of decision problems solvable in polynomial-time on a non-deterministic Turing machine. So, by definition, NP is your $A$ set. NP can't also be $B$ for superficial reasons: NP is a subset of functions $2^*\to 2$, and so it simply doesn't contain any functions $2^*\to2^*$ whose output isn't always one bit long. There is a separate class FNP for the class of search problems solvable in polynomial-time on a non-deterministic Turing machine. A decision problem $\varphi$ is in NP when $$\forall x\in 2^*.\varphi(x)\iff \exists u\in2^{p(|x|)}.M(x,u)=1$$ for some polynomial $p$ and polynomial-time (deterministic) Turing machine $M$. A function $f$ is in FNP if given $x$ it finds a $u$ such that $M(x,u)=1$. Note, there is no constraint on how much time $f$ takes to produce a $u$ from $x$. That said, on a non-deterministic Turing machine, we can just guess the bits of $u$ and then check them against $M$ in polynomial-time. (We can cast an arbitrary function computed in polynomial-time on an NDTM $F$ as the relation $R(x,\langle u,y\rangle)\iff F(x,u)=y$, where $u$ represents the non-deterministic choices, which is calculable in polynomial-time simply by simulating $F$ with choices $u$.) As mentioned on the Complexity Zoo page, The Complexity of Decision Versus Search shows that if EE $\neq$ NEE or (citing a different result) if NE $\neq$ coNE, then there are problems in FNP that can't be reduced to their corresponding decision problems in NP, where reducibility means: given an oracle for $\varphi(x)\equiv\exists u\in 2^{p(|x|)}.M(x,u)=1$, we could compute in polynomial-time, for the particular $x$ we were given, some $u$ such that $M(x,u)=1$.
Any decision problem whose corresponding search problem is reducible to the decision problem can't be NP-complete: given an oracle for an NP-complete problem, we'd then be able to efficiently compute SAT whose search problem we can reduce to the corresponding decision problem, and which we can then use to compute the $u$ for the original problem by reducing it to SAT. So the focus on decision problems doesn't seem completely innocuous, but we do have P = NP $\iff$ FP = FNP, so for at least the P = NP problem it doesn't matter. This can be partially seen from the above since if P = NP then every problem in NP is NP-complete and the counter-example to the reduction of search to decision doesn't exist.
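The search-to-decision reduction for SAT mentioned above can be sketched concretely. In the sketch below the decision oracle is a brute-force stand-in (in the real setting it would be, say, an oracle for an NP-complete problem); the point is that `sat_search` recovers a satisfying assignment using only decision queries, fixing one variable at a time.

```python
from itertools import product

# Search-to-decision self-reduction for SAT.  A CNF formula is a list of
# clauses; each clause is a list of nonzero ints (DIMACS-style: k means
# variable k, -k its negation).  `sat_search` only queries the oracle.

def sat_decide(cnf, nvars, fixed):
    """Decision oracle (brute force here): satisfiable given the fixed bits?"""
    for bits in product([False, True], repeat=nvars):
        if any(bits[v - 1] != val for v, val in fixed.items()):
            continue
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in cl) for cl in cnf):
            return True
    return False

def sat_search(cnf, nvars):
    """Recover a satisfying assignment from decision queries alone."""
    if not sat_decide(cnf, nvars, {}):
        return None
    fixed = {}
    for v in range(1, nvars + 1):
        fixed[v] = False
        if not sat_decide(cnf, nvars, fixed):
            fixed[v] = True      # the other value must work, by the invariant
    return fixed

cnf = [[1, -2], [-1, 3], [2, 3]]   # (x1 or not x2)(not x1 or x3)(x2 or x3)
a = sat_search(cnf, 3)
assert all(any((lit > 0) == a[abs(lit)] for lit in cl) for cl in cnf)
```

With $n$ variables this makes $O(n)$ oracle calls, which is exactly the "given an oracle for the decision problem, compute a witness in polynomial time" notion discussed above.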
Introduction to the Classical Discrete Fourier transform: The DFT transforms a sequence of $N$ complex numbers $\{\mathbf{x}_n\}:=x_0,x_1,x_2,...,x_{N-1}$ into another sequence of complex numbers $\{\mathbf{X}_k\}:=X_0,X_1,X_2,...,X_{N-1}$ which is defined by $$X_k=\sum_{n=0}^{N-1}x_n\cdot e^{\pm\frac{2\pi i k n}{N}}$$ We might multiply by suitable normalization constants as necessary. Moreover, whether we take the plus or minus sign in the formula depends on the convention we choose. Suppose it's given that $N=4$ and $\mathbf{x}=\begin{pmatrix} 1 \\ 2-i \\ -i \\ -1+2i \end{pmatrix}$. We need to find the column vector $\mathbf{X}$. The general method is already shown on the Wikipedia page. But we will develop a matrix notation for the same. $\mathbf{X}$ can be easily obtained by premultiplying $\mathbf{x}$ by the matrix: $$M=\frac{1}{\sqrt{N}}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & w & w^{ 2 } & w^{ 3 } \\ 1 & w^ 2 & w^4 & w^6 \\ 1 & w^3 & w^6 & w^9 \end{pmatrix}$$ where $w$ is $e^{\frac{-2\pi i}{N}}$. Each element of the matrix is basically $w^{ij}$. $\frac{1}{\sqrt{N}}$ is simply a normalization constant. Finally, $\mathbf{X}$ turns out to be: $\frac{1}{2}\begin{pmatrix} 2 \\ -2-2i \\ -2i \\ 4+4i \end{pmatrix}$. Now, sit back for a while and notice a few important properties: All the columns of the matrix $M$ are orthogonal to each other. All the columns of $M$ have magnitude $1$. If you postmultiply $M$ with a column vector having lots of zeroes (narrow spread), you'll typically end up with a column vector having only a few zeroes (large spread). The converse also holds true. (Check!) It can be very simply noticed that the classical DFT has a time complexity $\mathcal O(N^2)$. That is because for obtaining every row of $\mathbf{X}$, $N$ operations need to be performed. And there are $N$ rows in $\mathbf{X}$. The Fast Fourier transform: Now, let us look at the fast Fourier transform. The fast Fourier transform uses the symmetry of the Fourier transform to reduce the computation time.
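The worked 4-point example above can be checked numerically. The sketch below builds the normalized DFT matrix with entries $w^{jk}/\sqrt{N}$, $w=e^{-2\pi i/N}$, applies it to the given $\mathbf{x}$, and also verifies the orthonormality of the columns:

```python
import cmath

# Check of the 4-point DFT example: M has entries w^(jk)/sqrt(N).
N = 4
w = cmath.exp(-2j * cmath.pi / N)
M = [[w ** (j * k) / N ** 0.5 for k in range(N)] for j in range(N)]

x = [1, 2 - 1j, -1j, -1 + 2j]
X = [sum(M[j][k] * x[k] for k in range(N)) for j in range(N)]

expected = [1, -1 - 1j, -1j, 2 + 2j]   # i.e. (1/2)(2, -2-2i, -2i, 4+4i)
assert all(abs(a - b) < 1e-9 for a, b in zip(X, expected))

# The columns of M are orthonormal:
for a in range(N):
    for b in range(N):
        ip = sum(M[j][a].conjugate() * M[j][b] for j in range(N))
        assert abs(ip - (1 if a == b else 0)) < 1e-9
```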
Simply put, we rewrite the Fourier transform of size $N$ as two Fourier transforms of size $N/2$ - the odd and the even terms. We then repeat this over and over again to exponentially reduce the time. To see how this works in detail, we turn to the matrix of the Fourier transform. While we go through this, it might be helpful to have $\text{DFT}_8$ in front of you to take a look at. Note that the exponents have been written modulo $8$, since $w^8 = 1$. Notice how row $j$ is very similar to row $j + 4$. Also, notice how column $j$ is very similar to column $j + 4$. Motivated by this, we are going to split the Fourier transform up into its even and odd columns. In the first frame, we have represented the whole Fourier transform matrix by describing the $j$th row and $k$th column: $w^{jk}$. In the next frame, we separate the odd and even columns, and similarly separate the vector that is to be transformed. You should convince yourself that the first equality really is an equality. In the third frame, we add a little symmetry by noticing that $w^{j+N/2} = -w^j$ (since $w^{N/2} = -1$). Notice that both the odd side and even side contain the term $w^{2jk}$. But if $w$ is the primitive $N$th root of unity, then $w^2$ is the primitive $(N/2)$th root of unity. Therefore, the matrices whose $j$, $k$th entry is $w^{2jk}$ are really just $\text{DFT}_{(N/2)}$! Now we can write $\text{DFT}_N$ in a new way. Now suppose we are calculating the Fourier transform of the function $f(x)$. We can write the above manipulations as an equation that computes the $j$th term $\hat{f}(j)$. Note: QFT in the image just stands for DFT in this context. Also, M refers to what we are calling N. This turns our calculation of $\text{DFT}_N$ into two applications of $\text{DFT}_{(N/2)}$. We can turn this into four applications of $\text{DFT}_{(N/4)}$, and so forth. As long as $N = 2^n$ for some $n$, we can break down our calculation of $\text{DFT}_N$ into $N$ calculations of $\text{DFT}_1 = 1$.
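The even/odd recursion just described can be written as a minimal radix-2 FFT (unnormalized, with $w=e^{-2\pi i/N}$; the input length must be a power of 2). The direct $O(N^2)$ DFT is included only to check the result:

```python
import cmath

# Minimal radix-2 FFT implementing the even/odd split described above.
def fft(x):
    N = len(x)
    if N == 1:
        return x[:]                      # DFT_1 is the identity
    even = fft(x[0::2])                  # DFT_{N/2} of even-indexed terms
    odd = fft(x[1::2])                   # DFT_{N/2} of odd-indexed terms
    out = [0j] * N
    for j in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * j / N) * odd[j]   # phase w^j
        out[j] = even[j] + t             # uses w^{j+N/2} = -w^j
        out[j + N // 2] = even[j] - t
    return out

def dft(x):                              # direct O(N^2) definition, for checking
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * j * n / N)
                for n in range(N)) for j in range(N)]

x = [1, 2 - 1j, -1j, -1 + 2j, 0.5, 0, 1j, 3]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(x), dft(x)))
```

Each level of recursion does $O(N)$ work over $\log_2 N$ levels, giving the $\mathcal{O}(N\log N)$ bound discussed next.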
This greatly simplifies our calculation. In the case of the fast Fourier transform, the time complexity reduces to $\mathcal{O}(N\log(N))$ (try proving this yourself). This is a huge improvement over the classical DFT and pretty much the state-of-the-art algorithm used in modern day music systems like your iPod! The Quantum Fourier transform with quantum gates: The strength of the FFT is that we are able to use the symmetry of the discrete Fourier transform to our advantage. The circuit application of QFT uses the same principle, but because of the power of superposition QFT is even faster. The QFT is motivated by the FFT so we will follow the same steps, but because this is a quantum algorithm the implementation of the steps will be different. That is, we first take the Fourier transform of the odd and even parts, then multiply the odd terms by the phase $w^{j}$. In a quantum algorithm, the first step is fairly simple. The odd and even terms are together in superposition: the odd terms are those whose least significant bit is $1$, and the even those with $0$. Therefore, we can apply $\text{QFT}_{(N/2)}$ to both the odd and even terms together. We do this by simply applying $\text{QFT}_{(N/2)}$ to the $n-1$ most significant bits, and recombining the odd and even appropriately by applying the Hadamard to the least significant bit. Now to carry out the phase multiplication, we need to multiply each odd term $j$ by the phase $w^{j}$. But remember, an odd number in binary ends with a $1$ while an even ends with a $0$. Thus we can use the controlled phase shift, where the least significant bit is the control, to multiply only the odd terms by the phase without doing anything to the even terms. Recall that the controlled phase shift is similar to the CNOT gate in that it only applies a phase to the target if the control bit is one. Note: In the image M refers to what we are calling N.
The phase associated with each controlled phase shift should be equal to $w^{j}$, where $j$ is associated to the $k$-th bit by $j = 2^k$. Thus, apply the controlled phase shift to each of the first $n - 1$ qubits, with the least significant bit as the control. With the controlled phase shift and the Hadamard transform, $\text{QFT}_N$ has been reduced to $\text{QFT}_{(N/2)}$. Note: In the image, M refers to what we are calling N. Example: Let's construct $\text{QFT}_3$. Following the algorithm, we will turn $\text{QFT}_3$ into $\text{QFT}_2$ and a few quantum gates. Then continuing on this way, we turn $\text{QFT}_2$ into $\text{QFT}_1$ (which is just a Hadamard gate) and another few gates. Controlled phase gates will be represented by $R_\phi$. Then run through another iteration to get rid of $\text{QFT}_2$. You should now be able to visualize the circuit for $\text{QFT}$ on more qubits easily. Furthermore, you can see that the number of gates necessary to carry out $\text{QFT}_N$ is exactly $$\sum_{i=1}^{\log(N)} i=\log(N)(\log(N)+1)/2 = \mathcal{O}(\log^2 N)$$ Sources: https://en.wikipedia.org/wiki/Discrete_Fourier_transform https://en.wikipedia.org/wiki/Quantum_Fourier_transform Quantum Mechanics and Quantum Computation MOOC (UC BerkeleyX) - Lecture Notes : Chapter 5 P.S: This answer is in its preliminary version. As @DaftWillie mentions in the comments, it doesn't go much into "any insight that might give some guidance with regards to other possible algorithms". I encourage alternate answers to the original question. I personally need to do a bit of reading and resource-digging so that I can answer that aspect of the question.
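For the smallest nontrivial case, the decomposition can be verified by plain matrix arithmetic. The sketch below assumes one common convention (including a final qubit swap, which some presentations omit): the two-qubit circuit H on the most significant qubit, then a controlled phase of $i$, then H on the least significant qubit, then SWAP, reproduces the $4\times 4$ DFT matrix with entries $w^{jk}/2$, $w=e^{2\pi i/4}$.

```python
import cmath

# Two-qubit QFT circuit check (one common convention, with final swap).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

h = 1 / 2 ** 0.5
H0 = [[h, 0, h, 0], [0, h, 0, h], [h, 0, -h, 0], [0, h, 0, -h]]  # H on q0 (MSB)
H1 = [[h, h, 0, 0], [h, -h, 0, 0], [0, 0, h, h], [0, 0, h, -h]]  # H on q1 (LSB)
CP = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1j]]   # controlled phase i
SW = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]    # swap the qubits

circuit = matmul(SW, matmul(H1, matmul(CP, H0)))  # gates applied left to right

w = cmath.exp(2j * cmath.pi / 4)
qft = [[w ** (j * k) / 2 for k in range(4)] for j in range(4)]
assert all(abs(circuit[j][k] - qft[j][k]) < 1e-9
           for j in range(4) for k in range(4))
```

This uses $n(n+1)/2 = 3$ gates (two Hadamards and one controlled phase) plus the swap, matching the gate count formula above for $n=2$.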
tl;dr- Quantum computers can't really help us to simulate the whole universe, as the universe is likely vastly more complex than even quantum mechanics can capture, plus we can't even begin to guess how big it is or many other basic fundamental features. In short, simulating the whole universe is beyond sci-fi. We can't really simulate the entire universe, ... It seems this problem is open. Watrous [J. Comp. Sys. Sci. 59, pp. 281-326, 1999] proved that any space $s$ bounded quantum Turing Machine (for space constructible $s(n)>\Omega(\log n)$) can be simulated by a deterministic Turing machine with $O(s^2)$ space. With the assumption $\mathsf{P \neq SC}$ (where $\mathsf{SC \subseteq P}$ is defined as ... It's not so much a matter of big data, but that of saving data. Quantum storage is still (much like the rest of the field) in its infancy. (Take what I write with a grain of salt. It's likely to change rapidly.) There are a few theories on how quantum computers might be able to hold "memory". One of these is using nuclear spin. E.g. using long-lived ... This doesn't exactly answer your question, but it may aid you in understanding the problem and possibly the solution: In their paper "Breaking the 49-Qubit Barrier in the Simulation of Quantum Circuits" (arXiv:1710.05867), the authors describe simulating a 49-qubit and a 56-qubit quantum computer. According to the paper, they required 4.5 Terabytes of RAM ...
The binomial coefficients count the subsets of a given set; the sets themselves are worth looking at. First some convenient notation: Definition 1.7.1 Let $[n]=\{1,2,3,\ldots,n\}$. Then $\ds 2^{[n]}$ denotes the set of all subsets of $[n]$, and $\sbs{n}{k}$ denotes the set of subsets of $[n]$ of size $k$. Example 1.7.2 Let $n=3$. Then $$\eqalign{ \sbs{n}{0}&=\{\emptyset\}\cr \sbs{n}{1}&=\{\{1\},\{2\},\{3\}\}\cr \sbs{n}{2}&=\{\{1,2\},\{1,3\},\{2,3\}\}\cr \sbs{n}{3}&=\{\{1,2,3\}\}\cr }$$ Definition 1.7.3 A chain in $\ds 2^{[n]}$ is a set of subsets of $\ds 2^{[n]}$ that are linearly ordered by inclusion. An anti-chain in $\ds 2^{[n]}$ is a set of subsets of $\ds 2^{[n]}$ that are pairwise incomparable. Example 1.7.4 In $\ds 2^{[3]}$, $\ds \{ \emptyset,\{1\},\{1,2,3\} \}$ is a chain, because $\ds \emptyset\subseteq \{1\}\subseteq \{1,2,3\}$. Every $\sbs{n}{k}$ is an anti-chain, as is $\ds \{ \{1\},\{2,3\} \}$. The set $\ds \{ \{1\},\{1,3\},\{2,3\} \}$ is neither a chain nor an anti-chain. Because of theorem 1.3.4 we know that among all anti-chains of the form $\sbs{n}{k}$ the largest are the "middle'' ones, namely $\sbs{n}{\lfloor n/2\rfloor}$ and $\sbs{n}{\lceil n/2\rceil}$ (which are the same if $n$ is even). Remarkably, these are the largest of all anti-chains, that is, strictly larger than every other anti-chain. When $n=3$, the anti-chains $\sbs{3}{1}$ and $\sbs{3}{2}$ are the only anti-chains of size 3, and no anti-chain is larger, as you can verify by examining all possibilities. Before we prove this, a bit of notation. Definition 1.7.5 If $\sigma\colon A\to A$ is a bijection, then $\sigma$ is called a permutation. This use of the word permutation is different than our previous usage, but the two are closely related. Consider such a function $\sigma\colon [n]\to[n]$. 
Since the set $A$ in this case is finite, we could in principle list every value of $\sigma$: $$\sigma(1),\sigma(2),\sigma(3),\ldots,\sigma(n).$$ This is a list of the numbers $\{1,\ldots,n\}$ in some order, namely, this is a permutation according to our previous usage. We can continue to use the same word for both ideas, relying on context or an explicit statement to indicate which we mean. Proof. First we show that no anti-chain is larger than these two. We attempt to partition $\ds 2^{[n]}$ into $k={n\choose \lfloor n/2\rfloor}$ chains, that is, to find chains $$\eqalign{ &A_{1,0}\subseteq A_{1,1}\subseteq A_{1,2}\subseteq\cdots\subseteq A_{1,m_1}\cr &A_{2,0}\subseteq A_{2,1}\subseteq A_{2,2}\subseteq\cdots\subseteq A_{2,m_2}\cr &\vdots\cr &A_{k,0}\subseteq A_{k,1}\subseteq A_{k,2}\subseteq\cdots\subseteq A_{k,m_k}\cr }$$ so that every subset of $[n]$ appears exactly once as one of the $\ds A_{i,j}$. If we can find such a partition, then since no two elements of an anti-chain can be in the same chain, no anti-chain can have more than $k$ elements. For small values of $n$ this can be done by hand; for $n=3$ we have $$\eqalign{ &\emptyset\subseteq\{1\}\subseteq\{1,2\}\subseteq\{1,2,3\}\cr &\{2\}\subseteq\{2,3\}\cr &\{3\}\subseteq\{1,3\}\cr }$$ These small cases form the base of an induction. We will prove that any $\ds 2^{[n]}$ can be partitioned into such chains with two additional properties: 1. Each set in a chain contains exactly one element more than the next smallest set in the chain. 2. The sum of the sizes of the smallest and largest element in the chain is $n$. Note that the chains for the case $n=3$ have both of these properties. The two properties taken together imply that every chain "crosses the middle'', that is, every chain contains an element of $\sbs{n}{n/2}$ if $n$ is even, and an element of both $\sbs{n}{\lfloor n/2\rfloor}$ and $\sbs{n}{\lceil n/2\rceil}$ if $n$ is odd. 
Thus, if we succeed in showing that such chain partitions exist, there will be exactly $n\choose \lfloor n/2\rfloor$ chains. For the induction step, we assume that we have partitioned $\ds 2^{[n-1]}$ into such chains, and construct chains for $\ds 2^{[n]}$. First, for each chain $A_{i,0}\subseteq A_{i,1}\subseteq\cdots\subseteq A_{i,m_i}$ we form a new chain $A_{i,0}\subseteq A_{i,1}\subseteq\cdots\subseteq A_{i,m_i}\subseteq A_{i,m_i}\cup \{n\}$. Since $|A_{i,0}|+|A_{i,m_i}|=n-1$, $|A_{i,0}|+|A_{i,m_i}\cup \{n\}|=n$, so this new chain satisfies properties (1) and (2). In addition, if $m_i>0$, we form a new chain $A_{i,0}\cup \{n\}\subseteq A_{i,1}\cup \{n\}\subseteq\cdots\subseteq A_{i,m_i-1}\cup \{n\}$. Now $$\eqalign{ |A_{i,0}\cup \{n\}|+|A_{i,m_i-1}\cup \{n\}|&= |A_{i,0}|+1+|A_{i,m_i-1}|+1\cr &=|A_{i,0}|+1+|A_{i,m_i}|-1 +1\cr &=n-1+1=n\cr }$$ so again properties (1) and (2) are satisfied. Because of the first type of chain, all subsets of $[n-1]$ are contained exactly once in the new set of chains. Also, we have added the element $n$ exactly once to every subset of $[n-1]$, so we have included every subset of $[n]$ containing $n$ exactly once. Thus we have produced the desired partition of $\ds 2^{[n]}$. Now we need to show that the only largest anti-chains are $\sbs{n}{\lfloor n/2\rfloor}$ and $\sbs{n}{\lceil n/2\rceil}$. Suppose that $A_1,A_2,\ldots,A_m$ is an anti-chain; then $A_1^c,A_2^c,\ldots,A_m^c$ is also an anti-chain, where $A^c$ denotes the complement of $A$. Thus, if there is an anti-chain that contains some $A$ with $|A|>\lceil n/2\rceil$, there is also one containing $A^c$, and $|A^c|< \lfloor n/2\rfloor$. Suppose that some anti-chain contains a set $A$ with $|A|< \lfloor n/2\rfloor$. We next prove that this anti-chain cannot be of maximum size. Partition $\ds 2^{[n]}$ as in the first part of the proof.
Suppose that $A$ is a subset of the elements of a one or two element chain $C$, that is, a chain consisting solely of a set $S_1$ of size $n/2$, if $n$ is even, or of sets $S_1$ and $S_2$ of sizes $\lfloor n/2\rfloor$ and $\lceil n/2\rceil$, with $A\subseteq S_1\subseteq S_2$, if $n$ is odd. Then no member of $C$ is in the anti-chain. Thus, the largest possible size for an anti-chain containing $A$ is ${n\choose \lfloor n/2\rfloor}-1$. If $A$ is not a subset of the elements of such a short chain, we now prove that there is another chain partition of $\ds 2^{[n]}$ that does have this property. Note that in the original chain partition there must be a chain of length 1 or 2, $C_1$, consisting of $S_1$ and possibly $S_2$; if not, every chain would contain a set of size $\lfloor n/2\rfloor -1$, but there are not enough such sets to go around. Suppose then that $A=\{x_1,\ldots,x_k\}$ and the set $S_1$ in $C_1$ is $S_1=\{x_1,\ldots,x_q,y_{q+1},\ldots y_l\}$, where $0\le q< k$ and $l>k$. Let $\sigma$ be the permutation of $[n]$ such that $\sigma(x_{q+i})=y_{q+i}$ and $\sigma(y_{q+i})=x_{q+i}$, for $1\le i\le k-q$, and $\sigma$ fixes all other elements. Now for $U\subseteq [n]$, let $\overline U=\sigma(U)$, and note that $U\subseteq V$ if and only if $\overline U\subseteq\overline V$. Thus every chain in the original chain partition maps to a chain. Since $\sigma$ is a bijection, these new chains also form a partition of $\ds 2^{[n]}$, with the additional properties (1) and (2). By the definition of $\sigma$, $A\subseteq\overline S_1$, and $\{\overline S_1,\overline S_2\}$ is a chain, say $\overline C_1$. Thus, this new chain partition has the desired property: $A$ is a subset of every element of the 1 or 2 element chain $\overline C_1$, so $A$ is not in an anti-chain of maximum size. Finally, we need to show that if $n$ is odd, no anti-chain of maximum size contains sets in both $\sbs{n}{\lfloor n/2\rfloor}$ and $\sbs{n}{\lceil n/2\rceil}$. 
Suppose there is such an anti-chain, consisting of sets $A_{k+1},\ldots,A_l$ in $\sbs{n}{\lceil n/2\rceil}$, where $l={n\choose \lceil n/2\rceil}$, and $B_1,\ldots,B_k$ in $\sbs{n}{\lfloor n/2\rfloor}$. The remaining sets in $\sbs{n}{\lceil n/2\rceil}$ are $A_1,\ldots,A_k$, and the remaining sets in $\sbs{n}{\lfloor n/2\rfloor}$ are $B_{k+1},\ldots,B_l$. Each set $B_i$, $1\le i\le k$, is contained in exactly $\lceil n/2\rceil$ sets in $\sbs{n}{\lceil n/2\rceil}$, and all must be among $A_1,\ldots,A_k$. On average, then, each $A_i$, $1\le i\le k$, contains $\lceil n/2\rceil$ sets among $B_1,\ldots,B_k$. But each set $A_i$, $1\le i\le k$, contains exactly $\lceil n/2\rceil$ sets in $\sbs{n}{\lfloor n/2\rfloor}$, and so each must contain exactly $\lceil n/2\rceil$ of the sets $B_1,\ldots,B_k$ and none of the sets $B_{k+1},\ldots,B_l$. Let $A_1=A_{j_1}=\{x_1,\ldots,x_r\}$ and $B_{k+1}=\{x_1,\ldots,x_s,y_{s+1},\ldots,y_{r-1}\}$. Let $B_{i_m}=A_{j_m}\backslash\{x_{s+m}\}$ and $A_{j_{m+1}}=B_{i_m}\cup\{y_{s+m}\}$, for $1\le m\le r-s-1$. Note that by the preceding discussion, $1\le i_m\le k$ and $1\le j_m\le k$. Then $A_{j_{r-s}}=\{x_1,\ldots,x_s,y_{s+1},\ldots,y_{r-1},x_r\}$, so $A_{j_{r-s}}\supseteq B_{k+1}$, a contradiction. Hence there is no such anti-chain. Exercises 1.7 Ex 1.7.1 Sperner's Theorem (1.7.6) tells us that $\sbs{6}{3}$, with size 20, is the unique largest anti-chain for $2^{[6]}$. The next largest anti-chains of the form $\sbs{6}{k}$ are $\sbs{6}{2}$ and $\sbs{6}{4}$, with size 15. Find a maximal anti-chain with size larger than 15 but less than 20. (As usual, maximal here means that the anti-chain cannot be enlarged simply by adding elements. So you may not simply use a subset of $\sbs{6}{3}$.)
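The inductive construction of the chain partition in the proof above can be turned into a short program. Below is a sketch in Python (the function name `symmetric_chains` is mine): each chain of $2^{[n-1]}$ is extended by $A_{m_i}\cup\{n\}$, and, when the chain has more than one set, a second chain $A_{i,0}\cup\{n\}\subseteq\cdots\subseteq A_{i,m_i-1}\cup\{n\}$ is added.

```python
from math import comb

def symmetric_chains(n):
    """Symmetric chain partition of the subsets of {1, ..., n},
    following the induction in the proof above."""
    if n == 0:
        return [[frozenset()]]
    result = []
    for c in symmetric_chains(n - 1):
        # First type of chain: append A_m ∪ {n} to the old chain.
        result.append(c + [c[-1] | {n}])
        # Second type (only if m_i > 0): add n to all but the last set.
        if len(c) > 1:
            result.append([a | {n} for a in c[:-1]])
    return result
```

For $n=4$ this produces $\binom{4}{2}=6$ chains covering all 16 subsets exactly once, with $|A_{i,0}|+|A_{i,m_i}|=4$ for every chain.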
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
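For the particle-motion question above, the sign analysis can be checked numerically (the variable names below are mine): since the leading coefficient of $v$ is positive, the parabola opens upward and $v(t)<0$ exactly between its roots.

```python
import math

# v(t) = 3t^2 - 12t + 9; the particle moves to the left where v(t) < 0.
a, b, c = 3.0, -12.0, 9.0
disc = b * b - 4 * a * c
r1 = (-b - math.sqrt(disc)) / (2 * a)   # smaller root
r2 = (-b + math.sqrt(disc)) / (2 * a)   # larger root
# Since a > 0, v < 0 exactly on the open interval (r1, r2) = (1, 3).
```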
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to \mathbb{R}^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If $f^{-1}(y)$ is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again. O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1). For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.
I have an existing integration based on CSOM, which I need to move from one .NET solution to another. In the process, I was made aware that Microsoft now has a Graph API. I’ve tried to research recommendations about whether to use the Graph API or CSOM, but I haven’t found documentation that I thought was definitive enough on the issue. The use case is fetching calendar events from O365. Should we opt to use the Microsoft Graph API or the CSOM NuGet package? Or is there some other alternative that should be used to do the integration? I don’t want to just use CSOM because we have it, if it is considered a somewhat outdated approach. I am inexperienced when it comes to integrations with SharePoint/O365. My question is in the context of serverless architecture (e.g. AWS Lambda) and how one interacts with the databases in such a system. Typically in a 3-tier architecture, we have a web service which interacts with the database. The idea here is to ensure that one database table is owned by one component, so changes there do not require changes in multiple places, and there is also a clear sense of ownership, which makes scaling and security easier to manage. However, moving to serverless architecture, this ownership is no longer clear, and exposing a web service to access a database and having a Lambda use this web service does not make sense to me. I would like to know a bit about the common patterns and practices around this. Vue.js has some modules that can help me to build my web app using Django. Is there any way to integrate Vue.js with Django (Python)? Scrapy and Django are both standalone Python frameworks, among the best for building crawlers and web applications with little code, though whenever you want to create a spider you always have to generate a new code file and write the same piece of code (though with some variation). I was trying to integrate both.
But I am stuck at a place where I need to send the status 200 OK saying that the spider ran successfully, while at the same time the spider keeps running and, when it finishes, saves its data to the database. Though I know such an API is already available with scrapyd, I wanted to make it more versatile, letting you create a crawler without writing multiple files. I thought the CrawlerRunner (https://docs.scrapy.org/en/latest/topics/practices.html) would help with this, so I also tried the approach from "Easiest way to run scrapy crawler so it doesn't block the script", but it gives me the error builtins.ValueError: signal only works in main thread, even though I get the response back from the REST framework. The crawler failed to run due to this error; does that mean I need to switch to the main thread? I am doing this with a simple piece of code:

spider = GeneralSpider(pk)
runner = CrawlerRunner()
d = runner.crawl(GeneralSpider, pk)
d.addBoth(lambda _: reactor.stop())
reactor.run()

Hi, below is the code I have adapted from https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/03.a.host-with-react

<!DOCTYPE html>
<html lang="en-US">
  <head>
    <title>Web Chat: Integrate with React</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <!-- For simplicity and code clarity, we are using Babel and React from unpkg.com. -->
    <script src="https://unpkg.com/babel-standalone@6/babel.min.js"></script>
    <script src="https://unpkg.com/react@16.5.0/umd/react.development.js"></script>
    <script src="https://unpkg.com/react-dom@16.5.0/umd/react-dom.development.js"></script>
    <!-- For demonstration purposes, we are using the development branch of Web Chat at "/master/webchat.js". When you are using Web Chat for production, you should use the latest stable release at "/latest/webchat.js", or lock down on a specific version with the following format: "/4.1.0/webchat.js".
-->
    <script src="https://cdn.botframework.com/botframework-webchat/master/webchat.js"></script>
    <style>
      html, body { height: 100% }
      body { margin: 0 }
      #webchat { height: 100%; width: 100%; }
    </style>
  </head>
  <body>
    <div id="webchat" role="main"></div>
    <script type="text/babel">
      (async function () {
        // In this demo, we are using Direct Line token from MockBot.
        // To talk to your bot, you should use the token exchanged using your Direct Line secret.
        // You should never put the Direct Line secret in the browser or client app.
        const headers = {"Authorization", "Bearer <'secret key'>"}
        const res = await fetch('https://directline.botframework.com/v3/directline/tokens/generate', { method: 'POST' }, {Headers:headers});
        const { token } = await res.json();
        const { ReactWebChat } = window.WebChat;
        window.ReactDOM.render(
          <ReactWebChat directLine={ window.WebChat.createDirectLine({ token }) } />,
          document.getElementById('webchat')
        );
        document.querySelector('#webchat > *').focus();
      })().catch(err => console.error(err));
    </script>
  </body>
</html>

Below is the error I am getting. I am planning to create a custom UI bot using ReactJS and have it on the home page (NOT modern) of our SharePoint site. After some research, I found out we can use it with an SPFx web part, and possibly create it using a Visual Studio Node.js React app. The functionality of the bot should be something similar to the Verizon bot shown in the images: when you click on the button, the bot pops up. My question is which of the two is the better or more feasible approach to achieve something similar to what is in the images. I have a little side-project where I’m faced with the task of integrating a cloud solution in Azure with a database running locally on site. The database in question runs SQL Server 2008 and collects data in real time from several cash desks on the site. The machine running the server has internet access, but the network is not configured for accessing the server from the internet.
I have complete access to the database locally (at least with SSMS). What is the best way to sync the local data to the cloud? Mirroring? Some other way? Azure Data Sync? Since I’m designing a close to real-time system, changes in the local database need to be reflected as fast as possible. I’m kind of a newbie when it comes to database management, so any nudge in the right direction is much appreciated. I’ve been able to integrate Duo so that I get a push request on my iPhone when logging into the WebUI; however, this is not quite what I was looking for. When I actually request access to the VPN via the application (I use macOS) using my autologin profile (or any profile) I want it to send a push request for authentication. Is this possible? $x = 1 + 2y^2$, $1 \leq y \leq 2$. I sketched out a graph and plotted some points to get a rough picture of it. So first things first, I think I need to find the x bounds, so I plug in $y=1$ and $y=2$ to find $x$: $x = 1 + 2(1)^2 = \textbf{3}$ and $x = 1 + 2(2)^2 = \textbf{5}$. Here is where I get confused setting up the integral. I know that the surface area formula is $$S = \int 2\pi r\, ds$$ where $2\pi r$ represents the circumference of the revolution around the x-axis and $ds$ is the “thickness” of the same circle. So I reason the formula to be set up like this: $$S = 2\pi \int_{3}^{5} x \frac{dx}{dy} = 2\pi \int_{3}^{5} (1 + 2y^2)\sqrt{1 + (4y)^2}\,dy.$$ Is this correct? If so, this looks kind of difficult to integrate. Would integration by parts be my best choice here? I have a requirement to find the most frequently fired rules in the rule engine and place them first in the rule hierarchy for a product configurator, using the features of machine learning. I have not found any information regarding this, and would like to know if someone has any idea or has tried something similar.
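For the surface-of-revolution question above, one way to sanity-check a symbolic setup is a direct numeric sum. Note that for rotation about the x-axis the radius of each circle is $y$, not $x$, so the sketch below (the function name is mine) sums $2\pi y\sqrt{1+(4y)^2}\,dy$ over $1\le y\le 2$ with the midpoint rule:

```python
import math

# Rotating x = 1 + 2y^2, 1 <= y <= 2, about the x-axis:
# S = ∫ 2πy √(1 + (dx/dy)^2) dy with dx/dy = 4y.
def surface_area(n=100000):
    total, dy = 0.0, 1.0 / n
    for i in range(n):
        y = 1.0 + (i + 0.5) * dy          # midpoint rule on [1, 2]
        total += 2 * math.pi * y * math.sqrt(1 + (4 * y) ** 2) * dy
    return total
```

The substitution $u=1+16y^2$ gives the closed form $\frac{\pi}{24}\left(65^{3/2}-17^{3/2}\right)\approx 59.42$, which the numeric sum matches.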
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ whose $x$-th bit is set to 1, in which $y$ bits are set to 1 including the $x$-th bit, and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the '$\overline F$ is algebraic over $F$' condition from the definition, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme in the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length.
Is that often part of the definition, or obtained from the definition (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to the above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book? Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable. Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds,$ for all $n \ge 1.$ Show that $\lim_{n\to\infty} n!\,g_n(t) = 0,$ for all $t \in [0,\frac{1}{2}]$. Can you give some hint? My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
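For the iterated-integral question above, here is a numeric illustration (not a proof) in the special case $g\equiv 1$, where $g_n(t)=t^{n-1}/(n-1)!$ in closed form, so $n!\,g_n(t)=n\,t^{n-1}$, which visibly tends to $0$ on $[0,\tfrac12]$:

```python
from math import factorial

# Special case g(s) = 1: g_n(t) = t**(n-1) / (n-1)!, so
# n! * g_n(t) = n * t**(n-1), which tends to 0 for every t in [0, 1/2].
def n_fact_g_n(n, t):
    return factorial(n) * t ** (n - 1) / factorial(n - 1)
```

The general proof uses the same decay: $|g_n(t)|\le \|g\|_\infty\, t^{n-1}/(n-1)!$ by induction, so $n!\,|g_n(t)|\le \|g\|_\infty\, n\, t^{n-1}\to 0$.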
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of $n$ independent functions of the proper function space, and I obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients equal to zero), I get a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve the equations for the coefficients, I look at the secular determinant, which must be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have problems formulating the question, but it strikes me that a direct solution of the equations can be circumvented, and the values of the functional are instead obtained directly by using the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome with digitsum(z)=digitsum(x). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere-continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (though no formula is explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!
You’ve done some nice work in finding the Cartesian equation for the parabola, but I think it might be easier overall to work with the parametric form $\mathbf r:t\mapsto(x(t),y(t))$. (I’ve omitted most of the tedious details of the algebraic manipulations in the following.) The first order of business is to find the angle of the parabola’s axis of symmetry. One way to do this is to use the fact that for any two points on a parabola, the line defined by their midpoint and the intersection of the tangents at the two points is parallel to this axis. Taking $t=\pm1$ is reasonably convenient. The midpoint is simply $(\mathbf r(1)+\mathbf r(-1))/2$. For the intersection of the tangents, we have $\mathbf r(1)+s\mathbf r'(1)=\mathbf r(-1)+t\mathbf r'(-1)$, which expands into the system $$\begin{align}s(2a+b)+a+b+c &= t(-2a+b)+a-b+c \\s(2a'+b')+a'+b'+c'&=t(-2a'+b')+a'-b'+c'\end{align}$$ with solution $s=-1$, $t=1$. The direction vector for the parabola’s axis is thus $$\begin{align}\frac12(\mathbf r(1)+\mathbf r(-1))-(\mathbf r(-1)+\mathbf r'(-1)) &= \frac12(\mathbf r(1)-\mathbf r(-1))-\mathbf r'(-1) \\&=(b,b')-(b-2a,b'-2a') \\ &=(2a,2a'),\end{align}$$ so we can take $\mathbf n=(a,a')$ as the direction of the axis. Next, we find the parabola’s vertex. The tangent to the parabola at the vertex is orthogonal to its axis, which gives rise to the equation $$\mathbf n\cdot\mathbf r'(t)=(a,a')\cdot(2at+b,2a't+b')=a(2at+b)+a'(2a't+b')=0.$$ Solving for $t$ we get $$t_c=-\frac12{ab+a'b'\over a^2+a'^2}\tag{1}.$$ You can verify that the function $\mathbf r$ is symmetric with respect to this point in the sense that the chord defined by the points $\mathbf r(t_c\pm\Delta t)$ is orthogonal to the parabola’s axis, so the two points are equidistant from the vertex. The tangents at the ends of the latus rectum meet the axis at a 45° angle, which means that they are orthogonal to each other.
This fact leads to the equation $$\mathbf r'(t_c+\Delta t)\cdot\mathbf r'(t_c-\Delta t)=(2a(t_c+\Delta t)+b)(2a(t_c-\Delta t)+b)+(2a'(t_c+\Delta t)+b')(2a'(t_c-\Delta t)+b')=0$$ for the ends of the latus rectum, which has the solutions $$\Delta t=\pm\frac12{ab'-a'b\over a^2+a'^2}\tag{2}.$$ Because of the symmetry noted previously, we know that both of these solutions represent the same pair of points. Finally, we compute the length of the latus rectum: $$\begin{align}\|\mathbf r(t_c+\Delta t)-\mathbf r(t_c-\Delta t)\|^2 &= (\mathbf r(t_c+\Delta t)-\mathbf r(t_c-\Delta t))\cdot(\mathbf r(t_c+\Delta t)-\mathbf r(t_c-\Delta t)) \\&= \left({a'(ab'-a'b)^2\over(a^2+a'^2)^2}\right)^2+\left({a(ab'-a'b)^2\over(a^2+a'^2)^2}\right)^2 \\&={(ab'-a'b)^4\over(a^2+a'^2)^3},\end{align}$$ so the length of the latus rectum is $${(ab'-a'b)^2\over(a^2+a'^2)^{3/2}}.\tag{3}$$ With this distance in hand, you can now easily find the parabola’s focus and directrix, if needed. Incidentally, this is another path to a Cartesian equation for this parabola. If you have the directrix given by an equation in the form $\mathbf n\cdot\mathbf x=d$ and the focus $\mathbf f$, then an equation of the parabola is $$\left({d-\mathbf n\cdot\mathbf x\over\|\mathbf n\|}\right)^2=(\mathbf x-\mathbf f)\cdot(\mathbf x-\mathbf f).$$ To continue working with the Cartesian equation instead, I’d take a slightly different approach than in the cited paper. First look at the equation of a parabola with axis parallel to the $y$-axis, $y=ax^2+bx+c$. For such a parabola, the length of the latus rectum is simply $|1/a|$. For the general parabola $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$, we take Erick Wong’s suggestion to rotate so as to eliminate the quadratic terms involving $y$. The equation will then be of the form $A'x'^2+D'x'+E'y'+F=0$ (note that the constant term is unchanged by a rotation), with latus rectum length $|E'/A'|$. Invariance of the trace tells us that $A'=A+C$, but finding $E'$ will take a bit more work.
For a parabola, $B^2=4AC$, so we can rewrite the general equation as $(\alpha x+\beta y)^2+Dx+Ey+F=0$, first multiplying through by $-1$ if necessary to make $A$ and $C$ positive. In matrix form this is $\begin{bmatrix}x&y&1\end{bmatrix}M\begin{bmatrix}x&y&1\end{bmatrix}^T=0$, with $$M=\begin{bmatrix}\alpha^2&\alpha\beta&D/2\\\alpha\beta&\beta^2&E/2\\D/2&E/2&F\end{bmatrix}.$$ The direction of the parabola’s axis is given by an eigenvector of $0$ (i.e., an element of the kernel) of the quadratic part of this matrix, one of which is $\begin{bmatrix}-\beta&\alpha\end{bmatrix}^T$. We want to rotate so as to bring this vector parallel to the $y$-axis. The appropriate rotation is $$R=\begin{bmatrix}{\alpha\over\sqrt{\alpha^2+\beta^2}}&-{\beta\over\sqrt{\alpha^2+\beta^2}}&0\\{\beta\over\sqrt{\alpha^2+\beta^2}}&{\alpha\over\sqrt{\alpha^2+\beta^2}}&0\\0&0&1\end{bmatrix}$$ and $$R^TMR = \begin{bmatrix}\alpha^2+\beta^2&0&\frac12{\alpha D+\beta E\over\sqrt{\alpha^2+\beta^2}} \\0&0&\frac12{\alpha E-\beta D\over\sqrt{\alpha^2+\beta^2}} \\\frac12{\alpha D+\beta E\over\sqrt{\alpha^2+\beta^2}} & \frac12{\alpha E-\beta D\over\sqrt{\alpha^2+\beta^2}} & F\end{bmatrix}.$$ Comparing this to the result in the first paragraph above, we find that the latus rectum length for the general equation is $${|\alpha E-\beta D|\over(\alpha^2+\beta^2)^{3/2}} = {|E\sqrt A\mp D\sqrt C|\over(A+C)^{3/2}}.$$ Choose the sign opposite to that of $B$. Many of the terms in the Cartesian equation you derived are constant, so pulling the coefficients of $x$ and $y$ out of it to construct $D$ and $E$ doesn’t look too bad.
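A small numeric check of formula (3) above (the helper name is mine; the constant terms $c,c'$ do not enter the formula): for $y=x^2$ parametrized as $\mathbf r(t)=(t,t^2)$ the latus rectum is $1$, and the value is unchanged under a 45° rotation of the parametrization, as it should be.

```python
import math

# Formula (3): latus rectum length for r(t) = (a t^2 + b t + c, a' t^2 + b' t + c').
def latus_rectum(a, b, ap, bp):
    return (a * bp - ap * b) ** 2 / (a * a + ap * ap) ** 1.5

s = 1 / math.sqrt(2)
unrotated = latus_rectum(0, 1, 1, 0)    # y = x^2 as (t, t^2)
rotated = latus_rectum(-s, s, s, s)     # the same parabola, parametrization rotated 45°
```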
If the degree $d=1$, the claim fails for $K_2$, and every other $1$-regular graph is disconnected. Now assume $d \geq 2$. If there is a $d$-regular bipartite graph with $\geq 2$ biconnected components, then there is a bridge $e$. If we delete $e$, then we separate the graph into two disjoint components. Take one of these components and call it $G$. We know $G$ is a bipartite graph, so call the distinct parts $V_1$ and $V_2$, and assume the edge $e$ had an endpoint in $V_2$. In $G$: The number of edges coming out of $V_1$ into $V_2$ is $dv_1$, where $v_1=|V_1|$. The number of edges going into $V_2$ from $V_1$ is $d(v_2-1)+(d-1)$, where $v_2=|V_2|$, since one vertex in $V_2$ was an endpoint of $e$ in the original graph, but $e$ has now been deleted. Hence $dv_1=d(v_2-1)+(d-1)$, implying $0 \equiv -1 \pmod d$, giving a contradiction (since $d \geq 2$). Without the bipartite condition, we can find a non-biconnected, non-bipartite, connected regular graph, such as the following:
What does it mean when the price elasticity of demand %Qd/%P is greater than one? Typically I hear that it means the demand is elastic, since if, say, the price decreases by 1% the demand for the good increases by more than 1%. But what happens if %P is +1% and %Qd still changes by more? Sure, this is elastic, but does it give us any more information? I can't see why %Qd/%P > 1 makes sense in this case. Apart from the purely descriptive aspect, "elastic demand", or more accurately, regions of the demand schedule where demand elasticity with respect to price is higher than unity in absolute terms, is linked to basic monopoly theory, since the monopolist maximizes profits at a point of the demand schedule where "demand is elastic". Define the demand point-elasticity with respect to price as $$\eta = \frac {\partial Q }{ \partial P}\cdot \frac {P}{Q} \Rightarrow \frac {\partial Q }{ \partial P} = \eta \cdot \frac {Q}{P} \tag{1}$$ Note that algebraically, the elasticity is a negative number, with the sign indicating the direction of influence, since $\partial Q / \partial P <0$.
The profit function of a monopolist is $$\pi = P\cdot Q(P) - C(Q(P)) \tag{2}$$ The first-order condition for a maximum with respect to price is $$\frac {\partial \pi}{\partial P} = 0 \Rightarrow Q + P\frac {\partial Q }{ \partial P} - MC\cdot \frac {\partial Q }{ \partial P} = 0 \tag{3}$$ Inserting $(1)$ into $(3)$ we have $$Q + P\cdot \eta \cdot \frac {Q}{P} - MC\cdot \eta \cdot \frac {Q}{P} = 0$$ $$\Rightarrow 1 + \eta - \eta \cdot \frac {MC}{P} =0$$ $$\Rightarrow - \eta \cdot \frac {MC}{P} = -\eta -1 $$ $$\Rightarrow |\eta| \cdot \frac {MC}{P} = |\eta| -1$$ $$\Rightarrow P^* = \frac {|\eta|}{|\eta|-1} MC \tag{4}$$ $(4)$ is essentially an implicit relation, since $\eta$ is a function of price also, but it provides a specific insight: since we naturally expect the price to be positive, we see that we must have $|\eta| >1$: the price will necessarily be set at a level where "demand is elastic", i.e. at a point on the demand schedule where the point price elasticity of demand is higher than unity in absolute terms. Yes, if elasticity is greater than one we say that demand is elastic. This means that the percentage change in quantity demanded is greater (in magnitude) than the percentage change in price. More generally, if elasticity is e then the percentage change in quantity demanded is e times the percentage change in price. For example, if e=0.5, the percentage change in quantity demanded is half the percentage change in price. If it's more than one, it means that the variation in the quantity sold is greater than the variation in the price: a change in price causes a proportionally larger change in quantity demanded. So if the price goes down, consumers will increase their purchases more than proportionally: demand for that good is elastic with respect to price. If it's exactly 1, demand is unit elastic. If it's less than one, demand is inelastic.
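The markup rule (4) can be sketched in a few lines (the function name is mine); note that it is only well defined where $|\eta|>1$, which is exactly the point of the derivation above.

```python
# P* = |eta| / (|eta| - 1) * MC, equation (4) above.
def monopoly_price(mc, eta_abs):
    if eta_abs <= 1:
        raise ValueError("no finite optimum: demand must be elastic (|eta| > 1)")
    return eta_abs / (eta_abs - 1) * mc
```

For $|\eta|=2$ and $MC=10$ this gives $P^*=20$; as $|\eta|\to\infty$ the markup vanishes and the price tends to marginal cost, the competitive benchmark.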
Answer $-0.25$ Work Step by Step The denominator can be scaled up to 100, and the fraction is then easily written as a decimal number: $-\displaystyle \frac{1}{4}=-\frac{1\times 25}{4\times 25}=-\frac{25}{100}=-0.25$
Contents Ray-Tracing a Polygon Mesh As we suggested before, ray-tracing a polygon mesh which has been triangulated is really simple. We already have a routine to compute the intersection of rays and triangles. Therefore, to test if a ray intersects a polygon mesh, all we need to do is loop over all the triangles in the mesh and test each individual triangle against the ray. However, a ray may intersect more than one triangle from the mesh; therefore we also need to keep track of the nearest intersection distance as we iterate over the triangles. This problem is similar to the one described in the lesson A Minimal Ray-Tracer, where we had to keep track of the nearest intersected object. We will use the same technique here. In pseudo code the intersection routine looks like this (check the last chapter for the C++ implementation): We need the three vertices making up each triangle in the mesh (lines 7-9). We then pass these three points to the ray-triangle intersection routine, which returns true if the ray intersects the triangle and false otherwise (we will use the Möller-Trumbore ray-triangle algorithm studied in the previous lesson). In case of an intersection, the variable \(tNear\) is set to the intersection distance (the distance from the ray origin to the intersection point on the triangle) and the barycentric coordinates of the hit point (u & v). We also keep track of the index of the triangle that the ray intersected, as well as the barycentric coordinates of the intersected point on the triangle (lines 13-15). This will be needed later on to compute the normal and the texture coordinates at the hit point. Creating a Poly Sphere The number of divisions represents the number of stacks and slices along and around the y axis. For example, the sphere in figure 1 uses five divisions. We start by creating all the vertices. The total number of points in this example is 22 (2 for the top and bottom of the sphere plus 5 times 4 rings of points).
We create the positions of these vertices using trigonometric functions (lines 20-22). To complete the mesh description, we need to provide some information about the face connectivity. The faces at the top and bottom of the sphere are triangles (the top and bottom caps make what we call a triangle fan: all the triangles share one central vertex). They are defined by three vertices (lines 42-44 and 49-51). The others are quads (line xx). Finally, the faces are created by connecting vertices in a precise order. In figure 1 we are showing the vertex indexes for the sixth face in the sphere (lines 56-59). Note that in the code we also generate vertex normals and texture coordinates. Normals are simple to compute: since the sphere is centered around the world origin, the normal at a vertex is simply the normalized vertex position. For the texture coordinates, all we need to do is normalise the parameters u (the horizontal angle \(\phi\)) and v (the vertical angle \(\theta\)), which lie in the ranges \([-\pi,\pi]\) and \([-\pi/2,\pi/2]\) respectively (lines 24-25). The intersect() method implements the Möller-Trumbore algorithm for the ray-triangle intersection test. The images in figure 2 were obtained by increasing the number of divisions for the polygon sphere. The complete source code for ray-tracing a poly sphere can be found in the Source Code chapter of this lesson. Performances If we regularly increase the number of faces making up the sphere and measure the time it takes to render a frame, we can see in figure 3 that the render time increases linearly with the number of triangles in the scene. We can already tell from the numbers we get for a simple scene that ray-tracing is "slow" (and processors today are incredibly faster than in the early days of ray-tracing).
A scene containing approximately 200 polygons takes around 2.5 seconds to render on a computer equipped with a 2.5 GHz processor. Can this problem be solved? Ray-tracing will always be slower than rasterization, but things can be improved quite significantly with the help of acceleration structures. The problem with ray-tracing is that each ray needs to be tested against every single triangle in the scene. No matter how small the objects in the scene are (see the next chapter), the time it takes to produce a frame depends only on how many triangles the scene contains. The time it takes to render a pixel is constant, whether that pixel is in the top-left corner of the frame or right in the middle of it. The idea behind acceleration structures is simple. For example, we could first test if a ray intersects the object's bounding box. If it doesn't, then we know for sure that the ray can't hit the object. If it does, we then test if the ray intersects any of the mesh triangles (figure 4). This very simple test can already save a lot of time: for example, the pixels in the corners of the frame are unlikely to intersect the mesh's bounding box. Acceleration structures are more complex than bounding boxes but are based on the same idea. They divide the space occupied by the objects into simple volumes which are fast to ray-trace. If a ray intersects one of these sub-volumes, we go deeper into the structure and test the ray against smaller sub-volumes until we eventually reach a sub-volume that contains some of the mesh's original triangles. At that point, we test the ray against the triangles contained in that sub-volume. Lessons on acceleration structures can be found in the Advanced Ray-Tracing section. Conclusion In this lesson we have given some information about the polygon geometry representation and shown how this geometry can be rendered with a ray tracer.
Polygons (if they are convex) can be converted to triangles, and an efficient algorithm (the Möller-Trumbore method for instance) can then be used to compute the intersection of rays with these triangles. Converting all the different geometry representations into triangles is easier than supporting a fast and robust ray-geometry intersection method for each one of these representations: it only requires supporting a ray-triangle intersection routine, which can be highly optimized. Finally, in this chapter we showed one of the most important properties of the ray-tracing algorithm: the computation time is linearly dependent on the number of objects or triangles in the scene. Despite the progress made with processors, a naive implementation of ray tracing quickly becomes impractical for rendering reasonably complex scenes. Fortunately, the cost of testing each object in the scene against each ray can be greatly reduced with acceleration techniques. What's Next? In the next chapter, we will render the image we produced in the lesson on rasterization, but using ray-tracing. We will load the object geometry from disk, pass the data to the TriangleMesh constructor to generate a triangle mesh, loop over all the pixels in the image, generate primary rays, and test each ray against each triangle in the mesh.
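The bounding-box early-out described in the Performances section is usually implemented with the classic "slab" test: a ray misses an axis-aligned box exactly when the parameter intervals in which it lies between each pair of parallel slabs have no common overlap. A minimal Python sketch (the lesson's code is C++; the function name and the precomputed-reciprocal convention are assumptions):

```python
def ray_intersects_aabb(orig, inv_dir, bmin, bmax):
    """Slab test for a ray against an axis-aligned bounding box.
    inv_dir holds the reciprocals of the ray direction components
    (precomputed once per ray; use +/-inf for zero components)."""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        t1 = (bmin[i] - orig[i]) * inv_dir[i]
        t2 = (bmax[i] - orig[i]) * inv_dir[i]
        if t1 > t2:
            t1, t2 = t2, t1          # order the slab interval
        tmin = max(tmin, t1)         # latest entry over all slabs
        tmax = min(tmax, t2)         # earliest exit over all slabs
    return tmin <= tmax              # overlapping interval -> hit

# ray shooting toward the unit box hits; shooting away misses
inf = float("inf")
print(ray_intersects_aabb((0.0, 0.0, -5.0), (inf, inf, 1.0),
                          (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0)))   # True
print(ray_intersects_aabb((0.0, 0.0, -5.0), (inf, inf, -1.0),
                          (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0)))   # False
```

Only when this cheap test succeeds does the per-triangle loop need to run, which is exactly the saving described above for rays through the corners of the frame.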
The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV are reported in the pseudorapidity range $|\eta| < 0.8$ ...
We have defined and used the concept of limit, primarily in our development of the derivative. Recall that $\ds \lim_{x\to a}f(x)=L$ is true if, in a precise sense, $f(x)$ gets closer and closer to $L$ as $x$ gets closer and closer to $a$. While some limits are easy to see, others take some ingenuity; in particular, the limits that define derivatives are always difficult on their face, since in $$\lim_{\Delta x\to 0} {f(x+\Delta x)-f(x)\over \Delta x}$$ both the numerator and denominator approach zero. Typically this difficulty can be resolved when $f$ is a "nice'' function and we are trying to compute a derivative. Occasionally such limits are interesting for other reasons, and the limit of a fraction in which both numerator and denominator approach zero can be difficult to analyze. Now that we have the derivative available, there is another technique that can sometimes be helpful in such circumstances. Before we introduce the technique, we will also expand our concept of limit, in two ways. When the limit of $f(x)$ as $x$ approaches $a$ does not exist, it may be useful to note in what way it does not exist. We have already talked about one such case: one-sided limits. Another case is when "$f$ goes to infinity''. We also will occasionally want to know what happens to $f$ when $x$ "goes to infinity''. Example 4.7.1 What happens to $1/x$ as $x$ goes to 0? From the right, $1/x$ gets bigger and bigger, or goes to infinity. From the left it goes to negative infinity. Example 4.7.2 What happens to the function $\ds \cos(1/x)$ as $x$ goes to infinity? It seems clear that as $x$ gets larger and larger, $1/x$ gets closer and closer to zero, so $\cos(1/x)$ should be getting closer and closer to $\cos(0)=1$. As with ordinary limits, these concepts can be made precise. 
Roughly, we want $\ds \lim_{x\to a}f(x)=\infty$ to mean that we can make $f(x)$ arbitrarily large by making $x$ close enough to $a$, and $\ds \lim_{x\to \infty}f(x)=L$ should mean we can make $f(x)$ as close as we want to $L$ by making $x$ large enough. Compare this definition to the definition of limit in section 2.3, definition 2.3.2. Definition 4.7.3 If $f$ is a function, we say that $\ds \lim_{x\to a}f(x)=\infty$ if for every $N>0$ there is a $\delta>0$ such that whenever $|x-a|< \delta$, $f(x)>N$. We can extend this in the obvious ways to define $\ds \lim_{x\to a}f(x)=-\infty$, $\ds \lim_{x\to a^-}f(x)=\pm\infty$, and $\ds \lim_{x\to a^+}f(x)=\pm\infty$. Definition 4.7.4 (Limit at infinity) If $f$ is a function, we say that $\ds \lim_{x\to \infty}f(x)=L$ if for every $\epsilon>0$ there is an $N > 0$ so that whenever $x>N$, $|f(x)-L|< \epsilon$. We may similarly define $\ds \lim_{x\to-\infty}f(x)=L$, and using the idea of the previous definition, we may define $\ds \lim_{x\to\pm\infty}f(x)=\pm\infty$. We include these definitions for completeness, but we will not explore them in detail. Suffice it to say that such limits behave in much the same way that ordinary limits do; in particular there are some analogs of theorem 2.3.6. Now consider this limit: $$\lim_{x\to \pi}{x^2-\pi^2\over \sin x}.$$ As $x$ approaches $\pi$, both the numerator and denominator approach zero, so it is not obvious what, if anything, the quotient approaches. We can often compute such limits by application of the following theorem. Theorem 4.7.5 (L'Hôpital's Rule) For "sufficiently nice'' functions $f(x)$ and $g(x)$, if $\ds\lim_{x\to a} f(x)= 0 = \lim_{x\to a} g(x)$ or both $\ds\lim_{x\to a} f(x)= \pm\infty$ and $\lim_{x\to a} g(x)=\pm\infty$, and if $\ds\lim_{x\to a}{f'(x)\over g'(x)}$ exists, then $\ds\lim_{x\to a}{f(x)\over g(x)}=\lim_{x\to a}{f'(x)\over g'(x)}$. This remains true if "$x\to a$'' is replaced by "$x\to \infty$'' or "$x\to -\infty$''. 
This theorem is somewhat difficult to prove, in part because it incorporates so many different possibilities, so we will not prove it here. We also will not need to worry about the precise definition of "sufficiently nice'', as the functions we encounter will be suitable. Example 4.7.6 Compute $\ds\lim_{x\to \pi}{x^2-\pi^2\over \sin x}$ in two ways. First we use L'Hôpital's Rule: Since the numerator and denominator both approach zero, $$\lim_{x\to \pi}{x^2-\pi^2\over \sin x}= \lim_{x\to \pi}{2x \over \cos x},$$ provided the latter exists. But in fact this is an easy limit, since the denominator now approaches $-1$, so $$\lim_{x\to \pi}{x^2-\pi^2\over \sin x}={2\pi\over -1} = -2\pi.$$ We don't really need L'Hôpital's Rule to do this limit. Rewrite it as $$\lim_{x\to \pi}(x+\pi){x-\pi\over \sin x}$$ and note that $$\lim_{x\to \pi}{x-\pi\over \sin x}= \lim_{x\to \pi}{x-\pi\over -\sin (x-\pi)}= \lim_{x\to 0}-{x\over \sin x}$$ since $x-\pi$ approaches zero as $x$ approaches $\pi$. Now $$\lim_{x\to \pi}(x+\pi){x-\pi\over \sin x}= \lim_{x\to \pi}(x+\pi)\lim_{x\to 0}-{x\over \sin x}= 2\pi(-1)=-2\pi$$ as before. Example 4.7.7 Compute $\ds\lim_{x\to \infty}{2x^2-3x+7\over x^2+47x+1}$ in two ways. As $x$ goes to infinity both the numerator and denominator go to infinity, so we may apply L'Hôpital's Rule: $$\lim_{x\to \infty}{2x^2-3x+7\over x^2+47x+1}= \lim_{x\to \infty}{4x-3\over 2x+47}.$$ In the second quotient, it is still the case that the numerator and denominator both go to infinity, so we are allowed to use L'Hôpital's Rule again: $$\lim_{x\to \infty}{4x-3\over 2x+47}=\lim_{x\to \infty}{4\over 2}=2.$$ So the original limit is 2 as well. 
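Both worked limits can be sanity-checked numerically. This is a small illustrative Python sketch (not part of the text): a crude two-sided sample near the limit point for the first, and evaluation at a large argument for the second.

```python
import math

def approx_limit(f, a, h=1e-6):
    # crude two-sided numeric estimate of lim_{x -> a} f(x)
    return 0.5 * (f(a + h) + f(a - h))

# lim_{x -> pi} (x^2 - pi^2)/sin x  =  -2*pi
f = lambda x: (x * x - math.pi ** 2) / math.sin(x)
print(approx_limit(f, math.pi))   # close to -2*pi = -6.2832...

# lim_{x -> inf} (2x^2 - 3x + 7)/(x^2 + 47x + 1)  =  2
g = lambda x: (2 * x * x - 3 * x + 7) / (x * x + 47 * x + 1)
print(g(1e8))                     # close to 2
```

Numeric sampling is no substitute for the argument above, but it is a quick way to catch an algebra slip when applying the rule.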
Again, we don't really need L'Hôpital's Rule, and in fact a more elementary approach is easier—we divide the numerator and denominator by $\ds x^2$: $$\lim_{x\to \infty}{2x^2-3x+7\over x^2+47x+1}= \lim_{x\to \infty}{2x^2-3x+7\over x^2+47x+1}{{1\over x^2}\over {1\over x^2}}= \lim_{x\to \infty}{2-{3\over x}+{7\over x^2}\over 1+{47\over x}+{1\over x^2}}.$$ Now as $x$ approaches infinity, all the quotients with some power of $x$ in the denominator approach zero, leaving 2 in the numerator and 1 in the denominator, so the limit again is 2. Example 4.7.8 Compute $\ds\lim_{x\to 0}{\sec x - 1\over \sin x}$. Both the numerator and denominator approach zero, so applying L'Hôpital's Rule: $$\lim_{x\to 0}{\sec x - 1\over \sin x}= \lim_{x\to 0}{\sec x\tan x\over \cos x}={1\cdot 0\over 1}=0.$$ Exercises 4.7 Compute the limits. Ex 4.7.1$\ds\lim_{x\to 0} {\cos x -1\over \sin x}$(answer) Ex 4.7.2$\ds\lim_{x\to \infty} \sqrt{x^2+x}-\sqrt{x^2-x}$(answer) Ex 4.7.3$\ds\lim_{x\to0}{\sqrt{9+x}-3\over x}$(answer) Ex 4.7.4$\ds\lim_{t\to1^+}{(1/t)-1\over t^2-2t+1}$(answer) Ex 4.7.5$\ds\lim_{x\to2}{2-\sqrt{x+2}\over 4-x^2}$(answer) Ex 4.7.6$\ds\lim_{t\to\infty}{t+5-2/t-1/t^3\over 3t+12-1/t^2}$(answer) Ex 4.7.7$\ds\lim_{y\to\infty}{\sqrt{y+1}+\sqrt{y-1}\over y}$(answer) Ex 4.7.8$\ds\lim_{x\to1}{\sqrt{x}-1\over \root 3\of{x}-1}$(answer) Ex 4.7.9$\ds\lim_{x\to0}{(1-x)^{1/4}-1\over x}$(answer) Ex 4.7.10$\ds\lim_{t\to 0}{\left(t+{1\over t}\right)((4-t)^{3/2}-8)}$(answer) Ex 4.7.11$\ds\lim_{t\to 0^+}\left({1\over t}+{1\over\sqrt{t}}\right)(\sqrt{t+1}-1)$(answer) Ex 4.7.12$\ds\lim_{x\to 0}{x^2\over\sqrt{2x+1}-1}$(answer) Ex 4.7.13$\ds\lim_{u\to 1}{(u-1)^3\over (1/u)-u^2+3u-3}$(answer) Ex 4.7.14$\ds\lim_{x\to 0}{2+(1/x)\over 3-(2/x)}$(answer) Ex 4.7.15$\ds\lim_{x\to 0^+}{1+5/\sqrt{x}\over 2+1/\sqrt{x}}$(answer) Ex 4.7.16$\ds\lim_{x\to 0^+}{3+x^{-1/2}+x^{-1}\over 2+4x^{-1/2}}$(answer) Ex 4.7.17$\ds\lim_{x\to\infty}{x+x^{1/2}+x^{1/3}\over x^{2/3}+x^{1/4}}$(answer) Ex 
4.7.18$\ds\lim_{t\to\infty}{1-\sqrt{t\over t+1}\over 2-\sqrt{4t+1\over t+2}}$(answer) Ex 4.7.19$\ds\lim_{t\to\infty}{1-{t\over t-1}\over 1-\sqrt{t\over t-1}}$(answer) Ex 4.7.20$\ds\lim_{x\to-\infty}{x+x^{-1}\over 1+\sqrt{1-x}}$(answer) Ex 4.7.21$\ds\lim_{x\to\pi/2}{\cos x\over (\pi/2)-x}$(answer) Ex 4.7.22$\ds\lim_{x\to1}{x^{1/4}-1\over x}$(answer) Ex 4.7.23$\ds\lim_{x\to1^+}{\sqrt{x}\over x-1}$(answer) Ex 4.7.24$\ds\lim_{x\to1}{\sqrt{x}-1\over x-1}$(answer) Ex 4.7.25$\ds\lim_{x\to\infty}{x^{-1}+x^{-1/2}\over x+x^{-1/2}}$(answer) Ex 4.7.26$\ds\lim_{x\to\infty}{x+x^{-2}\over 2x+x^{-2}}$(answer) Ex 4.7.27$\ds\lim_{x\to\infty}{5+x^{-1}\over 1+2x^{-1}}$(answer) Ex 4.7.28$\ds\lim_{x\to\infty}{4x\over\sqrt{2x^2+1}}$(answer) Ex 4.7.29$\ds\lim_{x\to0}{3x^2+x+2\over x-4}$(answer) Ex 4.7.30$\ds\lim_{x\to0}{\sqrt{x+1}-1\over \sqrt{x+4}-2}$(answer) Ex 4.7.31$\ds\lim_{x\to0}{\sqrt{x+1}-1\over \sqrt{x+2}-2}$(answer) Ex 4.7.32$\ds\lim_{x\to0^+}{\sqrt{x+1}+1\over\sqrt{x+1}-1}$(answer) Ex 4.7.33$\ds\lim_{x\to0}{\sqrt{x^2+1}-1\over\sqrt{x+1}-1}$(answer) Ex 4.7.34$\ds\lim_{x\to\infty}{(x+5)\left({1\over 2x}+{1\over x+2}\right)}$(answer) Ex 4.7.35$\ds\lim_{x\to0^+}{(x+5)\left({1\over 2x}+{1\over x+2}\right)}$(answer) Ex 4.7.36$\ds\lim_{x\to1}{(x+5)\left({1\over 2x}+{1\over x+2}\right)}$(answer) Ex 4.7.37$\ds\lim_{x\to2}{x^3-6x-2\over x^3+4}$(answer) Ex 4.7.38$\ds\lim_{x\to2}{x^3-6x-2\over x^3-4x}$(answer) Ex 4.7.39$\ds\lim_{x\to1+}{x^3+4x+8\over 2x^3-2}$(answer) Ex 4.7.40The function $\ds f(x) = {x\over\sqrt{x^2+1}}$ has two horizontal asymptotes. Find them and give a rough sketch of $f$ with its horizontal asymptotes. (answer)
Definition 3.3.1 A partition of a positive integer $n$ is a multiset of positive integers that sum to $n$. We denote the number of partitions of $n$ by $p_n$. Typically a partition is written as a sum, not explicitly as a multiset. Using the usual convention that an empty sum is 0, we say that $p_0=1$. Example 3.3.2 The partitions of 5 are $$\eqalign{ &5\cr &4+1\cr &3+2\cr &3+1+1\cr &2+2+1\cr &2+1+1+1\cr &1+1+1+1+1.\cr }$$ Thus $p_5=7$. There is no simple formula for $p_n$, but it is not hard to find a generating function for them. As with some previous examples, we seek a product of factors so that when the factors are multiplied out, the coefficient of $x^n$ is $p_n$. We would like each $x^n$ term to represent a single partition, before like terms are collected. A partition is uniquely described by the number of 1s, number of 2s, and so on, that is, by the repetition numbers of the multiset. We devote one factor to each integer: $$(1+x+x^2+x^3+\cdots)(1+x^2+x^4+x^6+\cdots)\cdots (1+x^k+x^{2k}+x^{3k}+\cdots)\cdots =\prod_{k=1}^\infty \sum_{i=0}^\infty x^{ik}.$$ When this product is expanded, we pick one term from each factor in all possible ways, with the further condition that we only pick a finite number of "non-1'' terms. For example, if we pick $x^3$ from the first factor, $x^3$ from the third factor, $x^{15}$ from the fifth factor, and 1s from all other factors, we get $x^{21}$. In the context of the product, this represents $3\cdot 1+1\cdot3+3\cdot 5$, corresponding to the partition $1+1+1+3+5+5+5$, that is, three 1s, one 3, and three 5s.
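This coefficient bookkeeping can be checked directly by multiplying out truncated factors. A small Python sketch (illustrative; names are my own), which multiplies the polynomial $1 + x^k + x^{2k} + \cdots$ for each $k$ up to $n$, keeping coefficients up to $x^n$:

```python
def partition_counts(nmax):
    """Coefficients of prod_{k=1..nmax} (1 + x^k + x^{2k} + ...)
    truncated at degree nmax; coeff[n] equals p_n."""
    coeff = [1] + [0] * nmax            # the constant polynomial 1
    for k in range(1, nmax + 1):        # one factor per allowed part size k
        new = [0] * (nmax + 1)
        for n in range(nmax + 1):
            # pick the term x^{i*k} (i copies of the part k) from this factor
            i = 0
            while i * k <= n:
                new[n] += coeff[n - i * k]
                i += 1
        coeff = new
    return coeff

print(partition_counts(8))  # [1, 1, 2, 3, 5, 7, 11, 15, 22]
```

The last entry confirms $p_8 = 22$, and the earlier entries give $p_0$ through $p_7$.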
Each factor is a geometric series; the $k$th factor is $$ 1+x^k+(x^{k})^2+(x^{k})^3+\cdots= {1\over1-x^k},$$ so the generating function can be written $$\prod_{k=1}^\infty {1\over 1-x^k}.$$ Note that if we are interested in some particular $p_n$, we do not need the entire infinite product, or even any complete factor, since no partition of $n$ can use any integer greater than $n$, and also cannot use more than $n/k$ copies of $k$. We expand $$\eqalign{ (1+x&+x^2+x^3+x^4+x^5+x^6+x^7+x^8)(1+x^2+x^4+x^6+x^8)(1+x^3+x^6)\cr (1&+x^4+x^8)(1+x^5)(1+x^6)(1+x^7)(1+x^8)\cr &=1+x+2x^2+3x^3+5x^4+7x^5+11x^6+15x^7+22x^8+\cdots+x^{56},\cr }$$ so $p_8=22$. Note that all of the coefficients prior to this are also correct, but the following coefficients are not necessarily the corresponding partition numbers. Here is how to use Sage for the computation. We define $f=\prod_{k=1}^8 {1\over 1-x^k}$ to make sure the $x^8$ coefficient will be correct. Instead of doing the explicit product above, we use Sage to compute the Taylor series, which has the same effect. Partitions of integers have some interesting properties. Let $p_d(n)$ be the number of partitions of $n$ into distinct parts; let $p_o(n)$ be the number of partitions into odd parts. Example 3.3.4 For $n=6$, the partitions into distinct parts are $$6,5+1,4+2,3+2+1,$$ so $p_d(6)=4$, and the partitions into odd parts are $$5+1,3+3,3+1+1+1,1+1+1+1+1+1,$$ so $p_o(6)=4$. In fact, for every $n$, $p_d(n)=p_o(n)$, and we can see this by manipulating generating functions. 
The generating function for $p_d(n)$ is $$f_d(x)=(1+x)(1+x^2)(1+x^3)\cdots=\prod_{i=1}^\infty (1+x^i).$$ The generating function for $p_o(n)$ is $$f_o(x)=(1+x+x^2+x^3+\cdots)(1+x^3+x^6+x^9+\cdots)\cdots= \prod_{i=0}^\infty {1\over 1-x^{2i+1}}.$$ We can write $$f_d(x)={1-x^2\over 1-x}\cdot{1-x^4\over 1-x^2}\cdot{1-x^6\over 1-x^3}\cdots$$ and notice that every numerator is eventually canceled by a denominator, leaving only the denominators containing odd powers of $x$, so $\ds f_d(x)=f_o(x)$. We can also use a recurrence relation to find the partition numbers, though in a somewhat less direct way than the binomial coefficients or the Bell numbers. Let $p_k(n)$ be the number of partitions of $n$ into exactly $k$ parts. We will find a recurrence relation to compute the $p_k(n)$, and then $$ p_n=\sum_{k=1}^n p_k(n). $$ Now consider the partitions of $n$ into $k$ parts. Some of these partitions contain no 1s, like $3+3+4+6$, a partition of 16 into 4 parts. Subtracting 1 from each part, we get a partition of $n-k$ into $k$ parts; for the example, this is $2+2+3+5$. The remaining partitions of $n$ into $k$ parts contain a 1. If we remove the 1, we are left with a partition of $n-1$ into $k-1$ parts. This gives us a 1–1 correspondence between the partitions of $n$ into $k$ parts, and the partitions of $n-k$ into $k$ parts together with the partitions of $n-1$ into $k-1$ parts, so $p_k(n)=p_k(n-k)+p_{k-1}(n-1)$. Using this recurrence we can build a triangle containing the $p_k(n)$, and the row sums of this triangle give the partition numbers. For all $n$, $p_1(n)=1$, which gives the first column of the triangle, after which the recurrence applies. Also, note that $p_k(n)=0$ when $k>n$ and we let $p_k(0)=0$; these are needed in some cases to compute the $p_k(n-k)$ term of the recurrence. Here are the first few rows of the triangle; at the left are the row numbers, and at the right are the row sums, that is, the partition numbers. 
For the last row, each entry is the sum of the like-colored numbers in the previous rows. Note that beginning with $p_4(7)=3$ in the last row, $p_k(7)=p_{k-1}(6)$, as $p_k(7-k)=0$. $$\matrix{ 1 & 1& & & & & \color{purple}0& & 1\cr 2 & 1& 1& & & \color{orange}0& & & 2\cr 3 & 1& 1& 1& \color{blue}0& & & & 3\cr 4 & 1& 2& \color{green}1& 1& & & & 5\cr 5 & 1& \color{red}2& 2& 1& 1& & & 7\cr 6 & \color{red}1& \color{green}3& \color{blue}3& \color{orange}2& \color{purple}1& \color{fuchsia}1& & 11\cr 7 & 1& \color{red}3& \color{green}4& \color{blue}3& \color{orange}2& \color{purple}1& \color{fuchsia}1& 15\cr }$$ Yet another sometimes useful way to think of a partition is with a Ferrers diagram. Each integer in the partition is represented by a row of dots, and the rows are ordered from longest on the top to shortest at the bottom. For example, the partition $3+3+4+5$ would be represented by The conjugate of a partition is the one corresponding to the Ferrers diagram produced by flipping the diagram for the original partition across the main diagonal, thus turning rows into columns and vice versa. For the diagram above, the conjugate is with corresponding partition $1 + 2+4+4+4$. This concept can occasionally make facts about partitions easier to see than otherwise. Here is a classic example: the number of partitions of $n$ with largest part $k$ is the same as the number of partitions into $k$ parts, $p_k(n)$. The action of conjugation takes every partition of one type into a partition of the other: the conjugate of a partition into $k$ parts is a partition with largest part $k$ and vice versa. This establishes a 1–1 correspondence between partitions into $k$ parts and partitions with largest part $k$. Exercises 3.3 Ex 3.3.1 Use generating functions to find $p_{15}$. Ex 3.3.2 Find the generating function for the number of partitions of an integer into distinct odd parts. Find the number of such partitions of 20.
Ex 3.3.3 Find the generating function for the number of partitions of an integer into distinct even parts. Find the number of such partitions of 30. Ex 3.3.4 Find the number of partitions of 25 into odd parts. Ex 3.3.5 Find the generating function for the number of partitions of an integer into $k$ parts; that is, the coefficient of $x^n$ is the number of partitions of $n$ into $k$ parts. Ex 3.3.6 Complete row 8 of the table for the $p_k(n)$, and verify that the row sum is 22, as we saw in example 3.3.3. Ex 3.3.7 A partition of $n$ is self-conjugate if its Ferrers diagram is symmetric around the main diagonal, so that its conjugate is itself. Show that the number of self-conjugate partitions of $n$ is equal to the number of partitions of $n$ into distinct odd parts.
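The recurrence $p_k(n)=p_k(n-k)+p_{k-1}(n-1)$ and the distinct-parts/odd-parts counts of example 3.3.4 can both be checked with a short program. A Python sketch (illustrative; the text itself uses Sage):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_parts(n, k):
    """p_k(n): partitions of n into exactly k parts, via the
    recurrence p_k(n) = p_k(n - k) + p_{k-1}(n - 1)."""
    if k <= 0 or n < k:
        return 0                      # includes p_k(0) = 0 and k > n
    if k == 1 or k == n:
        return 1
    return p_parts(n - k, k) + p_parts(n - 1, k - 1)

def p(n):
    # the row sums of the triangle give the partition numbers
    return sum(p_parts(n, k) for k in range(1, n + 1)) if n else 1

def partitions(n, smallest=1):
    # brute force: all partitions of n as non-decreasing lists
    if n == 0:
        yield []
    for k in range(smallest, n + 1):
        for rest in partitions(n - k, k):
            yield [k] + rest

distinct = sum(1 for q in partitions(6) if len(set(q)) == len(q))
odd = sum(1 for q in partitions(6) if all(x % 2 for x in q))

print([p(n) for n in range(1, 9)])  # [1, 2, 3, 5, 7, 11, 15, 22]
print(distinct, odd)                # 4 4
```

The first printed list reproduces the row sums of the triangle above (and $p_8=22$), and the second line confirms $p_d(6)=p_o(6)=4$.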
Suppose $\{ p_{k} \}$ is a collection of real numbers with the following properties: 1) $p_k \in (0,1)$ $~~~~$(i.e. $0$ and $1$ are not allowed values) 2) $\sum_{k=1}^{\infty} p_k =1$ An example of such a collection is $p_k := \frac{6}{\pi^2 k^2}$. You are free to choose any such collection in order to answer the following question: Can you generate a sequence of natural numbers $\{x_n\}$ with the following property: $$ \lim_{N \rightarrow \infty}\frac{\#\{n \in [1,N]: x_n = k \}}{N} = p_k \qquad \forall k.$$ My criteria for "generating" a sequence means I should actually be able write a program to generate these $\{x_n\}$. I am not looking for an abstract existential result; ideally the $x_n$ should be given by an explicit formula. To clarify my question; you can choose whatever $p_k$ you want to answer my question, as long as it satisfies conditions $1)$ and $2)$. You can take the specific example $p_k := \frac{6}{\pi^2 k^2}$ I gave, but that is not necessary. Again, the only condition is that given $k$ I should actually be able to evaluate $p_k$; it should not be something that is given indirectly or which is merely shown to exist theoretically. Ideally, $p_k$ should be a formula in terms of $k$.
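One explicit construction (offered here as a sketch, not a definitive answer) uses Weyl equidistribution: with $C_k = p_1+\cdots+p_k$ and any irrational $\alpha$, set $x_n$ to be the least $k$ with $\{n\alpha\} < C_k$, where $\{\cdot\}$ denotes the fractional part. Since $\{n\alpha\}$ equidistributes mod 1, the asymptotic frequency of the value $k$ is $C_k - C_{k-1} = p_k$. For the example $p_k = 6/(\pi^2 k^2)$ this is fully explicit:

```python
import math

def x_seq(N, alpha=math.sqrt(2)):
    """x_n = least k with {n*alpha} < p_1 + ... + p_k, for
    p_k = 6/(pi^2 k^2).  Any irrational alpha works, by Weyl's
    equidistribution theorem; sqrt(2) is just a convenient choice."""
    out = []
    for n in range(1, N + 1):
        y = (n * alpha) % 1.0
        k, cum = 1, 6.0 / math.pi ** 2
        while y >= cum:                       # walk the CDF until it passes y
            k += 1
            cum += 6.0 / (math.pi ** 2 * k * k)
        out.append(k)
    return out

xs = x_seq(20000)
freq = xs.count(1) / len(xs)
print(freq)  # close to p_1 = 6/pi^2 = 0.6079...
```

The discrepancy of $\{n\sqrt2\}$ is $O(\log N / N)$, so the empirical frequencies converge to the $p_k$ quite fast; the same formula works for any $\{p_k\}$ whose partial sums you can evaluate.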
I would like to graphically show Bendixson's criterion. The Bendixson Criterion: If $f_1$ and $f_2$ are continuous in a region $R$ which is simply-connected (i.e., without holes), and $$\frac{\partial f_1}{\partial x_1}+\frac{\partial f_2}{\partial x_2}\ne0$$ at any point of $R$, then the system $$x_1' = f_1(x_1, x_2)$$ $$x_2' = f_2(x_1, x_2)$$ has no closed trajectories inside $R$. Basically you can use this theorem to prove that there is no limit cycle within a system: $$x' = f(x,y)$$ ($x$ being a state vector and $f(x,y)$ the dynamic equation vector). I wanted to visualize graphically that, if we HAVE a limit cycle, $\frac{\partial f_1}{\partial x_1}+\frac{\partial f_2}{\partial x_2}$ will be equal to zero at some points. Hence I drew a limit cycle (a simple circle in the $x_1$-$x_2$ plane). Since $x' = f(x_1,x_2)$, the vector $f(x_1,x_2)$ must be tangent to the circle at every point if the circle is a limit cycle (the trajectory cannot escape).

Clear[t, x, y, z, P];
x[t_] = -Sin[t];
y[t_] = Cos[t];
P[t_] = {x[t], y[t]};
V[t_] = {x'[t], y'[t]};
curveplot = ParametricPlot[P[t], {t, 0, 2*Pi}, PlotStyle -> Thickness[0.01]];
ar = Table[{P[t], P[t] + V[t]}, {t, 0, 2*Pi, Pi/4}];
Show[curveplot, Graphics[{Arrow[ar], Red, AbsolutePointSize@10, Point@ar[[All, 1]]}], PlotRange -> All, AxesLabel -> {"x1", "x2"}, Ticks -> None]

But how could I visualize $\frac{\partial f_1}{\partial x_1}+\frac{\partial f_2}{\partial x_2}$? Any suggestions? Based on a comment from Rahul: $\frac{\partial f_1}{\partial x_1}+\frac{\partial f_2}{\partial x_2}$ is simply the divergence.
Hence I do the following, using the angle formed by x1 and x2:

f1[x1_, x2_] = -Sin[ArcTan[x2/x1]];
f2[x1_, x2_] = Cos[ArcTan[x2/x1]];
a = StreamPlot[{f1[x1, x2], f2[x1, x2]}, {x1, -1, 1}, {x2, -1, 1}];
Show[curveplot, a, Graphics[{Arrow[ar], Red, AbsolutePointSize@10, Point@ar[[All, 1]]}], PlotRange -> All, AxesLabel -> {"x1", "x2"}, Ticks -> None]

EDIT: Thanks to J.M., I changed the ArcTan function to -Sin[ArcTan[x1, x2]] and Cos[ArcTan[x1, x2]] (the two-argument form ArcTan[x1, x2] gives the angle of the point {x1, x2}), and the output now looks better. Now the question is: what can you interpret from this? And is there a better way to visualize the divergence? Any help is highly appreciated! :)
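Following Rahul's comment that the expression is the divergence, one non-graphical check is worth noting: for the corrected field $(-\sin\arctan(x_1,x_2),\,\cos\arctan(x_1,x_2)) = (-x_2/r,\, x_1/r)$ with $r=\sqrt{x_1^2+x_2^2}$, the divergence is $x_1x_2/r^3 - x_1x_2/r^3 = 0$ identically. That is consistent with Bendixson: the criterion only excludes closed orbits when the divergence is nonzero throughout the region, so a field whose divergence vanishes everywhere (like this one) cannot be ruled out. A quick finite-difference sketch in Python (an assumption-laden illustration, not Mathematica code):

```python
import math

def f(x1, x2):
    """The corrected tangent field (-sin(atan2(x2,x1)), cos(atan2(x2,x1)))
    = (-x2/r, x1/r), defined away from the origin."""
    r = math.hypot(x1, x2)
    return (-x2 / r, x1 / r)

def divergence(x1, x2, h=1e-6):
    # central-difference estimate of df1/dx1 + df2/dx2
    d1 = (f(x1 + h, x2)[0] - f(x1 - h, x2)[0]) / (2 * h)
    d2 = (f(x1, x2 + h)[1] - f(x1, x2 - h)[1]) / (2 * h)
    return d1 + d2

# the divergence vanishes at every sample point away from the origin
for pt in [(1.0, 0.5), (-0.3, 0.8), (0.7, -0.7)]:
    print(divergence(*pt))  # ≈ 0 up to floating-point error
```

So for this particular circle-and-tangent construction, a divergence plot would simply be flat zero; to see Bendixson's sign change in a picture, one would need a field with an attracting or repelling limit cycle (e.g. the standard $r' = r(1-r^2)$ example) rather than pure rotation.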