qid | question | author | author_id | answer |
|---|---|---|---|---|
633,223 | <p><img src="https://i.stack.imgur.com/xVS2C.png" alt="enter image description here"></p>
<p>This one has a great degree of self-evidence. Paradoxically, I find it difficult to deduce it from primitive propositions. The book only hinted at ❋4.21 and ❋4.22.</p>
| Albert Steppi | 7,323 | <p>$\textbf{Edit:}$ My original proof had a gap in the derivation of what I now call statement $B$. I've added the relevant propositions that would allow one to fill the gap.</p>
<p>I own the second edition, but I don't think that should matter too much. I'm not going to give a full Russell-Whitehead style proof, but you should be able to construct one from my answer. I'm going to use modern notation so that my answer can be more widely understood.</p>
<p>In modern notation, we're trying to prove</p>
<p>$\:\:\:\:\textbf{*4.86.}\:\:\:\:\:\:\:\: (p \iff q) \Rightarrow \left\{(p \iff r) \iff (q \iff r)\right\}$</p>
<p>The text hints that we should use propositions $*4.21.$ and $*4.22.$; these are</p>
<p>$\:\:\:\:\textbf{*4.21.}\:\:\:\:\:\:\:\: (p \iff q) \iff (q \iff p)$</p>
<p>$\:\:\:\:\textbf{*4.22.}\:\:\:\:\:\:\:\: \left\{(p \iff q) \wedge (q \iff r)\right\}\Rightarrow (p \iff r)$</p>
<p>By $*4.22.$ we have</p>
<p>$\:\:\:\:\:\textbf{A.}\:\:\:\:\:\:\:\:\:\:\:\:\:\left\{(p \iff q) \wedge (q \iff r)\right\} \Rightarrow (p \iff r)$</p>
<p>Proposition $*4.36$ states that</p>
<p>$\:\:\:\:\textbf{*4.36.}\:\:\:\:\:\:\:\: (p \iff q) \Rightarrow \left\{(p \wedge r)\iff (q \wedge r)\right\}$</p>
<p>while proposition $*4.84$ states that</p>
<p>$\:\:\:\:\textbf{*4.84.}\:\:\:\:\:\:\:\: (p \iff q) \Rightarrow \left\{(p \Rightarrow r) \iff (q \Rightarrow r)\right\}$</p>
<p>By applying $*4.21.$ and $*4.22.$ together with these previous two propositions
we can prove</p>
<p>$\:\:\:\:\:\textbf{B.}\:\:\:\:\:\:\:\:\:\:\:\:\:\left\{(p \iff q) \wedge (p \iff r)\right\} \Rightarrow (q \iff r)$</p>
<p>Proposition $*3.3.$ states that</p>
<p>$\:\:\:\:\textbf{*3.3.}\:\:\:\:\:\:\:\:\:\:\left\{(p \wedge q) \Rightarrow r\right\} \Rightarrow \left\{p \Rightarrow (q \Rightarrow r)\right\}$</p>
<p>Applying this to each of our statements $A$ and $B$ yields the two statements</p>
<p>$\:\:\:\:\:\textbf{1.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:(p \iff q) \Rightarrow \left\{(p \iff r) \Rightarrow (q \iff r)\right\}$</p>
<p>and</p>
<p>$\:\:\:\:\:\textbf{2.}\:\:\:\:\:\:\:\:\:\:\:\:\:\:(p \iff q) \Rightarrow \left\{(q \iff r) \Rightarrow (p \iff r)\right\}$</p>
<p>Proposition $*3.43.$ states that </p>
<p>$\:\:\:\:\textbf{*3.43.}\:\:\:\:\:\:\:\:\left\{(p \Rightarrow q) \wedge (p \Rightarrow r)\right\} \Rightarrow \left\{p \Rightarrow q \wedge r\right\}$</p>
<p>Applying $*3.43.$ to statements $1$ and $2$ yields</p>
<p>$$(p \iff q) \Rightarrow \left[\left\{(p \iff r) \Rightarrow (q \iff r)\right\} \wedge \left\{(q \iff r) \Rightarrow (p \iff r)\right\}\right]$$</p>
<p>$*4.86.$ then follows directly from this last statement by applying the definition of equivalence given in $*4.01.$ </p>
<p>$\:\:\:\:\textbf{*4.01.}\:\:\:\:\:\:\:\:(p \iff q) \equiv_{\operatorname{def}}
(p \Rightarrow q) \wedge (q \Rightarrow p)$</p>
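<p>As a modern sanity check (not part of the Principia-style derivation), one can verify by brute force that $*4.86.$ is indeed a classical tautology; a short Python truth-table sweep:</p>

```python
from itertools import product

def iff(a, b):
    # material equivalence: true exactly when a and b agree
    return a == b

def implies(a, b):
    # material implication: false only when a is true and b is false
    return (not a) or b

# *4.86: (p <-> q) -> ((p <-> r) <-> (q <-> r)),
# checked over all 8 truth assignments to p, q, r
assert all(
    implies(iff(p, q), iff(iff(p, r), iff(q, r)))
    for p, q, r in product([True, False], repeat=3)
)
```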
<p>By the way, I'd like to add that it makes me happy to see other people interested in studying the Principia. Perhaps <a href="http://www.thebigquestions.com/2010/12/20/lord-russells-nightmare/" rel="nofollow">Russell's nightmare</a> won't come true for some time yet.</p>
|
149,830 | <p>As we know, the eigenvalues of a real symmetric matrix are always real numbers (and its eigenvectors can be chosen real). I am trying to use <em>Mathematica</em> to verify this fact. Suppose I have a matrix <code>A</code></p>
<pre><code>A={{6585, 7579, 6717}, {7579, 11002, 12324}, {6717, 12324, 17030}}
Eigenvalues[A]
</code></pre>
<blockquote>
<p><code>{Root[-13826467396+117514474 #1-34617 #1^2+#1^3&,3],Root[-13826467396+117514474 #1-34617 #1^2+#1^3&,2],Root[-13826467396+117514474 #1-34617 #1^2+#1^3&,1]}</code></p>
</blockquote>
<p>But I really don't like those <code>Root</code> objects, so I tried to simplify them into radical expressions.</p>
<pre><code>Simplify /@ ToRadicals /@ Eigenvalues[A]
</code></pre>
<blockquote>
<p><img src="https://i.stack.imgur.com/CbKYv.png" alt=""></p>
</blockquote>
<p>The result includes <code>I</code>. Since the eigenvalues are real numbers, could we express them in radicals without <code>I</code>?</p>
| LCarvalho | 37,895 | <p>It's just an idea because it lacks information:</p>
<pre><code>Points = {{1, 2}, {2, 4}, {3, 7}, {4, 12}};
MapThread[f, Points];
f[x_, y_] := {x, y*200}
ListLogPlot[f @@@ Points]
</code></pre>
|
2,756,744 | <p>It is easy to derive from $AB=-BA$ that $\mathrm{tr}(AB)=0$ since $\mathrm{tr}(AB)=\mathrm{tr}(-BA)=-\mathrm{tr}(BA)=-\mathrm{tr}(AB)$. However, I cannot get that $\mathrm{tr}(A)=\mathrm{tr}(B)=0$ without the fact that $A$ and $B$ are invertible. </p>
<p>My Professor suggested I use the Cayley-Hamilton Theorem. However, that just gives me a few extra conditions on the elements of $A$ and $B$, and I still can't get that their traces equal $0$. </p>
<p>Any ideas are greatly appreciated! </p>
| egreg | 62,967 | <p>The Cayley-Hamilton theorem says that
$$
A^2-\operatorname{tr}(A)A+\det(A)I=0
$$
If we multiply by $B$ on the right:
$$
A^2B-\operatorname{tr}(A)AB+\det(A)B=0
$$
If we multiply by $B$ on the left:
$$
BA^2-\operatorname{tr}(A)BA+\det(A)B=0
$$
Subtracting the two relations:
$$
A^2B-BA^2-2\operatorname{tr}(A)AB=0
$$
On the other hand, $A^2B=AAB=-ABA=BAA=BA^2$.</p>
|
2,756,744 | <p>It is easy to derive from $AB=-BA$ that $\mathrm{tr}(AB)=0$ since $\mathrm{tr}(AB)=\mathrm{tr}(-BA)=-\mathrm{tr}(BA)=-\mathrm{tr}(AB)$. However, I cannot get that $\mathrm{tr}(A)=\mathrm{tr}(B)=0$ without the fact that $A$ and $B$ are invertible. </p>
<p>My Professor suggested I use the Cayley-Hamilton Theorem. However, that just gives me a few extra conditions on the elements of $A$ and $B$, and I still can't get that their traces equal $0$. </p>
<p>Any ideas are greatly appreciated! </p>
| user1551 | 1,551 | <p>Here is an alternative proof without using Cayley-Hamilton theorem. We assume that the underlying field has characteristic $\ne2$, otherwise $A=B=\operatorname{diag}(1,0)$ would give a counterexample. As you said, $\operatorname{tr}(AB)=0$. It remains to show that $\operatorname{tr}(A)=\operatorname{tr}(B)=0$.</p>
<p>If both $A$ and $B$ are invertible, then $ABA^{-1}=-B$ and $B^{-1}AB=-A$. Taking traces on both sides of each equation, the conclusion follows.</p>
<p>If at least one of the two matrices is singular, we may assume without loss of generality that $B=uv^T$. The condition $AB=-BA\ne0$ implies that $Auv^T=-uv^TA\ne0$. Hence $\{Au,u\}$ and $\{v^TA,v^T\}$ are two pairs of nonzero parallel vectors. Thus $Au=\lambda u$ for some $\lambda\ne0$. Substituting this into $Auv^T=-uv^TA$, we also get $v^TA=-\lambda v^T$. Therefore, both $\lambda$ and $-\lambda$ are eigenvalues of $A$. By assumption, the field has characteristic $\ne2$. Hence $\lambda$ and $-\lambda$ are distinct eigenvalues of $A$ and $\operatorname{tr}(A)=0$. </p>
<p>Finally, as $A$ has distinct nonzero eigenvalues, it is invertible. So, our previous argument ($ABA^{-1}=-B$) shows that $\operatorname{tr}(B)=0$. Alternatively, the conclusion also follows from the observation that $0=\operatorname{tr}(AB)=\operatorname{tr}(Auv^T)=\operatorname{tr}(\lambda uv^T)=\lambda\operatorname{tr}(B)$.</p>
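<p>A concrete check of the singular case (illustrative only): take $A=\operatorname{diag}(1,-1)$ and the rank-one $B=uv^T$ with $u=(1,0)^T$, $v=(0,1)^T$:</p>

```python
# A = diag(1, -1); B = u v^T with u = (1,0)^T, v = (0,1)^T, so B is rank one.
A = [[1, 0], [0, -1]]
B = [[0, 1], [0, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

AB, BA = matmul(A, B), matmul(B, A)
assert AB != [[0, 0], [0, 0]]                     # AB = -BA is nonzero
assert all(AB[i][j] == -BA[i][j] for i in range(2) for j in range(2))
# Au = 1*u and v^T A = -1*v^T, so +1 and -1 are both eigenvalues of A,
# and indeed tr(A) = tr(B) = 0, as the argument concludes.
assert A[0][0] + A[1][1] == 0 and B[0][0] + B[1][1] == 0
```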
|
452,803 | <p>Test the convergence of the improper integral:</p>
<p>$$\int_1^2{\sqrt x\over \log x}dx$$</p>
<p>I basically have no idea how to approach a problem in which a logarithm appears. I need some hints on solving this type of problem.</p>
| Ron Gordon | 53,268 | <p>Consider the following:</p>
<p>$$\lim_{\epsilon \to 0} \int_{\epsilon}^1 dx \frac{\sqrt{1+x}}{\log{(1+x)}}$$</p>
<p>Now, for the bottom limit of the integral, note that</p>
<p>$$\log{(1+\epsilon)} \sim \epsilon$$</p>
<p>so that, near this limit, the integrand behaves as $1/\epsilon$ as $\epsilon \to 0$, or as $1/x$ as $x \to 0$. This represents a non-integrable singularity (the integral would behave as $\log{\epsilon}$ near this limit), and therefore the integral diverges.</p>
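<p>A quick numeric illustration of this asymptotic (not needed for the argument): for small $\epsilon$ we have $\log(1+\epsilon)\approx\epsilon$, so $\epsilon$ times the integrand stays close to $1$, the signature of a $1/x$ singularity:</p>

```python
import math

# near x = 0 the integrand sqrt(1+x)/log(1+x) behaves like 1/x
for eps in (1e-3, 1e-5, 1e-7):
    assert abs(math.log(1 + eps) / eps - 1) < 1e-2       # log(1+e) ~ e
    integrand = math.sqrt(1 + eps) / math.log(1 + eps)
    assert abs(eps * integrand - 1) < 1e-2               # integrand ~ 1/e
```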
|
452,803 | <p>Test the convergence of the improper integral:</p>
<p>$$\int_1^2{\sqrt x\over \log x}dx$$</p>
<p>I basically have no idea how to approach a problem in which a logarithm appears. I need some hints on solving this type of problem.</p>
| Kunnysan | 84,764 | <p>Let $I(\delta)=\displaystyle\int_{1+ \delta}^2{\sqrt x\over \log x}dx$.</p>
<p>$$I(\delta) \ge \displaystyle\int_{1+ \delta}^2{dx\over \log x} \geq \displaystyle\int_{1+ \delta}^2{dx\over x-1}=-\log\delta$$</p>
<p>since $\log x \le x-1 $ for $x\ge 1$ and $\sqrt x \ge 1$ on $[1,2]$.</p>
<p>Letting $\delta \to 0$, we get $I(\delta)\ge-\log\delta\to\infty$, so your integral diverges.</p>
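<p>The bound $I(\delta)\ge-\log\delta$ can also be watched numerically (an illustrative sketch using a simple midpoint rule; the grid is geometric in $x-1$ to handle the blow-up at $x=1$):</p>

```python
import math

def g(x):
    return math.sqrt(x) / math.log(x)

def I(delta, n=100000):
    # integrate sqrt(x)/log(x) over [1+delta, 2] with a midpoint rule
    # on a grid geometric in s = x - 1 (s runs from delta to 1)
    r = (1.0 / delta) ** (1.0 / n)
    total, s = 0.0, delta
    for _ in range(n):
        s_next = s * r
        total += g(1 + 0.5 * (s + s_next)) * (s_next - s)
        s = s_next
    return total

# I(delta) >= -log(delta), hence divergence as delta -> 0
for d in (1e-2, 1e-3, 1e-4):
    assert I(d) >= -math.log(d)
assert I(1e-4) > I(1e-2)
```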
|
3,418,526 | <p>The problem is as follows:</p>
<blockquote>
<p>The figure below shows the squared speed against the distance
attained by a car. It is known that at <span class="math-container">$t=0$</span> the car is at <span class="math-container">$x=0$</span>.
Find the time it will take the car to reach <span class="math-container">$24\,m$</span>.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/rZpd9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rZpd9.png" alt="Sketch of the problem"></a></p>
<p>The given alternatives on my book are:</p>
<p><span class="math-container">$\begin{array}{ll}
1.&8.0\,s\\
2.&9.0\,s\\
3.&7.0\,s\\
4.&6.0\,s\\
5.&10.0\,s\\
\end{array}$</span></p>
<p>What I attempted to do to solve this problem was to find the acceleration of the car given that from the graph it can be inferred that:</p>
<p><span class="math-container">$\tan 45^{\circ}=\frac{v^{2}\left(\frac{m^{2}}{s^{2}}\right)}{m}=1\,\frac{m}{s^{2}}$</span></p>
<p>Using this information I went to the position equation as follows:</p>
<p><span class="math-container">$x(t)=x_{o}+v_{o}t+\frac{1}{2}at^2$</span></p>
<p>Since it is mentioned that <span class="math-container">$x=0$</span> when <span class="math-container">$t=0$</span> this would make the equation of position into:</p>
<p><span class="math-container">$0=x(0)=x_{o}+v_{o}(0)+\frac{1}{2}a(0)^2$</span></p>
<p>Therefore,</p>
<p><span class="math-container">$x_{o}=0$</span></p>
<p><span class="math-container">$x(t)=v_{o}t+\frac{1}{2}at^2$</span></p>
<p>From the graph I can spot that:</p>
<p><span class="math-container">$v_{o}^2=1$</span></p>
<p><span class="math-container">$v_{o}=1$</span></p>
<p>Since <span class="math-container">$a=1$</span></p>
<p><span class="math-container">$x(t)=t+\frac{1}{2}t^2$</span></p>
<p>Then:</p>
<p><span class="math-container">$t+\frac{1}{2}t^2=24$</span></p>
<p><span class="math-container">$t^2+2t-48=0$</span></p>
<p><span class="math-container">$t=\frac{-2\pm \sqrt{4+192}}{2}=\frac{-2\pm \sqrt{196}}{2}=\frac{-2\pm 14}{2}$</span></p>
<p><span class="math-container">$t=6,-8$</span></p>
<p>Therefore the time would be <span class="math-container">$6$</span>, but apparently the answer listed in my book is <span class="math-container">$8$</span>. Could it be that I misunderstood something, or what happened? Is the given answer wrong? Can somebody help me here?</p>
| AgentS | 168,854 | <p>The given graph has the equation:
<span class="math-container">$$v^2 = 1+x$$</span></p>
<p>Implicitly differentiate both sides with respect to <span class="math-container">$x$</span> :</p>
<p><span class="math-container">$$2v\dfrac{dv}{dx} =1$$</span></p>
<p>Multiply left side by <span class="math-container">$1=\color{blue}{\frac{dt}{dt}}$</span>:
<span class="math-container">$$2v\dfrac{dv}{\color{blue}{dt}}\dfrac{\color{blue}{dt}}{dx}=1$$</span></p>
<p>Since <span class="math-container">$\frac{dt}{dx} = \frac{1}{dx/dt} = \frac{1}{v}$</span>:
<span class="math-container">$$2v\dfrac{dv}{dt}\dfrac{1}{v}=1 \implies \dfrac{dv}{dt}=\dfrac{1}{2}$$</span></p>
<hr>
<p>In general, acceleration from <span class="math-container">$v^2,x$</span> graph is obtained by the formula:
<span class="math-container">$$a = \dfrac{1}{2}\dfrac{d}{dx}(v^2)$$</span></p>
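<p>The book's answer of $8.0\,s$ can also be checked directly: with $v(x)=\sqrt{1+x}$ read off the graph, the travel time is $t=\int_0^{24}dx/v(x)=2\sqrt{1+x}\,\big|_0^{24}=8$. A short numeric confirmation (illustrative only):</p>

```python
import math

def v(x):
    # speed as a function of position, from the graph: v^2 = 1 + x
    return math.sqrt(1 + x)

# t = integral of dx / v(x) from 0 to 24, via the midpoint rule;
# the exact antiderivative 2*sqrt(1+x) gives exactly 8
n = 100000
h = 24.0 / n
t = sum(h / v((i + 0.5) * h) for i in range(n))

assert abs(t - 8.0) < 1e-6
```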
|
941,182 | <p>If I know <span class="math-container">$\text{Im}(T)$</span> and <span class="math-container">$\text{Ker }(T)$</span>, is <span class="math-container">$\text{Im}(T)+\text{Ker }(T)$</span> the union of the two vector spaces?</p>
<p>If not, how do I find the sum of the two vector spaces? It would be best if examples could be given. Thanks.</p>
| karakusc | 176,950 | <p>No, <span class="math-container">${\rm Im}(T)+{\rm Ker}(T)$</span> is not the same as <span class="math-container">${\rm Im}(T)\cup {\rm Ker}(T)$</span>. The former is defined as</p>
<p><span class="math-container">$$
{\rm Im}(T)+{\rm Ker}(T) = \{x+y: x \in {\rm Im}(T), y \in {\rm Ker}(T) \}
$$</span></p>
<p>To visualize this, imagine <span class="math-container">${\rm Im}(T)$</span> is the <span class="math-container">$x$</span>-axis in <span class="math-container">$\mathbb{R}^3$</span>, and <span class="math-container">${\rm Ker}(T)$</span> is the <span class="math-container">$yz$</span>-plane. Then <span class="math-container">${\rm Im}(T)+{\rm Ker}(T)$</span> would be the entire <span class="math-container">$\mathbb{R}^3$</span>, since any vector in <span class="math-container">$\mathbb{R}^3$</span> can be written as a sum of a vector on <span class="math-container">$x$</span>-axis and a vector on <span class="math-container">$yz$</span>-plane. On the other hand, the union of the two would not contain any point outside the <span class="math-container">$x$</span>-axis and the <span class="math-container">$yz$</span>-plane.</p>
|
941,182 | <p>If I know <span class="math-container">$\text{Im}(T)$</span> and <span class="math-container">$\text{Ker }(T)$</span>, is <span class="math-container">$\text{Im}(T)+\text{Ker }(T)$</span> the union of the two vector spaces?</p>
<p>If not, how do I find the sum of the two vector spaces? It would be best if examples could be given. Thanks.</p>
| Kamster | 159,813 | <p>For an example, let <span class="math-container">$T:\mathbb{R}^2\rightarrow\mathbb{R}^2$</span> be defined such that
<span class="math-container">$$T(x,y)=T(x,0)$$</span>
so <span class="math-container">$T$</span> is essentially the projection of a vector in <span class="math-container">$\mathbb{R}^2$</span> on the x axis. Thus with this we have that
<span class="math-container">$${\rm Im}(T)=\{(a,0):a\in\mathbb{R}\}$$</span>
and
<span class="math-container">$${\rm Ker}(T)=\{(0,b):b\in\mathbb{R}\}$$</span>
Thus we have that <span class="math-container">${\rm Im}(T)\cup {\rm Ker}(T)$</span> is just set contains the <span class="math-container">$x$</span> axis and <span class="math-container">$y$</span> axis, but notice that <span class="math-container">${\rm Im}(T)+{\rm Ker}(T)=\{(a,b):a,b\in\mathbb{R}\}=\mathbb{R}^{2}$</span></p>
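<p>A tiny illustrative check of this example in Python (the helper names are hypothetical, just encoding the two membership tests):</p>

```python
# For the projection T(x, y) = (x, 0): Im(T) is the x-axis, Ker(T) the y-axis.
def in_image(p):
    return p[1] == 0   # on the x-axis

def in_kernel(p):
    return p[0] == 0   # on the y-axis

w = (3.0, -2.0)
# w lies in Im(T) + Ker(T): it splits as (3, 0) + (0, -2) ...
assert in_image((w[0], 0.0)) and in_kernel((0.0, w[1]))
# ... but w lies in neither subspace, so it is not in the union
assert not in_image(w) and not in_kernel(w)
```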
|
2,919,683 | <p>I am using the Monte Carlo method to evaluate the following integral:
$$\int_0^\infty \frac{x^4\sin(x)}{e^{x/5}} \, dx $$
I transformed variables using $u=\frac{1}{1+x}$ so I have the following finite integral:
$$\int_0^1 \frac{(1-u)^4 \sin\frac{1-u}{u}}{u^6e^{\frac{1-u}{5u}}} \, du $$
I wrote the following code in R:</p>
<pre><code>set.seed(666)
n <- 1e6
f <- function(x) ((1 - x)^4 * sin((1 - x)/x)) / (exp((1 - x)/(5*x)) * x^6)
x <- runif(n)
I <- sum(f(x))/n
</code></pre>
<p>But I get the wrong answer, but if I integrate f(x) using the R built-in function and not Monte Carlo I get the right answer.</p>
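<p>As a hedged cross-check (independent of R): evaluating the integral in closed form gives $\int_0^\infty x^4\sin(x)e^{-x/5}\,dx=\operatorname{Im}\frac{4!}{(1/5-i)^5}=\frac{142500000}{11881376}\approx 11.9936$, and a plain Simpson's rule in the original variable agrees. A uniform Monte Carlo estimate far from this value mainly reflects the estimator's very large variance on this oscillatory transformed integrand:</p>

```python
import math

def F(x):
    # original integrand: x^4 sin(x) e^(-x/5)
    return x**4 * math.sin(x) * math.exp(-x / 5)

# composite Simpson's rule on [0, 200]; the tail beyond 200 is
# negligible because of the e^(-x/5) factor
a, b, n = 0.0, 200.0, 20000            # n must be even
h = (b - a) / n
s = F(a) + F(b)
s += 4 * sum(F(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
s += 2 * sum(F(a + 2*i * h) for i in range(1, n // 2))
estimate = s * h / 3

exact = 142500000 / 11881376           # Im(24 / (1/5 - i)^5)
assert abs(estimate - exact) < 1e-3
```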
| Matematleta | 138,929 | <p>Use the following easily proved facts: a set $V\subseteq X$ is saturated with respect to $f$ if and only if there is a $U\subseteq Y$ such that $f^{-1}(U)=V,$ and then $V=f^{-1}(f(V))$</p>
<p>The claim is that if $f:X\to Y$ is continuous and surjective, then $f$ is a quotient map if and only if it takes saturated open (or closed) sets to open (closed) sets. Let's prove the "open" case. </p>
<p>$(\Rightarrow )$ If $S\subseteq X$ is open and saturated, then $f(S)$ is open because $f^{-1}(f(S))=S$ is open and $f$ is a quotient map.</p>
<p>$(\Leftarrow )$ Suppose $f$ is continuous and takes saturated open sets to open sets. Then, if $U\subseteq Y$ is open, so is $f^{-1}(U)$. On the other hand, if $f^{-1}(U)$ is open in $X$, then $f^{-1}(U)$ is saturated and so by assumption $f(f^{-1}(U))=U$ is open in $Y$ so $f$ is a quotient map.</p>
|
335,651 | <p>I'm having trouble proving $$\left(\frac{\sin(\frac{n\theta}{2})}{\sin(\frac{\theta}{2})}\right)^2=\left|\sum_{k=1}^{|n|}e^{ik\theta}\right|^2$$ where $n\in\mathbb{Z}$ and $\theta\in\mathbb{R}$. Can anyone suggest a hint?</p>
| Ángel Mario Gallegos | 67,622 | <p>In order to prove
$$\sum_{k=1}^{n}{\sin \left(\varphi + k\alpha \right)}=\frac{\sin\left(\frac{n\alpha}{2}\right)\cdot\sin\left(\varphi+\frac{\left(n+1\right)\alpha}{2}\right)}{\sin \frac{\alpha}{2}}$$
observe
$$\sin \frac{\alpha}{2}\cdot \sin \left(\varphi + k\alpha\right)=\frac{1}{2}\left[\cos \left(\varphi + k\alpha - \frac{\alpha}{2}\right)-\cos \left(\varphi+k\alpha+\frac{\alpha}{2}\right)\right]$$
because $\sin u \cdot \sin v = \frac{1}{2}\left[\cos \left(u-v\right) - \cos \left(u + v\right)\right]$.</p>
<p>Then $\sin \frac{\alpha}{2}\cdot\sum_{k=1}^{n}{\sin \left(\varphi + k\alpha \right)}$ is a telescopic series:
$$\sin \frac{\alpha}{2}\cdot\sum_{k=1}^{n}{\sin \left(\varphi + k\alpha \right)} = \frac{1}{2}\left[\cos \left(\varphi + \frac{\alpha}{2}\right)-\cos \left(\varphi + n\alpha +\frac{\alpha}{2}\right)\right].$$</p>
<p>And use $\sin u \cdot \sin v = \frac{1}{2}\left[\cos \left(u-v\right) - \cos \left(u + v\right)\right]$ again in order to reduce the last expresion.</p>
<p>Also, for the identity
$$\sum_{k=1}^{n}{\cos \left(\varphi + k\alpha \right)}=\frac{\sin\left(\frac{n\alpha}{2}\right)\cdot\cos\left(\varphi+\frac{\left(n+1\right)\alpha}{2}\right)}{\sin \frac{\alpha}{2}}$$
we can use the identity $\cos u \sin v =\frac{1}{2}\left[\sin \left(u+v\right)-\sin \left(u-v\right)\right]$:
$$\cos \left(\varphi + k\alpha\right)\cdot \sin \frac{\alpha}{2}=\frac{1}{2}\left[\sin \left(\varphi + k\alpha + \frac{\alpha}{2}\right) - \sin \left(\varphi + k\alpha - \frac{\alpha}{2}\right)\right].$$
And so on.</p>
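<p>The telescoped closed form $\sum_{k=1}^{n}\sin(\varphi+k\alpha)=\frac{\sin\left(\frac{n\alpha}{2}\right)\sin\left(\varphi+\frac{(n+1)\alpha}{2}\right)}{\sin\frac{\alpha}{2}}$ can be spot-checked numerically (illustrative Python, not part of the derivation):</p>

```python
import math

def lhs(phi, alpha, n):
    return sum(math.sin(phi + k * alpha) for k in range(1, n + 1))

def rhs(phi, alpha, n):
    # closed form: sin(n*a/2) * sin(phi + (n+1)*a/2) / sin(a/2)
    return (math.sin(n * alpha / 2) * math.sin(phi + (n + 1) * alpha / 2)
            / math.sin(alpha / 2))

for phi, alpha, n in [(0.3, 0.7, 5), (1.1, 2.0, 12), (-0.4, 0.25, 40)]:
    assert abs(lhs(phi, alpha, n) - rhs(phi, alpha, n)) < 1e-9
```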
|
1,531,755 | <p>Let $a\in [0,1)$. I want to show that $$\lim_{n\to \infty}{na^n}=0$$</p>
<p>My try: $$na^n={n\over e^{-(\log{a})n}}$$ and the limit has the indeterminate form $${+\infty\over +\infty}.$$
Hence by l'Hopital's rule we have that
$$\lim_{n\to \infty}{1\over -(\log{a})e^{-(\log{a})n}}={1\over +\infty}=0$$</p>
<p>Is there any other way to compute this limit? Thanks!</p>
| draks ... | 19,341 | <p><strong>HINT</strong> Expand $e^{-(\log a)n}$ and divide by $n$...</p>
|
4,004,827 | <p>I need to calculate:
<span class="math-container">$$\displaystyle \lim_{x \to 0^+} \frac{3x + \sqrt{x}}{\sqrt{1- e^{-2x}}}$$</span></p>
<p>It looks like I need to use the common limit:
<span class="math-container">$$\displaystyle \lim_{x \to 0} \frac{e^x-1}{x} = 1$$</span></p>
<p>So I take following steps:</p>
<p><span class="math-container">$$\displaystyle \lim_{x \to 0^+} \frac{3x + \sqrt{x}}{\sqrt{1- e^{-2x}}} = \displaystyle \lim_{x \to 0^+} \frac{-3x - \sqrt{x}}{\sqrt{e^{-2x} - 1}}$$</span></p>
<p>And I need to eliminate the root in the denominator and make the numerator equal to <span class="math-container">$-2x$</span>. But I don't know how.</p>
| Restless | 880,216 | <p><span class="math-container">$$\lim_{x \to 0^+} \frac{3x + \sqrt{x}}{\sqrt{1- e^{-2x}}} =\lim_{x \to 0^+}\frac{\sqrt{x}}{\sqrt{1- e^{-2x}}} \cdot(3\sqrt{x} + 1) = \lim_{x \to 0^+} \sqrt{\frac{x}{1- e^{-2x}}} \cdot\lim_{x \to 0^+} (3\sqrt{x} + 1)$$</span></p>
<p><span class="math-container">$$\lim_{x \to 0^+} \sqrt{\frac{x}{1- e^{-2x}}} = \lim_{x \to 0^+} \sqrt{\frac{-2x}{(2)( e^{-2x} -1)}}= \frac{1}{\sqrt{2}}$$</span></p>
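<p>A quick numeric check of this limit (illustrative only; <code>expm1</code> avoids cancellation error in $1-e^{-2x}$ for tiny $x$):</p>

```python
import math

def f(x):
    # -expm1(-2x) equals 1 - e^(-2x), computed without cancellation
    return (3*x + math.sqrt(x)) / math.sqrt(-math.expm1(-2*x))

# f(x) should approach 1/sqrt(2) ~ 0.70711 as x -> 0+
for x in (1e-6, 1e-8, 1e-10):
    assert abs(f(x) - 1 / math.sqrt(2)) < 1e-2
```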
|
200,093 | <p>I have a BLDC electric motor, which I'm currently trying to control via <code>PIDTune</code>. This is mostly an attempt to reduce (remove) a small runaway drift that ends up showing up in the motor signal <code>u[t]</code>.</p>
<p>I've modelled this via:</p>
<pre><code>ssm = StateSpaceModel[\[ScriptCapitalJ] \[Phi]''[t] + \[ScriptCapitalR] \[Phi]'[t] == \[ScriptCapitalT] u[t], {{\[Phi][t], 0}, {\[Phi]'[t], 0}, {u[t], 1}}, u[t], \[Phi]'[t], t]
</code></pre>
<p>And simulated: </p>
<pre><code>params = { \[ScriptCapitalJ] -> 4.63 10^-5, \[ScriptCapitalR] -> 1 10^-5, \[ScriptCapitalT] -> 0.0335};
Plot[Evaluate[OutputResponse[ssm /. params, 1, {t, 0, 12}]], {t, 0, 12}]
</code></pre>
<p><a href="https://i.stack.imgur.com/vSYOC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vSYOC.png" alt="plot"></a></p>
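<p>As a sanity check outside <em>Mathematica</em> (an illustrative sketch, not part of the question): with these parameters the plant is the first-order system $\mathcal{J}\dot\omega+\mathcal{R}\omega=\mathcal{T}u$, whose unit-step response is $\omega(t)=(\mathcal{T}/\mathcal{R})\left(1-e^{-\mathcal{R}t/\mathcal{J}}\right)$, and a forward-Euler integration reproduces it:</p>

```python
import math

J, R, T = 4.63e-5, 1e-5, 0.0335   # parameters from the question

def step_response(t, dt=1e-4):
    # forward-Euler integration of J*w' + R*w = T*u with u = 1, w(0) = 0
    w = 0.0
    for _ in range(round(t / dt)):
        w += dt * (T - R * w) / J
    return w

def analytic(t):
    # closed-form unit-step response of the first-order system
    return (T / R) * (1 - math.exp(-R * t / J))

for t in (1.0, 5.0, 12.0):
    assert abs(step_response(t) - analytic(t)) / analytic(t) < 1e-2
```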
<p>This is a nice model response and mirrors the response of the real motor almost exactly. </p>
<p>So I tried to create a control system to add to the control signal and bring the system relatively quickly back to zero. </p>
<pre><code>control = PIDTune[ssm /. params , {"PID"}]
</code></pre>
<p>But I continue to get the following error:</p>
<pre><code>PIDTune::infgains: Unable to compute finite controller parameters because a denominator in the tuning formula is effectively zero.
</code></pre>
<p>I have tried <em>all</em> tuning methods within the documentation, however I continue to get errors.</p>
<p>Changing to a "PD" control</p>
<pre><code>control = PIDTune[ssm /. params , {"PD"}]
</code></pre>
<p>Gives me control system, however when adding it to the feedback and then seeing the response I get a different error:</p>
<pre><code>simul = SystemsModelFeedbackConnect[ssm, control] /. params
OutputResponse[simul, UnitStep[t - 3], {t, 0, 12}]
OutputResponse::irregss: A solution could not be found for the irregular state-space model with a characteristic equation of zero.
</code></pre>
<p>The error messages don't really make sense to me, nor do they explain what the issue is with the model, given that it simulates reality quite well. How can I resolve these errors, or create a feedback loop via <code>PIDTune</code> for my system?</p>
<p>Thank you for the help!</p>
<p>There is a similar example with a dcmotor within the documentation for <code>PIDTune</code> for reference which works fine (albeit a different tfm):</p>
<pre><code>dcMotor = TransferFunctionModel[Unevaluated[{{k/(s ((j s + b) (l s + r) + k^2))}}], s, SamplingPeriod ->None, SystemsModelLabels -> {{None}, {None}}] /. pars;
PIDTune[dcMotor, "PID", "PIDData"]
</code></pre>
<p><strong>Update</strong></p>
<p>As per MK.'s suggestion, I have changed the ssm slightly, or rather rewritten it to use directly the equation of motion for the angular velocity omega, instead of the motor's angle phi. This change simplifies the ssm and allows <code>PIDTune</code> to come up with a solution.</p>
<p>As a small explanation, the ODE is derived via <a href="http://www.site.uottawa.ca/~rhabash/StateSpaceModelBLDC.pdf" rel="nofollow noreferrer">equation 6 of this paper</a> as a simplified motor model controlled via the amperage u[t]. It is a relatively 'standard' equation and can be found in many papers. J and R were found via nonlinear fitting of data from driving the motor at different amperages. As such, the model parameters J, T, R are quite accurate. </p>
<pre><code>ssmnew = StateSpaceModel[\[ScriptCapitalJ] \[Omega]'[t] + \[ScriptCapitalR] \[Omega][t] == \[ScriptCapitalT] u[t], {{\[Omega][t], 0}}, {{u[t]}}, {\[Omega][t]}, t]
control = PIDTune[ssmnew /. params, {"PID"}]
loop = SystemsModelFeedbackConnect[ssmnew, control] /. params
test1 = OutputResponse[loop, UnitStep[t - 4], {t, 0, 12}]
</code></pre>
<p>or </p>
<pre><code> test2 = OutputResponse[control /. params, UnitStep[t - 3], {t, 0, 10}]
</code></pre>
<p>Unfortunately, at this point I am now getting either new errors or a response that is completely wrong, using inputs of <code>UnitStep</code> or just <code>1</code>.</p>
<pre><code>NDSolve::ndsz: At t == 4.000000000000114`, step size is effectively zero; singularity or stiff system suspected.
</code></pre>
<p>or </p>
<pre><code>NDSolve::irfail: Unable to reduce the index of the system to 0 or 1.
</code></pre>
<p><a href="https://i.stack.imgur.com/Chmos.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Chmos.png" alt="plot4"></a></p>
| MK. | 61,732 | <p>Not an answer, but too long for a comment.</p>
<p>Your definition of <code>ssm</code> seems to not comply with the syntax listed in the documentation. It can be changed, for example, to</p>
<pre><code>ssm = StateSpaceModel[
\[ScriptCapitalJ] \[Phi]''[t] + \[ScriptCapitalR] \[Phi]'[
t] == \[ScriptCapitalT] u[t],
{{\[Phi][t], 0}, {\[Phi]'[t], 0}}, {{u[t], 1}}, {\[Phi][
t](*,\[Phi]'[t]*)}, t]
</code></pre>
<p>Then it produces no errors when evaluating</p>
<pre><code>control = PIDTune[ssm /. params, {"PID"}]
</code></pre>
<p>But for the parameters given, the response function has a different plot from the one resulting from your definition of <code>ssm</code>. So if you can change the <code>ssm</code> so that it has valid syntax and still produces a response that is close to that of the real motor, then you are done. If not, I probably cannot help much more, but feel free to elaborate on your endeavours. For example, how have you obtained the model for your motor? Parameters? Are there any other requirements besides giving a proper response function?</p>
|
789,407 | <p>If the roots of the equation $$ax^2-bx+c=0$$ lie in the interval $(0,1)$, find the minimum possible value of $abc$. </p>
<p><strong>Edit:</strong> I forgot to mention in the question that $a$, $b$, and $c$ are natural numbers. Sorry for the inconvenience.<br>
<strong>Edit 2:</strong> As Hagen von Eitzen said about the double roots not allowed, I forgot to mention that too. Extremely sorry :(</p>
<blockquote>
<p>I tried to use $D > 0$, where $D$ is the discriminant, but I can't analyze it further in terms of the coefficients. Thanks in advance!</p>
</blockquote>
| vadim123 | 73,324 | <p>If you multiply the equation by $k$, you get $$(ka)x^2-(bk)x+(ck)=0$$
This new equation has the same roots as the original, hence in $(0,1)$, but has the product of <em>its</em> coefficients $k^3abc$. By letting $k\to\pm \infty$ (depending on whether $abc>0$ or $abc<0$), you can make this product as small as you like. Hence the answer is $-\infty$.</p>
|
3,272,738 | <p>I've been trying to make sense of these two integrals; somehow the results seem intuitive, yet they are very hard to compute. We define</p>
<p><span class="math-container">$$
f(x)=\frac{1}{4\pi}\delta(|x|-R)$$</span>
and then note that
<span class="math-container">$$
-\frac{1}{2}\int\int\frac{f(x)f(y)}{|x-y|}\,dx\,dy=-\frac{1}{2R}$$</span>
and </p>
<p><span class="math-container">$$\int \frac{f(x)}{|x-y|}dx=\frac{1}{|y|}\quad \text{if }|y|\geq R$$</span></p>
<p>the integration is over <span class="math-container">$R^3$</span>, and <span class="math-container">$\delta$</span> is the Dirac delta function.
I'd appreciate any help with this.</p>
| StubbornAtom | 321,264 | <p>By the <a href="https://en.wikipedia.org/wiki/Law_of_total_expectation" rel="nofollow noreferrer">law of total expectation</a>,</p>
<p><span class="math-container">$$E[XY]=E\left[E\,[XY\mid X]\right]=E\left[XE\,[Y\mid X]\right]$$</span></p>
<p>, where you are given that <span class="math-container">$E\,[Y\mid X]=1.4$</span> (which also implies <span class="math-container">$E\,[Y]=1.4 $</span> by the same result).</p>
<p>And note that by <a href="https://en.wikipedia.org/wiki/Law_of_the_unconscious_statistician" rel="nofollow noreferrer">this</a> theorem,</p>
<p><span class="math-container">$$E\,[XY]=\iint xy f_{X,Y}(x,y)\,\mathrm{d}x\,\mathrm{d}y$$</span></p>
<p>, where <span class="math-container">$f_{X,Y}(x,y)=f_{Y\mid X=x}(y)f_{X}(x)$</span> is the joint density of <span class="math-container">$(X,Y)$</span>.</p>
|
3,386,371 | <p>Find the explicit form of
<span class="math-container">$$
\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n(n+2)}x^{n-1}.
$$</span></p>
<p>Let <span class="math-container">$S(x)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n(n+2)}x^{n-1}$</span>. It has radius of convergence <span class="math-container">$1$</span>.</p>
<p>Let <span class="math-container">$S_1(x)=xS(x)$</span>. Then <span class="math-container">$S_1'(x)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{(n+2)}x^{n-1}$</span> for <span class="math-container">$|x|<1$</span>.</p>
<p>Let <span class="math-container">$S_2(x)=x^3S_1'(x)$</span>. Then <span class="math-container">$S_2'(x)=\sum_{n=1}^{\infty}(-1)^{n-1}x^{n+1}=\frac{x^2}{1+x}$</span>.</p>
<p>By integration, I obtained <span class="math-container">$S_1'(x)=\frac{1}{2x}-\frac{1}{x^2}+\frac{\ln (x+1)}{x^3}$</span>. Then how do I obtain <span class="math-container">$S(x)$</span>? Or is there another method for this problem?</p>
| Ross Millikan | 1,827 | <p>Stirling is a reasonable approach here. We have
<span class="math-container">$$\frac{100!}{(50!)^2 2^{100}}\approx \frac {100^{100}e^{50}e^{50}\sqrt{2\pi 100}}{50^{50}50^{50}e^{100}2^{100}(2\pi 50)}=\frac 1{\sqrt{50\pi}}\approx \frac 1{7\cdot 1.8}=\frac 1{12.6}$$</span>
where I took <span class="math-container">$\sqrt{50} \approx 7$</span> and <span class="math-container">$\sqrt \pi \approx 1.8$</span>, because <span class="math-container">$\sqrt 3 \approx 1.732$</span> and <span class="math-container">$\pi$</span> is a little greater than <span class="math-container">$3$</span>. </p>
<p>I did this without checking with <a href="https://www.wolframalpha.com/input/?i=%2850%21%5E2%292%5E100%2F100%21" rel="nofollow noreferrer">Alpha</a>, which shows it is about <span class="math-container">$\frac 1{12.56}$</span></p>
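<p>With exact integer arithmetic this back-of-the-envelope estimate checks out (illustrative only):</p>

```python
import math

# 100! / (50!^2 * 2^100) is the central binomial coefficient over 2^100
exact = math.comb(100, 50) / 2**100
approx = 1 / math.sqrt(50 * math.pi)      # the Stirling estimate above

assert abs(exact - approx) < 1e-3         # ~0.07959 vs ~0.07979
assert abs(1 / exact - 12.56) < 0.05      # Alpha's "about 1/12.56"
```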
|
3,386,371 | <p>Find the explicit form of
<span class="math-container">$$
\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n(n+2)}x^{n-1}.
$$</span></p>
<p>Let <span class="math-container">$S(x)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n(n+2)}x^{n-1}$</span>. It has radius of convergence <span class="math-container">$1$</span>.</p>
<p>Let <span class="math-container">$S_1(x)=xS(x)$</span>. Then <span class="math-container">$S_1'(x)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{(n+2)}x^{n-1}$</span> for <span class="math-container">$|x|<1$</span>.</p>
<p>Let <span class="math-container">$S_2(x)=x^3S_1'(x)$</span>. Then <span class="math-container">$S_2'(x)=\sum_{n=1}^{\infty}(-1)^{n-1}x^{n+1}=\frac{x^2}{1+x}$</span>.</p>
<p>By integration, I obtained <span class="math-container">$S_1'(x)=\frac{1}{2x}-\frac{1}{x^2}+\frac{\ln (x+1)}{x^3}$</span>. Then how do I obtain <span class="math-container">$S(x)$</span>? Or is there another method for this problem?</p>
| Brian Tung | 224,454 | <p>Completely aside from the mental math estimation, you may see that the standard deviation of the number of heads is <span class="math-container">$\sqrt{100(0.5)(0.5)} = 5$</span>, which means that the probability of being between <span class="math-container">$45$</span> and <span class="math-container">$55$</span> is approximately <span class="math-container">$68/10 = 6.8$</span> percent, which we can bump up based on the fact that the peak is higher than the average (duh). This gives us a decent ballpark estimate.</p>
<hr>
<p>This seems as good an opportunity as any to plug my version of the <a href="https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule" rel="nofollow noreferrer">empirical rule</a>:</p>
<ul>
<li>The probability of being outside one standard deviation is approximately one in <span class="math-container">$\pi$</span>.</li>
<li>The probability of being outside two standard deviations is approximately one in <span class="math-container">$7\pi$</span>.</li>
<li>The probability of being outside three standard deviations is approximately one in <span class="math-container">$16e^\pi$</span>.</li>
</ul>
|
3,832,684 | <p>Does the following inequality hold?
<span class="math-container">$$\sqrt {x-z} \geq \sqrt x -\sqrt{z} \ , $$</span>
for all <span class="math-container">$x \geq z \geq 0$</span>.</p>
<p>My justification
<span class="math-container">\begin{equation}
z \leq x \Rightarrow \\ \sqrt z \leq \sqrt {x} \Rightarrow \\ 2\sqrt z \sqrt z \leq 2\sqrt z\sqrt {x} \Rightarrow \\ 2 z \leq 2\sqrt z\sqrt {x} \Rightarrow \\ z - 2\sqrt z\sqrt {x} + x \leq x - z \Rightarrow \\ (\sqrt x -\sqrt z )^2 \leq x - z \Rightarrow \\ \sqrt x -\sqrt z \leq \sqrt {x - z}
\end{equation}</span></p>
| user21820 | 21,820 | <p>While the other existing answers give simple algebraic reasons for this fact, it is actually far more useful in general to see this fact as a special case of the <strong>smoothing</strong> technique. In particular, for any concave function <span class="math-container">$f$</span> on domain <span class="math-container">$D⊆ℝ$</span>, we have that <span class="math-container">$f(a)+f(b) ≥ f(a')+f(b')$</span> for every <span class="math-container">$a,b,a',b'$</span> such that <span class="math-container">$a+b = a'+b'$</span> and <span class="math-container">$a' ≤ a,b ≤ b'$</span>. That is, pushing the points <span class="math-container">$a,b$</span> apart while preserving their sum decreases the total value of <span class="math-container">$f$</span> on them. In your case you simply have <span class="math-container">$(a,b) = (z,x-z)$</span> and <span class="math-container">$(a',b') = (0,x)$</span> and <span class="math-container">$f$</span> being the real-square-root function.</p>
<p>This general smoothing technique is extremely powerful if you know how to use it. For example it gives <a href="https://chat.stackexchange.com/transcript/77161?m=55470401#55470401">a one-line proof of AM-GM inequality</a>, and similarly a short proof of Jensen's inequality. In discrete mathematics it is sometimes called a swapping argument (<a href="http://math.stackexchange.com/a/1357293/21820">here</a> is an example usage). In real analysis, it can be used in conjunction with a compactness argument to prove theorems that can be quite difficult to prove without (such as the two continuous optimization theorems in <a href="http://math.stackexchange.com/a/868065/21820">this post</a>).</p>
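<p>As an illustration, the smoothing inequality for the concave square root, <span class="math-container">$\sqrt{a}+\sqrt{b}\ge\sqrt{a'}+\sqrt{b'}$</span> when <span class="math-container">$a+b=a'+b'$</span> and <span class="math-container">$a'\le a\le b\le b'$</span>, can be spot-checked numerically (this sketch is my addition):</p>

```python
import random
from math import sqrt

random.seed(0)
for _ in range(10_000):
    # pick 0 <= a' <= a <= b <= b' with a + b = a' + b'
    x = random.uniform(0.0, 10.0)    # common sum
    z = random.uniform(0.0, x / 2)   # inner point, so a <= b below
    a, b = z, x - z                  # pushed-together pair
    ap, bp = 0.0, x                  # pushed-apart pair, same sum
    assert sqrt(a) + sqrt(b) >= sqrt(ap) + sqrt(bp) - 1e-12
print("smoothing inequality holds on all samples")
```

<p>With <span class="math-container">$(a,b)=(z,x-z)$</span> and <span class="math-container">$(a',b')=(0,x)$</span> this is exactly <span class="math-container">$\sqrt z+\sqrt{x-z}\ge\sqrt x$</span>, i.e. the inequality in the question.</p>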
|
4,033,831 | <p>Example:</p>
<p><img src="https://i.stack.imgur.com/nPzJb.png" alt="Notation" /></p>
<p>From this we can tell no negative real number can be the image of any element of the domain. Thus it is not surjective, because the range is not equal to the codomain, which means the function associates any real number to a positive real number only. Is it better to write it this way <span class="math-container">$f : \Bbb{R} \to \Bbb{R}^+$</span> then?</p>
| Theo Bendit | 248,286 | <p>It depends on the context, but you're right that writing <span class="math-container">$f : \Bbb{R} \to \Bbb{R}^+$</span> contains superior information about the rule <span class="math-container">$x \mapsto |x|$</span>. There are a few reasons why you might choose to write a more general codomain than what is called for, for example:</p>
<ul>
<li>Sometimes a definition will tie you to a codomain. For example, the definition of function composition takes a function <span class="math-container">$f : A \to B$</span> and <span class="math-container">$g : B \to C$</span> and produces a function <span class="math-container">$g \circ f : A \to C$</span>, defined by <span class="math-container">$(g \circ f)(x) := g(f(x))$</span>. Now, if we take, for example, <span class="math-container">$g : \Bbb{R} \to \Bbb{R} : x \mapsto x^3 - x$</span>, then we can only really compose with <span class="math-container">$f$</span> if it has codomain <span class="math-container">$\Bbb{R}$</span>, to match the domain of <span class="math-container">$g$</span>. Of course, it's a simple thing to do to extend a codomain (or even limit the domain of <span class="math-container">$g$</span> to <span class="math-container">$\Bbb{R}^+$</span>), but one or the other should be done in order to make the composition kosher.</li>
<li>Sometimes the codomain is easy, but the range is unknown. As per Hagen von Eitzen's suggestion, we can take any object conjectured to exist (but whose existence is unknown) and turn it into a function to <span class="math-container">$\{0, 1\}$</span> with unknown range. For example:
<span class="math-container">$$f : \Bbb{N} \to \{0, 1\} : n \mapsto \begin{cases} 1 & \text{if $n$ is an odd perfect number} \\ 0 & \text{otherwise.} \end{cases}$$</span>
Nobody knows the range of this function, but we do know the codomain is <span class="math-container">$\{0, 1\}$</span>.</li>
<li>Sometimes, when you're writing mathematics, you choose a more general codomain, because writing the range as the codomain becomes an unproven assertion that demands proof, and writing a proof will interrupt the flow of the writing.</li>
<li>And sometimes there are pedagogical reasons. You might want to show your students that not every function needs to be surjective, to give the term "surjective" some meaning!</li>
</ul>
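<p>The second bullet can be made concrete: the rule is perfectly computable at any given input, even though the range of the function as a whole is unknown. A small sketch (my addition; the helper names are of course made up):</p>

```python
def is_perfect(n: int) -> bool:
    # n equals the sum of its proper divisors
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

def f(n: int) -> int:
    # 1 if n is an odd perfect number, 0 otherwise;
    # the codomain {0, 1} is clear, but nobody knows whether 1 is in the range
    return 1 if n % 2 == 1 and is_perfect(n) else 0

print([f(n) for n in [6, 15, 28, 33]])   # 6 and 28 are perfect but even -> all zeros
```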
|
2,120,539 | <p>Find the points of local maximum and minimun of the function:
$$f(x)=\sin^{-1}(2x\sqrt{1-x^2})~~~~;~~x\in (-1,1)$$
I know
$$f'(x)=-\frac{2}{\sqrt{1-x^2}}$$</p>
<p>How to find the local maximum and minimum? I have drawn the figure and seen the points of local maximum and minimum. But how to find them analytically?
<a href="https://i.stack.imgur.com/Z2fN6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z2fN6.jpg" alt="enter image description here"></a></p>
| egreg | 62,967 | <p>First of all, a bit of theory.</p>
<p>Suppose you have a function $F$ defined over the interval $(a,b)$ and increasing thereon. Suppose also that $G$ takes on values in $(a,b)$. Then, in order to find the points where $F(G(x))$ has a local maximum or minimum, it's sufficient to find the points where $G$ has a local maximum or minimum. This is because $F(G(x_1))<F(G(x_2))$ if and only if $G(x_1)<G(x_2)$, as $F$ is increasing.</p>
<hr>
<p>The arcsine is an increasing function, so you just need to find the local maxima and minima of $G(x)=2x\sqrt{1-x^2}$. You should already have worked out that $-1\le 2x\sqrt{1-x^2}\le 1$ for $x\in[-1,1]$ when you discussed the domain of $f$. With notation as before, $G(x)=2x\sqrt{1-x^2}$ and $F(t)=\arcsin t$.</p>
<p>Since
$$
G'(x)=2\sqrt{1-x^2}-\frac{2x^2}{\sqrt{1-x^2}}=\frac{2(1-2x^2)}{\sqrt{1-x^2}}
$$
you find that:</p>
<ol>
<li><p>$\lim_{x\to-1^+}G'(x)=-\infty=\lim_{x\to1^-}G'(x)$</p></li>
<li><p>$G'(x)$ vanishes at $-1/\sqrt{2}$ and at $1/\sqrt{2}$, being positive in the interval $(-1/\sqrt{2},1/\sqrt{2})$.</p></li>
</ol>
<p>Now you should be able to end: $G$ has a local minimum at $-1/\sqrt{2}$ and a local maximum at $1/\sqrt{2}$.</p>
<p>If you include $-1$ and $1$ in the domain of $f$, then $-1$ is a local maximum point and $1$ a local minimum point.</p>
<hr>
<p>As a side note, $f$ is not differentiable at $\pm1/\sqrt{2}$, but this is irrelevant.</p>
<p>Your computation of the derivative is wrong: it should be
$$
f'(x)=\frac{1}{\sqrt{1-(2x\sqrt{1-x^2})^2}}\cdot\frac{2(1-2x^2)}{\sqrt{1-x^2}}=
\frac{1-2x^2}{|1-2x^2|}\cdot\frac{2}{\sqrt{1-x^2}}
$$
Note that $\sqrt{(1-2x^2)^2}=|1-2x^2|$.</p>
<p>From this you could derive the same conclusions as before. The function is increasing where $1-2x^2>0$ and decreasing where $1-2x^2<0$.</p>
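<p>A quick numerical check of the extremum locations (my addition): maximizing <span class="math-container">$f(x)=\arcsin(2x\sqrt{1-x^2})$</span> on a grid should land near <span class="math-container">$x=1/\sqrt2$</span>.</p>

```python
from math import asin, sqrt

def f(x):
    t = 2 * x * sqrt(1 - x * x)
    return asin(max(-1.0, min(1.0, t)))   # clamp guards against rounding past 1

xs = [i / 100_000 for i in range(-99_999, 100_000)]   # grid inside (-1, 1)
x_max = max(xs, key=f)
x_min = min(xs, key=f)
print(round(x_max, 4), round(x_min, 4))   # close to 1/sqrt(2) and -1/sqrt(2)
```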
|
1,392,205 | <p>The equation of line $A$ is $3x + 6y - 1 = 0$. Give the equation of a line that passes through the point $(5,1)$ that is</p>
<ol>
<li><p>Perpendicular to line $A$.</p></li>
<li><p>Parallel to line $A$.</p></li>
</ol>
<p>Attempting to find the parallel,</p>
<p>I tried $$y = -\frac{1}{2}x + \frac{1}{6}$$</p>
<p>$$y - (1) = -\frac{1}{2}(x-5)$$</p>
<p>$$Y = -\frac{1}{2}x - \frac{1}{10} - \frac{1}{10}$$</p>
<p>$$y = -\frac{1}{2}x$$</p>
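<p>As a quick check of the arithmetic (my addition, not part of the original attempt), the point-slope computations can be done mechanically with exact fractions:</p>

```python
from fractions import Fraction as F

# line A: 3x + 6y - 1 = 0  ->  y = -(1/2)x + 1/6
slope_A = F(-3, 6)
px, py = F(5), F(1)            # the given point (5, 1)

# parallel line: same slope through (5, 1)
b_par = py - slope_A * px      # intercept: y = -(1/2)x + 7/2
# perpendicular line: negative reciprocal slope
slope_perp = -1 / slope_A      # slope 2
b_perp = py - slope_perp * px  # intercept: y = 2x - 9

print(slope_A, b_par, slope_perp, b_perp)   # -1/2 7/2 2 -9
```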
| Michael Hardy | 11,667 | <p>The parameter $p$, if that is construed in the usual way, would remain the same, and $r$ would be $36$.</p>
<p><b>But</b> that's not the best way to proceed. Find the expected value and variance of your geometric distribution. Multiply them by $36$. Those would be the expected value and variance of the distribution of the sum of independent random variables. Use the normal distribution with that same expected value and that same variance.</p>
|
1,817,681 | <p>Let $n=|\mathbb{Z}[i]/(1+i)|$</p>
<p>Need to show it is isomorphic to a field of order n once I find n.</p>
<p>I know $(1+i)$ is maximal in $\mathbb{Z}[i]$.</p>
<p>Kind of confused on quotients like this. Saw the other solution: 2/(1+i)=(1-i) etc. No idea what it means.</p>
<p>I know I might need to use the isomorphism theorem. Please no modular arithmetic.</p>
| Crostul | 160,300 | <p>Since $1+i\equiv 0$ in the quotient, we have $\overline{i}=-\overline{1}$; and since you know that $2=(1-i)(1+i)$, we also have $\overline{2}=\overline{0}$. Hence in the quotient
$$ \overline{a+bi} = \overline{a} + \overline{i}\,\overline{b} = \overline{a}-\overline{b} = \overline{a-b} = \overline{(a-b) \bmod{2}}$$
so the elements of $\Bbb{Z}[i]/(1+i)$ are $\overline{0},\overline{1}$.</p>
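<p>One can spot-check that $a+bi\mapsto (a-b)\bmod 2$ really respects addition and multiplication of Gaussian integers (a sketch I am adding, not from the original answer):</p>

```python
import random

def residue(a, b):
    # image of a + bi in Z[i]/(1+i), identified with Z/2
    return (a - b) % 2

random.seed(1)
for _ in range(1000):
    a, b, c, d = (random.randint(-50, 50) for _ in range(4))
    # (a+bi) + (c+di) = (a+c) + (b+d)i
    assert residue(a + c, b + d) == (residue(a, b) + residue(c, d)) % 2
    # (a+bi)(c+di) = (ac-bd) + (ad+bc)i
    assert residue(a * c - b * d, a * d + b * c) == (residue(a, b) * residue(c, d)) % 2
print("additive and multiplicative on all samples")
```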
|
1,817,681 | <p>Let $n=|\mathbb{Z}[i]/(1+i)|$</p>
<p>Need to show it is isomorphic to a field of order n once I find n.</p>
<p>I know $(1+i)$ is maximal in $\mathbb{Z}[i]$.</p>
<p>Kind of confused on quotients like this. Saw the other solution: 2/(1+i)=(1-i) etc. No idea what it means.</p>
<p>I know I might need to use the isomorphism theorem. Please no modular arithmetic.</p>
| Brent Kerby | 218,224 | <p>Hint: $\mathbb Z[i]\cong \mathbb Z[x]/(x^2+1)$. The image of the ideal $(1+i)$ under this isomorphism is $(1+x,x^2+1)/(x^2+1)$, hence $$\mathbb Z[i]/(1+i) \cong [\mathbb Z[x]/(x^2+1)]/[(1+x,x^2+1)/(x^2+1)] \cong \mathbb Z[x]/(1+x,x^2+1)$$</p>
|
634,132 | <p>Let $G$ be a cyclic group with $N$ elements. Then it follows that</p>
<p>$$N=\sum_{d|N} \sum_{g\in G,\text{ord}(g)=d} 1.$$</p>
<p>I simply cannot understand this equality. I know that for every divisor $d|N$ there is a unique subgroup of $G$ of order $d$, which has $\phi(d)$ generators. But how come when you add all these together you end up with the number of elements in the group $G$? </p>
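<p>The identity just says that every element of $G$ is counted exactly once, grouped by its order; for cyclic groups it is easy to verify directly (sketch, my addition):</p>

```python
from math import gcd

def check(N):
    # in Z/N, the order of g is N / gcd(g, N)
    orders = [N // gcd(g, N) for g in range(N)]
    # count elements per order d; every order divides N
    per_divisor = {d: orders.count(d) for d in set(orders)}
    assert all(N % d == 0 for d in per_divisor)
    assert sum(per_divisor.values()) == N   # the identity in question
    return per_divisor

print(check(12))   # counts per order d equal phi(d), and they sum to 12
```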
| arkadeep | 120,499 | <p>What you have given here is a bivariate function.
Think of a 3-D coordinate system: for a function of two variables, the functional value lies on the axis perpendicular to the other two.
Now you have the equation f(a,b)=f(a,c), and the claim is that this can only hold if b=c.
But can we conclude anything like "if f(a,b)=f(a,c) then b must equal c" without knowing the actual function?
Here is an example: f(a,b)=(a^2)+(b^2) and f(a,c)=(a^2)+(c^2).
For the first function, if we put a=1 and b=1, then the functional value is 2. If in the second we put a=1 and c=(-1), then the functional value is also 2. But here <strong>b (b=1) is not equal to c (c=-1)</strong>, <strong>yet the functional values are the same!</strong>
So what do you think, my friend? Don't we need the actual characteristics of the function? </p>
|
3,041,632 | <p><span class="math-container">$X_n=4X_{n-1}+5$</span></p>
<p>How come the solution of this recurrence is this? </p>
<p><span class="math-container">$X_n=\frac{8}{3}\cdot 4^n-\frac{5}{3}$</span></p>
<p>I also have that <span class="math-container">$X_0=1$</span>.</p>
<p>I am using the telescoping method and I am trying to solve it like this:</p>
<p><span class="math-container">$X_n= 5 + 4X_{n-1}$</span></p>
<p><span class="math-container">$X_n= 5 + 4(5+4X_{n-2})$</span></p>
<p><span class="math-container">$X_n= 5 + 4\times5 + 4\times4\times X_{n-2}$</span></p>
<p>But this leads to me getting <span class="math-container">$5\times4^{n-1}\times4^n$</span>.</p>
<p>Can some please explain this to me? </p>
| Matthias | 626,460 | <p>If you calculate the first few terms explicitly, you will find that the <span class="math-container">$n$</span>th term is the sum of an exponential and a geometric series. For example,</p>
<p><span class="math-container">$$
X_3 = 4^3 + 5(4^2+4+1).
$$</span>
So in general,
<span class="math-container">$$
X_n = 4^n + 5\sum_{k=0}^{n-1}4^k = 4^n +5\frac{1-4^n}{1-4},
$$</span></p>
<p>which should simplify to the answer you gave.</p>
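<p>The closed form can be checked against the recurrence with exact arithmetic (sketch, my addition):</p>

```python
from fractions import Fraction as F

def closed_form(n):
    # X_n = (8/3) * 4^n - 5/3
    return F(8, 3) * 4 ** n - F(5, 3)

x = 1                  # X_0 = 1
for n in range(20):
    assert closed_form(n) == x
    x = 4 * x + 5      # X_{n+1} = 4 X_n + 5
print("closed form matches the recurrence for n = 0..19")
```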
|
315,551 | <p>So I'm going over my practice midterms (which all seem to have solutions like this one), </p>
<p><img src="https://i.stack.imgur.com/fC8Gu.png" alt="Image"></p>
<p>Can anyone help clarify this for me? I understand that you multiply by the reciprocal to get to line two. But after that I'm completely lost, I don't understand how:</p>
<p>$$x^{2} + 1 - [(x + h)^{2} + 1]$$</p>
<p>can become:</p>
<p>$$(x-(x+h))(x+x+h)$$</p>
<p>and so forth, I'm sorry if this is a stupid question the solution doesn't seem to explain it very well.</p>
| Cameron Buie | 28,900 | <p>This is one of those "verification left to the reader" moments.</p>
<p>If it helps, we can use the intermediate step that $$x^2+1-[(x+h)^2+1]=x^2-(x+h)^2,$$ so the conclusion follows from the fact that $a^2-b^2=(a-b)(a+b).$</p>
<p>Too, they didn't explain how they got from the third line to the fourth, but since $$(x-(x+h))(x+(x+h))=-h(2x+h),$$ that should be fairly straightforward. You'll run into this kind of thing a lot. If you're ever uncertain how they got there, just see if you can get there through some intermediate steps.</p>
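<p>Both factorization steps can be spot-checked numerically (my addition):</p>

```python
import random

random.seed(3)
for _ in range(1000):
    x = random.uniform(-10, 10)
    h = random.uniform(-1, 1)
    lhs = (x ** 2 + 1) - ((x + h) ** 2 + 1)
    mid = (x - (x + h)) * (x + (x + h))   # difference of squares
    rhs = -h * (2 * x + h)
    assert abs(lhs - mid) < 1e-9 and abs(mid - rhs) < 1e-9
print("both identities hold on all samples")
```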
|
1,063,352 | <p>$A$ and $B$ are sets and $\mathcal{F}$ is a family of sets. I'm trying to prove that</p>
<p>$\bigcap_{A \in \mathcal{F}}(B \cup A) \subseteq B \cup (\cap \mathcal{F})$</p>
<p>I start with "Let $x$ be arbitrary and let $x \in \bigcap_{A \in \mathcal{F}}(B \cup A)$, which means that $\forall C \in \mathcal{F}(x \in B \cup C)$. So, I need some set to plug in for $C$.</p>
<p>Looking at the goal, I need to prove that $x \in B \cup (\cap \mathcal{F})$, which is $x \in B \lor \forall C \in \mathcal{F}(x \in C)$. But I'm stuck here too because I need to break up the givens into cases in order to break up the goals into cases. I think.</p>
| Jack D'Aurizio | 44,121 | <p>Since:
$$\frac{5^n}{25^n+1}=\frac{1}{5^n}-\frac{1}{125^n}+\frac{1}{3125^n}-\ldots $$
we have:
$$\sum_{n=0}^{+\infty}\frac{5^n}{25^n+1}=\frac{1}{2}+\left(\frac{1}{4}-\frac{1}{124}+\frac{1}{3124}-\ldots\right)=\frac{1}{2}+\sum_{k=0}^{+\infty}\frac{(-1)^k}{5^{2k+1}-1}.$$</p>
<hr>
<p>Despite being easy to compute through Euler's acceleration technique, such a series does not have a nice closed expression. Otherwise, also the <a href="http://mathworld.wolfram.com/ReciprocalLucasConstant.html" rel="nofollow">reciprocal Lucas constant</a> would have one.</p>
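<p>Both sides of the identity converge quickly, so a numerical comparison is easy (sketch, my addition):</p>

```python
# left side: sum over n of 5^n / (25^n + 1)
lhs = sum(5 ** n / (25 ** n + 1) for n in range(60))
# right side: 1/2 + alternating sum over k of (-1)^k / (5^(2k+1) - 1)
rhs = 0.5 + sum((-1) ** k / (5 ** (2 * k + 1) - 1) for k in range(40))
print(lhs, rhs)   # both about 0.7422
```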
|
3,911,548 | <p><strong>If a,b,c,d are real numbers and <span class="math-container">$\frac{a}{b}+\frac{b}{c}+\frac{c}{d}+\frac{d}{a}=17$</span> and <span class="math-container">$\frac{a}{c}+\frac{c}{a}+\frac{b}{d}+\frac{d}{b}=20$</span>, then find the sum of all possible valuse of <span class="math-container">$\frac{a}{b}$</span>+<span class="math-container">$\frac{c}{d}$</span> ?</strong></p>
<p>I tried this problem for a while but made no progress. I don't know how <span class="math-container">$\frac{a}{b}+\frac{c}{d}$</span> can take only certain values. The answer was given to be <span class="math-container">$17$</span>. Can someone help me with this?</p>
| Ben Grossmann | 81,360 | <p><strong>Hint:</strong> Note that
<span class="math-container">$$
\frac ac + \frac ca + \frac bd + \frac db = 20 \implies\\
\frac ab \cdot \frac bc + \frac cd \cdot \frac da + \frac bc \cdot \frac cd + \frac da \cdot \frac ab = 20.
$$</span>
With that in mind, compare the expanded sums
<span class="math-container">$$
\left(\frac{a}{b}+\frac{b}{c}+\frac{c}{d}+\frac{d}{a}\right)^2, \quad
\left(\frac{a}{b}-\frac{b}{c}+\frac{c}{d}-\frac{d}{a}\right)^2.
$$</span></p>
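<p>Spelling out the hint (my addition): with $w=a/b$, $x=b/c$, $y=c/d$, $z=d/a$, the algebra behind it is the identity $(w+x+y+z)^2-(w-x+y-z)^2=4(w+y)(x+z)$, and the two given equations then pin down $(w+y)+(x+z)=17$ and $(w+y)(x+z)=20$. A numerical spot-check of the identity:</p>

```python
import random

random.seed(2)
for _ in range(1000):
    w, x, y, z = (random.uniform(-5, 5) for _ in range(4))
    lhs = (w + x + y + z) ** 2 - (w - x + y - z) ** 2
    rhs = 4 * (w + y) * (x + z)
    assert abs(lhs - rhs) < 1e-9
print("identity verified on random samples")
```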
|
3,282,895 | <p>How do I calculate <span class="math-container">$$\int_{0}^{2\pi} (2+4\cos(t))/(5+4\sin(t)) dt$$</span></p>
<p>I've recently started calculating integrals via the residue theorem. Somehow I'm stuck with this particular integral. I've substituted e^it and obtained two polynomials, but somehow I only get funny solutions. Could someone please help me find the residues?</p>
| Maurizio Moreschi | 578,146 | <p>As mentioned in the comment, the following gives you an alternative way to compute the integral without using complex analysis.</p>
<p>I sketch my idea and let you finish the computation yourself. Please tell me in you need more details.</p>
<p>First, using that <span class="math-container">$\cos(t+\pi)=-\cos(t)$</span> and <span class="math-container">$\sin(t+\pi)=-\sin(t)$</span>, you get</p>
<p><span class="math-container">$$ \int_{0}^{2\pi} \frac{2+4\cos(t)}{5+4\sin(t)}dt=\int_{-\pi}^{\pi} \frac{2-4\cos(t)}{5-4\sin(t)}dt.$$</span></p>
<p>Let <span class="math-container">$u=\tan(t/2)$</span>. Then</p>
<p><span class="math-container">$$ \frac{dt}{du}=\frac{2}{1+u^2}$$</span></p>
<p>and, by the parametric formulas,</p>
<p><span class="math-container">$$\frac{2-4\cos(t)}{5-4\sin(t)}=\frac{2-4\frac{1-u^2}{1+u^2}}{5-4\frac{2u}{1+u^2}}=\frac{6u^2-2}{5u^2-8u+5}.$$</span></p>
<p>Therefore</p>
<p><span class="math-container">$$ \int_{-\pi}^{\pi} \frac{2-4\cos(t)}{5-4\sin(t)}dt=4\int_{-\infty}^{+\infty} \frac{3u^2-1}{(5u^2-8u+5)(u^2+1)}du.$$</span></p>
<p>You probably know how to continue from here, right?</p>
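<p>For reference, a direct numerical integration of the original integral (my addition; the target value <span class="math-container">$4\pi/3$</span> is my own computation, quoted only as a sanity check):</p>

```python
from math import cos, sin, pi

def f(t):
    return (2 + 4 * cos(t)) / (5 + 4 * sin(t))

# midpoint rule; for a smooth periodic integrand this converges very fast
n = 10_000
h = 2 * pi / n
integral = sum(f((k + 0.5) * h) for k in range(n)) * h
print(integral, 4 * pi / 3)   # both about 4.18879
```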
|
83,246 | <p>Let H be a separable and infinite-dimensional Hilbert space and let B be the closed ball
of H having unit radius, whose center is at the origin h of H. Suppose one would like to
know how much of B can be "filled up" by any of its compact subsets-since B itself
(although closed and bounded) is not compact. Let E be the set of all positive real
numbers z for which there exists a compact subset C of B such that all points of B lie at
a distance from C (in the metric of H) which is not greater than z. The greatest lower bound of E would be a measure of this "filling up". My question is: what is this greatest
lower bound?
I believe that it is 1 but cannot prove it. Clearly 1 belongs to E since we can take for
C any compact subset of B that contains h. I can prove that no positive real number less
than one-half the square root of 2 belongs to E. But this is as far as I have been able to
get. If 1 is the right answer, it would show that no compact subset of B can "fill up" any
more of B than the set containing only the point h.</p>
| Alessandro Sisto | 9,342 | <p>Instead of using compact subsets one can just use finite subsets. Now, consider any finite subset of $B$: it is contained in a finite dimensional subspace $V$. There is a unit vector perpendicular to $V$, and such vector has distance at least $1$ from the given finite subset.</p>
|
3,528,237 | <p>I am just being introduced to quantifiers in logic and my lecturer was going through the following two statements. The question is to determine which, if any, is/are true.</p>
<ol>
<li><span class="math-container">$(\forall x \in \mathbb{R})(\exists y \in \mathbb{R})[x + y = 0]$</span></li>
<li><span class="math-container">$(\exists x \in \mathbb{R})(\forall y \in \mathbb{R})[x + y = 0]$</span></li>
</ol>
<p>Clearly, the first statement is true; we can just let <span class="math-container">$y = -x$</span>. However, my lecturer says that the second statement is false. I cannot wrap my head around why that is the case. If we can take <span class="math-container">$y = -x$</span> in the first, why can we not do the same for the second i.e. let <span class="math-container">$x = -y$</span>? In fact, how is the second statement any different from the first?</p>
<p>Any intuitive explanations/examples would be greatly appreciated!</p>
| Mostafa Ayaz | 518,023 | <p>The second one is false because <strong>there is no real number <span class="math-container">$x$</span> such that its addition with all real numbers results in <span class="math-container">$0$</span>.</strong> You can take <span class="math-container">$y=x^2+1$</span> for each <span class="math-container">$x$</span> and observe that <span class="math-container">$$x+y=x^2+x+1\ne 0$$</span></p>
|
2,791,087 | <p>I have the following density function:
$$f_{x, y}(x, y) = \begin{cases}2 & 0\leq x\leq y \leq 1\\ 0 & \text{otherwise}\end{cases}$$</p>
<p>We know that $\operatorname{cov}(X,Y) = E[(Y - EY)(X - EX)]$, therefore we need to calculate E[X] and E[Y]. </p>
<p>$$f_x(x)=\int_x^1 2\,\mathrm dy = \big[2y\big]_x^1 = 2-x, \forall x\in[0, 1]$$</p>
<p>$$E[X] = \int_0^1 x (2-x)\,\mathrm dx = \int_0^1 2x - x^2\,\mathrm dx= \left[\frac{2x^2}{2}-\frac{x^3}{3}\right]_0^1 = 1 - \frac{1}{3} = \frac23 $$</p>
<p>$$f_y(y) = \int_0^y 2\,\mathrm dx = \big[2x\big]_0^y = 2y, \forall y\in [0, 1]$$</p>
<p>$$E[Y] = \int_0^1 y\cdot2y\,\mathrm dy= \int_0^1 2y^2\,\mathrm dy= \left[\frac{2y^3}{3}\right]_0^1 = \frac23$$</p>
<p>However, the <strong>provided solution</strong> states that $E[X]=\dfrac13$. Have I made a mistake, or is the solution wrong?</p>
<p>The continuation of the solution is: </p>
<p>$$\mathrm{cov}(X,Y) = \int_0^1\int_x^1(x-\frac 13)(y- \frac 23) \times 2\,\mathrm dy\,\mathrm dx$$</p>
<p>Where does the $\underline{2\,\mathrm dy\,\mathrm dx}$ come from?</p>
| caverac | 384,830 | <p>In general</p>
<p>$$
\mathbb{E}[g(X,Y)] = \iint_{\mathbb{R}^2}{\rm d}x{\rm d}y~ g(x,y)\color{blue}{f_{X,Y}(x,y)} = \color{blue}{2}\int_0^1 {\rm d}x\int_x^1{\rm d}y~ g(x,y)
$$</p>
<p>so that</p>
<p>\begin{eqnarray}
\mathbb{E}[X] = 2\int_0^1 {\rm d}x\int_x^1{\rm d}y~ x = 2\int_0^1{\rm d}x~x(1-x) = \frac{1}{3}
\end{eqnarray}</p>
<p>and</p>
<p>\begin{eqnarray}
\mathbb{E}[Y] = 2\int_0^1 {\rm d}x\int_x^1{\rm d}y~ y = 2\int_0^1{\rm d}x~\frac{1}{2}(1-x^2) = \frac{2}{3}
\end{eqnarray}</p>
<p>The covariance is then</p>
<p>\begin{eqnarray}
{\rm Cov}[X,Y] &=& \mathbb{E}[(X-1/3)(Y-2/3)] = 2\int_0^1 {\rm d}x\int_x^1{\rm d}y~ (x-1/3)(y-2/3) = \frac{1}{36}
\end{eqnarray}</p>
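<p>These integrals are easy to confirm numerically (sketch, my addition):</p>

```python
# midpoint-rule integration of g(x, y) over the triangle 0 <= x <= y <= 1
def triangle_integral(g, n=400):
    total = 0.0
    hx = 1.0 / n
    for i in range(n):
        x = (i + 0.5) * hx
        hy = (1.0 - x) / n           # inner grid spacing over [x, 1]
        for j in range(n):
            y = x + (j + 0.5) * hy
            total += g(x, y) * hx * hy
    return total

density = 2.0
ex = triangle_integral(lambda x, y: density * x)
ey = triangle_integral(lambda x, y: density * y)
cov = triangle_integral(lambda x, y: density * (x - 1/3) * (y - 2/3))
print(round(ex, 4), round(ey, 4), round(cov, 4))   # about 0.3333, 0.6667, 0.0278
```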
|
250,119 | <p>I'd like to show that if a set $X$ is Dedekind finite then it is finite if we assume $(AC)_{\aleph_0}$. A set $X$ is called Dedekind finite if the following equivalent conditions are satisfied: (a) there is no injection $\omega \hookrightarrow X$; (b) every injection $X \to X$ is also a surjection.</p>
<p>Countable choice $(AC)_{\aleph_0}$ says that every countable family of non-empty, pairwise disjoint sets has a choice function. </p>
<p>There is the following theorem: </p>
<p><img src="https://i.stack.imgur.com/yKpuR.png" alt="enter image description here"></p>
<p>from which I can prove what I want as follows: Pick an $x_0 \in X$. Define $G(F(0), \dots, F(n-1)) = \{x_0\}$ if $x_0 \notin \bigcup F(k)$ and $G(F(0), \dots , F(n-1)) = X \setminus \bigcup F(k)$ otherwise. Also, $G(\varnothing) = \{x_0\}$. Let $F: \omega \to X$ be as in the theorem. Then $F$ is injective by construction. </p>
<p>The problem with that is that I suspect that the proof of theorem 24 needs countable choice. So what I am after is the following: consider the generalisation of theorem 24: </p>
<p><img src="https://i.stack.imgur.com/z8hQB.png" alt="enter image description here"></p>
<p>(note the typo in $(R^\ast)$, it should be $F(z) \in G^\ast (F \mid I(z), z)$), and its proof (assuming AC): </p>
<p><img src="https://i.stack.imgur.com/Xtzf1.png" alt="enter image description here"></p>
<p>I want to modify this proof to prove the countable version of the theorem. But I can't seem to manage. I need a countable set $\{G^\ast \mid \{\langle f,z \rangle \} : \langle f,z \rangle \in dom(G^\ast) \}$. Ideas I had were along the lines of picking $f_0(x) = x_0$ the constant function and then to consider $\{G^\ast \mid \{\langle f_0,n \rangle \} : \langle f_0,n \rangle \in dom(G^\ast) \}$ but what then?</p>
<p>Thanks for your help.</p>
| Zhen Lin | 5,191 | <p>You will have to restrict the language, obviously, because $P$ could be the proposition "the base field is $\mathbb{C}$". As André Nicolas has already mentioned, if $P$ is a proposition in the <em>first-order</em> language of fields then anything that holds for $\mathbb{C}$ holds for any algebraically closed field of characteristic $0$. This, however, is much less impressive than it sounds: first-order logic cannot express many things we take for granted like "there are uncountably many elements". </p>
<p>However, there are more powerful results from model theory that give stronger transfer principles. Hodges [<em>Model theory</em>, §A.5] writes:</p>
<blockquote>
<p>Finally there is an old heuristic principle which Weil [1946] called <strong>Lefschetz' principle</strong>. According to Weil (p. 242f),</p>
<blockquote>
<p>… for a given value of the characteristic $p$, every result, involving only a finite number of points and varieties, which can be proved for some choice of the universal domain … is true without restriction; <em>there is but one algebraic geometry of characteristic $p$</em>, for each value of $p$, and not one algebraic geometry for each choice of the universal domain. In particular, as S. Lefschetz has observed on various occasions, whenever a result, involving only a finite number of points and varieties, can be proved in the ‘classical case’ where the universal domain is the field of all complex numbers, it remains true whenever the characteristic is $0$ …</p>
</blockquote>
<p>This looks as if it should be a model-theoretic principle, and several writers have suggested what model-theoretic principle it might be. The most convincing proposals are those of <a href="http://dx.doi.org/10.1016/0021-8693%2869%2990117-3" rel="nofollow">Barwise &amp; Eklof [1969]</a> using $L_{\omega_1 \omega}$, and of <a href="http://dx.doi.org/10.2307/2039433" rel="nofollow">Eklof [1973]</a> using $L_{\infty \omega}$; [...]</p>
</blockquote>
<p>So what exactly do the cited results say? First things first: $L_{\infty \omega}$ refers to the language of <em>infinitary</em> first-order logic, i.e. the logic where conjunctions and disjunctions of arbitrary <em>sets</em> of formulae are allowed, but any string of consecutive quantifiers must be finite. $L_{\infty \omega}$ is more expressive than finitary first-order logic (known as $L_{\omega \omega}$ in symbols): it is possible to express in $L_{\infty \omega}$ (with the help of sufficiently many constant symbols) the proposition "there are at most $\kappa$ elements", for any cardinal $\kappa$. This is known to be impossible in $L_{\omega \omega}$ when $\kappa$ is any infinite cardinal, essentially by the upward Löwenheim–Skolem theorem. $L_{\omega_1 \omega}$ is the fragment of $L_{\infty \omega}$ where only countable conjunctions and disjunctions are allowed. We say that two structures are $L_{\infty \omega}$-equivalent if they satisfy exactly the same sentences over $L_{\infty \omega}$. </p>
<p>Now let $\mathcal{C}$ be the category of structures for a many-sorted first-order signature $\Sigma$, with $\Sigma$-homomorphisms as the morphisms of $\mathcal{C}$. (As usual this means a map that preserves the interpretation of function symbols and relation symbols in $\Sigma$.) An embedding is an injective homomorphism that <em>reflect</em> the interpretations of the relation symbols in $\Sigma$. Let $\mathcal{U}_p$ be the category of universal domains of characteristic $p$. An <strong>$\omega$-local functor</strong> $F : \mathcal{U}_p \to \mathcal{C}$ is a functor satisfying these conditions:</p>
<ul>
<li>The image under $F$ of any homomorphism in $\mathcal{U}_p$ is an embedding in $\mathcal{C}$.</li>
<li>Given a field extension $U' \subseteq U$ in $\mathcal{U}_p$ and any finite subset $X$ of $F U$, we can find an intermediate extension $U' \subseteq U'' \subseteq U$ in $\mathcal{U}_p$ such that $U''$ is of finite transcendence degree over $U'$, and if $j : U'' \hookrightarrow U$ is the inclusion, $X$ is contained in the image of $F U''$ under $F j$. </li>
</ul>
<p>It turns out that a functor $F : \mathcal{U}_p \to \mathcal{C}$ is $\omega$-local if and only if it preserves directed colimits (and so if and only if it preserves filtered colimits; see [Adámek and Rosický, <em>LPAC</em>, Thm. 1.5]). Eklof's 1973 result is the following:</p>
<p><strong>Theorem.</strong> If $F : \mathcal{U}_p \to \mathcal{C}$ is an $\omega$-local functor, then $F U_1$ is $L_{\infty \omega}$-equivalent to $F U_2$ for any $U_1$ and $U_2$ in $\mathcal{U}_p$.</p>
<p>To apply this to algebraic geometry, we take $F$ to be the functor that takes a universal domain $U$ to the fragment of "geometry" we are interested in; for example, we could take $A = F U$ to have the following elements:</p>
<ul>
<li>$A_0$ is the set of integers.</li>
<li>$A_1 = \bigcup_{n < \omega} U^n$ is the union of all finite-dimensional affine spaces over $U$.</li>
<li>$A_2 = \bigcup_{n < \omega} U [x_1, \ldots, x_n]$ is the union of all finitely-generated polynomial rings over $U$.</li>
<li>$A_3 = \bigcup_{n < \omega} \{ I \triangleleft U [x_1, \ldots, x_n] \}$ is the set of all ideals over all finitely-generated polynomial rings over $U$.</li>
<li>$A_4$ is the set of affine varieties in $U$ (concretely realised as subsets of $A_1$, I suppose).</li>
<li>$A_5$ is the set of abstract varieties. (There is only a set of them because each one is covered by finitely many open affine subvarieties.)</li>
<li>etc.</li>
</ul>
<p>The relations of $A$ will be those expressing the propositions of interest; for example, we could have a relation that encodes the proposition "$V$ is an $n$-dimensional variety", where $V$ is a variable of type $A_5$ and $n$ is a variable of type $A_0$. These should be chosen so that $F$ actually defines a functor, i.e. so that extending the universal domain doesn't change the truth value of the propositions of interest. Then Eklof's theorem tells us that all of these "geometries" are $L_{\infty \omega}$-equivalent: in essence, it tells us that the propositions of "geometry" that are independent of the choice of universal domain are those preserved by extension of universal domain, and the class of these propositions is closed under a rich class of logical connectives.</p>
|
890,313 | <p>Say the probability of an event occurring is 1/1000, and there are 1000 trials.</p>
<p>What's the expected number of events that occur? </p>
<p>I got to an answer in a quick script by doing the above 100,000 times and averaging the results. I got 0.99895, which seems like it makes sense. How would I use math to get right to this answer? The only thing I can think of to calculate is the probability that an event never occurs, which would be 0.999^1000, but I am stuck there. </p>
| beep-boop | 127,192 | <p>In general, the expected value (i.e. the expected number of events) is</p>
<h2>$$\boxed{E=np},$$</h2>
<p>where $n$ is the number of trials and $p$ is the probability of success during each trial.</p>
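<p>The identity can be confirmed by summing $k\,P(X=k)$ over the binomial pmf directly (sketch, my addition):</p>

```python
from math import comb

n, p = 1000, 1 / 1000

# E[X] = sum over k of k * C(n, k) * p^k * (1-p)^(n-k)
expected = sum(k * comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1))
print(expected)   # very close to n * p = 1.0, matching the simulation estimate
```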
|
5,586 | <p>I'm in my last year of high school, and I'm aiming for a perfect grade in maths. The problem is that this year is the hardest year of maths I have ever faced in my entire life, especially differentiation and limits, as it's the first time I am studying them. Here are the lessons that are required for the first semester:</p>
<ul>
<li>Limit of a function at a point</li>
<li>Limits Theorems</li>
<li>Limits of fractional functions</li>
<li>Limits of Trigonometric functions</li>
<li>Limits at Infinity</li>
<li>Continuity at a point</li>
<li>Continuity on an interval</li>
<li>Rate of Change</li>
<li>First derivative</li>
<li>Continuity and differentiation</li>
<li>Differentiation Rules</li>
<li>Derivatives of Higher Order</li>
<li>The chain rule</li>
<li>Implicit differentiation</li>
<li>Geometric applications of differentiation</li>
<li>Physical applications of differentiation</li>
<li>Related Rates</li>
<li>Increasing and Decreasing functions</li>
<li>Extreme Values.</li>
</ul>
<p>Limits are relatively easy. However, related rates and extreme values are disgustingly difficult. Is there any way to make those two lessons easy and routine? Something like a book filled with questions on those two or something.</p>
<p>Thanks.</p>
| alternative | 3,250 | <p>Solve problems.</p>
<p>That's literally the only way to get better in math. Do more problems. Don't do the same problem with different numbers, do harder and harder problems until you are mentally exhausted, take a break, and delve in again.</p>
|
5,586 | <p>I'm in my last year of high school, and I'm aiming for a perfect grade in maths. The problem is that this year is the hardest year of maths I have ever faced in my entire life, especially differentiation and limits, as it's the first time I am studying them. Here are the lessons that are required for the first semester:</p>
<ul>
<li>Limit of a function at a point</li>
<li>Limits Theorems</li>
<li>Limits of fractional functions</li>
<li>Limits of Trigonometric functions</li>
<li>Limits at Infinity</li>
<li>Continuity at a point</li>
<li>Continuity on an interval</li>
<li>Rate of Change</li>
<li>First derivative</li>
<li>Continuity and differentiation</li>
<li>Differentiation Rules</li>
<li>Derivatives of Higher Order</li>
<li>The chain rule</li>
<li>Implicit differentiation</li>
<li>Geometric applications of differentiation</li>
<li>Physical applications of differentiation</li>
<li>Related Rates</li>
<li>Increasing and Decreasing functions</li>
<li>Extreme Values.</li>
</ul>
<p>Limits are relatively easy. However, related rates and extreme values are disgustingly difficult. Is there any way to make those two lessons easy and routine? Something like a book filled with practice questions on those two topics, perhaps.</p>
<p>Thanks.</p>
| Axiomaric | 7,893 | <p>I feel that the excellent answers given already may not do enough to discourage the idea that you <em>need</em> to be perfect. While I do understand your aim towards a perfect grade, you need to embrace the possibility that this may not happen and to not let that deter you from continuing your math education with the same passion you've shown so far. Grades are an arbitrary measure, which have their place in education, but they must not become what defines you; you are not your grade. It is from failure that we build the most astounding successes. Mathematical perfection in a student is beautiful in the way the smell of a flower makes the tree beautiful: will you reject the refreshing taste of its fruit, the intricate pattern of its leaves, the impressive strength of its root because its flower didn't smell how you expected? </p>
<p>Embrace rigor, and never settle for less than your best, but seeking perfection during your mathematical exploration above all else may cloud the wonderful growth it may spur in you whenever you meet failure, the same failure even your teachers met, as they were preparing to guide you through the wonderful journey ahead.</p>
|
649,239 | <p>By <a href="http://en.wikipedia.org/wiki/Post%27s_theorem" rel="nofollow">Post's Theorem</a> we know that a set $A\subseteq\mathbf{N}$ is recursively enumerable iff it is definable by a $\Sigma_1$-formula, i.e. there exists a $\Sigma_1$-formula $\varphi(x)$ with $x$ free such that for every number $n$:
\[
n\in A\longleftrightarrow \mathfrak{N}\vDash\varphi(\overline{n})
\]
where $\mathfrak{N}$ is the standard model of the first-order language of Peano Arithmetic.</p>
<p>I have the following question: given an r.e. set $A$, can we always find a $\Sigma_1$-formula defining it?</p>
| Andreas Blass | 48,510 | <p>The answer to this question is that it depends on how we are "given" the r.e. set <span class="math-container">$A$</span>. In most situations, the answer is yes. For example, if we are given a Turing machine (or a C++ program or anything like that) to list all the elements of <span class="math-container">$A$</span>, then yes, we can convert that into a <span class="math-container">$\Sigma_1$</span> definition of <span class="math-container">$A$</span>. The same holds if we're given a Turing machine (or ...) that halts on any input <span class="math-container">$x$</span> iff <span class="math-container">$x\in A$</span> (as in Xoff's answer). The same goes if we're given a formula weakly representing <span class="math-container">$A$</span> in Peano Arithmetic (or in ZFC or in Robinson's Q or ...). But not if we're given <span class="math-container">$A$</span> by just a black box for deciding membership in <span class="math-container">$A$</span> (plus a promise that it is r.e.). And not if we're just given a definition of <span class="math-container">$A$</span> in the language of PA (or ZFC or ...) (plus again a promise that it's r.e.).</p>
|
530,484 | <p>Let $f:\mathbb{R}\rightarrow\mathbb{R}^2$ be a $C^1$ function. Prove that the image of $f$ contains no open set of $\mathbb{R}^2$.</p>
<p>So say $f(x)=(g(x),h(x))$. Since $f$ is $C^1$, we have that $g'(x),h'(x)$ both exist and are continuous functions in $x$. To show that $f$ contains no open set of $\mathbb{R}^2$, it suffices to show that $f$ does not contain any open ball in $\mathbb{R}^2$. Suppose, for contradiction, that it contains the ball centered at $(a,b)$ with radius $r$. How can I continue?</p>
| Luiz Cordeiro | 58,818 | <p>Let $F:\mathbb{R}^2\rightarrow\mathbb{R}^2$ be given by $F(x,y)=f(x)$. Then every point of $\mathbb{R}^2$ is a critical point of $F$ (that is $\det J(F)(x)=0$ for every $x\in\mathbb{R}^2$, where $J(F)(x)$ denotes the Jacobian of $F$ at $x$). By Sard's theorem, $f(\mathbb{R})=F(\mathbb{R}^2)$ has zero measure. Since non-empty open sets have positive measure, $f(\mathbb{R})$ cannot contain any open set.</p>
|
54,311 | <p>I found <a href="http://www.rle.mit.edu/dspg/documents/HilbertComplete.pdf" rel="nofollow">this paper</a> on Hilbert Transform, which is a very nice read. I've studied signal processing, but from a more practical than mathematical perspective. Can someone explain to me how we arrive at equation (2) in this paper?</p>
| Michael Hardy | 11,667 | <p>We have
$$
\oint X(v) H\left(\frac z v\right) v^{-1} \, dv.
$$
Let $u = \dfrac z v$, so that $du = \dfrac{-z}{v^2}\,dv$. Then $v$ becomes $\dfrac z u$ and $v^{-1}\,dv$ becomes $\dfrac{-du}{u}$. But as $v$ goes around the unit circle in the counterclockwise direction, $u$ goes around in the clockwise direction. So
$$
\oint_{\text{counterclockwise}} X(v) H\left(\frac z v\right) v^{-1} \, dv
= \oint_{\text{clockwise}} X\left(\frac{z}{u}\right) H\left(u\right) (-u^{-1}) \, du
$$
$$
= \oint_{\text{counterclockwise}} X\left(\frac{z}{u}\right) H\left(u\right) u^{-1} \, du.
$$
Then rename the bound variable $u$ so that it's called $v$ again.</p>
|
105,868 | <p>Let $f(x)$ be a continuous probability distribution in the plane. It is obvious that if $X$ and $X'$ are two independent random samples from $f$, then $\mathbf{E}(\|X - X'\|) \leq 2 \mathbf{E}(\|X\|)$ by the triangle inequality. Can this upper bound be made tighter if we assume that $f$ is rotationally symmetric about the origin., i.e. $f(x) = g(\|x\|)$ for some function $g$?</p>
| Robert Israel | 13,650 | <p>The conditional expectation
$$\eqalign{E[ \|X - X'\| | \|X\| = r, \|X'\| = s] &= \frac{1}{2\pi} \int_0^{2\pi} \sqrt{r^2 + s^2 - 2 r s \cos \theta}\ d\theta \cr &= \frac{2(r+s)}{\pi} EllipticE(2 \sqrt{rs}/(r+s))\cr}$$
where EllipticE is Maple's version of the complete elliptic integral of the second kind.
Note that $0 < 2 \sqrt{rs}/(r+s) \le 1$, with $1$ occurring for $r=s$. On the interval $[0,1]$, $1 \le EllipticE(x) \le \pi/2$. Thus
$$ \frac{4}{\pi} E[\|X\|] = \frac{2}{\pi} E[\|X\|+\|X'\|] \le E[\|X - X'\|] \le E[\|X\|+\|X'\|] = 2 E[\|X\|] $$</p>
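<p>For readers without Maple, the conditional-expectation identity above can be checked numerically with nothing but the Python standard library (an illustration I am adding, not part of the answer). Here <code>elliptic_e</code> implements the complete elliptic integral of the second kind directly from its integral definition in the modulus convention, $E(k)=\int_0^{\pi/2}\sqrt{1-k^2\sin^2 t}\,dt$, which matches Maple's EllipticE.</p>

```python
import math

def avg_distance(r, s, steps=100_000):
    # (1/2π) ∫₀^{2π} sqrt(r² + s² − 2 r s cos θ) dθ, midpoint rule
    h = 2 * math.pi / steps
    total = sum(math.sqrt(r * r + s * s - 2 * r * s * math.cos((j + 0.5) * h))
                for j in range(steps))
    return total * h / (2 * math.pi)

def elliptic_e(k, steps=100_000):
    # Complete elliptic integral of the second kind, modulus convention
    h = (math.pi / 2) / steps
    return sum(math.sqrt(1.0 - (k * math.sin((j + 0.5) * h)) ** 2)
               for j in range(steps)) * h

r, s = 1.0, 2.0
lhs = avg_distance(r, s)
rhs = 2 * (r + s) / math.pi * elliptic_e(2 * math.sqrt(r * s) / (r + s))
```

<p>Both sides agree to high accuracy, e.g. for $r=1$, $s=2$.</p>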
|
213,665 | <p><strong>I've tried 3 methods but all failed to do that.</strong></p>
<p>1st Method</p>
<pre><code>Apply[Flatten, {1, {2, {3, 4}, 5}, 6}, {2}]
</code></pre>
<p>2nd Method</p>
<pre><code>Map[Flatten, {1, {2, {3, 4}, 5}, 6}, {2}]
</code></pre>
<p>3rd Method</p>
<pre><code>Flatten[{1, {2, {3, 4}, 5}, 6}, {2}]
</code></pre>
<p>I want to get <code>{1, {2, 3, 4, 5}, 6}</code>.</p>
| Sjoerd Smit | 43,522 | <p>We've got a few answers already, but here's my 2 cents:</p>
<pre><code>Replace[l_List :> Flatten[l]] /@ {1, {2, {3, 4}, 5}, 6}
</code></pre>
<blockquote>
<p>{1, {2, 3, 4, 5}, 6}</p>
</blockquote>
|
213,665 | <p><strong>I've tried 3 methods but all failed to do that.</strong></p>
<p>1st Method</p>
<pre><code>Apply[Flatten, {1, {2, {3, 4}, 5}, 6}, {2}]
</code></pre>
<p>2nd Method</p>
<pre><code>Map[Flatten, {1, {2, {3, 4}, 5}, 6}, {2}]
</code></pre>
<p>3rd Method</p>
<pre><code>Flatten[{1, {2, {3, 4}, 5}, 6}, {2}]
</code></pre>
<p>I want to get <code>{1, {2, 3, 4, 5}, 6}</code>.</p>
| Mr.Wizard | 121 | <p>A couple more:</p>
<pre><code>list = {1, {2, {3, 4}, 5}, 6};
Apply[## &, list, {2}]
Flatten[{##}] & @@@ list
</code></pre>
<p>Also if you are looking for specific level control consider <em>levelspec</em> in <a href="https://reference.wolfram.com/language/ref/Replace.html" rel="nofollow noreferrer"><code>Replace</code></a>. For example with:</p>
<pre><code>rl = {a_, x__, z_} :> {a, {x} /. rl, z};
deep = Range[14] /. rl
</code></pre>
<blockquote>
<pre><code>{1, {2, {3, {4, {5, {6, {7, 8}, 9}, 10}, 11}, 12}, 13}, 14}
</code></pre>
</blockquote>
<p>Then:</p>
<pre><code>Replace[deep, {x__} :> x, {3, 5}]
</code></pre>
<blockquote>
<pre><code>{1, {2, {3, 4, 5, 6, {7, 8}, 9, 10, 11, 12}, 13}, 14}
</code></pre>
</blockquote>
<p>Related:</p>
<ul>
<li><a href="https://mathematica.stackexchange.com/q/20180/121">How to remove redundant {} from a nested list of lists?</a></li>
<li><a href="https://mathematica.stackexchange.com/q/3700/121#3705">How to avoid returning a Null if there is no "else" condition in an If construct</a></li>
</ul>
|
2,538,297 | <p>This is my first question here so I hope I'm doing it right :) sorry otherwise!</p>
<p>As in the title, I was wondering if and when it is OK to calculate a limit in three dimensions through a substitution that "brings it down to two dimensions". Let me explain what I mean in a clearer way through an example. I was calculating this limit:<br>
$$\lim_{(x,y) \to (0,0)} \frac{\ln (1+\sin^2(xy))}{x^2+y^2} =\lim_{(x,y) \to (0,0)} \frac{\ln (1+\sin(xy)\cdot \sin(xy))}{x^2+y^2}$$
$$=\lim_{(x,y) \to (0,0)} \frac{\ln (1+xy\cdot xy)}{x^2+y^2} =\lim_{(x,y) \to (0,0)} \frac{\ln (1+x^2y^2)}{x^2+y^2}=\lim_{(x,y) \to (0,0)} \frac{x^2y^2}{x^2+y^2}$$
$$=\lim_{(x,y) \to (0,0)}\frac{1}{\frac{1}{y^2}+\frac{1}{x^2}}="\frac{1}{\infty}"=0.$$
Where I have used:
$$ \lim_{(x,y) \to (0,0)} \frac{\sin(xy)}{xy}=[z=xy]=\lim_{z\to 0}\frac{\sin z}{z}=1$$
and
$$ \lim_{(x,y) \to (0,0)} \frac{\ln(1+xy)}{xy}=[z=xy]=\lim_{z\to 0}\frac{\ln(1+z)}{z}=1.$$
Is the way I calculated the limits for $(x,y)\to (0,0)$ by substituting with $z=xy$ legit?
Also, if it is... am I allowed to substitute an expression with its limit <em>inside</em> a limit, as in <em>while</em> calculating the limit, or can I only take the limits in one last step (I'm a bit confused by this exercise in general, I have solved it with Taylor series but I'm curious to know whether this works too)?<br>
Thank you so much in advance!</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Use that $$x^2+y^2\geq 2|xy|,$$ so that $$0\le\frac{\ln(1+\sin^2(xy))}{x^2+y^2}\le \frac{\ln(1+\sin^2(xy))}{2|xy|}.$$ Substituting $$xy=t,$$ we are left with $$\frac{\ln(1+\sin^2(t))}{2|t|},$$ and with L'Hospital we can prove that $$\lim_{t\to 0}\frac{\ln(1+\sin^2(t))}{2t}=\lim_{t\to0}\frac{\sin(2t)}{2(1+\sin^2(t))}=0,$$ hence the bound with $|t|$ also tends to $0$, and the squeeze theorem gives that the original limit is $0$.</p>
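<p>As a numerical sanity check of the final limit (my addition, not part of the answer): the ratio $\ln(1+\sin^2 t)/(2t)$ indeed shrinks toward $0$ as $t\to 0$, since the numerator behaves like $t^2$.</p>

```python
import math

def ratio(t):
    return math.log(1 + math.sin(t) ** 2) / (2 * t)

# Evaluate along t = 10^{-1}, ..., 10^{-6}; each value is roughly a tenth of the previous one.
values = [ratio(10.0 ** (-k)) for k in range(1, 7)]
```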
|
4,610,394 | <p>Clearly, none of the roots are in <span class="math-container">$\mathbb{Q}$</span> so <span class="math-container">$f(x) = x^4 + 1$</span> does not have any linear factors. Thus, the only thing left to check is to show that <span class="math-container">$f(x)$</span> cannot reduce to two quadratic factors.</p>
<p>My proposed solution was to state that <span class="math-container">$f(x) = x^4 + 1 = (x^2 + i)(x^2 - i)$</span> but <span class="math-container">$\pm i \not\in \mathbb{Q}$</span> so <span class="math-container">$f(x)$</span> is irreducible.</p>
<p>However, I stumbled across this post <a href="https://math.stackexchange.com/questions/1249143/x4-1-reducible-over-mathbbr-is-this-possible">$x^4 + 1$ reducible over $\mathbb{R}$... is this possible?</a> with a comment suggesting that <span class="math-container">$x^4 + 1 = (x^2 + \sqrt{2}x + 1)(x^2 - \sqrt{2}x + 1)$</span>, which turns out to be a case that I did not fully consider. It made me realize that <span class="math-container">$\mathbb{Q}[x]$</span> being a UFD only guarantees a unique factorization into irreducible elements of <span class="math-container">$\mathbb{Q}[x]$</span> (and neither <span class="math-container">$x^2 \pm i$</span> nor <span class="math-container">$x^2 \pm \sqrt{2} x + 1$</span> is in <span class="math-container">$\mathbb{Q}[x]$</span>), so checking a single combination of quadratic products is not sufficient.</p>
<p>Therefore, what is the ideal method for checking that <span class="math-container">$x^4 + 1$</span> cannot be reduced to a product of two quadratic polynomials in <span class="math-container">$\mathbb{Q}[x]$</span>? Am I forced to just brute force check solutions of <span class="math-container">$x^4 + 1 = (x^2 + ax + b)(x^2 + cx + d)$</span> don't have rational solutions <span class="math-container">$(a,b,c,d) \in \mathbb{Q}^4$</span>?</p>
| user2661923 | 464,411 | <p>Alternative approach:</p>
<blockquote>
<p>My proposed solution was to state that <span class="math-container">$f(x) = x^4 + 1 = (x^2 + i)(x^2 - i)$</span> but <span class="math-container">$\pm i \not\in \mathbb{Q}$</span> so <span class="math-container">$f(x)$</span> is irreducible.</p>
</blockquote>
<p>This isn't quite valid. Even when neither of the two 2nd degree polynomials given by <br>
<span class="math-container">$[(x - r_1) \times (x - r_2)], ~[(x - r_3) \times (x - r_4)]$</span> are polynomials in <span class="math-container">$~\mathbb{Q}[x],~$</span> <br>
it is still possible that the 2nd degree polynomials given by <br>
<span class="math-container">$~[(x - r_1) \times (x - r_3)]~$</span> and <span class="math-container">$~[(x - r_2) \times (x - r_4)]~$</span> are polynomials in <span class="math-container">$~\mathbb{Q}[x].$</span></p>
<p>To me, it is unclear what the intent of the problem composer is. If he intended that brute force be avoided, then my response (below) is not what the composer intended.</p>
<p>Alternatively, if the composer was merely testing your understanding of the basic idea that a 4th degree polynomial in <span class="math-container">$~\Bbb{Q}[x]~$</span> would be irreducible, and assuming that you had not been exposed to deeper theory (re the answer of Patrick Stevens), the <em>brute force</em> approach is not that bad.</p>
<p>Using the variable <span class="math-container">$~z,~$</span> instead of <span class="math-container">$~x,~$</span> to denote a complex root, note that the <span class="math-container">$4$</span> roots of <span class="math-container">$z^4 = 1$</span> are given by the set of values <span class="math-container">$\{1,i,-1,-i\} = \{e^{i(0)}, e^{i\pi/2}, e^{i\pi}, e^{3i\pi/2}\}.$</span></p>
<p>Further, one of the roots of <span class="math-container">$z^4 = -1$</span> is given by <span class="math-container">$z = e^{i\pi/4}$</span>.</p>
<p>Therefore, the <span class="math-container">$4$</span> roots of <span class="math-container">$z^4 = -1$</span> are given by <br>
<span class="math-container">$\{e^{i\pi/4}, e^{3i\pi/4}, e^{5i\pi/4}, e^{7i\pi/4} \}.$</span></p>
<p>In order for the polynomial <span class="math-container">$[(z-r_1) \times (z - r_2)]$</span> to be an element in <span class="math-container">$\Bbb{Q}[x]$</span>, you need <strong>both</strong> of the following:</p>
<ul>
<li><p><span class="math-container">$[r_1 + r_2]$</span> must be an element in <span class="math-container">$\Bbb{Q}$</span>.</p>
</li>
<li><p><span class="math-container">$[r_1 \times r_2]$</span> must be an element in <span class="math-container">$\Bbb{Q}$</span>.</p>
</li>
</ul>
<p>Consider (for example) attempting to combine <span class="math-container">$e^{i\pi/4}$</span> with any of the other three roots.</p>
<ul>
<li><p><span class="math-container">$\displaystyle \left[e^{i\pi/4} + e^{3i\pi/4}\right] = [2i\sin\pi/4] \not\in \Bbb{Q}$</span>.</p>
</li>
<li><p><span class="math-container">$\displaystyle \left[e^{i\pi/4} \times e^{5i\pi/4}\right] = [e^{3i\pi/2}] \not\in \Bbb{Q}$</span>.</p>
</li>
<li><p><span class="math-container">$\displaystyle \left[e^{i\pi/4} + e^{7i\pi/4}\right] = [2\cos\pi/4] \not\in \Bbb{Q}$</span>.</p>
</li>
</ul>
<p>This makes it game over.</p>
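<p>These three computations are easy to confirm with Python's <code>cmath</code> (a numerical check I am adding for illustration):</p>

```python
import cmath
import math

def root(k):
    """The 8th root of unity e^{ikπ/4}."""
    return cmath.exp(1j * k * math.pi / 4)

s1 = root(1) + root(3)  # = 2i sin(π/4) = i√2, not rational
p2 = root(1) * root(5)  # = e^{3iπ/2} = −i,   not rational
s3 = root(1) + root(7)  # = 2 cos(π/4) = √2,  not rational
```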
|
4,610,394 | <p>Clearly, none of the roots are in <span class="math-container">$\mathbb{Q}$</span> so <span class="math-container">$f(x) = x^4 + 1$</span> does not have any linear factors. Thus, the only thing left to check is to show that <span class="math-container">$f(x)$</span> cannot reduce to two quadratic factors.</p>
<p>My proposed solution was to state that <span class="math-container">$f(x) = x^4 + 1 = (x^2 + i)(x^2 - i)$</span> but <span class="math-container">$\pm i \not\in \mathbb{Q}$</span> so <span class="math-container">$f(x)$</span> is irreducible.</p>
<p>However, I stumbled across this post <a href="https://math.stackexchange.com/questions/1249143/x4-1-reducible-over-mathbbr-is-this-possible">$x^4 + 1$ reducible over $\mathbb{R}$... is this possible?</a> with a comment suggesting that <span class="math-container">$x^4 + 1 = (x^2 + \sqrt{2}x + 1)(x^2 - \sqrt{2}x + 1)$</span>, which turns out to be a case that I did not fully consider. It made me realize that <span class="math-container">$\mathbb{Q}[x]$</span> being a UFD only guarantees a unique factorization into irreducible elements of <span class="math-container">$\mathbb{Q}[x]$</span> (and neither <span class="math-container">$x^2 \pm i$</span> nor <span class="math-container">$x^2 \pm \sqrt{2} x + 1$</span> is in <span class="math-container">$\mathbb{Q}[x]$</span>), so checking a single combination of quadratic products is not sufficient.</p>
<p>Therefore, what is the ideal method for checking that <span class="math-container">$x^4 + 1$</span> cannot be reduced to a product of two quadratic polynomials in <span class="math-container">$\mathbb{Q}[x]$</span>? Am I forced to just brute force check solutions of <span class="math-container">$x^4 + 1 = (x^2 + ax + b)(x^2 + cx + d)$</span> don't have rational solutions <span class="math-container">$(a,b,c,d) \in \mathbb{Q}^4$</span>?</p>
| reuns | 276,986 | <p>A general method: assume that <span class="math-container">$f\in \Bbb{Q}[x]$</span> monic of degree <span class="math-container">$1$</span> or <span class="math-container">$2$</span> divides <span class="math-container">$x^4+1$</span>. The roots of <span class="math-container">$x^4+1$</span> have absolute value <span class="math-container">$\le 1$</span> whence so do the roots of <span class="math-container">$f$</span>.</p>
<blockquote>
<p>Gauss lemma: <span class="math-container">$f\in \Bbb{Z}[x]$</span></p>
</blockquote>
<ul>
<li><p>If <span class="math-container">$\deg(f)=1$</span> then it must be that <span class="math-container">$f\in \{x,x+1,x-1\}$</span>, but none of these divides <span class="math-container">$x^4+1$</span>.</p>
</li>
<li><p>If <span class="math-container">$\deg(f)=2$</span> then it must be that <span class="math-container">$f=(x-a)(x-b)=x^2+cx+d$</span> with <span class="math-container">$|d|\le 1, |c|\le 2$</span>. Only <span class="math-container">$15$</span> polynomials to try, you can check that none of them divide <span class="math-container">$x^4+1$</span>.</p>
</li>
</ul>
<p>Therefore <span class="math-container">$x^4+1$</span> is irreducible.</p>
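<p>The 15-candidate check can be carried out mechanically. Here is a short Python sketch (my addition, not part of the answer) that runs the long division for every monic integer quadratic $x^2+cx+d$ with $|c|\le 2$, $|d|\le 1$:</p>

```python
def divides(c, d, target):
    """True iff the monic quadratic x² + c·x + d divides the monic integer
    polynomial `target` (coefficients listed from highest to lowest degree)."""
    rem = list(target)
    while len(rem) >= 3:          # one step of long division by x² + c·x + d
        lead = rem[0]
        rem[1] -= lead * c
        rem[2] -= lead * d
        rem = rem[1:]
    return all(coeff == 0 for coeff in rem)

x4_plus_1 = [1, 0, 0, 0, 1]       # coefficients of x⁴ + 1
factors = [(c, d)
           for c in range(-2, 3) for d in (-1, 0, 1)
           if divides(c, d, x4_plus_1)]
```

<p>The list <code>factors</code> comes out empty: none of the 15 candidates divides $x^4+1$.</p>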
|
3,884,659 | <p>Let <span class="math-container">$H$</span> be a graph, and let <span class="math-container">$n > |V(H)|$</span> be an integer.Suppose there is a graph on <span class="math-container">$n$</span> vertices and <span class="math-container">$t$</span> edges containing no copy of <span class="math-container">$H$</span>, and suppose that <span class="math-container">$tk>n^2\log_en$</span>. Show that there is a coloring of the edges of the complete graph on <span class="math-container">$n$</span> vertices by <span class="math-container">$k$</span> colors with no monochromatic copy of <span class="math-container">$H$</span>.</p>
| saulspatz | 235,128 | <p>About <span class="math-container">$3$</span> years ago, I downloaded a solution manual from <a href="https://radimentary.files.wordpress.com/2017/03/solutions_compilation.pdf" rel="nofollow noreferrer">https://radimentary.files.wordpress.com/2017/03/solutions_compilation.pdf</a>, but this link no longer works. The author was Xiaoyu He. Maybe you can find it.</p>
<p>Anyway, here's a sketch of his solution to this problem.</p>
<p>Let <span class="math-container">$G$</span> be the graph with <span class="math-container">$n$</span> vertices and <span class="math-container">$t$</span> edges containing no copy of <span class="math-container">$H$</span>. Consider <span class="math-container">$K_n$</span> as a labeled graph, make <span class="math-container">$k$</span> independent random relabelings of <span class="math-container">$G$</span>, and color them with colors <span class="math-container">$1$</span> through <span class="math-container">$k$</span>. Viewing the labeled copies of <span class="math-container">$G$</span> as subgraphs of the labeled <span class="math-container">$K_n$</span>, color each edge of <span class="math-container">$K_n$</span> with the color of the first copy of <span class="math-container">$G$</span> that it shows up in. Then <span class="math-container">$K_n$</span> contains no monochromatic <span class="math-container">$H$</span>, because none of the copies of <span class="math-container">$G$</span> does.</p>
<p>To complete the proof, it must be shown that there is a non-zero probability that the <span class="math-container">$G$</span>'s cover all the edges of <span class="math-container">$K_n$</span> so that the coloring described above can actually be carried out. A straightforward calculation shows that the expected number of uncolored edges is <span class="math-container">$<1$</span>, so there must be a complete coloring.</p>
|
3,884,659 | <p>Let <span class="math-container">$H$</span> be a graph, and let <span class="math-container">$n > |V(H)|$</span> be an integer.Suppose there is a graph on <span class="math-container">$n$</span> vertices and <span class="math-container">$t$</span> edges containing no copy of <span class="math-container">$H$</span>, and suppose that <span class="math-container">$tk>n^2\log_en$</span>. Show that there is a coloring of the edges of the complete graph on <span class="math-container">$n$</span> vertices by <span class="math-container">$k$</span> colors with no monochromatic copy of <span class="math-container">$H$</span>.</p>
| Vezen BU | 823,641 | <p>Below is my solution when I met the problem in my homework before.</p>
<p>When <span class="math-container">$|V(H)| \leq 1$</span>, it's not so meaningful, so we only need to care about the cases when <span class="math-container">$|V(H)| \geq 2$</span>, and then <span class="math-container">$n \geq |V(H)| + 1 \geq 3$</span>.</p>
<p>Let <span class="math-container">$G$</span> be such a graph on <span class="math-container">$n$</span> vertices and <span class="math-container">$t$</span> edges containing no copy of <span class="math-container">$H$</span>. As we supposed that <span class="math-container">$tk > n^2 \log_e n > \binom{n}{2}$</span>, maybe we can find some way to paste <span class="math-container">$k$</span> monochromatic copies of <span class="math-container">$G$</span> onto <span class="math-container">$K_n$</span> (an edge may be colored more than once; keeping any one of its colors still makes the conclusion hold) so that each edge is colored and no monochromatic copy of <span class="math-container">$H$</span> exists in the graph.</p>
<p>So, let's randomly paste <span class="math-container">$k$</span> monochromatic copies of <span class="math-container">$G$</span> onto <span class="math-container">$K_n$</span>. Mathematically speaking, let <span class="math-container">$V(G) = [n]$</span> and <span class="math-container">$V(K_n) = \{v_i\}_{i \in [n]}$</span>; for each color, we randomly pick a permutation <span class="math-container">$\sigma$</span> of <span class="math-container">$[n]$</span>, and then for all <span class="math-container">$(i, j) \in E(G)$</span> we color the edge <span class="math-container">$(v_{\sigma(i)}, v_{\sigma(j)})$</span> of <span class="math-container">$K_n$</span> in that color (if it's already colored, we simply recolor it). Let <span class="math-container">$X$</span> be the number of edges left uncolored, which is the sum of the indicator random variables <span class="math-container">$X_e$</span> that the edge <span class="math-container">$e$</span> is left uncolored, over all <span class="math-container">$e \in E(K_n)$</span>; thus clearly,
<span class="math-container">$$E[X_e] = (1 - \frac{2t(n-2)!}{n!})^k = (1 - \frac{2t}{n(n-1)})^k.$$</span>
Therefore,
<span class="math-container">\begin{align*}
E[X] = & \sum_e E[X_e] \\
= & \sum_e (1 - \frac{2t}{n(n-1)})^k \\
= & \binom{n}{2} (1 - \frac{2t}{n(n-1)})^k \\
\leq & \frac{n(n-1)}{2} e^{-\frac{2tk}{n(n-1)}} \\
< & \frac{n^2}{2} e^{-2\log_e n} \\
= & \frac{1}{2},
\end{align*}</span>
as <span class="math-container">$X$</span> is an integer, there is a coloring by pasting <span class="math-container">$k$</span> monochromatic copies of <span class="math-container">$G$</span> with <span class="math-container">$X = 0$</span>, i.e., each edge is colored, completing the proof.</p>
|
2,637 | <p>Trying to round-trip expressions through JSON, I'm getting unexpected errors for held expressions, and would be grateful for advice or clues. Consider, first, something that works well</p>
<pre><code>Export[Environment["USERPROFILE"] <> "\\AppData\\Local\\test.json", {1, 2, 3},"JSON"]
</code></pre>
<p>and read it back in</p>
<pre><code>Import[Environment["USERPROFILE"] <> "\\AppData\\Local\\test.json","JSON"]
</code></pre>
<p>producing the expected <code>{1, 2, 3}</code>. However, when I try a held expression, such as (and now you can see why I might want to do this):</p>
<pre><code>Export[
Environment["USERPROFILE"] <> "\\AppData\\Local\\test.json",
HoldComplete[myList = {1, 2, 3}],
"JSON"]
</code></pre>
<p>and we get</p>
<pre><code>Export::badval: The element Data contains invalid values. >>
</code></pre>
<p>I haven't been able to find anything useful on this error message. I suspect it's something to do with <code>HoldComplete</code> and its friends not being real expressions, but rather some kinds of special syntax in the front end or the kernel, but it's a bit surprising since one of the oft-repeated slogans in Mathematica is <em>everything is an expression</em>.</p>
<p>Btw, lest we think that the assignment to <code>myList</code> is the problem, the following fails with the same message:</p>
<pre><code>Export[
Environment["USERPROFILE"] <> "\\AppData\\Local\\test.json",
HoldComplete[{1, 2, 3}],
"JSON"]
</code></pre>
| Leonid Shifrin | 81 | <p>The short answer is that, as @FJRA noted in the comment, only certain types are supported. Which types? Enter the long answer.</p>
<h3>Why the converter behaves as it does</h3>
<p>Long answer: JSON supports only certain types, and their nested combinations, as defined e.g. <a href="http://www.json.org/" rel="nofollow noreferrer">here</a>. Mathematica converter maps JSON objects to lists of rules, arrays to lists, strings to strings, plus has some special cases for <code>True</code>, <code>False</code> and <code>Null</code>. Once Mathematica JSON converter sees a general expression, it does not know what to do with it.</p>
<p>The problem with the "obvious" solution to convert to string and store as a string is that there will be no automatic way (without imposing some additional conventions) to tell which strings are really strings and which are stringified Mathematica expressions. So, IMO, the converter is doing the right thing.</p>
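<p>The same ambiguity exists in any JSON binding, and the usual cure is exactly the kind of tagging used in the next sections: wrap stringified expressions in a marked object so a round-trip can tell them apart from genuine strings. Here is a hypothetical Python sketch of the idea (the tag name is arbitrary and mine; it is not anything Mathematica uses):</p>

```python
import json

TAG = "__stringified_expr__"   # arbitrary, hypothetical marker

def export_value(value):
    """Pass JSON-native values through; stringify anything else under a tag."""
    try:
        json.dumps(value)
        return value
    except TypeError:
        return {TAG: repr(value)}

def import_value(value):
    """Undo the tagging: tagged objects come back as their stringified form."""
    if isinstance(value, dict) and set(value) == {TAG}:
        return value[TAG]
    return value

plain = import_value(json.loads(json.dumps(export_value("just a string"))))
tagged = import_value(json.loads(json.dumps(export_value(1 + 2j))))  # complex is not JSON-native
```

<p>Without the tag, <code>plain</code> and <code>tagged</code> would be indistinguishable on import, which is exactly why a bare "convert everything to a string" policy would be wrong for the built-in converter.</p>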
<h3>Digging deeper</h3>
<p>You can actually quite easily trace the execution of the functions of interest. If we use my <code>debug</code> function (from <a href="https://stackoverflow.com/questions/8362170/how-to-find-line-where-error-occurred-in-mathematica-notebook/8363717#8363717">here</a>), as</p>
<pre><code>debug@Export[
Environment["USERPROFILE"] <> "\\AppData\\Local\\test.json",
HoldComplete[{1, 2, 3}], "JSON"]
</code></pre>
<p>It will quickly tell us to look at the function <code>System`Convert`JSONDump`iexportJSON</code>, which in turn points to <code>System`Convert`JSONDump`toString</code>. Inspecting the <code>DownValues</code> of the latter, you will see the procedure I described above.</p>
<h3>Making the JSON import - export more liberal (for illustration purposes only !)</h3>
<p>If you <em>really</em> want to make the JSON import - export more liberal, so that, upon seeing an unrecognized general expression, it somehow converts it to string for export, and back to expression during import, here is one way:</p>
<pre><code>ClearAll[withLiberalJsonTostring];
SetAttributes[withLiberalJsonTostring, HoldAll];
withLiberalJsonTostring[code_] :=
Block[{dv = DownValues[System`Convert`JSONDump`toString],
System`Convert`JSONDump`toString},
DownValues[System`Convert`JSONDump`toString] = Most[dv];
System`Convert`JSONDump`toString[expr_, _Integer] :=
StringJoin[
"StringifiedOpen",
StringReplace[ToString[FullForm@expr],
{"[" :> "EscapeOpen", "]" :> "EscapeClose", "," :> "EscapeComma"}],
"StringifyClose"
];
code];
</code></pre>
<p>and the import counterpart:</p>
<pre><code>ClearAll[withLiberalJsonImport];
SetAttributes[withLiberalJsonImport, HoldAll];
withLiberalJsonImport[code_] :=
With[{result = code},
result /.
s_String :>
StringReplace[
s,
{"EscapeOpen" :> "[", "EscapeClose" :> "]", "EscapeComma" :> ","}
] /.
s_String /; StringMatchQ[s, "StringifiedOpen" ~~ __ ~~ "StringifyClose"] :>
ToExpression@ StringReplace[s, "StringifiedOpen" | "StringifyClose" :> ""]
];
</code></pre>
<p>Note that the escaping strings are arbitrary, and this will break if these particular strings are also used in different capacities in the JSON expression.</p>
<p>With this, we can do:</p>
<pre><code>withLiberalJsonTostring[
Export[Environment["USERPROFILE"] <> "\\AppData\\Local\\test.json",
{1, 2, HoldComplete[{1, 2, 3}]},
  "JSON"]]
</code></pre>
<p>and</p>
<pre><code>withLiberalJsonImport@
Import[Environment["USERPROFILE"] <> "\\AppData\\Local\\test.json", "JSON"]
(*
==> {1, 2, HoldComplete[{1, 2, 3}]}
*)
</code></pre>
<p>Note that I <strong>don't really recommend</strong> this method as robust; I just posted this code as an illustration and to aid the understanding of the matter. It is not robust on many grounds, including dependence on implementation details, the escaping procedure being arbitrary and not really robust, etc.</p>
<p>A robust solution would be to write an alternative converter (importer / exporter) to JSON, which would do the thing you want, and use that intead. <em>Also, please have a look at the <a href="https://mathematica.stackexchange.com/questions/2637/exporting-held-expressions-through-json/2643#2643">solution by @celtschk</a>, which is a lot cleaner and simpler</em>.</p>
<p><strong>EDIT</strong></p>
<p>As @celtschk pointed out in the comments, escaping is not really necessary if we add extra string quotation marks. The mechanism to distinguish strings from stringified expressions (to be converted back to expressions during import) is still needed, however.</p>
|
230,154 | <p><strong>Question.</strong> Is it true that to check that a model category is right proper, it suffices to check the property for weak equivalences with fibrant codomain ? (if the domain is also fibrant, the pullback is always a weak equivalence). Or is there a close statement that I can't remember (browsing nLab did not help me) ?</p>
<p><strong>Comments.</strong> Consider the diagram $\mathbf{D}=X\rightarrow Y \leftarrow Z$ where the left-hand map is a fibration and the right-hand map a weak equivalence. Let $T=\projlim \mathbf{D}$. Choose a fibrant replacement $Y^{fib}$ for $Y$ and a trivial cofibration $Y\rightarrow Y^{fib}$. Factor the composite map $X \rightarrow Y \rightarrow Y^{fib}$ as a composite trivial cofibration-fibration. We obtain a diagram $\mathbf{E}=X^{fib}\rightarrow Y^{fib}\leftarrow Z$ such that the left-hand map is a fibration between fibrant objects and the right-hand map is a weak equivalence. Let $U=\projlim \mathbf{E}$. By hypothesis, the map $U\rightarrow X^{fib}$ is a weak equivalence. By construction, the map of diagrams $\mathbf{D} \rightarrow \mathbf{E}$ is a weak equivalence of diagrams. The map $T\rightarrow X$ is a weak equivalence iff the map $T\rightarrow U$ is a weak equivalence. What next ?</p>
<p><strong>Why.</strong> I found this cryptic remark in my notebook, and I can't remember where it comes from. The reason why I want to simplify the proof of right properness is that I have to deal with model categories where a set of generating trivial cofibrations is not known. I only know what I call a set of generating anodyne cofibrations. And the trivial fibrations which are the anodyne fibrations (i.e. having the RLP with respect to the set of generating anodyne cofibrations) which are a dual strong deformation retract. And the reason why I am interested in right properness is that I want to study right Bousfield localizations.</p>
| Karol Szumiło | 12,547 | <p>To complete the argument you need to apply K. Brown's Lemma. Call your model category $\mathcal{M}$; then the map $Z \to Y$ induces a pullback functor $\mathcal{M} \downarrow Y \to \mathcal{M} \downarrow Z$ and the lemma implies that it preserves weak equivalences between fibrations over $Y$. If you define $V \to Y$ as the pullback of $X^{\mathrm{fib}} \to Y^{\mathrm{fib}}$, then $X \to V$ is a weak equivalence by the assumption and 2-out-of-3. Then apply the previous observation to $X \to V$ to see that $T \to U$ is also a weak equivalence and hence so is $T \to X$.</p>
<p>A reference for this is Lemma 9.4 in Bousfield's <em>On the Telescopic Homotopy Theory of Spaces</em>, but I imagine it was known well before that.</p>
|
1,260,260 | <blockquote>
<p>Find, with proof, the smallest value of $N$ such that $$x^N \ge \ln x$$ for all $0 < x < \infty$. </p>
</blockquote>
<p>I thought of adding the natural logarithm to both sides and taking derivative. This gave me $N \ge \frac 1{\ln x}$. However, is there a better way to this?</p>
<p>Please note that I would like to see only a <em>hint</em>, not a complete solution.</p>
<p>If anything, I am made aware that the answer is $N \ge \frac 1e$.</p>
| abel | 9,252 | <p>first you can show that the line $ y = kx $ touches the curve $y = \ln x$ at $x = e, y = 1, k = \frac 1 e.$ that is $$kx > \ln(x) \text{ for } k > \frac1e \text{ and } k x = \ln x \text{ for } x = e, k = \frac 1e.\tag 1$$</p>
<p>we have $$x^N > \ln x \implies N\ln x > \ln(\ln x) \implies N > \frac 1e. $$</p>
<p>$$\text{ so the smallest value of } N \text{ is }\frac 1e.$$</p>
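<p>A quick numeric sanity check (my addition, not part of the answer): with $N=\frac1e$ the gap $x^N-\ln x$ is nonnegative on a wide grid and vanishes at the tangency point $x=e^e$, where $x^{1/e}=e=\ln x$:</p>

```python
from math import e, exp, log

N = 1 / e

def gap(x):
    # x**N - ln(x); should be >= 0 on all of (0, oo) when N = 1/e
    return x ** N - log(x)

# Logarithmic grid of x values from e**-10 up to e**10
samples = [exp(t / 10) for t in range(-100, 101)]
assert all(gap(x) >= -1e-12 for x in samples)

print(gap(exp(e)))  # essentially 0 at the tangency point x = e**e
```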
|
2,767,070 | <p>The intuition for $E[g(Y)|Y=y]$ would be that $g(Y)$ would play the role of a constant once $Y$ is fixed to a certain $y$ value. But how to show this more formally ? I can't seem to expand the equation below.</p>
<p>$E[g(Y)|Y=y]=\sum_{y} g(y)P[g(y)=y'|Y=y]$</p>
| heropup | 118,193 | <p>Your error arises from using the same variable name for the index of summation as the given condition. More precisely, for a discrete random variable $Y$ with support $S$, $$\operatorname{E}[g(Y) \mid Y = y] = \sum_{a \in S} g(a) \Pr[Y = a \mid Y = y].$$ Since $\Pr[Y = a \mid Y = y] = \mathbb 1(a = y)$, it follows that the sum contains only one term and its value is $g(y)$.</p>
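<p>A small discrete sanity check of this (my addition): once the summation runs over a separate index $a$, the indicator $\Pr[Y=a\mid Y=y]=\mathbb 1(a=y)$ kills every term except $a=y$:</p>

```python
support = [1, 2, 3, 4]  # support S of a toy discrete random variable Y

def g(a):
    return a * a  # any function of Y will do here

def cond_expectation(y):
    # E[g(Y) | Y = y] = sum over a in S of g(a) * 1(a == y)
    return sum(g(a) * (1 if a == y else 0) for a in support)

for y in support:
    assert cond_expectation(y) == g(y)
print("E[g(Y) | Y = y] equals g(y) on the whole support")
```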
|
3,032,258 | <p>Assume 5 out of 100 units are defective. We pick 3 out of the 100 units at random. </p>
<p>What is the probability that exactly one unit is defective?</p>
<hr>
<p>My answer would be </p>
<p><span class="math-container">$P(\text{Defect}=1) = P(\text{Defect})\times P(\text{Not defect})\times P(\text{Not defect}) = 5/100 \times 95/99 \times 94/98$</span> </p>
<p>However, I am not sure whether or not this is correct or not. Can someone verify?</p>
| trancelocation | 467,003 | <p>Here is a suggestion how to proceed as ordering does not play a role</p>
<ul>
<li>Choose one defective item: <span class="math-container">$\binom{5}{1}$</span></li>
<li>Choose two non-defective ones: <span class="math-container">$\binom{95}{2}$</span></li>
<li>Choose any three: <span class="math-container">$\binom{100}{3}$</span>
<span class="math-container">$$P(\mbox{"exactly 1 defective"}) = \frac{\binom{5}{1}\cdot \binom{95}{2}}{\binom{100}{3}}$$</span></li>
</ul>
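<p>A quick numeric check (my addition): Python's <code>math.comb</code> reproduces the count above, and it agrees with the sequential product from the question once that product is multiplied by the $3$ possible positions of the defective unit:</p>

```python
from math import comb

# Counting argument: one defective out of 5, two good out of 95, any 3 of 100
p = comb(5, 1) * comb(95, 2) / comb(100, 3)

# The question's product 5/100 * 95/99 * 94/98 fixes the defective draw
# first; multiplying by the 3 possible positions gives the same value.
p_seq = 3 * (5 / 100) * (95 / 99) * (94 / 98)

print(p, p_seq)  # both ~0.1381
```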
|
163,672 | <p>Is there a characterization of boolean functions $f:\{-1,1\}^n \longrightarrow \{-1,1\}$,
so that $\mathbf{Inf_i}[f]=\frac{1} {2}$, for all $1\leq i\leq n$? Is it known how many such functions there are? </p>
| Brendan McKay | 9,025 | <p>I didn't really understand the original question, but from the comments of others you are looking for correlation-immune boolean functions of order 1. These go under many other names as well, including binary orthogonal arrays of strength 1 and balanced hypercube colourings. The last image is the simplest to understand: place equal weights at some vertices of a hypercube such that the centre of mass is at the centre of the hypercube.
It can be generalised in multiple ways with extra parameters.</p>
<p>A complicated exact formula and a table of small values appeared in Palmer, Read and Robinson, J. Algebraic Combinatorics, 1 (1992) 257-273. Several people published on the asymptotics (including one Denisov who got the right answer and later incorrectly retracted it). My paper with Canfield, Gao, Greenhill and Robinson <a href="http://arxiv.org/abs/0909.3321" rel="nofollow">here</a> gives the following: Let $q=q(n)$ be a function which is always an even integer in the interval $[0,2^{n-1}]$. Then the number of ways to place $2q$ equal weights on the vertices of a hypercube such that the centre of mass is at the centre of the hypercube is asymptotically
$$ \binom{2^n}{2q}
\left( \frac{\binom{2^{n-1}}{q}^2}{\binom{2^n}{2q}} \right)^n
(1 + o(n^5\,2^{-n/5}))
$$ uniformly over $q$. For the sum over $q$, see Corollary 1.2 of the linked paper with $k=1$.</p>
<p>WITHDRAWN: Although I have answered an interesting question, it was not the question asked. Thanks to Seva for pointing this out. </p>
|
3,253,891 | <p>I'm a complete n00b at math, but I'm wondering how one would go about determining the value of <code>n</code> in the following comparison.</p>
<p><code>n * 1.5 + 12.5 = 12.5 / 2 + n</code></p>
<p>I'm new to the math StackExchange, so I'm also not sure how to properly format this question. Feel free to edit.</p>
<p><span class="math-container">$1.5n+12.5=\displaystyle \frac{12.5}{2}+n$</span></p>
<hr>
<h1>Explanation</h1>
<p>I don't think I formulated my mathematical equation properly because I know that the value I'm looking for is obviously a positive integer.</p>
<p><strong>I'm trying to figure out what size the squares must be, so that the center most square in each row is centered above or below the gap between the two squares in the other row.</strong></p>
<ol>
<li>All the squares must be the same size.</li>
<li>All of the gaps are <code>12.5</code> pixels.</li>
</ol>
<p>The squares in the image below are obviously not big enough as of right now.</p>
<p><a href="https://i.stack.imgur.com/8qNNh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8qNNh.png" alt="enter image description here" /></a></p>
| BigSlurm | 833,788 | <p>Elliptic PDEs can be defined by functions that do not have any characteristic lines/surfaces. That is, there are no functions <span class="math-container">$F(x)$</span> or <span class="math-container">$G(y)$</span> such that <span class="math-container">$u''(x,y)=F(x)$</span> or <span class="math-container">$u''(x,y)=G(y)$</span>. <br />
<br />
Therefore, solutions to elliptic PDEs depend on both <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. Similarly, <span class="math-container">$dx$</span> and <span class="math-container">$dy$</span> must also share a dependence. This dependence implies that solutions cannot have discontinuities in their partial derivatives. For parabolic and hyperbolic PDEs, solutions can have discontinuities in their partial derivatives along the characteristics.</p>
<p>The difference, I believe, between elliptic and parabolic/hyperbolic PDEs is that the dependencies, or restrictions, are built into the equations of elliptic PDEs; whereas the dependencies for parabolic/hyperbolic PDEs are artificially put in, since the variables are independent of each other along the characteristics.</p>
|
127,322 | <p>Being a new member, I am not yet sure whether my question will be taken as a research level question (and thus, appropriate for MO). However, I have seen similar questions on MO, couple of which led me asking mine, and I seem to not be able to find many resources except discussion on FOM and MO. So, any references to resolve the question and fix my possible confusion would be appreciated.</p>
<p>As the title suggests, I want to understand the relation between $ZFC \vdash \varphi$ and $ZFC \vdash\ 'ZFC \vdash \varphi'$. Let me give my motivation (and some partial answers) asking this question so that what I'm trying to arrive at is understood.</p>
<p>We know that if $ZFC \vdash \varphi$, then $ZFC \vdash\ 'ZFC \vdash \varphi'$ for we could write down the Gödel number of the proof we have for $\varphi$ and then check that the formalized $\vdash$ relation holds. I believe even more can be checked to be true for this provability predicate (<a href="http://en.wikipedia.org/wiki/Hilbert-Bernays_provability_conditions" rel="noreferrer">Hilbert-Bernays provability conditions</a>).</p>
<p>Is the converse true in general? Not necessarily. (Just to make sure that it will be pointed out sooner if I am doing any mistakes, I will try to write down everything unnecessarily detailed using less English and more symbols!)</p>
<p>Let us assume only that $ZFC$ is consistent (However, I am not assuming the formal statement $Con(ZFC)$, that is $\ 'ZFC \nvdash \lceil 0=1 \rceil'$). Then, it is conceivable that $ZFC \vdash\ \ 'ZFC \vdash \lceil 0=1 \rceil'$ but $ZFC \nvdash 0=1$. It might be that in reality ZFC is consistent but $\omega$-inconsistent.</p>
<p>Indeed, if I am not missing a point, it is consistent to have this situation:</p>
<p>$ZFC \vdash Con(ZFC) \rightarrow Con(ZFC+\neg Con(ZFC))$ (Gödel)<br>
$ZFC \vdash Con(ZFC) \rightarrow\ \exists M\ M \models ZFC+\neg Con(ZFC)$ (Gödel)<br>
$ZFC \vdash Con(ZFC) \rightarrow\ \exists M\ M \models\ ZFC+\ 'ZFC \vdash \lceil 0=1 \rceil'$<br>
$ZFC \vdash Con(ZFC) \rightarrow\ \exists M\ M \models\ ZFC+\ 'ZFC \vdash\ 'ZFC \vdash \lceil 0=1 \rceil\ '\ '$ (Soundness and the second provability condition <a href="http://en.wikipedia.org/wiki/Hilbert-Bernays_provability_conditions" rel="noreferrer">here</a>)<br>
$ZFC \vdash Con(ZFC) \rightarrow Con(ZFC+\ 'ZFC \vdash \neg Con(ZFC)\ ')$</p>
<p>So we cannot hope to have $ZFC \vdash\ 'ZFC \vdash \varphi'$ implying $ZFC \vdash \varphi$ for an arbitrary formula without requiring an additional assumption. At least, we know this for $\varphi: 0=1$ (this is not because of the consistency argument above, but because consistency and $\omega$-inconsistency of ZFC is a possibility).</p>
<p>If you believe that ZFC's characterization of natural numbers coincides with what we have in mind and agree that ZFC should not be $\omega$-inconsistent, then you might want to throw in the assumption $Con(ZFC)$.</p>
<p>Now imagine a universe where $Con(ZFC)$ holds but all the models of ZFC are $\omega$-nonstandard and believe $\neg Con(ZFC)$. I do not know whether this scenario is even possible (which is another question I am wondering about) but if it is possible, then it would be the case that $'ZFC \vdash \neg Con(ZFC)\ '$, by completeness, since $\neg Con(ZFC)$ is true in all models. Then if the implication in the title (or should I say, an informal version of it: $V \models ZFC \vdash \varphi$ implies $V \models \varphi$) held, then $\neg Con(ZFC)$ would hold, which contradicts our assumption that there are models at all. The point is that arbitrary models of ZFC may not suffice for the existence of ZFC-proofs to imply the existence of actual proofs.</p>
<p>However, if we add a stronger assumption $\psi$ that there is an $\omega$-model, then whenever we have an arithmetic sentence $\varphi$, if</p>
<p>$ZFC \vdash\ 'ZFC \vdash \lceil \varphi \rceil'$</p>
<p>then</p>
<p>$ZFC+\psi \vdash \exists M\ \omega^M=\omega \wedge M \models ZFC+\ \varphi$</p>
<p>and because $\omega$ in the model is the real one, by taking care of quantifiers one by one we can deduce $ZFC+\psi \vdash \varphi$. Thus, existence of an $\omega$-model solves our problem for arithmetical sentences. I cannot see any reason to make this work for arbitrary sentences without strengthening the assumption. Here is a thought:</p>
<p>We know, by the reflection principle, that we can find some limit ordinal $\alpha$ such that $\varphi \leftrightarrow \varphi^{V_{\alpha}} \leftrightarrow V_{\alpha} \models \varphi$. Thus, if we could make sure somehow that $V_{\alpha}$ is a model of ZFC while we reflect $\varphi$, then we would be done. But I could not modify the proofs of reflection in such a way that this can be done and am not even sure that this could be done.</p>
<p>My question to MO is to what extent (and under which assumptions) can we get the implication in the title?</p>
<p><strong>Edit</strong>: After reading Emil Jerabek's answer, I realized I should clarify some details.</p>
<p>Firstly, I want to treat ZFC only as a formal system (meaning that if you are claiming some assumption $\psi$ does what I want, I want to have a description of how that proof would formally look. This is why I kept writing all the leftmost $ZFC \vdash$'s all the time). Then, it is clear by the above discussions that even if we could prove $ZFC \vdash \varphi$ within our system, we may not prove $\varphi$ without additional assumptions on our system.</p>
<p>One solution could be that our system satisfies the "magical" property that whenever we have $ZFC \vdash \exists x \in \omega\ \varphi(x)$, say for some arithmetic sentence, then we have $ZFC \vdash \varphi(SSS...0)$ for some numeral. This of course is not available by default, for we know that the theory $ZFC$+ $c \in \omega$ + $c \neq 0$ + $c \neq S0$ +... is consistent if $ZFC$ is consistent. Thus, that magical property seems like an unreasonably strong assumption.</p>
<p>To make my question very precise, what I want is some assumption $\psi$ so that for some class of formulas, whenever I have $ZFC \vdash ZFC \vdash \varphi$, then $ZFC + \psi \vdash \varphi$. For arithmetic sentences, existence of an $\omega$-model is sufficient.</p>
<p>I agree that $\Sigma^0_1$ soundness should be sufficient for arithmetic sentences if what you mean by $\Sigma^0_1$ soundness is having $ZFC \vdash ZFC \vdash \varphi$ requiring (maybe even as a derivation rule, attached to our system!) $ZFC \vdash \omega \models \phi$, where $\phi$ is the translated version of $\varphi$ into the appropriate language, since I can again go through quantifier by quantifier and prove the sentence itself, that is $ZFC \vdash \varphi$.</p>
<p>However, I see no reason why $\Sigma^0_1$ soundness should be enough for arbitrary sentence $\varphi$. It seems to me that what we need is some structure for which we have the reflection property that formal truth in the structure is provably equivalent to $\varphi$ and that structure being model of all the ZFC-sentences used in the ZFC-proof of $\varphi$.</p>
<p>I believe the existence of <a href="http://cantorsattic.info/Reflecting#Inaccessible_reflecting_cardinal" rel="noreferrer">$\Sigma_n$-reflecting cardinals</a> which are inaccessible is more than sufficient for sentences up to $n$ in the Lévy hierarchy. By definition of those, we have the equivalence $\varphi \leftrightarrow V_{\kappa} \models \varphi$, and then provability of $\varphi$ in ZFC implies $V_{\kappa} \models \varphi$. However, I am not sure whether we have to go that far.</p>
| Emil Jeřábek | 12,705 | <p>$\def\zfc{\mathrm{ZFC}}\def\pr{\operatorname{Prov}\nolimits}$The statement</p>
<blockquote>
<p>$\zfc\vdash\pr_\zfc(\ulcorner\varphi\urcorner)$ implies $\zfc\vdash\varphi$ for every sentence $\varphi$ in the language of $\zfc$</p>
</blockquote>
<p>is equivalent to the statement that $\zfc$ is either inconsistent or $\Sigma^0_1$-sound: the latter means that every $\Sigma^0_1$-sentence provable in $\zfc$ is true in standard integers. One direction is obvious as $\pr_\zfc(\ulcorner\varphi\urcorner)$ is a $\Sigma^0_1$-sentence, and its truth in $\mathbb N$ says exactly that $\varphi$ is provable in $\zfc$. The converse follows from the Friedman–Goldfarb–Harrington principle: if $T$ is a recursively axiomatized theory containing Robinson’s arithmetic and $\sigma$ a $\Sigma^0_1$-sentence, there exists a sentence $\varphi$ (that can also be taken $\Sigma^0_1$) such that</p>
<p>$$I\Delta_0+\mathit{EXP}\vdash\pr_T(\ulcorner\varphi\urcorner)\leftrightarrow(\sigma\lor\pr_T(\ulcorner0=1\urcorner)).$$</p>
<p>$\Sigma^0_1$-soundness is stronger than consistency, but weaker than $\omega$-consistency. If you are wondering about foundational issues, it is best to consider it as a separate assumption on its own.</p>
|
787,894 | <p>Find the values of $x,y$ for which $x^2 + y^2$ takes the minimum value where $(x+5)^2 +(y-12)^2 =14$.</p>
<p>Tried Cauchy-Schwarz and AM-GM, but was unable to do it.</p>
| lab bhattacharjee | 33,337 | <p>Any point satisfying $\displaystyle(x+5)^2 +(y-12)^2 =14$ can be expressed as $\left(\sqrt{14}\cos\phi-5,\ \sqrt{14}\sin\phi+12\right)$ </p>
<p>$\displaystyle x^2 + y^2=14(\cos^2\phi+\sin^2\phi)+2\sqrt{14}(12\sin\phi-5\cos\phi)+5^2+12^2$
$\displaystyle=14+12^2+5^2+2\sqrt{14}(12\sin\phi-5\cos\phi)$ </p>
<p>This will attain minimum if $\displaystyle12\sin\phi-5\cos\phi$ is minimum</p>
<p>Now set $12=r\cos\psi,\ 5=r\sin\psi$ (so that $r=13$ and $\psi=\arctan\frac5{12}$) to find $\displaystyle12\sin\phi-5\cos\phi=13\sin\left(\phi-\arctan\frac5{12}\right)$</p>
<p>What is the minimum value of $\displaystyle\sin\left(\phi-\arctan\frac5{12}\right)?$</p>
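<p>A numeric sanity check of this derivation (my addition): scanning the parametrisation confirms the minimum $183-26\sqrt{14}$ that follows from the minimum value $-1$ of the sine:</p>

```python
from math import cos, sin, sqrt, pi

r = sqrt(14)
n = 200_000  # grid resolution for the angle phi

# Scan x = sqrt(14) cos(phi) - 5, y = sqrt(14) sin(phi) + 12 over a full turn
numeric_min = min(
    (r * cos(2 * pi * k / n) - 5) ** 2 + (r * sin(2 * pi * k / n) + 12) ** 2
    for k in range(n)
)

# 14 + 144 + 25 + 2*sqrt(14) * (-13) = 183 - 26*sqrt(14)
exact_min = 183 - 26 * sqrt(14)
print(numeric_min, exact_min)  # both ~85.72
```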
|
1,614,989 | <p>A portion of a $30$ m long tree is broken by
a tornado and the top strikes the ground,
making an angle of $30^{\circ}$ with ground
level. The height of the point where the tree
is broken is equal to:</p>
<p>$a.)\ \dfrac{30}{\sqrt{3}}m$ $~~~~~~~~~~$ $\color{green}{b.)\ 10m} \\$
$~~~~~~~~~~$ $c.)\ 30\sqrt{3}m$ $~~~~~~~~~~$ $d.)\ 60m$</p>
<p><a href="https://i.stack.imgur.com/Hohlt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hohlt.png" alt="enter image description here"></a></p>
<p>My teacher explained that $AB(tree)=30 m$, $BD=x ~~m$</p>
<p>and $DK =(30-x)m$</p>
<p>I didn't understand how come $DK =(30-x)m$ ?</p>
| Eric Haney | 203,977 | <p><a href="https://i.stack.imgur.com/9xrRe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9xrRe.png" alt="enter image description here"></a></p>
<p>The tree AB has been split at the point D. The top of the tree has fallen to point K making a 30 degree angle with the ground. Since the tree's height was 30 meters, and we are choosing to call the stump $x$, the remainder of the tree along DK must be what is left over: $(30 - x)$.</p>
<p>To solve the problem, we need to know about the 30-60-90 special right triangle. Notice how we can derive it from an equilateral triangle such as $\triangle TRI$. In this triangle, M is the midpoint of TR. Therefore the ratio of the short side to the hypotenuse is 1 : 2 in a 30-60-90 triangle.</p>
<p>Use this information to verify the answer.</p>
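<p>Concretely (my numeric restatement of the figure): the stump $BD=x$ is opposite the $30^{\circ}$ angle and the fallen part $DK=30-x$ is the hypotenuse, so $\sin 30^{\circ} = \frac{x}{30-x}$; the $1:2$ ratio turns this into $30-x=2x$, i.e. $x=10$:</p>

```python
from math import sin, radians, isclose

total = 30.0          # full height of the tree in metres
s = sin(radians(30))  # short side : hypotenuse = 1 : 2, so s = 0.5

# sin(30) = x / (30 - x)  =>  s * (30 - x) = x  =>  x = s * 30 / (1 + s)
x = s * total / (1 + s)

print(x)  # ~10 metres, matching answer (b)
assert isclose(x, 10.0)
```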
|
3,506,316 | <p>I am trying to evaluate this limit:</p>
<p><span class="math-container">$$\lim_{x\to0^{+}}(x-\sin x)^{\frac{1}{\log x}}$$</span></p>
<p>It's a <span class="math-container">$0^0$</span> intedeterminate form, and I am unsure how to deal with it. I have a feeling that if I could turn it to a form where L'Hopital's rule is applicable, then I'd have a chance at solving the problem.</p>
<p>Is there any consistent way of turning a <span class="math-container">$0^0$</span> form into a <span class="math-container">$\frac{\infty}{\infty}$</span> or <span class="math-container">$\frac{0}{0}$</span> form?</p>
<p>If not, how do you deal with this kind of limits?</p>
| Paramanand Singh | 72,031 | <p>The best option is to take logs. If <span class="math-container">$L$</span> is the desired limit then we have <span class="math-container">$$\log L=\lim_{x\to 0^{+}}\frac{\log(x-\sin x)} {\log x} $$</span> The expression under limit above can be rewritten as <span class="math-container">$$3+\dfrac{\log\dfrac{x-\sin x} {x^3}} {\log x} $$</span> and the numerator here tends to <span class="math-container">$\log(1/6)$</span> and denominator tends to <span class="math-container">$-\infty $</span> so that the fraction tends to <span class="math-container">$0$</span>. It follows that <span class="math-container">$L=e^3$</span>. The limit <span class="math-container">$$\lim_{x\to 0^{+}}\frac{x-\sin x} {x^3}=\frac{1}{6}$$</span> is easily handled by L'Hospital's Rule or Taylor series. </p>
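<p>A floating-point sanity check (my addition): evaluating the expression directly for tiny $x$ suffers catastrophic cancellation in $x-\sin x$, so the sketch below substitutes the leading Taylor terms $x^3/6 - x^5/120$ for that difference. Convergence to $e^3\approx 20.0855$ is slow, of order $1/\log x$, exactly as the exponent $3+\log(1/6)/\log x$ predicts:</p>

```python
from math import exp, log

def approx_value(x):
    # x - sin x replaced by its leading Taylor terms to avoid cancellation
    diff = x ** 3 / 6 - x ** 5 / 120
    return exp(log(diff) / log(x))

for x in (1e-5, 1e-20, 1e-100):
    print(x, approx_value(x))  # values creep down toward e**3 ~ 20.0855
```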
|
4,246,726 | <p>For the system of linear equations <span class="math-container">$Ax = b$</span> with <span class="math-container">$b =\begin{bmatrix}
4\\
6\\
10\\
14
\end{bmatrix}\\
$</span>. The set of solutions is given by- <span class="math-container">$\left\{ x : x = \begin{bmatrix}
0\\
0\\
-2
\end{bmatrix} + c \begin{bmatrix}
0\\
1\\
0\end{bmatrix} + d \begin{bmatrix}
1\\
0\\
1\end{bmatrix} \right\}$</span>.</p>
<p>The question requires to find the matrix <span class="math-container">$A$</span> and dimensions of all four fundamental subspaces of <span class="math-container">$A$</span>. Is there any intuitive way of finding the matrix <span class="math-container">$A$</span>, since the remaining problem becomes straightforward thereafter. Thanks in advance for any help.</p>
| greg | 357,854 | <p><span class="math-container">$
\def\a{\alpha}\def\b{\beta}
\def\o{{\tt1}}\def\p{\partial}
\def\E{{\cal E}}\def\F{{\cal F}}\def\G{{\cal G}}
\def\L{\left}\def\R{\right}\def\LR#1{\L(#1\R)}
\def\vec#1{\operatorname{vec}\LR{#1}}
\def\trace#1{\operatorname{Tr}\LR{#1}}
\def\grad#1#2{\frac{\p #1}{\p #2}}
\def\c#1{\color{red}{#1}}
$</span>If you absolutely need the tensor-valued gradient, then you have several options.</p>
<p>Perhaps the simplest approach is to take Ben Grossmann's matrix-valued gradient
<span class="math-container">$$\eqalign{
G_{\a\b} = \grad{x_\a}{y_\b} \quad\iff\quad
G = \grad{x}{y} = \grad{\vec{X}}{\vec{Y}} \\
}$$</span>
and reverse the Kronecker-vec indexing
<span class="math-container">$$\eqalign{
x &\in {\mathbb R}^{n^2\times\o} \implies
X \in {\mathbb R}^{n\times n} \\
x_{\a} &= X_{ij} \\
\a &= i+(j-1)\,n \\
i &= \o+(\a-1)\,{\rm mod}\,n \\
j &= \o+(\a-1)\,{\rm div}\,n \\
\\
y &\in {\mathbb R}^{mn\times\o} \implies
Y \in {\mathbb R}^{m\times n} \\
y_{\b} &= Y_{k\ell} \\
\b &= k+(\ell-1)\,m \\
k &= \o+(\b-1)\,{\rm mod}\,m \\
\ell &= \o+(\b-1)\,{\rm div}\,m \\
}$$</span>
to recover the tensor-valued gradient
<span class="math-container">$$\eqalign{
G &\in {\mathbb R}^{n^2\times mn}
\implies \Gamma\in {\mathbb R}^{n\times n\times m\times n} \\
G_{\a\b} &= \Gamma_{ijk\ell} = \grad{X_{ij}}{Y_{k\ell}} \\
}$$</span></p>
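<p>The 1-based column-major bookkeeping above is easy to sanity-check in a few lines of Python (my addition, with an arbitrary $n$):</p>

```python
n = 7  # arbitrary matrix dimension for the check

def to_alpha(i, j):
    # alpha = i + (j - 1) * n: column-major (vec) linear index, 1-based
    return i + (j - 1) * n

def from_alpha(alpha):
    # i = 1 + (alpha - 1) mod n,  j = 1 + (alpha - 1) div n
    return 1 + (alpha - 1) % n, 1 + (alpha - 1) // n

# The two maps are mutually inverse bijections between 1..n**2 and 1..n x 1..n
assert all(to_alpha(*from_alpha(a)) == a for a in range(1, n * n + 1))
pairs = sorted(from_alpha(a) for a in range(1, n * n + 1))
assert pairs == [(i, j) for i in range(1, n + 1) for j in range(1, n + 1)]
print("vec indexing round-trips for n =", n)
```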
|
738,083 | <blockquote>
<p>Show that if two random variables X and Y are equal almost surely, then they
have the same distribution. Show that the reverse direction is not correct.</p>
</blockquote>
<p>If two r.v.'s are equal a.s., can we write $\mathbb P(\{X\in B\}\,\triangle\, \{Y\in B\})=0$? (How to write this better?)</p>
<p>then</p>
<p>$\mathbb P(X\in B)-\mathbb P(Y\in B)\le \mathbb P(\{X\in B\} \setminus \{Y\in B\})\le \mathbb P(\{X\in B\}\,\triangle\, \{Y\in B\})=0$, and by symmetry the reverse inequality holds as well</p>
<p>$\Longrightarrow \mathbb P(X\in B)=\mathbb P(Y\in B)$</p>
<p>but the other direction makes no sense to me; I don't know how this can be true.</p>
| user521337 | 521,337 | <p>If <span class="math-container">$X$</span> is a random variable following uniform <span class="math-container">$\mathcal U(-1,1)$</span> distribution, then <span class="math-container">$X$</span> and <span class="math-container">$-X$</span> are identically distributed, but obviously <span class="math-container">$X$</span> and <span class="math-container">$-X$</span> are not almost surely equal, in fact <span class="math-container">$P(X=-X)=0$</span> </p>
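<p>A quick analytic check of this example (my addition): for $X\sim\mathcal U(-1,1)$ the distribution functions of $X$ and $-X$ coincide at every point, even though $\Pr(X=-X)=0$:</p>

```python
def cdf_X(t):
    # P(X <= t) for X ~ Uniform(-1, 1)
    return min(max((t + 1) / 2, 0.0), 1.0)

def cdf_negX(t):
    # P(-X <= t) = P(X >= -t) = 1 - P(X < -t); X has no atoms
    return 1.0 - cdf_X(-t)

grid = [k / 10 for k in range(-15, 16)]
assert all(abs(cdf_X(t) - cdf_negX(t)) < 1e-12 for t in grid)
print("X and -X have identical distribution functions")
```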
|
990,930 | <p>Graphing this function is difficult as many overlaps exist and finding a viewing window is hard.</p>
<p>What's a good algebraic method to solve this problem? </p>
| Community | -1 | <p><strong>Hint</strong></p>
<p>Use the relation $\cos^2x+\sin^2 x=1$ to find a quadratic equation with unknown $\cos x$. Solve it and find the value of $x$ in the desired interval.</p>
|
29,766 | <p>I'm looking for a news site for Mathematics which particularly covers recently solved mathematical problems together with the unsolved ones. Is there a good site MO users can suggest me or is my only bet just to google for them?</p>
| Helge | 3,983 | <p>Another suggestion: <a href="http://pjm.math.berkeley.edu/scripts/coming.php?jpath=annals" rel="nofollow"> Annals: to appear</a>. Also other top journals. If a big problem gets solved, its solution probably gets submitted to a journal of this type, so their to-appear lists are what you are looking for. Of course, you only learn about the solution of the problem a few years late (refereeing takes time), but you can be almost certain that the solution is actually correct.</p>
|
174,165 | <p>I have Maths test tomorrow and was just doing my revision when I came across these two questions. Would anyone please give me a nudge in the right direction?</p>
<p>$1)$ If $x$ is real and $$y=\frac{x^2+4x-17}{2(x-3)},$$ show that $|y-5|\geq2$ </p>
<p>$2)$ If $a>0$, $b>0$, prove that $$\left(a+\frac1b\right)\left(2b+\frac1{2a}\right)\ge\frac92$$</p>
| hxthanh | 58,554 | <p>$\displaystyle \cos\frac{n\pi}{2}=\{1,0,-1,0\}=2\left\lfloor\frac{n}{4}\right\rfloor+2\left\lfloor\frac{n+1}{4}\right\rfloor+1-n$</p>
<p>$\displaystyle \sin\frac{n\pi}{2}=\{0,1,0,-1\}=n-2\left\lfloor\frac{n+1}{4}\right\rfloor-2\left\lfloor\frac{n+2}{4}\right\rfloor$</p>
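<p>These floor-function identities are easy to verify mechanically (my addition); Python's integer floor division reproduces $\lfloor\cdot\rfloor$ exactly:</p>

```python
from math import cos, sin, pi

def cos_half_pi(n):
    # 2*floor(n/4) + 2*floor((n+1)/4) + 1 - n
    return 2 * (n // 4) + 2 * ((n + 1) // 4) + 1 - n

def sin_half_pi(n):
    # n - 2*floor((n+1)/4) - 2*floor((n+2)/4)
    return n - 2 * ((n + 1) // 4) - 2 * ((n + 2) // 4)

for n in range(64):
    assert cos_half_pi(n) == round(cos(n * pi / 2))
    assert sin_half_pi(n) == round(sin(n * pi / 2))
print("both identities match cos(n*pi/2) and sin(n*pi/2) for n = 0..63")
```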
|
1,149,561 | <p>I've tried using mods but nothing is working on this one: solve in positive integers $x,y$ the diophantine equation $7^x=3^y-2$.</p>
| HSN | 58,629 | <p>The Euler characteristic is multiplicative in the sense that if $M$ and $N$ are two spaces $\chi(M\times N) = \chi(M)\cdot\chi(N)$. Thus, if a space $M$ has Euler characteristic $0$ or $1$, this means that for any other space $N$ either $\chi(M\times N) = 0\cdot\chi(N)=0 = \chi(M)$, or $\chi(M\times N) = 1\cdot\chi(N) =\chi(N)$ holds, even though these spaces generally live in different dimensions.</p>
<p>An easy example to look at, is the circle $S^1$. It has Euler characteristic $0$, hence $(S^1)^n = S^1\times\ldots\times S^1$ has Euler characteristic $0$ as well, for any $n$. Does a circle have the same homology as the torus? In a similar vein, I'm sure you can cook up many more examples.</p>
<p>As for your last question, I don't think that one can say much in general about spaces with the same Euler characteristic, but non-identical homology groups.</p>
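<p>With field coefficients the multiplicativity is transparent at the level of Betti numbers, via the Künneth formula (a small illustration I am adding, not part of the original answer):</p>

```python
def chi(betti):
    # Euler characteristic as the alternating sum of Betti numbers
    return sum((-1) ** k * b for k, b in enumerate(betti))

def product_betti(b1, b2):
    # Kunneth (field coefficients): b_k(M x N) = sum_i b_i(M) * b_{k-i}(N)
    out = [0] * (len(b1) + len(b2) - 1)
    for i, x in enumerate(b1):
        for j, y in enumerate(b2):
            out[i + j] += x * y
    return out

circle = [1, 1]                        # S^1: chi = 0
torus = product_betti(circle, circle)  # T^2 = S^1 x S^1
print(torus, chi(torus))               # [1, 2, 1] 0 -- same chi as S^1, different homology
```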
|
3,568,050 | <blockquote>
<p>Let <span class="math-container">$R$</span> be an equivalence relation in the set <span class="math-container">$A$</span> and <span class="math-container">$a,b \in A$</span>. Show that <span class="math-container">$R(a)=R(b)$</span> <strong>iff</strong> <span class="math-container">$aRb$</span>.</p>
</blockquote>
<p>We must prove two implications here. First, that <span class="math-container">$$R(a)=R(b) \Rightarrow aRb$$</span></p>
<p>Note that <span class="math-container">$R(a):= \{b\in A : aRb\}$</span>, and <span class="math-container">$R(b):=\{a\in A : bRa\}$</span>. Also since these sets are equal, they must be subsets of each other <span class="math-container">$$R(a) \subseteq R(b)$$</span></p>
<p>It follows that <span class="math-container">$\forall b \in A$</span> : <span class="math-container">$$b\in R(a) \Rightarrow b \in R(b)$$</span></p>
<p>If <span class="math-container">$b$</span> is an element of both of these equivalence classes, then certainly <span class="math-container">$b=a$</span>. From the fact that <span class="math-container">$$R(b)\subseteq R(a)$$</span></p>
<p>it follows similarly, that <span class="math-container">$a=b$</span>. I'm pretty sure this shows
<span class="math-container">$aRb$</span>, but I'm not sure how I should word it. As for <span class="math-container">$$aRb \Rightarrow R(a)=R(b)$$</span></p>
<p>Since <span class="math-container">$R$</span> is an equivalence relation, we have by symmetry that, <span class="math-container">$\forall a,\forall b \in A$</span> <span class="math-container">$$aRb \Rightarrow bRa$$</span></p>
<p>This is where I am at the moment. I just wanted to check if the proof is ok up to this point.</p>
| mathcounterexamples.net | 187,663 | <p>I would do something like this.</p>
<p>Suppose first that <span class="math-container">$R(a)=R(b)$</span>.
As <span class="math-container">$R$</span> is reflexive, we have <span class="math-container">$aRa$</span> which implies <span class="math-container">$a\in R(a)$</span>. By hypothesis, we also have <span class="math-container">$a \in R(b)$</span>, i.e <span class="math-container">$aRb$</span>.</p>
<p>Conversely let’s suppose that $aRb$ and take $c \in R(a)$, which means $aRc$, hence $cRa$ by symmetry. By transitivity, we also have $cRb$, i.e. $c \in R(b)$. Therefore, we have proven that $R(a)\subseteq R(b)$. Now, by symmetry, if we suppose $aRb$, we also have $bRa$. With a proof similar to the one we just did, we can conclude that $R(b) \subseteq R(a)$. Finally $R(a)=R(b)$.</p>
<p><em>Overall, I would suggest that you use more explicitly the properties of an equivalence relation.</em> </p>
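<p>A finite sanity check of the iff (my addition), using congruence mod $3$ on $A=\{0,\dots,8\}$:</p>

```python
A = range(9)

def related(a, b):
    return a % 3 == b % 3  # an equivalence relation on A

def eq_class(a):
    # R(a) = { b in A : a R b }
    return frozenset(b for b in A if related(a, b))

for a in A:
    for b in A:
        assert (eq_class(a) == eq_class(b)) == related(a, b)
print("R(a) = R(b) iff aRb holds in this example")
```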
|
4,002,458 | <p>I'm a geometry student. Recently we were doing all kinds of crazy circle stuff, and it occurred to me that I don't know why <span class="math-container">$\pi r^2$</span> is the area of a circle. I mean, how do I <em>really</em> know that's true, aside from just taking my teachers + books at their word?</p>
<p>So I tried to derive the formula myself. My strategy was to fill a circle with little squares. But I couldn't figure out how to generate successively smaller squares in the right spots. So instead I decided to graph just one quadrant of the circle (since all four quadrants are identical, I can get the area of the easy +x, +y quadrant and multiply the result by 4 at the end) and put little rectangles along the curve of the circle. The more rectangles I put, the closer I get to the correct area. If you graph it out, my idea looks like this:</p>
<p><a href="https://i.stack.imgur.com/5JMSb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5JMSb.png" alt="Approximating circle area using rectangles" /></a></p>
<p>Okay, so to try this in practice I used a Python script (less tedious):</p>
<pre><code>from math import sqrt, pi

# Approximate the area of the top-right quadrant with thin rectangles,
# then multiply by 4 for the full circle.

# Based on the Pythagorean circle equation r**2 = x**2 + y**2
def circle_y(radius, x):
    return sqrt(radius**2 - x**2)

def circleAreaApprox(radius, rectangles):
    area_approx = 0
    little_rectangles_width = 1 / rectangles * radius
    for i in range(rectangles):
        x = radius / rectangles * i  # left edge of the i-th rectangle
        little_rectangle_height = circle_y(radius, x)
        area_approx += little_rectangle_height * little_rectangles_width
    return area_approx * 4
</code></pre>
<p>This works. The more rectangles I put, the wrongness of my estimate goes down and down:</p>
<pre><code>for i in range(3):
    rectangles = 6 * 10 ** i
    # For a unit circle, the true area is pi * 1 ** 2 == pi
    delta = circleAreaApprox(1, rectangles) - pi
    print(delta)
</code></pre>
<h3>Output</h3>
<pre><code>0.25372370203838557
0.030804314363409357
0.0032533219749364406
</code></pre>
<p>Even if you test with big numbers, it just gets closer and closer forever. Infinitely small rectangles <code>circleAreaApprox(1, infinity)</code> is presumably the true area. But I can't calculate that, because I'd have to loop forever, and that's too much time. How do I calculate the 'limit' of a for loop?</p>
<p>Ideally, in an intuitive way. I want to reduce the magic and really understand this, not 'solve' this by piling on more magic techniques (like the <span class="math-container">$\pi \times radius^2$</span> formula that made me curious in the first place).</p>
<p>Thanks!</p>
| J.G. | 56,861 | <p>In the limit as your mini squares shrink to no size, the area goes from being a sum of their areas to an integral. Consider the circle <span class="math-container">$x^2+y^2=r^2$</span>. In the positive quadrant, one quarter of its area is<span class="math-container">$$\int_0^r\sqrt{r^2-x^2}dx=\int_0^{\pi/2}r^2\cos^2tdt$$</span>(by substituting <span class="math-container">$x=r\sin t$</span>). The mean values of <span class="math-container">$\cos^2t$</span> and <span class="math-container">$\sin^2t$</span> sum to <span class="math-container">$1=\cos^2t+\sin^2t$</span>, and are equal as the functions differ only in a phase shift, so the average of each function is <span class="math-container">$\tfrac12$</span>, and we're integrating over a half-period of <span class="math-container">$\cos^2t$</span>, but the full period just reflects this. So the circle's area is <span class="math-container">$4\tfrac{\pi}{2}\tfrac{r^2}{2}=\pi r^2$</span>.</p>
<p>This isn't how people originally worked it out, it's just what your method points us to. <a href="https://en.wikipedia.org/wiki/Area_of_a_circle#Rearrangement_proof" rel="nofollow noreferrer">The simplest solution</a> is to cut the circle into thinner and thinner sectors instead, and arrange these alternately into something resembling a rectangle, of height <span class="math-container">$r$</span>. Its width is a half-circumference <span class="math-container">$\pi r$</span>.</p>
<p><a href="https://upload.wikimedia.org/wikipedia/commons/f/fb/CircleArea.svg" rel="nofollow noreferrer"><img src="https://upload.wikimedia.org/wikipedia/commons/f/fb/CircleArea.svg" alt="Circle area diagram" /></a></p>
<p>This argument can be formalised with calculus too; it boils down to the infinitesimal area element being <span class="math-container">$\rho d\rho d\theta$</span>, with <span class="math-container">$\rho$</span> an arbitrary internal point's distance from the centre:<span class="math-container">$$\int_0^r\rho d\rho\int_0^{2\pi}d\theta=\tfrac12r^22\pi=\pi r^2.$$</span></p>
<p>But your approach is interesting from the perspective of more recent mathematics. A <a href="https://en.wikipedia.org/wiki/Monte_Carlo_method#Mersenne_twister_(MT19937)_in_Python_(a_Monte_Carlo_method_simulation)" rel="nofollow noreferrer">Monte Carlo method</a> computes probabilities from integrals or vice versa. That the circle comprises a proportion <span class="math-container">$\tfrac{\pi}{4}$</span> of the smallest enclosing square's area means a randomly chosen point in that square has probability <span class="math-container">$\tfrac{\pi}{4}$</span> of being in the circle.</p>
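<p>The <span class="math-container">$\tfrac{\pi}{4}$</span> hit probability is easy to try out; here is a minimal Monte Carlo sketch (the function name and sample sizes are my own, purely illustrative):</p>

```python
import random

def mc_circle_area(radius, samples, seed=0):
    """Estimate the circle's area from the fraction of random points
    in the enclosing square that land inside the circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = rng.uniform(-radius, radius)
        y = rng.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            hits += 1
    # hits/samples estimates pi/4; the enclosing square has area (2*radius)**2
    return hits / samples * (2 * radius) ** 2

print(mc_circle_area(1, 100_000))  # roughly pi
```

<p>The estimate wobbles around <span class="math-container">$\pi r^2$</span> and sharpens slowly (error shrinks like <span class="math-container">$1/\sqrt{n}$</span>), which is the usual trade-off of Monte Carlo methods.</p>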
|
3,578,191 | <p>Without tables or a calculator, find the value of <span class="math-container">$\displaystyle\frac{(\sqrt5 +2)^6 - (\sqrt5 - 2)^6}{8\sqrt5}$</span>.</p>
<p>I do not understand how the positive/negative signs are obtained as shown in the book; is there a formula for expanding these kind of things (what kind of expression is it, by the way?)?</p>
<p><a href="https://i.stack.imgur.com/TZjZo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TZjZo.png" alt="enter image description here"></a></p>
<p>This is my solution:</p>
<p><span class="math-container">$\displaystyle\frac{(\sqrt5 +2)^6 - (\sqrt5 - 2)^6}{8\sqrt5}$</span></p>
<p><span class="math-container">$= \displaystyle\frac{[(\sqrt5+2)^3+(\sqrt5-2)^3][(\sqrt5+2)^3-(\sqrt5-2)^3]}{8\sqrt5}$</span></p>
<p><span class="math-container">$=\displaystyle\frac{(\sqrt5+2+\sqrt5-2)[(\sqrt5+2)^2\color{red}{+}(\sqrt5+2)(\sqrt5-2)+(\sqrt5-2)^2](\sqrt5+2-\sqrt5+2)[(\sqrt5+2)^2\color{red}{-}(\sqrt5+2)(\sqrt5-2)+(\sqrt5-2)^2]}{8\sqrt5}$</span></p>
<p><span class="math-container">$=\displaystyle\frac{[2\sqrt5(5+4\sqrt5+4+\color{red}{5-4}+5-4\sqrt5+4)][4(5+4\sqrt5+4\color{red}{-(5-4)}+(5-4\sqrt5+4))]}{8\sqrt5}$</span></p>
<p><span class="math-container">$=\displaystyle\frac{2584\sqrt5}{8\sqrt5}$</span></p>
<p><span class="math-container">$=323$</span></p>
<p>Because of the multiplication, I still got the same answer as given in the book. However, is the book or I correct in terms of the positive/negative signs(in red)?</p>
| trancelocation | 467,003 | <p>It might be interesting that you may avoid tedious calculations with roots if you use recurrence relations:</p>
<ul>
<li>Set <span class="math-container">$t_1 =2+\sqrt 5$</span> and <span class="math-container">$t_2 = 2-\sqrt 5$</span>.</li>
</ul>
<p>So, the value we are looking for is
<span class="math-container">$$\frac{t_1^6-t_2^6}{8\sqrt{5}}$$</span></p>
<p>This is <span class="math-container">$a_6$</span> in the recurrence relation </p>
<p><span class="math-container">$$a_{n+2} - (t_1+t_2)a_{n+1} + t_1t_2\,a_n = a_{n+2} - 4a_{n+1} - a_n = 0$$</span></p>
<p>with
<span class="math-container">$$a_0 = 0 \text{ and } a_1 = \frac{t_1-t_2}{8\sqrt{5}}=\frac 14$$</span></p>
<p>Now, calculating recursively you get </p>
<p><span class="math-container">$$a_6 = 4\left(76+\frac 14\right)+18 = 323$$</span></p>
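<p>The recursive calculation is easy to mechanise; here is a small sketch of my own using exact rational arithmetic, just to double-check the numbers:</p>

```python
from fractions import Fraction

def a(n):
    """a_0 = 0, a_1 = 1/4, and a_{n+2} = 4 a_{n+1} + a_n."""
    prev, cur = Fraction(0), Fraction(1, 4)
    for _ in range(n):
        prev, cur = cur, 4 * cur + prev
    return prev

print([a(n) for n in range(7)])  # the last entry is a_6 = 323
```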
|
<p>Determine all functions <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> such that <span class="math-container">$$f(x f(x+y))+f(f(y) f(x+y))=(x+y)^{2}, \forall x,y \in \mathbb{R} \tag{1}$$</span></p>
<p>My approach:
Let <span class="math-container">$x=0$</span>, we get
<span class="math-container">$$f(0)+f\left((f(y))^2\right)=y^2$$</span>
<span class="math-container">$\Rightarrow$</span>
<span class="math-container">$$f\left((f(y))^2\right)=y^2-f(0)\tag2 $$</span>
Let us assume <span class="math-container">$f(0)=k \ne 0$</span></p>
<p>Put <span class="math-container">$y=0$</span> above, we get
<span class="math-container">$$f(k^2)=-k$$</span>
Also put <span class="math-container">$y=-x$</span> in <span class="math-container">$(1)$</span>, we get
<span class="math-container">$$f(kf(x))+f(kf(-x))=0, \forall x \in \mathbb{R}$$</span>
Put <span class="math-container">$x=0$</span> above we get
<span class="math-container">$$f(k^2)=0$$</span>
<span class="math-container">$\Rightarrow$</span>
<span class="math-container">$f(k^2)$</span> has two different images <span class="math-container">$0,-k$</span> which contradicts that <span class="math-container">$f$</span> is a function. Hence <span class="math-container">$k=0 \Rightarrow f(0)=0$</span>.
So from <span class="math-container">$(2)$</span> we get:
<span class="math-container">$$f\left((f(y))^2\right)=y^2 \cdots (3)$$</span>
Now put <span class="math-container">$y=0, x=f(x)$</span> in <span class="math-container">$(1)$</span>, and use the fact <span class="math-container">$f(0)=0$</span>,we get
<span class="math-container">$$f\left((f(x))^2\right)=(f(x))^2$$</span>
Since <span class="math-container">$x$</span> is dummy variable, we get <span class="math-container">$$f\left((f(y))^2\right)=(f(y))^2 \cdots (4)$$</span>
From <span class="math-container">$(3),(4)$</span>, we get <span class="math-container">$$f(x)=\pm x$$</span></p>
<p>I just want to ask, is my approach fine? If not where is the flaw? Also other approaches are welcomed.</p>
| Bruno B | 1,104,384 | <p>By taking <span class="math-container">$f(y)^2 = y^2$</span> in <span class="math-container">$(3)$</span>, we get:
<span class="math-container">$$\forall y \in \mathbb{R},\quad f(y^2) = y^2$$</span>
Thus: <span class="math-container">$$\forall x \in \mathbb{R_+},\quad f(x) = x$$</span>
Now, let <span class="math-container">$x \in \mathbb{R}_-$</span>, and define <span class="math-container">$y := 1-x \in \mathbb{R}_+$</span>.<br />
Then, by inputting those in <span class="math-container">$(1)$</span>, and using that <span class="math-container">$f(x + y) = f(1) = 1$</span>, we obtain:
<span class="math-container">$$f(x) + y = f(x) + f(f(y)) = 1^2 = 1$$</span>
Therefore:
<span class="math-container">$$f(x) = 1 - y = x$$</span>
Hence <span class="math-container">$f$</span> is the identity on <span class="math-container">$\mathbb{R}$</span>.</p>
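<p>As a quick numerical sanity check (a snippet of my own), the identity does satisfy the original equation <span class="math-container">$(1)$</span>:</p>

```python
def f(t):
    return t  # the identity

def lhs(x, y):
    # left-hand side of equation (1)
    return f(x * f(x + y)) + f(f(y) * f(x + y))

# spot-check on a few arbitrary points
for x in (-2.0, 0.0, 1.5):
    for y in (-1.0, 0.5, 3.0):
        assert abs(lhs(x, y) - (x + y) ** 2) < 1e-9
print("the identity satisfies (1) at all sampled points")
```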
|
2,943,790 | <p>A function is said to be <em>continuous at zero</em> iff:</p>
<p><span class="math-container">$\lim_{x \rightarrow 0}{f(x)} = f(0)$</span></p>
<p>Could this be the same as saying:</p>
<ul>
<li>Let <span class="math-container">$\Delta$</span> = <em>the smallest open set containing zero</em></li>
<li><span class="math-container">$f(x) = f(0), \forall x \in \Delta$</span></li>
</ul>
<p>Am I misunderstanding what a limit is, or are these two definitions equivalent?</p>
<p>Edit: I've got a few responses saying that there is no such set as <span class="math-container">$\Delta$</span>. I agree. However, it seems to me that the expression:</p>
<p><span class="math-container">$\lim_{x \rightarrow 0}{f(x)} = f(0)$</span></p>
<p>is definitively claiming to evaluate <span class="math-container">$f$</span> at <em>some</em> unspecified value or values <span class="math-container">$x \neq 0$</span> but not claiming any particular values. To be specific, for any particular value you could name, the claim that the limit exists does <em>not</em> claim it needs to evaluate f at that point. So, what exactly is it claiming? </p>
| Mohammad Riazi-Kermani | 514,496 | <p>There is no smallest open set containing zero. </p>
<p>Any open set containing zero by definition contains an open interval containing zero and that interval contains a smaller open interval containing zero.</p>
|
2,943,790 | <p>A function is said to be <em>continuous at zero</em> iff:</p>
<p><span class="math-container">$\lim_{x \rightarrow 0}{f(x)} = f(0)$</span></p>
<p>Could this be the same as saying:</p>
<ul>
<li>Let <span class="math-container">$\Delta$</span> = <em>the smallest open set containing zero</em></li>
<li><span class="math-container">$f(x) = f(0), \forall x \in \Delta$</span></li>
</ul>
<p>Am I misunderstanding what a limit is, or are these two definitions equivalent?</p>
<p>Edit: I've got a few responses saying that there is no such set as <span class="math-container">$\Delta$</span>. I agree. However, it seems to me that the expression:</p>
<p><span class="math-container">$\lim_{x \rightarrow 0}{f(x)} = f(0)$</span></p>
<p>is definitively claiming to evaluate <span class="math-container">$f$</span> at <em>some</em> unspecified value or values <span class="math-container">$x \neq 0$</span> but not claiming any particular values. To be specific, for any particular value you could name, the claim that the limit exists does <em>not</em> claim it needs to evaluate f at that point. So, what exactly is it claiming? </p>
| José Carlos Santos | 446,262 | <p>There is no such thing as “the smallest open set containing <span class="math-container">$0$</span>”. That's so because if <span class="math-container">$A$</span> is an open subset of <span class="math-container">$\mathbb R$</span> (I'm assuming that you're working in <span class="math-container">$\mathbb R$</span>) and <span class="math-container">$0\in A$</span>, then <span class="math-container">$A\supset(-\varepsilon,\varepsilon)$</span>, for some <span class="math-container">$\varepsilon>0$</span>. If <span class="math-container">$A\varsupsetneq(-\varepsilon,\varepsilon)$</span>, take <span class="math-container">$A^\star=(-\varepsilon,\varepsilon)$</span>; otherwise, take <span class="math-container">$A^\star=\left(-\frac\varepsilon2,\frac\varepsilon2\right)$</span>. In each case, <span class="math-container">$A^\star$</span> is an open set containing <span class="math-container">$0$</span> and <span class="math-container">$A^\star\varsubsetneq A$</span>.</p>
|
1,715,265 | <p>I've tried a method similar to showing that $\mathbb{Q}(\sqrt2, \sqrt3)$ is a primitive field extension, but the cube root of 2 just makes it a nightmare.</p>
<p>Thanks in advance </p>
| Golan Levy | 664,446 | <p>I think there is a simpler solution.</p>
<p>Let <span class="math-container">$a = \sqrt{2} \sqrt[3]{2}$</span>.</p>
<p><span class="math-container">$a$</span> is clearly in the field extension and both generators can be generated by it:</p>
<p><span class="math-container">$\sqrt{2} = (\frac{a}{2})^{-3}$</span></p>
<p><span class="math-container">$\sqrt[3]{2} = (\frac{a}{2})^{-2}$</span>.</p>
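<p>A quick floating-point check of these two identities (a snippet of my own):</p>

```python
a = 2 ** 0.5 * 2 ** (1 / 3)  # a = sqrt(2) * cbrt(2) = 2**(5/6)

# (a/2)**(-3) should recover sqrt(2), and (a/2)**(-2) should recover cbrt(2)
assert abs((a / 2) ** -3 - 2 ** 0.5) < 1e-12
assert abs((a / 2) ** -2 - 2 ** (1 / 3)) < 1e-12
print("both generators recovered from a")
```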
|
3,360,396 | <p>In trying to answer <a href="https://math.stackexchange.com/q/1854193/104041">this question</a> on MSE, I got stuck. This taunts me because I think I should be able to do it.</p>
<h2>The Question:</h2>
<blockquote>
<p>Let <span class="math-container">$\phi : G\twoheadrightarrow H$</span> be an epimorphism of groups. Suppose <span class="math-container">$W$</span> and <span class="math-container">$K$</span> are conjugate in <span class="math-container">$H$</span>. Show that <span class="math-container">$\phi^{-1}(W)$</span> and <span class="math-container">$\phi^{-1}(K)$</span> are conjugate in <span class="math-container">$G$</span>.</p>
</blockquote>
<h2>My Attempt:</h2>
<p><a href="https://math.stackexchange.com/questions/1854193/pre-image-of-conjugate-subgroups?r=SearchResults#comment3795230_1854193"><em>Following @DerekHolt's comment . . .</em></a></p>
<blockquote>
<p>It is true and straightforward <span class="math-container">$g\phi^{-1}(W)g^{\color{red}{-1}} = \phi^{-1}(K)$</span>. Just do the normal thing and show that any element of the LHS is in the RHS and vice versa.</p>
</blockquote>
<p><em>So here goes . . .</em></p>
<p>Let <span class="math-container">$h\in g\phi^{-1}(W)g^{-1}$</span> for some <span class="math-container">$g\in G$</span>. Then there exists a <span class="math-container">$w\in W$</span> for which <span class="math-container">$h=g\phi^{-1}(w)g^{-1}$</span>. But since <span class="math-container">$gWg^{-1}=K$</span>, there exists a <span class="math-container">$k\in K$</span> such that . . . I don't know.</p>
<p>It's not immediately obvious what I should do, even given the hint. I think I did something simple wrong already.</p>
<hr>
<p><em>NB:</em> It's 00:41 here . . . now. That's my excuse.</p>
| Robert Shore | 640,080 | <p>Choose <span class="math-container">$x \in \phi^{-1}(W).$</span> Then <span class="math-container">$\phi(x) \in W$</span>, so <span class="math-container">$\exists h \in H \text{ such that } h\phi(x)h^{-1} \in K.$</span> Because <span class="math-container">$\phi$</span> is an epimorphism, <span class="math-container">$\exists g \in G \text{ with } \phi(g)=h$</span>, so <span class="math-container">$\phi(gxg^{-1})=h\phi(x)h^{-1} \in K$</span> and <span class="math-container">$gxg^{-1} \in \phi^{-1}(K)$</span>. Thus, <span class="math-container">$\phi^{-1}(K) \subseteq g^{-1}\phi^{-1}(W)g$</span>. Similarly, <span class="math-container">$\phi^{-1}(W)$</span> is contained in a conjugate of <span class="math-container">$\phi^{-1}(K)$</span> so equality follows.</p>
|
<p>I've been asked to prove the following:
if $x - \epsilon \le y$ for all $\epsilon > 0$ then $x \le y$.
I tried proof by contrapositive, but I keep having trouble choosing the right $\epsilon$. Can you guys help me out? </p>
| fleablood | 280,126 | <p>For shits and giggles and the learning experience it brings, here's a proof intended to reinforce your concepts of sup/inf.</p>
<p>$x - \epsilon \le y \forall \epsilon >0$</p>
<p>$x - y \le \epsilon \forall \epsilon > 0$</p>
<p>So $x-y$ is a lower bound for $\{\epsilon > 0\} = (0,\infty) $.</p>
<p>So $x-y \le \inf (0,\infty) $.</p>
<p>It's good practise to see if you can prove $\inf (0,\infty) =0$. I'm not going to do it here. Let's assume you've seen it done.</p>
<p>$x-y \le \inf (0,\infty) = 0$.</p>
<p>So $x \le y $.</p>
<p>We're done.</p>
<p>Okay the contra positive proof is easier, more direct and better. But analysis classes are going to expect us to get up to speed on bound proofs very quickly and with little practise. </p>
<p>We might as well get comfortable with them.</p>
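<p>For completeness, the contrapositive route I keep recommending comes down to one standard choice of $\epsilon$ (my phrasing):</p>

```latex
% Contrapositive: if x > y, exhibit an epsilon > 0 with x - epsilon > y.
\text{Suppose } x > y \text{ and put } \epsilon = \frac{x-y}{2} > 0.
\text{ Then } x - \epsilon = \frac{x+y}{2} > \frac{y+y}{2} = y,
\text{ contradicting the hypothesis } x - \epsilon \le y.
```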
|
2,913,974 | <p>In an additive category, we say that an object $A$ is compact if the functor $\text{Hom}(A, -)$ respects coproducts. That is, if the canonical morphism
$$
\coprod_{i} \text{Hom} \left( A, X_{i} \right) \longrightarrow \text{Hom} \left( A, \coprod_{i} X_{i} \right)
$$
is a bijection. Suppose $A \oplus B$ is compact. Why are the summands $A$ and $B$ compact? Everywhere claims this is obvious and provides no justification, but I cannot see why this is true. </p>
| Community | -1 | <p>More generally, a retract of a compact object is compact. Recall that expressing $U$ as a retraction of $V$ is to give morphisms</p>
<p>$$ U \xrightarrow{i} V \xrightarrow{p} U $$</p>
<p>whose composite is the identity on $U$. This is also called a split idempotent, since $ip$ is an idempotent map $V \to V$.</p>
<p>Note that any functor applied to a retract diagram gives another retract diagram.</p>
<p>The slick way to carry out the proof is to use the fact that the retract $U$ is both the equalizer and the coequalizer of the parallel maps $1_V$ and $ip$ (by the maps $i$ and $p$ respectively), and apply the general facts</p>
<p>$$ \hom(\mathop{\mathrm{colim}}_{i \in I} X_i , Y) \cong \lim_{i \in I} \hom(X_i, Y) $$
$$ \mathop{\mathrm{colim}}_{i \in I} \mathop{\mathrm{colim}}_{j \in J} X_{i,j} \cong
\mathop{\mathrm{colim}}_{j \in J} \mathop{\mathrm{colim}}_{i \in I} X_{i,j} $$</p>
|
2,435 | <p>I'm not sure we already have something similar, but I'm working on more code inspections for the IntelliJ plugin and it's always a good idea to ask the community. Since it doesn't really fit on main, I'm posting it here on Meta.</p>
<p>Linting is an excellent way to point the developer to probable errors that he might have overlooked. With a dynamic language like the one of Mathematica, we are a bit restricted with what we can do, since we cannot evaluate code and since most things require evaluation to be sure if they are a bug or not. Nevertheless, there are checks we can do. For instance <code>If[a=b, ..]</code> is most likely a bug and even if the developer knew what he did, it is a bad style.</p>
<p>There are trickier examples like <code>If[a<5,...]</code>. This looks okay but knowing that <code>a<5</code> stays unevaluated if the comparison cannot be done, it is a source of error because you end up with the unevaluated <code>If</code> expression in your wrong result and debugging might be complicated.</p>
<p>In both examples, wrapping <code>TrueQ</code> around the condition resolves the issue and although there might still be a bug, at least you can be sure your <code>If</code> expression is evaluated to some branch.
Other common sources of error are, e.g. <code>x_?testFunc[#]&</code> or implicit multiplication through linebreaks.</p>
<p><strong>Question:</strong> What are common bugs in your code and could they have been pointed out by a linter? If you like to share your thoughts, please provide one issue per answer, so that others can vote. I'm looking forward to your suggestions and see if I can implement some of them in IntelliJ.</p>
<hr>
<p>Example issue: With the <a href="https://mathematica.stackexchange.com/a/176489/187">alternative layout for packages</a> which was pointed out by Leonid, we can use <em>directives</em> for a static code analyzer to easily export symbols or declare them as package symbols. As Leonid pointed out, the directives need to be on their own source-line with nothing else on it. So for the directives</p>
<pre><code>PackageScope["myFunc"]
PackageExport["MyExportedFunc"]
</code></pre>
<p>I implemented the following rules</p>
<ol>
<li>They need to be on their own source line with nothing else on it</li>
<li>Their string argument must be a valid identifier</li>
</ol>
<p><a href="https://i.stack.imgur.com/3bO61.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3bO61.gif" alt="enter image description here"></a></p>
| Roman | 26,598 | <blockquote>
<h1>Status Completed</h1>
</blockquote>
<p>I agree with @CE that precedence issues would be great to point out, and maybe suggest to the programmer to use more parentheses. The relative precedences of <code>@</code>, <code>@@</code>, <code>@@@</code>, <code>/@</code>, <code>/.</code>, <code>//.</code>, etc. and the scopes of any associated slot operators <code>#</code>, <code>##</code>, etc. are sometimes difficult to track; can't expect anyone to remember the full <a href="https://reference.wolfram.com/language/tutorial/OperatorInputForms.html" rel="nofollow noreferrer">Operator Precedence Table</a>. I tend to use the CTRL-. selection trick to group-select according to precedence (that would also be great if you don't yet have it in IntelliJ); but complex cases still exceed my mental capacity.</p>
<p>For example, quick, without looking it up, does</p>
<pre><code>f /@ {1, 2, 3} /. i_ -> i^2
</code></pre>
<p>give <code>{f[1]^2, f[2]^2, f[3]^2}</code> or <code>{f[1], f[4], f[9]}</code>? In my opinion such edge cases deserve a lint warning.</p>
<h2>Comment halirutan:</h2>
<p>As said in the comment, structure-based selection already worked for a long time and you can hit <code>Ctrl</code>+<code>W</code> to expand or <code>Ctrl</code>+<code>Shift</code>+<code>W</code> to shrink the selection</p>
<p><img src="https://i.stack.imgur.com/P0bcU.gif" alt="img"></p>
<p>For highly complex expressions, rather than making a warning message which probably would annoy experienced users, maybe a "Can you parenthesize this expression for me"-action would be more convenient. With this, the user can quickly check how an expression is parsed</p>
<p><a href="https://i.stack.imgur.com/ih2ZK.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ih2ZK.gif" alt="enter image description here"></a></p>
<p>This feature is now integrated into the plugin. When you are editing a file and want to quickly see the parenthesitation (if this is a word), you can press <kbd>Ctrl</kbd>+<kbd>Alt</kbd>+<kbd>Shift</kbd>+<kbd>9</kbd>. This will put your file in read-only mode and display all parenthesis. Pressing <kbd>Esc</kbd> brings you back to edit-mode.</p>
<p>Note:</p>
<ul>
<li>The shortcut <kbd>Ctrl</kbd>+<kbd>Alt</kbd>+<kbd>Shift</kbd>+<kbd>9</kbd> is really just <kbd>Ctrl</kbd>+<kbd>Alt</kbd>+<kbd>(</kbd> on US layouts, which should make it easy to remember</li>
<li>You can also use the menu entry <strong>Wolfram Language | Show Parentheses</strong></li>
</ul>
|
2,435 | <p>I'm not sure we already have something similar, but I'm working on more code inspections for the IntelliJ plugin and it's always a good idea to ask the community. Since it doesn't really fit on main, I'm posting it here on Meta.</p>
<p>Linting is an excellent way to point the developer to probable errors that he might have overlooked. With a dynamic language like the one of Mathematica, we are a bit restricted with what we can do, since we cannot evaluate code and since most things require evaluation to be sure if they are a bug or not. Nevertheless, there are checks we can do. For instance <code>If[a=b, ..]</code> is most likely a bug and even if the developer knew what he did, it is a bad style.</p>
<p>There are trickier examples like <code>If[a<5,...]</code>. This looks okay but knowing that <code>a<5</code> stays unevaluated if the comparison cannot be done, it is a source of error because you end up with the unevaluated <code>If</code> expression in your wrong result and debugging might be complicated.</p>
<p>In both examples, wrapping <code>TrueQ</code> around the condition resolves the issue and although there might still be a bug, at least you can be sure your <code>If</code> expression is evaluated to some branch.
Other common sources of error are, e.g. <code>x_?testFunc[#]&</code> or implicit multiplication through linebreaks.</p>
<p><strong>Question:</strong> What are common bugs in your code and could they have been pointed out by a linter? If you like to share your thoughts, please provide one issue per answer, so that others can vote. I'm looking forward to your suggestions and see if I can implement some of them in IntelliJ.</p>
<hr>
<p>Example issue: With the <a href="https://mathematica.stackexchange.com/a/176489/187">alternative layout for packages</a> which was pointed out by Leonid, we can use <em>directives</em> for a static code analyzer to easily export symbols or declare them as package symbols. As Leonid pointed out, the directives need to be on their own source-line with nothing else on it. So for the directives</p>
<pre><code>PackageScope["myFunc"]
PackageExport["MyExportedFunc"]
</code></pre>
<p>I implemented the following rules</p>
<ol>
<li>They need to be on their own source line with nothing else on it</li>
<li>Their string argument must be a valid identifier</li>
</ol>
<p><a href="https://i.stack.imgur.com/3bO61.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3bO61.gif" alt="enter image description here"></a></p>
| user42582 | 42,582 | <p>I'm not sure if this qualifies as a separate claim related to operator precedence. I include it as an answer simply to facilitate the discussion format, even though I believe it should be a comment, instead.</p>
<p>I consistently have had a hard time with <code>expr//f/*g</code> when <code>g</code> is an anonymous function (ie defined '<em>then and there</em>' like eg <code>f/*#^2&</code>). The issue becomes more complicated when <em>more</em> functions are involved or when the complexity of the functions involved, increases.</p>
<p>A close second, which is probably too idiosyncratic or personal, has to do with confusing <code>Options</code> with <code>Attributes</code> and using <code>SetOptions</code> (like one would have used <code>SetAttributes</code>) when defining new functions. Obviously, using <code>SetOptions</code> instead of <code>Options</code> on a symbol that has no default <code>Options</code> causes a <code>SetOptions::optnf</code> message.</p>
|
2,300,613 | <p>I tried to calculate few derivatives, but I cant get $f^{(n)}(z)$ from them. Any other way? </p>
<p>$$f(z)=\frac{e^z}{1-z}\text{ at }z_0=0$$</p>
| Simply Beautiful Art | 272,831 | <p>Hint:</p>
<p>$$\frac1{1-z}=\sum_{n=0}^\infty z^n$$</p>
<p>$$e^z=\sum_{n=0}^\infty\frac{z^n}{n!}$$</p>
<p>Now apply <a href="https://en.wikipedia.org/wiki/Cauchy_product#Cauchy_product_of_two_power_series" rel="nofollow noreferrer">Cauchy products</a> to see that</p>
<p>$$\frac{e^z}{1-z}=\sum_{n=0}^\infty z^n\sum_{k=0}^n\frac1{k!}=\sum_{n=0}^\infty e_n(1)z^n$$</p>
<p>where $e_n(x)$ is the <a href="http://mathworld.wolfram.com/ExponentialSumFunction.html" rel="nofollow noreferrer">exponential sum formula</a>.</p>
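<p>A numerical sanity check of the resulting coefficients (my own snippet; the truncation order 30 is arbitrary):</p>

```python
from math import exp, factorial

def coeff(n):
    """n-th Taylor coefficient of e^z/(1-z): the partial sum e_n(1)."""
    return sum(1 / factorial(k) for k in range(n + 1))

# the truncated series should match e^z/(1-z) for small z
z = 0.1
series = sum(coeff(n) * z ** n for n in range(30))
assert abs(series - exp(z) / (1 - z)) < 1e-12
print(series)
```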
|
957,400 | <p>S: Every employee who is honest and persistent is successful or bored.</p>
<p>Would this statement be the negations, converse, or contrapositive of S?</p>
<p>-> All employees who are dishonest or not persistent must be unsuccessful and not bored.</p>
| amWhy | 9,003 | <p>The altered statement is the converse of the contrapositive of $S$.</p>
<p>Contraposive of $S$: "All employees who are unsuccessful and not bored are dishonest or not persistent."</p>
<p>Converse of the contrapositive of $S$: "All employees who are dishonest or not persistent are unsuccessful and not bored."</p>
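<p>These claims are easy to confirm by brute force over all truth assignments (a check of my own; H = honest, P = persistent, S = successful, B = bored):</p>

```python
from itertools import product

def implies(p, q):
    return (not p) or q

for h, p, s, b in product([False, True], repeat=4):
    S = implies(h and p, s or b)                                    # original statement
    contrapositive = implies((not s) and (not b), (not h) or (not p))
    assert S == contrapositive  # a statement always equals its contrapositive

# the converse of the contrapositive is NOT equivalent to S:
assert any(
    implies(h and p, s or b) != implies((not h) or (not p), (not s) and (not b))
    for h, p, s, b in product([False, True], repeat=4)
)
print("contrapositive is equivalent to S; its converse is a different statement")
```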
|
598,962 | <p>I have to determine the following:</p>
<p>$$\lim_{x \rightarrow 0}\frac{9}{x}\left(\frac{3}{(x+3)^3}-\frac{1}{9}\right)$$</p>
<p>I've got so far:</p>
<p>$$\lim_{x \rightarrow 0}\frac{9}{x}\left(\frac{3}{(x+3)^3}-\frac{1}{9}\right)= \lim_{x \rightarrow 0}\left(\frac{27}{x(x+3)^3}-\frac{1}{x}\right)=\lim_{x \rightarrow 0} \left(\frac{27-(x+3)^3}{x(x+3)^3}\right)=\cdots$$</p>
<p>How to go on? I've got $\frac{\infty}{0}...$</p>
| DeepSea | 101,504 | <p>Let $L$ = the limit in question, then use definition of derivative for $f(x) = \frac{3}{(x + 3)^3}$, then $L = 9f'(0) = 9\left(\frac{-9}{(0 + 3)^4}\right) = -1$.</p>
|
90,070 | <h2>Question:</h2>
<p>Let <span class="math-container">$A\in\mathbb{R}^{n \times n}$</span> be an orthogonal matrix and let <span class="math-container">$\varepsilon>0$</span>. Then does there exist a rational orthogonal matrix <span class="math-container">$B\in\mathbb{R}^{n \times n}$</span> such that <span class="math-container">$\|A-B\|<\varepsilon$</span>?</p>
<h2>Definitions:</h2>
<ul>
<li>A matrix <span class="math-container">$A\in\mathbb{R}^{n \times n}$</span> is an <em>orthogonal matrix</em> if <span class="math-container">$A^T=A^{-1}$</span></li>
<li>A matrix <span class="math-container">$A\in\mathbb{R}^{n \times n}$</span> is a <em>rational matrix</em> if every entry of it is rational.</li>
</ul>
| Igor Rivin | 11,142 | <p>Yes. It is a theorem of Cayley that the mapping $S \rightarrow (S-I)^{-1}(S+1)$ gives a correspondence between the set of $n\times n$ skew-symmetric matrices over $\mathbb{Q}$ and the set of $n\times n$ orthogonal matrices which do not have one as an eigenvalue. Since the mapping is nice, and rational skew-symmetric matrices are dense in the set of all skew-symmetric matrices, you have your result. For more, see <a href="http://www.math.upenn.edu/~pemantle/Summer2007/Library/liebeck-osborne.pdf">the very nice paper by Liebeck and Osborne</a></p>
|
<p>I am making a computer program to play cards; for this algorithm to work I need to deal cards out randomly.
However, I know that some people cannot have some cards due to the rules of the card game.</p>
<p>To elaborate on this, imagine we have 3 players: <em>a</em>, <em>b</em> and <em>c</em>. Also, there are 4 cards left to divide: 1, 2, 3 and 4. I know from <em>c</em> that he cannot have card 1 or card 2, I know from player <em>a</em> that he cannot have card 3. Each player is listed below with their corresponding possible cards.</p>
<pre><code>a b c
1 1
2 2
3 3
4 4 4
</code></pre>
<p>Also, I know that player <em>a</em> must receive 1 card, player <em>b</em> must receive 2 and player <em>c</em> must receive 1 card (for a total of 4 cards). Note, that it does not matter in which order a player receives his cards.</p>
<p><strong>My Question:</strong> Is there some algorithm that can deal the cards out randomly (with arbitrary amounts of cards and 3 players) such that each possible deal is equally likely?</p>
<p><strong>My attemps on solving this problem:</strong> First, I enlisted each possible solution (note that switching the cards around in the middle column doesn't influence the relative chances of the possibilities).</p>
<pre><code>a b c
1 2 3 4
1 2 4 3
2 1 3 4
2 1 4 3
4 1 2 3
</code></pre>
<p>And then I noticed that every algorithm that I could think of did not sample the above possibilities with equal chances.
I could, however, come up with one algorithm, which is to generate a random partition of 1, 2, 3 and 4 and check if it is valid. If it's not, I generate a random partition again and check it, and so on.
Although this gives me a random sample where all options are equally likely, as we move on to more and more cards this algorithm takes up a lot of time. I have had no success in finding speedups or different algorithms to solve this problem.</p>
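<p>For reference, the rejection approach described above can be sketched as follows (my own code, names illustrative):</p>

```python
import random

def rejection_deal(poss, counts, seed=0):
    """Shuffle all cards, split into hands, and retry until every hand
    contains only cards its player is allowed to hold."""
    rng = random.Random(seed)
    cards = sorted(set().union(*poss))
    while True:
        rng.shuffle(cards)
        hands, i = [], 0
        for n in counts:
            hands.append(set(cards[i:i + n]))
            i += n
        if all(hand <= allowed for hand, allowed in zip(hands, poss)):
            return hands

print(rejection_deal([{1, 2, 4}, {1, 2, 3, 4}, {3, 4}], [1, 2, 1]))
```

<p>Each shuffle is uniform over all ways to split the deck, so every valid deal is accepted with equal probability; the downside is exactly the retry loop, which gets slow as the constraints tighten.</p>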
| jorisperrenet | 1,049,661 | <p>So, I've been busy with implementing the algorithm of @kodlu.
Here is my Python code for anyone walking into a similar problem.</p>
<pre><code>import random
random.seed(0)
### Which card each player can receive
poss = [{1, 2, 4}, {1, 2, 3, 4}, {3, 4}]
N = [1, 2, 1]
num_players = len(poss) # in my case: 3 players
# The final deal
deal = [set() for _ in range(num_players)]
# The cards that need to be divided
cards = set().union(*poss)
assert len(cards) == sum(N) # all cards must be divided
# The allowed (card,player) pairs
L = []
for player in range(num_players):
L += [(card, player) for card in poss[player]]
while True:
repeat = True
while repeat: # we repeat this untill L doesn't change
repeat = False
### e.g. if a player has 3 allowed cards and he needs 3 cards, we can
# give all cards to that player. This is needed to avoid impossible games.
unique = {player: 0 for player in range(num_players)}
unique_card = {player: [] for player in range(num_players)}
for card, player in L:
unique[player] += 1
unique_card[player].append(card)
# Discard those cards
for player in range(num_players):
if unique[player] == N[player] and N[player] != 0:
cards_to_receive = set(unique_card[player])
# As all remaining cards for the player are divided he does not
# need any more cards.
N[player] = 0
# Add the cards to the final deal
deal[player].update(cards_to_receive)
# We can discard all pairs of the player and all pairs containing
# a card that has just been given to this player.
L = [pair for pair in L
if (pair[1] != player and
pair[0] not in cards_to_receive)
]
cards -= cards_to_receive
repeat = True
# If there are no cards left to divide, break
# (else, randint gives an error)
if L == []:
break
# Draw a random (card,player) pair from L
rand = random.randint(0, len(L)-1)
card, player = L[rand]
# Add the card to the final deal
deal[player].add(card)
# Discard this pair and this card
N[player] -= 1
if N[player] == 0:
# Discard this player
L = [i for i in L if i[1] != player]
cards.discard(card)
L = [i for i in L if i[0] != card]
# Check if the answer is correct
if cards != set():
print('Something went wrong...')
print(f'Leftover: {cards}')
print('Maybe try to deal again')
print(deal)
</code></pre>
<p>Although this works in most cases, I could find cases for which the program ran into an impossible situation; a simple fix is to try dealing the cards out again.
However, for my use (and the cases my program gives it) it works just fine and I do not need to re-deal the cards.</p>
<p>Still, I have not found a more elegant solution to this problem (one with no need to re-deal), so help is still very much appreciated.</p>
|
3,395,098 | <p>I am trying to work out for which <span class="math-container">$\lambda_1, \lambda_2 > 0$</span> it is true that <span class="math-container">$f(y) = \lambda_1 e^{y-\lambda_1 e^y} + \lambda_2 e^{y-\lambda_2 e^y}$</span> is unimodal.</p>
<p>Experimentally it seems it is unimodal when <span class="math-container">$\lambda_1 < \lambda_2$</span> and <span class="math-container">$\frac{\lambda_2}{\lambda_1} < 7.5$</span>.</p>
<p>To work this out I started with:</p>
<p><span class="math-container">$$\frac{d}{dy} \left(\lambda_1 e^{y-\lambda_1 e^y} + \lambda_2 e^{y-\lambda_2 e^y} \right) = \lambda_1 e^{y - \lambda_1 e^y} (1 - \lambda_1 e^y ) + \lambda_2 e^{y - \lambda_2 e^y} (1 - \lambda_2 e^y )$$</span></p>
<p>It seems we then need to check when</p>
<p><span class="math-container">$$\lambda_1 e^{y - \lambda_1 e^y} (1 - \lambda_1 e^y ) + \lambda_2 e^{y - \lambda_2 e^y} (1 - \lambda_2 e^y ) = 0$$</span></p>
<p>has more than one solution when solved for <span class="math-container">$y \in \mathbb{R}$</span>. How can we determine the conditions under which it has different numbers of solutions?</p>
<h1>Added:</h1>
<p>Substituting <span class="math-container">$z = e^y$</span> and dividing by <span class="math-container">$e^{y-1}$</span> we are trying to determine how many solutions</p>
<p><span class="math-container">$$
\lambda_1 e^{1-\lambda_1 z}(1-\lambda_1 z) +\lambda_2e^{1-\lambda_2 z}(1-\lambda_2 z) = 0
$$</span></p>
<p>has with <span class="math-container">$z > 0$</span>.</p>
<h1>Examples:</h1>
<p>Example <span class="math-container">$\lambda_1 = 1, \lambda_2 = 7$</span> with only one mode (code in python):</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
def pdf_func(y, params):
return sum([lambd*np.exp(y - lambd * np.exp(y)) for lambd in params])
params = [1, 7]
xs = np.linspace(-10,10,1000)
plt.plot(xs, [pdf_func(y, params) for y in xs])
</code></pre>
<p><a href="https://i.stack.imgur.com/iN2Yd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iN2Yd.png" alt="enter image description here" /></a></p>
<p>Example <span class="math-container">$\lambda_1 = 1, \lambda_2 = 50$</span> with two modes:</p>
<p><a href="https://i.stack.imgur.com/HFAnp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HFAnp.png" alt="enter image description here" /></a></p>
<h1>Questions</h1>
<ul>
<li>How can one prove (assuming it is true) that the number of local maxima that <span class="math-container">$f(y)$</span> has is either 1 or 2 and there are no other possibilities?</li>
<li>Is it true that for <span class="math-container">$\lambda_2 > \lambda_1 > 0$</span>, there exists a threshold <span class="math-container">$c$</span> so that if <span class="math-container">$\frac{\lambda_2}{\lambda_1} < c$</span> then <span class="math-container">$f(y)$</span> is unimodal and if not it has two local maxima? (My guess is that the answer is yes and this threshold is around <span class="math-container">$7.5$</span>.)</li>
</ul>
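<p>A purely numerical way to explore both questions is to count the $+\to-$ sign changes of $f'(y)$ on a fine grid; each such change marks a local maximum. This is only a sketch: the grid range and resolution are ad hoc choices, not a proof.</p>

```python
import math

def count_local_maxima(l1, l2, lo=-10.0, hi=5.0, n=100000):
    """Count + -> - sign changes of f'(y) on a uniform grid over [lo, hi]."""
    def fprime(y):
        e = math.exp
        return (l1 * e(y - l1 * e(y)) * (1 - l1 * e(y))
                + l2 * e(y - l2 * e(y)) * (1 - l2 * e(y)))
    maxima, prev = 0, fprime(lo)
    for i in range(1, n + 1):
        cur = fprime(lo + (hi - lo) * i / n)
        if prev > 0 and cur < 0:
            maxima += 1
        prev = cur
    return maxima

print(count_local_maxima(1, 7), count_local_maxima(1, 50))   # 1 2
```

<p>This reproduces the plots: one mode for $(\lambda_1,\lambda_2)=(1,7)$ and two for $(1,50)$.</p>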
| River Li | 584,414 | <p><strong>The proof is not hard but long</strong>. </p>
<p>Let <span class="math-container">$w = \lambda_1 \mathrm{e}^y$</span> and <span class="math-container">$c = \frac{\lambda_2}{\lambda_1}$</span>.
Let <span class="math-container">$$g(w) = w\mathrm{e}^{-w} + cw \mathrm{e}^{-cw}.$$</span></p>
<p>We first give the following auxiliary result. The proof is given later.</p>
<p><strong>Fact 1</strong>: Let <span class="math-container">$w_1 = \frac{c+1 - \sqrt{c^2-6c+1}}{2c}$</span> and <span class="math-container">$w_2 = \frac{c+1 + \sqrt{c^2-6c+1}}{2c}$</span>. Then we have:</p>
<p>i) <span class="math-container">$g'(w_1) < 0$</span> for <span class="math-container">$c > 3 + 2\sqrt{2}$</span>.</p>
<p>ii) <span class="math-container">$g'(w_2) = 0$</span> has exactly one solution on <span class="math-container">$(3+2\sqrt{2}, \infty)$</span>, denoted by <span class="math-container">$c_0 \approx 7.566278$</span>.</p>
<p>iii) <span class="math-container">$g'(w_2) \le 0$</span> for <span class="math-container">$3 + 2\sqrt{2} < c \le c_0$</span>. </p>
<p>iv) <span class="math-container">$g'(w_2) > 0$</span> for <span class="math-container">$c > c_0$</span>.</p>
<p>Now let us proceed. We give the results for <span class="math-container">$c \ge 1$</span> as follows. The proof is given later.</p>
<p><strong>Lemma 1</strong>: If <span class="math-container">$1\le c \le c_0$</span>, then <span class="math-container">$g(w)$</span> is unimodal on <span class="math-container">$(0, \infty)$</span>. If <span class="math-container">$c > c_0$</span>, then <span class="math-container">$g(w)$</span> has exactly two local maxima on <span class="math-container">$(0, \infty)$</span>.</p>
<p>For <span class="math-container">$0 < c \le 1$</span>, simply let <span class="math-container">$c_1 = \frac{1}{c}$</span> and <span class="math-container">$w_1 = cw$</span> to get <span class="math-container">$w\mathrm{e}^{-w} + cw\mathrm{e}^{-cw}
= c_1w_1\mathrm{e}^{-c_1w_1} + w_1\mathrm{e}^{-w_1}$</span>. Thus, we immediately have the following results:</p>
<p><strong>Lemma 2</strong>: If <span class="math-container">$\frac{1}{c_0} \le c \le 1$</span>, then <span class="math-container">$g(w)$</span> is unimodal on <span class="math-container">$(0, \infty)$</span>. If <span class="math-container">$0 < c < \frac{1}{c_0}$</span>, then <span class="math-container">$g(w)$</span> has exactly two local maxima on <span class="math-container">$(0, \infty)$</span>.</p>
<p><span class="math-container">$\phantom{2}$</span></p>
<p><strong>Proof of Fact 1</strong>: Note that <span class="math-container">$\frac{1}{c} < w_1 < w_2 < 1$</span>. We have
<span class="math-container">\begin{align}
g'(w_1) &= \mathrm{e}^{-w_1}c(cw_1-1)\big(\frac{1-w_1}{c(cw_1-1)} - \mathrm{e}^{(1-c)w_1}\big), \\
g'(w_2) &= \mathrm{e}^{-w_2}c(cw_2-1)\big(\frac{1-w_2}{c(cw_2-1)} - \mathrm{e}^{(1-c)w_2}\big).
\end{align}</span>
Let
<span class="math-container">\begin{align}
h(w_1) &= \ln \frac{1-w_1}{c(cw_1-1)} - (1-c)w_1, \\
h(w_2) &= \ln \frac{1-w_2}{c(cw_2-1)} - (1-c)w_2.
\end{align}</span></p>
<p>For <span class="math-container">$c\in (3+2\sqrt{2}, \infty)$</span>, let <span class="math-container">$t = c + \sqrt{c^2-6c+1}$</span>, then we have <span class="math-container">$t \in ( 3 + 2\sqrt{2}, \infty)$</span> and <span class="math-container">$c = \frac{t^2-1}{2t-6}$</span>.
With this substitution (actually the so-called Euler substitution), we have
<span class="math-container">\begin{align}
h(w_1) &= \ln \frac{2(t-3)^3}{(t+1)^3(t-1)} + \frac{2(t^2-2t+5)}{(t+1)(t-3)} \triangleq h_1(t), \\
h(w_2) &= \ln\frac{8(t-3)}{(t-1)^3(t+1)} + \frac{t^2-2t+5}{2t-2} \triangleq h_2(t).
\end{align}</span>
We have
<span class="math-container">\begin{align}
h_1'(t) &= -\frac{(t^2-6t+1)(t^2-10t+5)}{(t-3)^2(t-1)(t+1)^2}, \\
h_2'(t) &= \frac{(t^2-6t+1)(t^2-4t-1)}{2(t-1)^2(t+1)(t-3)}.
\end{align}</span>
It is easy to prove that <span class="math-container">$h_1'(t) > 0$</span> for <span class="math-container">$t\in (3+2\sqrt{2}, 5+2\sqrt{5})$</span> and <span class="math-container">$h_1'(t) < 0$</span> for <span class="math-container">$t\in (5+2\sqrt{5}, \infty)$</span>.
Thus, <span class="math-container">$h_1(t)$</span> is strictly increasing on <span class="math-container">$(3+2\sqrt{2}, 5+2\sqrt{5})$</span> and strictly decreasing on <span class="math-container">$(5+2\sqrt{5}, \infty)$</span>.
Note that <span class="math-container">$h_1(5+2\sqrt{5}) < 0$</span>. Thus, we have <span class="math-container">$h_1(t) < 0$</span> for <span class="math-container">$t\in (3+2\sqrt{2}, \infty)$</span>.
Thus, <span class="math-container">$g'(w_1) < 0$</span> for <span class="math-container">$c\in (3 + 2\sqrt{2}, \infty)$</span>.</p>
<p>Also, it is easy to prove that <span class="math-container">$h_2'(t) > 0$</span> for <span class="math-container">$t\in (3 + 2\sqrt{2}, \infty)$</span>. Thus, <span class="math-container">$h_2(t)$</span> is strictly increasing on <span class="math-container">$(3 + 2\sqrt{2}, \infty)$</span>.
Note also that <span class="math-container">$h_2(3+2\sqrt{2}) < 0$</span> and <span class="math-container">$h_2(\infty) = \infty$</span>. Thus, <span class="math-container">$h_2(t)=0$</span> has exactly one solution on <span class="math-container">$(3 + 2\sqrt{2}, \infty)$</span>, denoted by <span class="math-container">$t_0 \approx 11.15109339$</span>.
Also, <span class="math-container">$h_2(t) < 0$</span> for <span class="math-container">$t\in (3+2\sqrt{2}, t_0)$</span> and <span class="math-container">$h_2(t) > 0$</span> for <span class="math-container">$t\in (t_0, \infty)$</span>.
Let <span class="math-container">$c_0 = \frac{t_0^2-1}{2t_0-6}\approx 7.566278$</span>. We have <span class="math-container">$g'(w_2) < 0$</span> for <span class="math-container">$3 + 2\sqrt{2} < c < c_0$</span>, and <span class="math-container">$g'(w_2) > 0$</span> for <span class="math-container">$c > c_0$</span>.
The desired result follows. This completes the proof of Fact 1.</p>
<p><span class="math-container">$\phantom{2}$</span></p>
<p><strong>Proof of Lemma 1</strong>: If <span class="math-container">$c = 1$</span>, we have <span class="math-container">$g(w) = 2w\mathrm{e}^{-w}$</span> which is unimodal on <span class="math-container">$(0, \infty)$</span>.</p>
<p>In the following, assume that <span class="math-container">$c > 1$</span>. We have
<span class="math-container">$$g'(w) = \mathrm{e}^{-w}\big(1 - w - c(cw-1)\mathrm{e}^{(1-c)w}\big).$$</span>
Clearly, <span class="math-container">$g'(w) = 0, \ w\in (0, \infty)$</span> is equivalent to
<span class="math-container">$$\ln \frac{1-w}{c(cw-1)} = (1-c)w, \quad \frac{1}{c} < w < 1.$$</span>
Let
<span class="math-container">$$h(w) = \ln \frac{1-w}{c(cw-1)} - (1-c)w.$$</span>
We have
<span class="math-container">$$h'(w) = \frac{-(c-1)(cw^2 - (c+1)w + 2)}{(cw-1)(1-w)}.$$</span>
There are two possible cases:</p>
<p>1) If <span class="math-container">$1 < c \le 3+2\sqrt{2}$</span>, we have <span class="math-container">$cw^2 - (1+c)w + 2 \ge 0$</span> for <span class="math-container">$w\in (-\infty, \infty)$</span>,
with equality only if <span class="math-container">$c = 3 + 2\sqrt{2}$</span> and <span class="math-container">$w = 2-\sqrt{2}$</span>.
Thus, <span class="math-container">$h'(w)\le 0$</span> for <span class="math-container">$\frac{1}{c} < w < 1$</span>, with equality only if <span class="math-container">$c = 3 + 2\sqrt{2}$</span> and <span class="math-container">$w = 2-\sqrt{2}$</span>.
Thus, <span class="math-container">$h(w)$</span> is strictly decreasing on <span class="math-container">$\frac{1}{c} < w < 1$</span>.
Note also that <span class="math-container">$h(\frac{1}{c}^{+}) = \infty$</span> and <span class="math-container">$h(1^{-}) = -\infty$</span>.
Thus, <span class="math-container">$h(w) = 0$</span> has exactly one solution on <span class="math-container">$\frac{1}{c} < w < 1$</span>.
Thus, <span class="math-container">$g'(w)=0$</span> has exactly one solution on <span class="math-container">$(0, \infty)$</span>.
Note also that <span class="math-container">$g'(0) > 0$</span> and <span class="math-container">$g'(\infty) = -\infty$</span>. Thus, <span class="math-container">$g(w)$</span> is unimodal on <span class="math-container">$(0, \infty)$</span>.</p>
<p>2) If <span class="math-container">$c > 3 + 2\sqrt{2}$</span>, then <span class="math-container">$h'(w) < 0$</span> for <span class="math-container">$w\in (\frac{1}{c}, w_1)$</span>, <span class="math-container">$h'(w) > 0$</span> for <span class="math-container">$w\in (w_1, w_2)$</span> and
<span class="math-container">$h'(w) < 0$</span> for <span class="math-container">$w\in (w_2, 1)$</span> where <span class="math-container">$w_1, w_2$</span> are the two distinct real solutions of <span class="math-container">$h'(w)=0$</span> on <span class="math-container">$(\frac{1}{c}, 1)$</span> given by
<span class="math-container">$$w_1 = \frac{c+1 - \sqrt{c^2-6c+1}}{2c}, \quad w_2 = \frac{c+1 + \sqrt{c^2-6c+1}}{2c}.$$</span>
Thus, <span class="math-container">$h(w)$</span> is strictly decreasing on <span class="math-container">$(\frac{1}{c}, w_1)$</span>, strictly increasing on <span class="math-container">$(w_1,w_2)$</span>,
and strictly decreasing on <span class="math-container">$(w_2, 1)$</span>. There are two possible cases:</p>
<p><strong>Case I</strong> <span class="math-container">$3+2\sqrt{2} < c \le c_0$</span>:
From Fact 1, we have <span class="math-container">$h(w_2)\le 0$</span>. Since <span class="math-container">$h(w_1) < h(w_2) \le 0$</span> and <span class="math-container">$h(\frac{1}{c}^{+}) = \infty$</span>, there exists <span class="math-container">$d\in (\frac{1}{c}, w_1)$</span>
such that <span class="math-container">$h(d) = 0$</span>, <span class="math-container">$h(w) > 0$</span> for <span class="math-container">$w\in (\frac{1}{c}, d)$</span>, <span class="math-container">$h(w) < 0$</span> for <span class="math-container">$w\in (d, w_2)$</span>
and <span class="math-container">$h(w) < 0$</span> for <span class="math-container">$w\in (w_2, 1)$</span>. Note that for <span class="math-container">$w\in (\frac{1}{c}, 1)$</span>,
<span class="math-container">$$g'(w) = \mathrm{e}^{-w}c(cw-1)\Big(\frac{1-w}{c(cw-1)} - \mathrm{e}^{(1-c)w}\Big).$$</span>
Thus, <span class="math-container">$g'(w) > 0$</span> for <span class="math-container">$w \in (\frac{1}{c}, d)$</span>, <span class="math-container">$g'(w)<0$</span> for <span class="math-container">$w\in (d, w_2)$</span> and <span class="math-container">$g'(w) < 0$</span> for <span class="math-container">$w\in (w_2, 1)$</span>.
Also, clearly <span class="math-container">$g'(w) > 0$</span> for <span class="math-container">$w\in (0, \frac{1}{c}]$</span> and <span class="math-container">$g'(w) < 0$</span> for <span class="math-container">$[1, \infty)$</span>.
Thus, <span class="math-container">$g'(w)>0$</span> for <span class="math-container">$w\in (0, d)$</span>, <span class="math-container">$g'(w) < 0$</span> for <span class="math-container">$w\in (d, w_2)$</span> and <span class="math-container">$g'(w) < 0$</span> for <span class="math-container">$w \in [w_2, \infty)$</span>.
Thus, <span class="math-container">$g(w)$</span> is strictly increasing on <span class="math-container">$(0, d)$</span>, and strictly decreasing on <span class="math-container">$(d, \infty)$</span>.
Thus, <span class="math-container">$g(w)$</span> is unimodal on <span class="math-container">$(0, \infty)$</span>.</p>
<p><strong>Case II</strong> <span class="math-container">$c > c_0$</span>: From Fact 1, we have <span class="math-container">$h(w_1) < 0$</span> and <span class="math-container">$h(w_2)> 0$</span>.
Note also that <span class="math-container">$h(\frac{1}{c}^{+}) = \infty$</span>.
Thus, there exists <span class="math-container">$d_1\in (\frac{1}{c}, w_1)$</span>, <span class="math-container">$d_2\in (w_1, w_2)$</span> and <span class="math-container">$d_3\in (w_2, 1)$</span> such that
<span class="math-container">$h(w) > 0$</span> for <span class="math-container">$w \in (\frac{1}{c}, d_1)$</span>, <span class="math-container">$h(w) < 0$</span> for <span class="math-container">$w\in (d_1, d_2)$</span>,
<span class="math-container">$h(w) > 0$</span> for <span class="math-container">$w\in (d_2, d_3)$</span> and <span class="math-container">$h(w) < 0$</span> for <span class="math-container">$w\in (d_3, 1)$</span>.
Note that for <span class="math-container">$w\in (\frac{1}{c}, 1)$</span>,
<span class="math-container">$$g'(w) = \mathrm{e}^{-w}c(cw-1)\Big(\frac{1-w}{c(cw-1)} - \mathrm{e}^{(1-c)w}\Big).$$</span>
Thus, <span class="math-container">$g'(w) > 0$</span> for <span class="math-container">$w \in (\frac{1}{c}, d_1)$</span>, <span class="math-container">$g'(w) < 0$</span> for <span class="math-container">$w\in (d_1, d_2)$</span>,
<span class="math-container">$g'(w) > 0$</span> for <span class="math-container">$w\in (d_2, d_3)$</span> and <span class="math-container">$g'(w) < 0$</span> for <span class="math-container">$w\in (d_3, 1)$</span>.
Also, clearly <span class="math-container">$g'(w) > 0$</span> for <span class="math-container">$w\in (0, \frac{1}{c}]$</span> and <span class="math-container">$g'(w) < 0$</span> for <span class="math-container">$[1, \infty)$</span>.
Thus, <span class="math-container">$g'(w)>0$</span> for <span class="math-container">$w\in (0, d_1)$</span>, <span class="math-container">$g'(w) < 0$</span> for <span class="math-container">$w\in (d_1, d_2)$</span>,
<span class="math-container">$g'(w) > 0$</span> for <span class="math-container">$w\in (d_2, d_3)$</span> and <span class="math-container">$g'(w) < 0$</span> for <span class="math-container">$w\in (d_3, \infty)$</span>.
Thus, <span class="math-container">$g(w)$</span> is strictly increasing on <span class="math-container">$(0, d_1)$</span>, strictly decreasing on <span class="math-container">$(d_1, d_2)$</span>,
strictly increasing on <span class="math-container">$(d_2, d_3)$</span>, and strictly decreasing on <span class="math-container">$(d_3, \infty)$</span>.
Thus, <span class="math-container">$g(w)$</span> has exactly two local maxima on <span class="math-container">$(0, \infty)$</span>. This completes the proof of Lemma 1.</p>
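<p>The threshold can be checked numerically: the sketch below bisects $h_2(t)=0$ on the interval where $h_2$ is strictly increasing and recovers $t_0$ and $c_0$ (endpoints and tolerances are arbitrary choices):</p>

```python
import math

def h2(t):
    # h2(t) = ln( 8(t-3) / ((t-1)^3 (t+1)) ) + (t^2 - 2t + 5) / (2t - 2)
    return (math.log(8 * (t - 3) / ((t - 1) ** 3 * (t + 1)))
            + (t * t - 2 * t + 5) / (2 * t - 2))

# Bisection; h2 is strictly increasing on (3 + 2*sqrt(2), infinity)
lo, hi = 6.0, 20.0
assert h2(lo) < 0 < h2(hi)
for _ in range(200):
    mid = (lo + hi) / 2
    if h2(mid) < 0:
        lo = mid
    else:
        hi = mid
t0 = (lo + hi) / 2
c0 = (t0 ** 2 - 1) / (2 * t0 - 6)
print(t0, c0)   # approximately 11.151 and 7.566
```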
|
1,081,021 | <p>In what follows I'm only considering positive real valued functions.</p>
<p>Everywhere I look about the definition of the Lebesgue integral it is required to consider a measurable function. Why do we not define the integral for non-measurable functions? From what I see, we require measurability of the simple functions that approximate $f$, not of $f$ itself. The definition I'm considering is: given a measure space $X$ with measure $\mu$ and a measurable function $f$, we define</p>
<p>$$
\int_E f \, \mathrm{d}\mu = \sup_{s \in S} \int_X s \,\mathrm{d}\mu
$$
where $S = \{ s : X \to [0, \infty) \mid 0 \le s \le f, s \text{ is simple, measurable} \}$.</p>
<p>For example consider $\mathbb{R}$ with the sigma algebra $\varnothing, \mathbb{R}$ with measure $\mu$ given by $\mu(\varnothing) = 0, \mu(\mathbb{R}) = 1$ and consider $f = \chi_{[0,1]}$ then why can't we say that
$$
\int_{\mathbb{R}} f \,\mathrm{d} \mu = 0
$$
(since the only measurable simple function such that $0\le s \le f$ is $s = 0$) which would follow the definition above? Is this not well defined? In general I'm struggling to see why measurable functions (other than measurable simple functions) are used.</p>
| Darrin | 117,053 | <p>In addition to the previous advice, note that the function you gave does not "approximate" $f$. An approximating sequence $\{s_n\}$ for $f$ (which $f$ would have iff $f$ were measurable provided the measure space for the domain of $f$ is complete) would need to be within a distance of $\epsilon$ from $f$ for any given $\epsilon >0$. However, the function $s$ you gave as an "approximation" cannot approximate $f$, in the sense that the sequence $\{s_n\}$ with $s_n=s$ $ \forall n \in \mathbb{N}$ gets no closer than $1$ from $\chi_{[0,1]}$. </p>
<p>Again, it is important to note that a function $f$ on a complete measure space $X$ is measurable if and only if $f$ is the pointwise limit of some sequence of simple functions -or, trivially, is a simple function itself. Consequently, any sequence of "simple" functions approximating a nonmeasurable function must contain a "simple" function with the characteristic function of a nonmeasurable set as part of its construction. In a sense, this is why you must include the assumption that $f$ is measurable in your definition for the Lebesgue integral. </p>
<p>To wit, recall that the integral of a characteristic function is the measure of the pullback set in your domain; in the case of your $f$, since $f$ is characteristic, the integral, were it to be defined, is the measure of the pullback, $$\int f d\mu = \mu\{f^{-1}\{1\}\}=\mu\{[0,1]\},$$ </p>
<p>but you have not defined the measure for $[0,1]$, which is not even in your $\sigma$-algebra; nor can we infer the measure of $[0,1]$ from the definition you gave for your measure space, as your collection of measurable subsets is already a closed $\sigma$-algebra that is $\sigma$-finite under $\mu$ (hence, your measure space cannot even be <em>extended</em>, in the usual Caratheodory way, to include $[0,1]$ with an accompanying well-defined measurement). </p>
<p>As a curiosity tangential to your question, it is possible for nonmeasurable functions to arise from limits of simple functions in a complete measure space, but such a collection of simple functions must be uncountable. For instance, with Lebesgue measure on $\mathbb{R}$, take $f=\chi_V$ to be the characteristic function of the (uncountable and nonmeasurable) Vitali set $V$ on $[0,1]$, and consider the (uncountable) collection of measurable functions $\{\chi_v\}, v\in V$. Then $\chi_V=\sup \{\chi_v\}_{v\in V}$. Were we to define an integral as you wish, then in this case, you may want to say that the integral $\int \chi_V = \sup \{ \int \chi_v \} = 0$. But, again, since $\chi_V$ is itself characteristic, we should then have $\mu(V)=0$, but $\mu(V)$ is not defined for $V$ under the Lebesgue measure. </p>
<p>To address your request for a resource, see Royden's Real Analysis, 4th ed., chapters 17 and 18 (particularly pp. 362-363 were helpful as a reference to me for this post). </p>
|
434,290 | <p>According to the <a href="http://arxiv.org/abs/0910.5922" rel="nofollow">equation 4</a>,
$$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)\tag{1}$$
what conditions make $$\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)=1$$
so that equation (1) becomes</p>
<p>$$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}$$
The author used the <a href="http://arxiv.org/abs/hep-ph/9503217" rel="nofollow">article reference</a> to establish the equation
$$\frac{1}{2} \Gamma_{lin}= \frac{1}{\tau_{linear}} \approx \frac{1.196}{\omega_{mass}} \approx \frac{.846}{R^2}$$
but I couldn't find the supporting argument there; can you explain this a bit, please?</p>
| xpaul | 66,420 | <p>We can solve this in the following simple way. In fact,
\begin{eqnarray}
I&=&\int_0^\infty \frac{\ln x}{1+x^4}dx=\int_0^1 \frac{\ln x}{1+x^4}dx+\int_1^\infty \frac{\ln x}{1+x^4}dx\\
&=&\int_0^1 \frac{\ln x}{1+x^4}dx-\int_0^1 \frac{x^2\ln x}{1+x^4}dx\tag{1}\\
&=&\int_0^1\sum_{n=0}^\infty(-1)^n(x^{4n}-x^{4n+2})\ln xdx\tag{2}\\
&=&\sum_{n=0}^\infty(-1)^{n+1}\left(\frac{1}{(4n+1)^2}-\frac{1}{(4n+3)^2}\right)\\
&=&\sum_{n=-\infty}^\infty(-1)^{n+1}\frac{1}{(4n+1)^2}\\
&=&-\frac{\sqrt{2}\pi^2}{16}\tag{3}.
\end{eqnarray}
Here, for (1), (2), and (3), we used the substitute $x\to\frac{1}{x}$, $\int_0^1x^n\ln xdx=-\frac1{(n+1)^2}$, and
$$ \sum_{n=-\infty}^\infty(-1)^{n}\frac{1}{(an+b)^2}=\frac{\pi^2}{a^2}\cdot\frac{\cos\frac{b\pi}{a}}{\sin^2\frac{b\pi}{a}} $$
respectively.</p>
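<p>The closed form can be sanity-checked by summing the alternating series from the derivation; the truncation error of the partial sum decays like $n^{-3}$, so a few thousand terms suffice:</p>

```python
import math

# Partial sum of  sum_{n>=0} (-1)^(n+1) * ( 1/(4n+1)^2 - 1/(4n+3)^2 )
s = sum((-1) ** (n + 1) * (1 / (4 * n + 1) ** 2 - 1 / (4 * n + 3) ** 2)
        for n in range(20000))
closed_form = -math.sqrt(2) * math.pi ** 2 / 16
print(s, closed_form)   # both approximately -0.87236
```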
|
680,205 | <p>Milnor lemma 2 pg 34
"Any orientation preserving diffeomorphism f on $R^m$ is smoothly homotopic to the identity"</p>
<p>So he proves that $f\simeq df_0$, which he says is clearly homotopic to the identity.
Can you explain why?</p>
<p>Here I found two explanations I don't understand:
1) $Gl^{+}(m,\mathbb{R})$ is path connected. Why is $df_0 \in Gl^{+}(m,\mathbb{R})$? What prevents $df_{0}\in Gl^{-}(m,\mathbb{R})$?</p>
<p>2) $df_0$ is an isomorphism everywhere and thus (why?) isotopic to the identity.</p>
<p>Thanks</p>
| Wisław | 78,540 | <p>This is not really an answer to your specific question, but another way of proving the lemma.</p>
<p>Since $f$ is an orientation preserving diffeomorphism on $\mathbb R^m$ it must have degree 1. By the Hopf degree theorem (p. 51 in Milnor), it is smoothly homotopic to the identity.</p>
|
945,104 | <p>7 people are attending a concert.</p>
<p>(a) In how many different ways can they be seated in a row?</p>
<p>(b) Two attendees are Alice and Bob. What is the probability that Alice sits next to Bob?</p>
<p>(c) Bob decides to make Alice a rainbow necklace with 7 beads, each painted a different
colour on one side (red, orange, yellow, blue, green, indigo, violet), placed on a chain
that is then closed to form a circle. How many different necklaces can he make? (Since
the beads can slide along the chain, the necklace with beads R O Y G B I V would be
considered the same as O Y G B I V R for example. The beads are plain on the back, so
the necklace cannot be turned over.)</p>
<p>How should I approach these questions? Are my answers correct?</p>
<p>For the first one I understand that it is a permutation.
Therefore (a) = 7! = 5040 possible different ways of sitting in a row.</p>
<p>b) p(7,2)
= 7!/(7-2)! = 5040/120 = 42, therefore probability = 42/5040 = 0.0083%</p>
<p>c) = 6!/2
because the first bead doesn't matter, and over 2 as it can either go left or right.</p>
| André Nicolas | 6,312 | <p>For Question (b), we want to count the number of seatings in which Alice and Bob are neighbours. We are assuming the seating is random. That may not be reasonable, if we consider Bob's actions described in (c).</p>
<p>The leftmost of the two seats occupied by our two heroes can be chosen in $6$ ways. For each such way, Alice and Bob can occupy that seat and the next one in $2$ ways. And then the rest of the people can fill in the rest of the spots in $5!$ ways, for a total of $(6)(2)(5!)$.</p>
<p>For the probability, divide by $7!$. Simplify. Fairly quickly we arrive at $\frac{2}{7}$. </p>
<p><strong>Another way:</strong> There are $\binom{7}{2}$ equally likely ways to choose a <em>set</em> of two seats. Of these, $6$ sets have Alice and Bob next to each other. So the required probability is $\frac{6}{\binom{7}{2}}$. This simplifies to $\frac{2}{7}$.</p>
<p><strong>Still another way:</strong> The probability that Alice occupies an end seat is $\frac{2}{7}$. If she does, the probability Bob is next to her is $\frac{1}{6}$.</p>
<p>The probability that Alice occupies a non-end seat is $\frac{5}{7}$. If she does, the probability Bob is next to her is $\frac{2}{6}$. Thus the required probability is $\frac{2}{7}\cdot\frac{1}{6}+\frac{5}{7}\cdot\frac{2}{6}$. This simplifies to $\frac{2}{7}$. </p>
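<p>The count $(6)(2)(5!)$ and the resulting probability $\frac{2}{7}$ can also be confirmed by brute force over all $7!$ seatings:</p>

```python
from fractions import Fraction
from itertools import permutations

people = ["Alice", "Bob", "C", "D", "E", "F", "G"]
favourable = total = 0
for seating in permutations(people):
    total += 1
    # Alice and Bob are neighbours iff their positions differ by 1
    if abs(seating.index("Alice") - seating.index("Bob")) == 1:
        favourable += 1

print(favourable, total, Fraction(favourable, total))   # 1440 5040 2/7
```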
|
213,198 | <p>Given the expression:</p>
<pre><code>Simplify[Reduce[Exists[{x},a x^2+b x+ c==0],x,Reals]]
</code></pre>
<p>The answer comes out as:</p>
<pre><code>(b==0 && ((c>0 && a<0)||(a>0 && c<0)))||(b!=0 && 4 a c<=b^2)||c==0
</code></pre>
<p>However, surely this is just the same as:</p>
<pre><code>b^2-4ac >= 0
</code></pre>
<p>Since if a or c is 0 this is always true and this one statement covers all the cases.</p>
<p>So why doesn't it simplify to this? Or is there a way to make it simplify to this?</p>
| Carl Woll | 45,431 | <p>The first predicate is false for a->0, b->0, c->1:</p>
<pre><code>(b==0 && ((c>0 && a<0)||(a>0 && c<0)))||(b!=0 && 4 a c<=b^2)||c==0 /. {a->0, b->0, c->1}
</code></pre>
<blockquote>
<p>False</p>
</blockquote>
<p>The second predicate is true for this case:</p>
<pre><code>b^2 - 4 a c >= 0 /. {a->0, b->0, c->1}
</code></pre>
<blockquote>
<p>True</p>
</blockquote>
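<p>The same point can be illustrated outside Mathematica: with $a=0, b=0, c=1$ the inequality $b^2-4ac \ge 0$ holds, yet the equation degenerates to $1=0$ and has no root (a small Python sketch; the sample points are illustrative only):</p>

```python
a, b, c = 0, 0, 1

discriminant_ok = b ** 2 - 4 * a * c >= 0   # 0 >= 0, so True

def p(x):
    # With a = b = 0 the "quadratic" is the constant c = 1, never zero
    return a * x ** 2 + b * x + c

assert discriminant_ok
assert all(p(x) == 1 for x in (-2.0, -1.0, 0.0, 1.0, 2.0))
print("discriminant condition holds, but no real root exists")
```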
|
213,198 | <p>Given the expression:</p>
<pre><code>Simplify[Reduce[Exists[{x},a x^2+b x+ c==0],x,Reals]]
</code></pre>
<p>The answer comes out as:</p>
<pre><code>(b==0 && ((c>0 && a<0)||(a>0 && c<0)))||(b!=0 && 4 a c<=b^2)||c==0
</code></pre>
<p>However, surely this is just the same as:</p>
<pre><code>b^2-4ac >= 0
</code></pre>
<p>Since if a or c is 0 this is always true and this one statement covers all the cases.</p>
<p>So why doesn't it simplify to this? Or is there a way to make it simplify to this?</p>
| user13892 | 13,892 | <p><code>Exists</code> and <code>ForAll</code> are qualifier statements and you can attempt to <code>Resolve</code> them to remove the qualifiers as follows:</p>
<pre><code>qualifierStatement=Exists[x,a x^2+b x+c==0]
</code></pre>
<p>Now to resolve it under the real domain as follows:</p>
<pre><code>resolvedStatement=Resolve[qualifierStatement,Reals]
</code></pre>
<p>Now to get your condition you need to ensure that the quadratic actually exist which is only true as long as <code>a!=0</code>, i.e. the highest term of power 2 exists. So we get the condition as follows:</p>
<pre><code>conditionForQuadratic=Simplify[resolvedStatement,Assumptions->a!=0]
</code></pre>
<blockquote>
<p>4 a c <= b^2</p>
</blockquote>
<p>Note: using <code>Reduce</code> is not appropriate here since it has general equation and inequality solving algorithms which does more than just <code>Resolve</code> the qualifiers. <code>Reduce</code> expands and splits the conditions too much for <code>Simplify</code> to bring them together again!</p>
|
815,195 | <p>I am working on an old qualifying exam problem and I can't seem to really get anywhere. I would love some help. Thank you.</p>
<p>Let $f$ be a polynomial such that
$|f(z)| ≤ 1 − |z|^2 + |z|^{1000}$
for all $z ∈ C.$ Prove that $|f(0)| ≤ 0.2.$</p>
| Umberto P. | 67,536 | <p>The minimum value of $\phi(t) = 1 - t^2 + t^{1000}$ on the set $[0,\infty)$ is found easily enough: since $\phi'(t) = -2t + 1000 t^{999} = -2t(1-500t^{998})$ the minimum occurs at $t_0 = \sqrt[998]{1/500}$. Thus if $|z| = t_0$, we have
$$|f(z)| \le 1 - t_0^2 + t_0^{1000}$$
which is (computation omitted thanks to Wolfram Alpha) less than $0.2$. The maximum modulus principle now implies that $|f(z)| \le 0.2$ for all $|z| \le t_0$.</p>
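<p>The omitted computation is easy to reproduce numerically (plain floating point; a quick check, not part of the proof):</p>

```python
# phi(t) = 1 - t^2 + t^1000 evaluated at its minimizer t0 = (1/500)^(1/998)
t0 = (1 / 500) ** (1 / 998)
phi = 1 - t0 ** 2 + t0 ** 1000
print(t0, phi)   # t0 ≈ 0.993792, phi ≈ 0.014352, comfortably below 0.2
```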
|
1,447,547 | <p>$$x^3>x$$</p>
<p>Steps I took:</p>
<p>$$x^{ 3 }-x>0$$</p>
<p>$$x(x^{ 2 }-1)>0$$</p>
<p>$$x(x-1)(x+1)>0$$</p>
<p>Now I see that all three linear factors must equal a positive value when multiplied. </p>
<p>I took each linear factor and set each to either greater than or less than $0$ since the solution set can either be all positive or two negative and one positive.</p>
<p>$$x>0\quad or\quad x< 0,\quad x>1\quad or\quad x<1,\quad x>-1\quad or\quad x <-1$$ </p>
<p>Now where do I go from here? Or am I doing this all wrong?</p>
| Curious | 141,191 | <p>By solving the corresponding equality, we get 3 values: $-1, 0, 1$. These give us 4 intervals: $(-\infty,-1), (-1,0), (0,1), (1, \infty)$. By checking these intervals, the inequality is satisfied on $(1,\infty)$ and $(-1,0)$.</p>
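<p>Testing one sample point in each interval confirms the solution set $(-1,0)\cup(1,\infty)$:</p>

```python
samples = [-2.0, -0.5, 0.5, 2.0]          # one point from each interval
holds = [x ** 3 > x for x in samples]
print(holds)   # [False, True, False, True]
```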
|
2,904,912 | <p>$$24a(n)=26a(n-1)-9a(n-2)+a(n-3)$$
$$a(0)=46, a(1)=8, a(2)=1$$
$$\sum\limits_{k=3}^{\infty}a(k)=2^{-55}$$
How can I prove it?</p>
| lhf | 589 | <p>Let $S=\sum\limits_{k=3}^{\infty}a(k)$. Then
$$
24S = 26(S+a(2))-9(S+a(1)+a(2))+(S+a(0)+a(1)+a(2))
$$
This gives $S=0$.</p>
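<p>As a numerical check: the characteristic polynomial $24x^3-26x^2+9x-1$ factors as $24(x-\frac12)(x-\frac13)(x-\frac14)$, so the terms decay geometrically and a short sum confirms $S=0$:</p>

```python
# a(n) = (26 a(n-1) - 9 a(n-2) + a(n-3)) / 24, with a(0)=46, a(1)=8, a(2)=1
a = [46.0, 8.0, 1.0]
for k in range(3, 120):
    a.append((26 * a[k - 1] - 9 * a[k - 2] + a[k - 3]) / 24)

S = sum(a[3:])
print(S)   # ~0 up to floating-point rounding
```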
|
1,178,080 | <p>How to calculate the number of solutions of the equation $x_1 + x_2 + x_3 = 9$ when $x_1$, $x_2$ and $x_3$ are integers which can only range from <code>1</code> to <code>6</code>.</p>
| AvZ | 171,387 | <p>We can find the number of solutions using the binomial theorem.<br>
The coefficient of $x^9$ in the following will be the required answer.
$$(x+x^2+\cdots+x^6)^3$$
The sum above is a geometric progression. Therefore,
$$=\left (\frac{x-x^7}{1-x}\right )^3$$
$$=(x-x^7)^3(1-x)^{-3}$$
Now apply the binomial theorem to get the coefficient of $x^9$
$$\left (\binom{3}{0}x^3-\binom{3}{1}x^9+\binom{3}{2}x^{15}-\binom{3}{3}x^{21} \right )\left (\binom{2}{0}+\binom{3}{1}x+\binom{4}{2}x^2+\cdots\right )$$
We can neglect all terms with exponent $>9$
$$\left (\binom{3}{0}x^3-\binom{3}{1}x^9\right )\left (\binom{2}{0}+\binom{3}{1}x+\binom{4}{2}x^2+\cdots+\binom{11}{9}x^9\right )$$
We get the coefficient of $x^9$ as
$$\binom{3}{0}\binom{8}{6}-\binom{3}{1}\binom{2}{0}$$
$$=28-3$$
$$=25$$</p>
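The result is small enough to confirm by brute force (a sketch of mine):

```python
from itertools import product

# Count triples (x1, x2, x3), each in 1..6, summing to 9.
count = sum(1 for t in product(range(1, 7), repeat=3) if sum(t) == 9)
print(count)  # 25, matching the generating-function coefficient
```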
|
1,178,080 | <p>How to calculate the number of solutions of the equation $x_1 + x_2 + x_3 = 9$ when $x_1$, $x_2$ and $x_3$ are integers which can only range from <code>1</code> to <code>6</code>.</p>
| Lorence | 220,813 | <p>There are six situations:
$126, 135, 144, 225, 234$ and $333$; it is easy to check that there is no other.
So we can count the arrangements of each and add them together using permutations:
$$P_3^3+P_3^3+\frac{P_3^3}{P_2^2}+\frac{P_3^3}{P_2^2}+P_3^3+\frac{P_3^3}{P_3^3}=6+6+3+3+6+1=25$$</p>
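The per-pattern permutation counts can be verified directly by listing the distinct orderings of each pattern (a sketch of mine):

```python
from itertools import permutations

# The six unordered patterns summing to 9 with parts in 1..6.
patterns = [(1, 2, 6), (1, 3, 5), (1, 4, 4), (2, 2, 5), (2, 3, 4), (3, 3, 3)]
counts = [len(set(permutations(p))) for p in patterns]
print(counts, sum(counts))  # [6, 6, 3, 3, 6, 1] -> total 25
```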
|
563,499 | <p>What's the summation of the following expression;</p>
<p>$$\sum_{k=1}^{n+3}\left(\frac{1}{2}\right)^{k}\left(\frac{1}{4}\right)^{n-k}$$
The solution is said to be $$2\left(\frac{1}{4} \right)^{n}\left(2^{n+3}-1\right)$$</p>
<p>But I'm getting $$\left(\frac{1}{4} \right)^{n}\left(2^{n+3}-1\right).$$
How is this possible?
$$\sum_{k=1}^{n+3}\left(2 \times\frac{1}{4}\right)^{k}\left(\frac{1}{4}\right)^n\left(\frac{1}{4}\right)^{-k} \rightarrow \left(\frac{1}{4}\right)^n \sum_{k=1}^{n+3} 2^k\left(\frac{1}{4}\right)^k \left(\frac{1}{4}\right)^{-k}\rightarrow \left(\frac{1}{4}\right)^n\left(2^{n+3}-1\right)$$</p>
| Steve Kass | 60,500 | <p>Let $n$=0. The sum is then $\sum_{k=1}^{3}\left(\frac{1}{2}\right)^{k}\left(\frac{1}{4}\right)^{-k}= \frac{1}{2}\cdot4+\frac{1}{4}\cdot16+\frac{1}{8}\cdot64=2+4+8=14$. This equals $2\left(\frac{1}{4} \right)^{0}\left(2^{0+3}-1\right)=2\cdot 7$, and it does not equal $\left(\frac{1}{4} \right)^{n}\left(2^{n+3}-1\right)$, which equals 7.</p>
<p>How are you getting that the sum equals $\left(\frac{1}{4} \right)^{n}\left(2^{n+3}-1\right)$?</p>
|
563,499 | <p>What's the summation of the following expression;</p>
<p>$$\sum_{k=1}^{n+3}\left(\frac{1}{2}\right)^{k}\left(\frac{1}{4}\right)^{n-k}$$
The solution is said to be $$2\left(\frac{1}{4} \right)^{n}\left(2^{n+3}-1\right)$$</p>
<p>But I'm getting $$\left(\frac{1}{4} \right)^{n}\left(2^{n+3}-1\right).$$
How is this possible?
$$\sum_{k=1}^{n+3}\left(2 \times\frac{1}{4}\right)^{k}\left(\frac{1}{4}\right)^n\left(\frac{1}{4}\right)^{-k} \rightarrow \left(\frac{1}{4}\right)^n \sum_{k=1}^{n+3} 2^k\left(\frac{1}{4}\right)^k \left(\frac{1}{4}\right)^{-k}\rightarrow \left(\frac{1}{4}\right)^n\left(2^{n+3}-1\right)$$</p>
| Steve Kass | 60,500 | <p>The formula $$\sum_{k=1}^{n}ar^{k}=a\left(\frac{1-r^{n}}{1-r}\right)$$ is incorrect. The correct formula is $$\sum_{{\Large k=}{\Huge0}}^{\Huge n-1}ar^{k}=a\left(\frac{1-r^{n}}{1-r}\right)$$ or $$\sum_{k=1}^{n}ar^{k}=a{\Large{r}}\left(\frac{1-r^{n}}{1-r}\right)$$. With $a=1$ and $r=2$, you should have $\sum_{k=1}^{n+3}1\cdot2^{k}=2\left(\frac{1-2^{n+3}}{1-2}\right)=1\cdot2\left(2^{n+3}-1\right)$</p>
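The corrected formula can be checked against the original sum with exact arithmetic for several values of $n$ (a sketch of mine):

```python
from fractions import Fraction

# Compare the sum term-by-term with the claimed closed form for n = 0..7.
for n in range(0, 8):
    s = sum(Fraction(1, 2) ** k * Fraction(1, 4) ** (n - k) for k in range(1, n + 4))
    closed = 2 * Fraction(1, 4) ** n * (2 ** (n + 3) - 1)
    assert s == closed
print("formula verified for n = 0..7")
```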
|
3,156,643 | <blockquote>
<p>Prove that <span class="math-container">$\sin(x) < x$</span> when <span class="math-container">$0<x<2\pi.$</span></p>
</blockquote>
<p>I have been struggling with this problem for quite some time and I do not understand some parts of it. I am supposed to use Rolle's Theorem and the Mean Value Theorem.</p>
<p>First using the mean value theorem I got <span class="math-container">$\cos(x) = \dfrac {\sin(x)}x$</span>
and since <span class="math-container">$1 ≥ \cos x ≥ -1$</span> , <span class="math-container">$1 ≥ \dfrac {\sin(x)}x$</span> which is <span class="math-container">$x ≥ \sin x$</span> for all <span class="math-container">$x ≥ 0$</span>.</p>
<p>Here the first issue is that I didn't know how to change <span class="math-container">$≥$</span> to <span class="math-container">$>$</span>. </p>
<p>The second part is proving when <span class="math-container">$x<2\pi$</span> and this part I have no idea.</p>
<p>I know that <span class="math-container">$2\pi > 1$</span> , and <span class="math-container">$1 ≥ \sin x$</span> and my thought process ends here.</p>
| little o | 543,867 | <p>Let <span class="math-container">$f(x) = \sin x, x \in \Bbb R.$</span> Let <span class="math-container">$x \in (0,1)$</span> then using Lagrange's MVT on the interval <span class="math-container">$[0,x]$</span> we get <span class="math-container">$$\frac {f(x)-f(0)} {x-0} = f'(c)$$</span> where <span class="math-container">$c \in (0,x) \subset (0,1).$</span>Therefore we have <span class="math-container">$$\frac {\sin x} {x} = \cos c < 1,$$</span> since <span class="math-container">$0<c<1$</span> and <span class="math-container">$\cos x$</span> is strictly decreasing on <span class="math-container">$\left (0, {\pi} \right )$</span> and hence on <span class="math-container">$(0,1).$</span> Hence for all <span class="math-container">$x \in (0,1)$</span> we have <span class="math-container">$\sin x < x.$</span> Also for <span class="math-container">$x=1$</span> we have <span class="math-container">$\sin x = \sin 1 < \sin \left (\frac {\pi} {2} \right ) = 1,$</span> since <span class="math-container">$1 < \frac {\pi} {2}$</span> and <span class="math-container">$\sin x$</span> is strictly increasing on <span class="math-container">$\left (0,\frac {\pi} {2} \right ).$</span> Also for <span class="math-container">$x > 1$</span> we have <span class="math-container">$\sin x \leq 1 < x.$</span> So we have <span class="math-container">$\sin x < x$</span> for all <span class="math-container">$x > 0.$</span></p>
|
555,239 | <p>Since the polynomial has three irrational roots, I don't know how to solve the equation with familiar ways to solve the similar question. Could anyone answer the question?</p>
| Robert Israel | 8,508 | <p>These are the equations of (fairly small) ellipses in the $x-y$ plane. Plot and count.</p>
|
1,878,573 | <p><a href="https://i.stack.imgur.com/3iZQ8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3iZQ8.png" alt="enter image description here"></a></p>
<p>I cannot get $f'(0)$ by using L'Hôpital's rule, because a recurring term appears. Can you help me?</p>
| Barry Cipra | 86,747 | <p>By the definition of the derivative, a function $f$ is differentiable at $0$ if and only if the limit</p>
<p>$$\lim_{x\to0}{f(x)-f(0)\over x}$$</p>
<p>exists. In this case, $f(0)$ is <em>defined</em> to be $0$, so the question of differentiability boils down to examining</p>
<p>$$\lim_{x\to0}{(e^{x^2}-e^{-x^2})\sin({1\over x^3})\over x}$$</p>
<p>Note first that</p>
<p>$$\left|{e^{x^2}-e^{-x^2}\over x}\sin\left({1\over x^3}\right) \right|\le \left|{e^{x^2}-e^{-x^2}\over x} \right|\quad\text{for }x\not=0$$</p>
<p>since $|\sin\theta|\le1$ for all $\theta$. L'Hopital shows</p>
<p>$$\lim_{x\to0}{e^{x^2}-e^{-x^2}\over x}=\lim_{x\to0}{2xe^{x^2}-(-2x)e^{-x^2}\over 1}=\lim_{x\to0}2x(e^{x^2}+e^{-x^2})=0$$</p>
<p>Therefore, by the Squeeze Theorem, the limit with the sine function also tends to $0$. I.e., we have</p>
<p>$$f'(0)=\lim_{x\to0}{(e^{x^2}-e^{-x^2})\sin({1\over x^3})\over x}=0$$</p>
<p>Remark: If $f(0)$ had been defined to be anything other than $0$, $f$ would <em>not</em> be differentiable at $0$ for the simple reason that $f$ would not be <em>continuous</em> at $0$. </p>
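The squeeze bound can be spot-checked numerically: since $|\sin|\le 1$, the difference quotient is bounded by $\left|(e^{x^2}-e^{-x^2})/x\right|\approx 2|x|$ for small $x$. A sketch of mine, assuming $f$ as defined above:

```python
import math

def quotient(x):
    # (f(x) - f(0)) / x with f(x) = (e^{x^2} - e^{-x^2}) * sin(1/x^3), f(0) = 0
    return (math.exp(x * x) - math.exp(-x * x)) * math.sin(1 / x**3) / x

for x in (1e-1, 1e-2, 1e-3):
    print(x, quotient(x))  # each magnitude is at most about 2*x
```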
|
3,861,324 | <p>Given a nonzero column vector <span class="math-container">$A=[a_1\ a_2 \cdots a_n]^T$</span>, find the nonzero eigenvalues and eigenvectors of <span class="math-container">$AA^T$</span>.</p>
<p>I have no idea what theorem I should apply or what to do to solve this. I know that <span class="math-container">$AA^T$</span> and <span class="math-container">$A^TA$</span> have the same nonzero eigenvalues, and that if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are square matrices of the same order then <span class="math-container">$AB$</span> and <span class="math-container">$BA$</span> have the same eigenvalues. Is it related to these statements?</p>
| Brian Fitzpatrick | 56,960 | <p>Suppose <span class="math-container">$\boldsymbol{v},\boldsymbol{w}\in\Bbb R^n$</span> are represented as column vectors. Recall that the <em>dot product</em> of <span class="math-container">$\boldsymbol{v}$</span> and <span class="math-container">$\boldsymbol{w}$</span> may be written as <span class="math-container">$\boldsymbol{v}\cdot\boldsymbol{w}=\boldsymbol{v}^\intercal\boldsymbol{w}$</span>.</p>
<p>Your matrix is of the form <span class="math-container">$M=\boldsymbol{v}\boldsymbol{v}^\intercal$</span> where <span class="math-container">$\boldsymbol{v}\in\Bbb R^n$</span>.</p>
<p>First, note that
<span class="math-container">$$
M\boldsymbol{v}=\boldsymbol{v}\boldsymbol{v}^\intercal\boldsymbol{v}=\lVert\boldsymbol{v}\rVert^2\cdot\boldsymbol{v}\tag{1}
$$</span>
Here, we have used the identity <span class="math-container">$\boldsymbol{v}^\intercal\boldsymbol{v}=\boldsymbol{v}\cdot\boldsymbol{v}=\lVert\boldsymbol{v}\rVert^2$</span>.</p>
<p>Next, suppose <span class="math-container">$\boldsymbol{w}$</span> is any vector orthogonal to <span class="math-container">$\boldsymbol{v}$</span>, so that <span class="math-container">$\boldsymbol{v}\cdot\boldsymbol{w}=0$</span>. Then
<span class="math-container">$$
M\boldsymbol{w}=\boldsymbol{v}\boldsymbol{v}^\intercal\boldsymbol{w}=0\cdot\boldsymbol{v}=\boldsymbol{O}\tag{2}
$$</span>
Do you see how the equations (1) and (2) relate to the eigenvalue/eigenvector problem?</p>
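Equations (1) and (2) can be verified on a concrete vector; a dependency-free sketch with a vector of my own choosing:

```python
# Rank-1 matrix M = v v^T acting on v and on a vector orthogonal to v.
v = [1.0, 2.0, 3.0]
M = [[vi * vj for vj in v] for vi in v]          # outer product v v^T
norm_sq = sum(vi * vi for vi in v)               # ||v||^2 = 14

Mv = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
print(Mv, [norm_sq * vi for vi in v])            # equal: eigenvalue ||v||^2

w = [2.0, -1.0, 0.0]                             # orthogonal to v: v.w = 0
Mw = [sum(M[i][j] * w[j] for j in range(3)) for i in range(3)]
print(Mw)                                        # zero vector: eigenvalue 0
```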
|