1,549,181
<p><strong>Calculate Using Polar Coordinates</strong></p> <p>This is a drawing I made to illustrate the problem. <a href="http://tube.geogebra.org/m/ZzvL0a38" rel="nofollow">http://tube.geogebra.org/m/ZzvL0a38</a></p> <p>$$\int_{\frac 12}^{1} \int_{0}^{\sqrt{1-x^2}} 1 \quad dydx $$</p> <p>What I am confused about in this problem is how one redefines the upper and lower bound coordinates. From what I can see this is a type 1 domain. I know that I am supposed to use the unit circle, but I am unsure how to proceed.</p> <p>This is what I have so far</p> <p>$$\iint r \, dr \, d\theta $$</p>
Michael
293,919
<p>$\frac{1}{2}≤x≤1$</p> <p>$0≤y≤\sqrt{1-x^2}$</p> <p>Sketch the region which satisfies the above. Then change the order of integration from $dy\,dx$ to $dx\,dy$. The limits become $\frac{1}{2}≤x≤\sqrt{1-y^2}$ and $0≤y≤\frac{\sqrt3}{2}$.</p> <p>As suggested above, integrating without changing the order is probably easier.</p>
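As a sanity check (not part of the original answer), both orders of integration can be evaluated with sympy; they agree with each other and with the geometric value $\frac{\pi}{6}-\frac{\sqrt{3}}{8}$ found in the other answers:

```python
# Check that swapping the order of integration preserves the area.
from sympy import symbols, sqrt, integrate, pi, Rational, simplify

x, y = symbols('x y', nonnegative=True)

# Original order: dy dx with 1/2 <= x <= 1, 0 <= y <= sqrt(1-x^2)
area_dydx = integrate(1, (y, 0, sqrt(1 - x**2)), (x, Rational(1, 2), 1))

# Swapped order: dx dy with 1/2 <= x <= sqrt(1-y^2), 0 <= y <= sqrt(3)/2
area_dxdy = integrate(1, (x, Rational(1, 2), sqrt(1 - y**2)), (y, 0, sqrt(3)/2))

assert simplify(area_dydx - area_dxdy) == 0
assert simplify(area_dydx - (pi/6 - sqrt(3)/8)) == 0
```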
1,549,181
Jan Eerland
226,665
<p>HINT:</p> <p>$$\int_{\frac{1}{2}}^{1}\left[\int_{0}^{\sqrt{1-x^2}}1\space\text{d}y\right]\space\text{d}x=\int_{\frac{1}{2}}^{1}\left[\left[y\right]_{0}^{\sqrt{1-x^2}}\right]\space\text{d}x=\int_{\frac{1}{2}}^{1}\sqrt{1-x^2}\space\text{d}x=$$</p> <hr> <p>For the integrand $\sqrt{1-x^2}$, substitute $x=\sin(u)$ and $\text{d}x=\cos(u)\space\text{d}u$.</p> <p>Then $\sqrt{1-x^2}=\sqrt{1-\sin^2(u)}=\sqrt{\cos^2(u)}$.</p> <p>This substitution is invertible over $\frac{\pi}{6}&lt;u&lt;\frac{\pi}{2}$ with inverse $u=\arcsin(x)$.</p> <p>This gives a new lower bound $u=\arcsin\left(\frac{1}{2}\right)=\frac{\pi}{6}$ and upper bound $u=\arcsin(1)=\frac{\pi}{2}$:</p> <hr> <p>$$\int_{\frac{\pi}{6}}^{\frac{\pi}{2}}\cos(u)\sqrt{\cos^2(u)}\space\text{d}u=$$</p> <hr> <p>Simplify $\cos(u)\sqrt{\cos^2(u)}$ assuming $\frac{\pi}{6}&lt;u&lt;\frac{\pi}{2}$:</p> <hr> <p>$$\int_{\frac{\pi}{6}}^{\frac{\pi}{2}}\cos^2(u)\space\text{d}u=$$ $$\int_{\frac{\pi}{6}}^{\frac{\pi}{2}}\left(\frac{1}{2}\cos(2u)+\frac{1}{2}\right)\space\text{d}u=$$ $$\frac{1}{2}\int_{\frac{\pi}{6}}^{\frac{\pi}{2}}\cos(2u)\space\text{d}u+\frac{1}{2}\int_{\frac{\pi}{6}}^{\frac{\pi}{2}}1\space\text{d}u$$</p>
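<p>For completeness (this last step is not in the hint itself), the two remaining integrals evaluate to</p> <p>$$\frac{1}{2}\left[\frac{\sin(2u)}{2}\right]_{\frac{\pi}{6}}^{\frac{\pi}{2}}+\frac{1}{2}\Big[u\Big]_{\frac{\pi}{6}}^{\frac{\pi}{2}}=\frac{1}{4}\left(0-\frac{\sqrt{3}}{2}\right)+\frac{1}{2}\cdot\frac{\pi}{3}=\frac{\pi}{6}-\frac{\sqrt{3}}{8},$$</p> <p>which matches the area found by the other answers.</p>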
1,549,181
Emilio Novati
187,568
<p>From elementary trigonometry we see that the area is $A=\dfrac {\pi}{6}-\dfrac{\sqrt{3}}{8}$ where $ \dfrac {\pi}{6}$ is the area of the circular sector of central angle $\theta =\dfrac{\pi}{3}$ and $\dfrac{\sqrt{3}}{8}$ is the area of the triangle of base $\dfrac{1}{2}$ and height $\dfrac{\sqrt{3}}{2}$.</p> <p>We can find this with a double integral in polar coordinates using the figure in the answer of E.H.E., but with a little change of the limits of integration (the radius goes from $\dfrac{1}{2\cos \theta}$, on the vertical line through $x=1/2$, to $1$, on the circle):</p> <p>$$ A=\int_0^{\frac{\pi}{3}}\int _{\frac{1}{2\cos \theta}}^1 r dr d \theta= \int_0^{\frac{\pi}{3}}\dfrac{1}{2}\left(1-\dfrac{1}{4\cos^2 \theta} \right) d \theta= $$ $$ =\dfrac{1}{2}\int_0^{\frac{\pi}{3}}d \theta -\dfrac{1}{8}\int_0^{\frac{\pi}{3}} \dfrac{1}{\cos^2 \theta} d \theta=\dfrac{\pi}{6}-\dfrac{1}{8}\left( \tan (\pi/3)-\tan(0) \right)=\dfrac{\pi}{6}-\dfrac{\sqrt{3}}{8} $$</p>
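The polar computation can be verified symbolically (a sympy sketch, not part of the original answer):

```python
# Verify the polar-coordinates computation: inner radial bound 1/(2 cos t)
# is the vertical line x = 1/2 written in polar form.
from sympy import symbols, integrate, cos, sqrt, pi, simplify

r, t = symbols('r t', positive=True)

inner = integrate(r, (r, 1/(2*cos(t)), 1))   # = 1/2 - 1/(8 cos^2 t)
area = integrate(inner, (t, 0, pi/3))        # should be pi/6 - sqrt(3)/8

assert simplify(area - (pi/6 - sqrt(3)/8)) == 0
```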
4,474,803
<p>Let <span class="math-container">$f$</span> be a differentiable function and consider two operators <span class="math-container">$$ A(f(x))=\int_0^1 \frac{ d}{dx}f(x t^\mu) dt,\\ B(f(x))=\mu xf(x)+\int_0^x f(t) dt, $$</span> where <span class="math-container">$\mu $</span> is a parameter.</p> <p>I need to prove that</p> <p><span class="math-container">$$ A \circ B =B \circ A = I $$</span></p> <p>where <span class="math-container">$I$</span> is the identity operator <span class="math-container">$I(f(x))=f(x).$</span></p> <p>It is easy to prove it for powers <span class="math-container">$x^n$</span> and then for power series but it would be interesting to perform direct calculations and prove it for any function <span class="math-container">$f.$</span></p> <p>But I was confused with the calculation of <span class="math-container">$A(B(f(x)))$</span>. Any help?</p>
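Before attempting the general proof, the identity can at least be sanity-checked symbolically on monomials $x^n$ with $n\ge 1$, keeping $\mu$ symbolic (note that $A$ sends constant functions to $0$, so the check below starts at $n=1$; this is only a check, not the requested direct proof):

```python
# Sanity check of A(B(f)) = B(A(f)) = f for f(x) = x^n, n = 1..4,
# with mu kept symbolic.
from sympy import symbols, diff, integrate, simplify

x, t, mu = symbols('x t mu', positive=True)

def A(f):
    # A(f)(x) = int_0^1 d/dx f(x * t^mu) dt
    return integrate(diff(f.subs(x, x*t**mu), x), (t, 0, 1))

def B(f):
    # B(f)(x) = mu * x * f(x) + int_0^x f(s) ds
    return mu*x*f + integrate(f.subs(x, t), (t, 0, x))

for n in range(1, 5):
    f = x**n
    assert simplify(A(B(f)) - f) == 0
    assert simplify(B(A(f)) - f) == 0
```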
ryang
21,813
<blockquote> <p>Example: <span class="math-container">$$\phi_1 := \forall x\; P(x )\rightarrow \exists y\; T(y) \\ \phi_2 := \color{green}{\Big(}\forall x\;\exists y\; P(x) \color{green}{\Big)}\rightarrow T(y) \\$$</span></p> </blockquote> <p><span class="math-container">$$\phi_3 := \forall x\;\exists y \Big( P(x )\rightarrow T(y) \Big)\\ \phi_4 := \exists x\;\exists y\;\Big( P(x )\rightarrow T(y)\Big)$$</span></p> <p>Formulae <span class="math-container">$1$</span> and <span class="math-container">$4$</span> are equivalent, whereas neither formula <span class="math-container">$2$</span> nor <span class="math-container">$3$</span> is equivalent to any of the others.</p> <p>Therefore, non-collision is not a sufficient condition for a formula to be equivalent to the new one created by simply shifting its quantifiers to the front.</p> <hr /> <p><strong>Addendum</strong></p> <blockquote> <p>can you provide a counterexample where <span class="math-container">$\phi_1$</span> and <span class="math-container">$\phi_3$</span> are not equals? or an explanation</p> </blockquote> <p>Consider the set of integers. Let<br>  <span class="math-container">$P(x) := x$</span> is even, and<br>  <span class="math-container">$T(x) := x$</span> is smaller than itself.</p> <p>For <span class="math-container">$x=2,$</span> no integer can make <span class="math-container">$\Big( P(x )\rightarrow T(y) \Big)$</span> true; thus, <span class="math-container">$\phi_3$</span> is false.</p> <p>On the other hand, <span class="math-container">$\phi_1$</span> is vacuously true.</p>
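These (non-)equivalences can also be checked mechanically over a small finite domain ($\phi_2$ is omitted because $y$ occurs free in it, and of course a two-element domain illustrates rather than proves the claim):

```python
# Brute-force model check of phi1, phi3, phi4 over the domain {0, 1},
# ranging over all interpretations of the unary predicates P and T.
from itertools import product

D = [0, 1]

def phi1(P, T):  # (forall x P(x)) -> (exists y T(y))
    return (not all(P[x] for x in D)) or any(T[y] for y in D)

def phi3(P, T):  # forall x exists y (P(x) -> T(y))
    return all(any((not P[x]) or T[y] for y in D) for x in D)

def phi4(P, T):  # exists x exists y (P(x) -> T(y))
    return any((not P[x]) or T[y] for x in D for y in D)

interps = [dict(zip(D, vals)) for vals in product([False, True], repeat=len(D))]

# phi1 and phi4 agree on every interpretation ...
assert all(phi1(P, T) == phi4(P, T) for P in interps for T in interps)
# ... but phi3 differs from phi1 on at least one interpretation.
assert any(phi1(P, T) != phi3(P, T) for P in interps for T in interps)
```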
176,240
<p>I <a href="https://math.stackexchange.com/questions/2832912/sum-infty-infty-frac-exp-n21-4n2-in-closed-form/2832965#2832965">came across</a> a possible bug in Mathematica's evaluation of integrals that I thought I'd share. Consider the code:</p> <pre><code>zmax = 4; a = Integrate[Exp[-x^2] Abs[Sin[x]], {x, -z , z }] /. z -&gt; zmax b = NIntegrate[Exp[-x^2] Abs[Sin[x]], {x, -zmax, zmax}] a - b </code></pre> <p>If <code>zmax</code> is smaller than $\pi$ it evaluates close to $0$ (to the precision limit) while for <code>zmax</code> greater than $\pi$ it evaluates to $-4.50908\cdot 10^{-6}$ (which happens to be roughly $4\cdot \int_{\pi}^{\infty}e^{-x^2}\sin(x){\rm d}x$). The condition $z&lt; \pi$ is not listed in Mathematica's (11.2.0.0) analytical expression for the integral <code>a</code> above:</p> <pre><code>ConditionalExpression[( Sqrt[\[Pi]] (2 Erfi[1/2] - Abs[Sin[z]] Csc[z] (Erfi[1/2 - I z] + Erfi[1/2 + I z])))/(2 E^(1/4)), Re[z] &gt;= 0 &amp;&amp; Im[z] == 0] </code></pre> <p>Does anybody know why this happens? If I were to guess, based on the result above, it looks like Mathematica uses the symmetry to rewrite the integral as $2$ times the integral over $[0,z]$ and then for some reason removes the absolute value over $\sin(x)$ (which is allowed only if $z&lt;\pi$), but forgets to mention it.</p>
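The guess at the end can be tested outside Mathematica: assuming the returned closed form really equals $2\int_0^{z}e^{-x^2}\sin(x)\,dx$ (absolute value silently dropped), the discrepancy should match the reported $a-b$ in sign and order of magnitude. A rough numerical reproduction with mpmath (my reconstruction, not Mathematica output):

```python
# Reproduce the suspected bug numerically: if Abs[] was dropped (valid only
# for zmax < pi), Mathematica's "a" would equal 2*int_0^zmax exp(-x^2) sin x dx.
from mpmath import mp, mpf, quad, sin, exp, fabs, pi

mp.dps = 30
zmax = mpf(4)

a_no_abs = 2*quad(lambda x: exp(-x**2)*sin(x), [0, zmax])
# Correct value, splitting the interval at pi where sin changes sign:
b = 2*quad(lambda x: exp(-x**2)*fabs(sin(x)), [0, pi, zmax])

diff = a_no_abs - b   # should be around -4.5e-6, as reported for a - b
assert diff < 0
assert mpf('1e-6') < fabs(diff) < mpf('1e-5')
```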
halirutan
187
<p>The bad news is that your code spiked at about 60 GB RAM on my machine before the result is returned. The problem is clearly the <code>Import</code> routine. I would like to offer an alternative if you have more than 22 GB of RAM available.</p> <pre><code>data = Import[file, {"GIF", "RawData"}]; </code></pre> <p>This gives you the list of frames. Each frame is a matrix of color-indices. So to reconstruct the first frame, you need to extract the color-map.</p> <pre><code>colorMap = With[{values = Import[file, {"GIF", "GlobalColorMap"}]}, Thread[Range[Length[values]] -&gt; values]]; </code></pre> <p>This needs another 6 GB of RAM, so if you are unsure, then first extract this map and store it locally as an <code>.mx</code> file. The map itself is tiny but again, <code>Import</code> seems to like eating RAM.</p> <p>Then you could do</p> <pre><code>data[[1]] /. colorMap // Image[#, "Byte"] &amp; </code></pre> <p>to get the first frame as an image. But since you are not interested in the frames, but you only need a flattened list, you can directly work on <code>data</code>.</p> <p>I tested it with Mathematica 11.3 on OS X.</p>
3,916,261
<p>In the book Optimal Transport for Applied Mathematicians, Theorem 1.37, the author states an inequality, but I'm having trouble seeing how one shows that the inequality is indeed true.</p> <p><strong>Definition</strong>: Given a function <span class="math-container">$c:X\times Y \to \mathbb R$</span>, we say that <span class="math-container">$\Gamma \subset X \times Y$</span> is <span class="math-container">$c$</span>-cyclically monotone (<span class="math-container">$c$</span>-CM) if for every <span class="math-container">$k \in \mathbb N$</span>, every permutation <span class="math-container">$\sigma$</span> and every finite family of points <span class="math-container">$(x_1,y_1),...,(x_k,y_k) \in \Gamma$</span> we have <span class="math-container">$$ \sum^k_{i=1} c(x_i,y_i) \leq \sum^k_{i=1} c(x_i,y_{\sigma(i)}) $$</span></p> <p>Now, assume that <span class="math-container">$\Gamma$</span> is <span class="math-container">$c$</span>-CM. Then, fix <span class="math-container">$(x_0, y_0) \in \Gamma$</span>, and define: <span class="math-container">$$ -\psi(y) = \inf\{ -c(x_n,y) +c(x_n,y_{n-1})-c(x_{n-1},y_{n-1})+...+ c(x_1,y_0) - c(x_0,y_0) : n \in \mathbb N, (x_i,y_i) \in \Gamma \quad \forall i=1,...,n \} $$</span></p> <p>How does one then show that for <span class="math-container">$(x,y) \in \Gamma$</span> and <span class="math-container">$\bar y \in \text{Proj}_y \Gamma$</span>, we can write: <span class="math-container">$$ -\psi(y) \leq - c(x,y) + c(x,\bar y) - \psi(\bar y) $$</span></p> <p>In the proof, the author claims that this inequality follows from the very definition of the function <span class="math-container">$\psi$</span>. What I found odd was that since <span class="math-container">$y$</span> and <span class="math-container">$\bar y$</span> are arbitrary, I could just swap them, obtaining the opposite inequality, and hence an equality. Which would be kind of odd. Anyway, how can one prove this inequality? And is it actually an equality?</p>
Jair Taylor
28,545
<p>There is a difference between the <em>range</em>, or the <em>image</em>, of a function, and the 'codomain'.</p> <p>When we write <span class="math-container">$f: A \rightarrow B$</span>, this means that for every <span class="math-container">$a \in A$</span> there is a unique value <span class="math-container">$b \in B$</span> with <span class="math-container">$f(a) = b$</span>. However not every <span class="math-container">$b \in B$</span> needs to have an <span class="math-container">$a \in A$</span> with <span class="math-container">$f(a) = b$</span>. If the latter property is satisfied, <span class="math-container">$f$</span> is called <em>surjective</em>, or <em>onto</em>. <span class="math-container">$B$</span> is called the <em>codomain</em> of <span class="math-container">$f$</span>. Sometimes the definition of <em>function</em> includes a codomain, so that the codomain must be specified. But mathematicians tend to be loose with this, so that we could write <span class="math-container">$\sin : \mathbb{R} \rightarrow \mathbb{R}$</span> just as well as <span class="math-container">$\sin : \mathbb{R} \rightarrow \mathbb{C}$</span>.</p> <p>On the other hand, the <em>image</em>, or the <em>range</em>, of <span class="math-container">$f$</span> is the set <span class="math-container">$\{b \in B: \exists a \in A \, f(a) = b\}$</span> of all elements of <span class="math-container">$B$</span> that are actually mapped onto by <span class="math-container">$f$</span>. So the <em>image</em> of <span class="math-container">$\sin$</span> is <span class="math-container">$[-1,1]$</span>, regardless of whether the codomain is considered to be <span class="math-container">$\mathbb{R}$</span> or <span class="math-container">$[-1,1]$</span>.</p> <p>I'm not sure about the definition of <em>value set</em>, so I can't say which definition is meant, but judging from the answer I assume your professor is asking about the range.</p>
281,325
<p>I am not very familiar with Stone-Čech compactification, but I would like to understand why the remainder $\beta\mathbb{R}\backslash\mathbb{R}$ has exactly two connected components.</p>
Brian M. Scott
12,042
<p><span class="math-container">$\newcommand{\cl}{\operatorname{cl}}$</span>For any Tikhonov space <span class="math-container">$X$</span> let <span class="math-container">$X^*=\beta X\setminus X$</span>. Let <span class="math-container">$\Bbb H=[0,\to)$</span> and <span class="math-container">$\Bbb L=(\leftarrow,0]$</span>. <span class="math-container">$\Bbb H$</span> is <span class="math-container">$C^*$</span>-embedded in <span class="math-container">$\Bbb R$</span>, so <span class="math-container">$\cl_{\beta\Bbb R}\Bbb H=\beta\Bbb H$</span>, and of course <span class="math-container">$\cl_{\beta\Bbb R}\Bbb L=\beta\Bbb L$</span>. Note that if <span class="math-container">$x\in\Bbb H^*$</span>, and <span class="math-container">$V$</span> is a nbhd of <span class="math-container">$x$</span> in <span class="math-container">$\beta\Bbb R$</span>, then <span class="math-container">$V\cap\Bbb H$</span> is unbounded. Thus, the Čech-Stone extension <span class="math-container">$\beta f$</span> of the function</p> <p><span class="math-container">$$f:\Bbb R\to\left[-\frac{\pi}2,\frac{\pi}2\right]:x\mapsto\tan^{-1}x$$</span></p> <p>must map <span class="math-container">$\Bbb H^*$</span> to <span class="math-container">$\pi/2$</span>. Similarly, <span class="math-container">$\beta f$</span> must map <span class="math-container">$\Bbb L^*$</span> to <span class="math-container">$-\pi/2$</span>. But <span class="math-container">$\Bbb R^*=\Bbb H^*\cup\Bbb L^*$</span>, so <span class="math-container">$\beta f$</span> maps <span class="math-container">$\Bbb R^*$</span> onto a discrete two-point space, and <span class="math-container">$\Bbb R^*$</span> cannot be connected.</p> <p>To complete the argument, we need only show that <span class="math-container">$\Bbb H^*$</span> is connected, since clearly <span class="math-container">$\Bbb L^*\cong\Bbb H^*$</span>.
Suppose, to get a contradiction, that there is a continuous surjection <span class="math-container">$f:\Bbb H^*\to\{0,1\}$</span>. <span class="math-container">$\Bbb H$</span> is locally compact, so it is open in <span class="math-container">$\beta\Bbb H$</span>, and <span class="math-container">$\Bbb H^*$</span> is therefore a compact subset of <span class="math-container">$\beta\Bbb H$</span>. It follows that <span class="math-container">$\Bbb H^*$</span> is <span class="math-container">$C$</span>-embedded in <span class="math-container">$\beta\Bbb H$</span> and hence that <span class="math-container">$f$</span> extends to a continuous <span class="math-container">$\hat f:\beta\Bbb H\to\Bbb R$</span>. Fix <span class="math-container">$p,q\in\Bbb H^*$</span> such that <span class="math-container">$f(p)=0$</span> and <span class="math-container">$f(q)=1$</span>, and let <span class="math-container">$$U=\hat f^{-1}\left[\left(-\frac14,\frac14\right)\right]\quad\text{and}\quad V=\hat f^{-1}\left[\left(\frac34,\frac54\right)\right]\;.$$</span></p> <p><span class="math-container">$U\cap\Bbb H$</span> and <span class="math-container">$V\cap\Bbb H$</span> are both unbounded, so the intermediate value theorem ensures that <span class="math-container">$$A=\left\{x\in\Bbb H:\hat f(x)=\frac12\right\}$$</span> is unbounded. But then <span class="math-container">$f(x)=\hat f(x)=\frac12$</span> for some <span class="math-container">$x\in\Bbb H^*$</span>, which is the desired contradiction.</p> <p>It follows that <span class="math-container">$\Bbb H^*$</span> and <span class="math-container">$\Bbb L^*$</span> are the two components of <span class="math-container">$\Bbb R^*$</span>.</p> <p>(This is adapted from Gillman &amp; Jerison, <em>Rings of Continuous Functions</em>, which is still very useful, despite being over <span class="math-container">$50$</span> years old now.)</p>
1,726,540
<p>How should $x^{\frac{1}{x}}$ be differentiated? I know the answer is $$\frac{1-\ln(x)}{x^{2-\frac{1}{x}}}$$</p> <p>but I do not understand how to get there.</p> <h2>Attempt at solution.</h2> <p>I believe the following is true:</p> <p>$$ \begin{aligned}\frac{d}{dx}x^u&amp;=ux^{u-1}\cdot u^\prime\\ \frac{d}{dx}a^x&amp;=a^x\cdot\ln(a) \end{aligned}$$ but I don't know what to do when both the base and the exponent are functions of $x$.</p>
Community
-1
<p>As there is a rule for the sum and product, there is a rule for powers:</p> <p>$$\left(u^v\right)'=\left(e^{v\ln u}\right)'=(v\ln u)'e^{v\ln u}=(u'v+v'u\ln u)u^{v-1}.$$</p>
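Applied to the question's function with $u=x$ and $v=1/x$, this rule gives exactly the stated answer $\frac{1-\ln x}{x^{2-1/x}}$; a quick computer-algebra confirmation:

```python
# Verify the power rule on the original problem y = x^(1/x).
from sympy import symbols, diff, log, simplify

x = symbols('x', positive=True)

y = x**(1/x)
claimed = (1 - log(x)) * x**(1/x - 2)   # same as (1 - ln x)/x^(2 - 1/x)

assert simplify(diff(y, x) - claimed) == 0
```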
2,636,679
<p>For which values of $x$ do the vectors $(x,0,x + 1),(1,x,1),(0,x,−x)$ form a basis for $\mathbb{R}^3$?</p> <p>Is $x=1$ a correct answer to this? The vectors would each be linearly independent, however this seems too simple and I'm guessing there's an error somewhere.</p> <p>If $x=1$ works then it would seem any real number would work, since the location of the $0$ in the first and last vector will always mean the three vectors are linearly independent, right?</p>
Siong Thye Goh
306,553
<p>Guide:</p> <ul> <li><p>Put those vectors as rows of a matrix, compute the determinant.</p></li> <li><p>If they form a basis, the determinant does not take the value zero, hence you just have to find the roots of the determinant (a polynomial in $x$) and avoid those roots to make it a basis.</p></li> <li><p>Remark: Find all values rather than find one that works.</p></li> </ul> <p>Edit:</p> <p>$$\begin{bmatrix} 1 &amp; x &amp; 1 \\ 0 &amp; x &amp; - x \\ x &amp; 0 &amp; x+ 1\end{bmatrix}$$</p> <p>Perform $R_3-xR_1$: </p> <p>$$\begin{bmatrix} 1 &amp; x &amp; 1 \\ 0 &amp; x &amp; - x \\ 0 &amp; -x^2 &amp; 1\end{bmatrix}$$</p> <p>Perform $R_3+xR_2$:</p> <p>$$\begin{bmatrix} 1 &amp; x &amp; 1 \\ 0 &amp; x &amp; - x \\ 0 &amp; 0 &amp; 1-x^2\end{bmatrix}$$</p> <p>To make it nonsingular, each column must have a pivot point, hence we require $x \neq 0$ and $1-x^2 \neq 0$.</p> <p>Hence the conditions that you need are $x \neq 0$ and $1-x^2 \neq 0$, i.e. $x \neq 0, \pm 1$.</p>
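The row reduction can be cross-checked with a computer algebra system; the determinant works out to $x(1-x^2)$, so the vectors form a basis exactly when $x \neq 0, \pm 1$:

```python
# Verify that det = x - x^3 and that the excluded values are 0, 1, -1.
from sympy import symbols, Matrix, expand, solve

x = symbols('x')

M = Matrix([[x, 0, x + 1],
            [1, x, 1],
            [0, x, -x]])

d = M.det()
assert expand(d - (x - x**3)) == 0
assert set(solve(d, x)) == {-1, 0, 1}
```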
3,025,375
<p>What is<span class="math-container">$$\lim_{n→∞}n^3(\sqrt{n^2+\sqrt{n^4+1}}-n\sqrt2)?$$</span>So it is<span class="math-container">$$\lim_{n→∞}\frac{n^3\left[(\sqrt{n^2+\sqrt{n^4+1}})^2-(n\sqrt{2})^2\right]}{\sqrt{n^2+\sqrt{n^4+1}}+n\sqrt{2}}=\lim_{n→∞}\frac{n^3(n^2+\sqrt{n^4+1}-2n^2)}{\sqrt{n^2+\sqrt{n^4+1}}+n\sqrt{2}}.$$</span> I do not know what to do next, because my result is <span class="math-container">$∞$</span> but the answer from the book is <span class="math-container">$\dfrac{1}{4\sqrt{2}}$</span>.</p>
user
505,767
<p><strong>HINT</strong></p> <p>You only need to apply the trick twice, indeed we have that</p> <p><span class="math-container">$$\sqrt{n^2+\sqrt{n^4+1}}-n\sqrt{2}=(\sqrt{n^2+\sqrt{n^4+1}}-n\sqrt{2})\cdot\frac{\sqrt{n^2+\sqrt{n^4+1}}+n\sqrt{2}}{\sqrt{n^2+\sqrt{n^4+1}}+n\sqrt{2}}=$$</span><span class="math-container">$$=\frac{n^2+\sqrt{n^4+1}-2n^2}{\sqrt{n^2+\sqrt{n^4+1}}+n\sqrt{2}}$$</span></p> <p>and</p> <p><span class="math-container">$$\frac{\sqrt{n^4+1}-n^2}{\sqrt{n^2+\sqrt{n^4+1}}+n\sqrt{2}}=\frac{\sqrt{n^4+1}-n^2}{\sqrt{n^2+\sqrt{n^4+1}}+n\sqrt{2}}\cdot \frac{\sqrt{n^4+1}+n^2}{\sqrt{n^4+1}+n^2}=$$</span><span class="math-container">$$=\frac{1}{(\sqrt{n^2+\sqrt{n^4+1}}+n\sqrt{2})(\sqrt{n^4+1}+n^2)}$$</span></p> <p>Can you conclude from here?</p>
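To conclude from the hint: after multiplying by $n^3$ the numerator is $n^3$ against a denominator that behaves like $(2\sqrt2\,n)(2n^2)=4\sqrt2\,n^3$, giving the book's value $\frac{1}{4\sqrt2}$. A high-precision numerical check (arbitrary precision is needed, since ordinary floats lose all digits here to catastrophic cancellation):

```python
# Numeric check of the limit with mpmath's arbitrary precision.
from mpmath import mp, mpf, sqrt

mp.dps = 60
n = mpf(10)**6
val = n**3 * (sqrt(n**2 + sqrt(n**4 + 1)) - n*sqrt(2))

# The error term is O(1/n^4), so agreement is far better than 1e-9 here.
assert abs(val - 1/(4*sqrt(2))) < mpf('1e-9')
```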
1,944,630
<p>Does every positive definite matrix have to be symmetric? If not, what would be an example?</p>
angryavian
43,949
<p>By <a href="https://en.wikipedia.org/wiki/Positive-definite_matrix" rel="nofollow"><strong>definition</strong></a> it is assumed to be symmetric.</p>
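With the usual definition, symmetry is indeed built in. If one drops that requirement, a non-symmetric matrix can still have a strictly positive quadratic form, because $x^TAx$ only sees the symmetric part $(A+A^T)/2$; a small illustration (my example, not from the answer):

```python
# A non-symmetric matrix whose quadratic form is still strictly positive:
# its symmetric part is the identity, its skew part [[0,1],[-1,0]] cancels
# inside x^T A x.
A = [[1.0, 1.0],
     [-1.0, 1.0]]

def quad_form(A, v):
    return sum(v[i]*A[i][j]*v[j] for i in range(2) for j in range(2))

assert A[0][1] != A[1][0]  # not symmetric
# x^T A x = x^2 + y^2 > 0 for every nonzero (x, y):
for v in [(1.0, 0.0), (0.0, -2.0), (3.0, 4.0), (-1.5, 2.5)]:
    assert abs(quad_form(A, v) - (v[0]**2 + v[1]**2)) < 1e-12
    assert quad_form(A, v) > 0
```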
1,234,320
<p>It is given that $4 x^4 + 9 y^4 = 64$.</p> <p>Then what will be the maximum value of $x^2 + y^2$?</p> <p>I have done it using the sides of a right-angled triangle be $2x , 3y $ and hypotenuse as 8 .</p>
Simon S
21,495
<p>One more stab at the problem. Instead of working with $x$, let's work with $u = x^2$ as it makes the algebra slightly easier. </p> <p>As $4x^4 + 9y^4 = 64$, we have $y^2 = \frac{1}{3}\sqrt{64-4u^2}$, taking only the positive square roots as $y^2 \geq 0$. Hence want to maximize</p> <p>$$f(u) = x^2 + y^2 = u + y^2 = u + \frac{1}{3}\sqrt{64-4u^2}$$</p> <p>Setting the first derivative equal to zero,</p> <p>$$0 = \frac{df}{du} = 1 - \frac{4}{3}u \frac{1}{\sqrt{64-4u^2}}$$</p> <p>$$\Longrightarrow \ 64 - 4u^2 = \frac{16}{9}u^2 \hspace{30 mm}$$</p> <p>$$\Longrightarrow \ \ u^2 = \frac{9}{52}\cdot 64 \ \ \text{ i.e., } u = \frac{12}{\sqrt{13}} \hspace{10 mm}$$</p> <p>(We take only the positive square root in the last step as $u = x^2 \geq 0$.) </p> <p>Using the earlier expression for $y^2$, we find for $u = x^2 = \frac{12}{\sqrt{13}}$ that</p> <p>$$y^2 = \frac{1}{3}\sqrt{64 - 4\cdot \frac{144}{13}} = \frac{16}{3\sqrt{13}}$$</p> <p>Hence</p> <p>$$x^2 + y^2 = \frac{12}{\sqrt{13}} + \frac{16}{3\sqrt{13}} = \frac{52}{3\sqrt{13}} = \frac{4}{3}\sqrt{13}$$</p> <p>This is a maximum as there are other points satisfying the constraint for which $x^2 + y^2 &lt; \frac{4}{3}\sqrt{13}$. For example, for $(x,y) = (2,0)$, $$x^2 + y^2 = 2^2 + 0^2 = 4 &lt; \frac{4}{3}\sqrt{13} \ \text{ as } 16 &lt; \frac{16}{9}\cdot 13$$</p>
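The maximum can be cross-checked numerically by scanning $f(u)=u+\frac13\sqrt{64-4u^2}$ over $u=x^2\in[0,4]$ (a brute-force sketch, standard library only):

```python
# Brute-force check that max of x^2 + y^2 on 4x^4 + 9y^4 = 64 is 4*sqrt(13)/3.
import math

N = 10**5
best = max(u + math.sqrt(64 - 4*u**2)/3
           for u in (4*i/N for i in range(N + 1)))

target = 4*math.sqrt(13)/3   # claimed maximum, approx 4.8074
assert abs(best - target) < 1e-6
```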
2,558,619
<p>Compute $$ \int_{-\infty}^{\infty} t^2 \delta(\sin(t)) e^{-|t|} \mathrm dt $$ In closed form, where $\delta(t)$ is the <a href="http://mathworld.wolfram.com/DeltaFunction.html" rel="nofollow noreferrer">Dirac Delta function </a>.</p> <p>My attempt: </p> <p>$$ \int_{-\infty}^{\infty} t^2 \delta(\sin(t)) e^{-|t|} \mathrm dt = \int_{-\infty}^{\infty} \delta(\sin(t))t^2 e^{-|t|}\mathrm dt $$</p> <p>Then noting that $\sin(t)$ is zero whenever $t=n\pi$. By formula (2) and (7) in the above link,</p> <p>\begin{align} \int_{-\infty}^{\infty} \delta(\sin(t))t^2 e^{-|t|} \mathrm dt&amp; = \sum_{n=-\infty}^{\infty} \frac{(n\pi)^2e^{-|n\pi|}}{|\cos(n\pi)|} \\&amp; =2\pi^2\sum_{n=0}^{\infty} \frac{(n)^2e^{-n\pi}}{1} \end{align}</p> <p>However, I am stuck here; I do not know how to proceed. Wolfram Alpha tells me that this sum doesn't converge, so how can I compute it in closed form? I can only assume I have gone about this the wrong way or made a mistake. Any help would be great.</p>
Enrico M.
266,764
<p>Actually Wolfram Alpha is wrong.</p> <p>The series</p> <p>$$\sum_{n = 0}^{+\infty} n^2 e^{-n\pi}$$</p> <p>does converge, to</p> <p>$$\frac{e^{\pi } \left(1+e^{\pi }\right)}{\left(e^{\pi }-1\right)^3}$$</p> <p>which is easily proved by using differentiation under the summation sign together with the geometric series.</p>
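The closed form is the identity $\sum_{n\ge0}n^2x^n=\frac{x(1+x)}{(1-x)^3}$ (obtained by differentiating the geometric series) evaluated at $x=e^{-\pi}$; a quick numerical confirmation:

```python
# Compare a partial sum of sum n^2 e^(-n pi) against the closed form
# e^pi (1 + e^pi) / (e^pi - 1)^3.
import math

partial = sum(n**2 * math.exp(-n*math.pi) for n in range(200))
closed = math.exp(math.pi)*(1 + math.exp(math.pi)) / (math.exp(math.pi) - 1)**3

assert abs(partial - closed) < 1e-12
```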
1,797,243
<p>Find the relative extrema of the function by applying the first-derivative test:</p> <p>$$f(x)=x^5-5x^3-20x-2$$</p> <p>So I found $f'(x)$:</p> <p>$$f'(x) = 5x^4-15x^2-20$$</p> <p>Now, I'm trying to find the critical values, which are where $f'(x)=0$ or undefined, so I can apply the first-derivative test. However, I can't simplify this. How can I find the relative extrema now? Thanks. </p>
Laila
316,954
<p>To find the critical values, set $f'(x)=0$.<br/>This function certainly can be simplified: $f'(x)=5x^4−15x^2−20=5(x^4-3x^2-4)=5(x^2-4)(x^2+1)=5(x+2)(x-2)(x^2+1)$. Since $x^2+1&gt;0$ for every real $x$, the critical values are $x=\pm 2$. <br/></p>
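A computer-algebra check of the factorization, with the classification of the two critical points added via the second derivative (this classification is my addition, not part of the answer above):

```python
# Verify the factorization of f'(x) for f(x) = x^5 - 5x^3 - 20x - 2
# and classify the critical points.
from sympy import symbols, diff, expand, solve

x = symbols('x', real=True)
f = x**5 - 5*x**3 - 20*x - 2

fp = diff(f, x)
assert expand(fp - 5*(x - 2)*(x + 2)*(x**2 + 1)) == 0

crits = solve(fp, x)          # only real roots, since x is declared real
assert set(crits) == {-2, 2}

fpp = diff(fp, x)             # f''(x) = 20x^3 - 30x
assert fpp.subs(x, -2) < 0    # relative maximum at x = -2
assert fpp.subs(x, 2) > 0     # relative minimum at x = 2
```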
2,738,023
<p>We know we can check if two vectors are 'orthogonal' by taking an inner product:</p> <p>$a\cdot b=0$</p> <p>tells us that these two vectors are orthogonal.</p> <p>Here comes the question:</p> <p>is there a way to compute whether they are 'parallel', i.e., pointing in the same direction? </p>
Michael Hoppe
93,935
<p>Two vectors are parallel iff their Gram determinant, that is, the squared area of the parallelogram spanned by the vectors, vanishes.</p>
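Note that a vanishing Gram determinant detects linear dependence, so it also accepts anti-parallel vectors; testing "pointing in the same direction" additionally needs a positive dot product. A small sketch in code (my illustration):

```python
# Gram-determinant test for parallel vectors (pure Python, any dimension).
def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def gram_det(a, b):
    # det [[a.a, a.b], [b.a, b.b]] = |a|^2 |b|^2 - (a.b)^2 >= 0
    return dot(a, a)*dot(b, b) - dot(a, b)**2

def same_direction(a, b, tol=1e-12):
    # linearly dependent AND positively aligned
    return gram_det(a, b) < tol and dot(a, b) > 0

assert same_direction((1, 2, 3), (2, 4, 6))          # parallel, same direction
assert not same_direction((1, 2, 3), (-1, -2, -3))   # anti-parallel
assert not same_direction((1, 0, 0), (0, 1, 0))      # orthogonal
```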
4,240,892
<p><strong>My attempt</strong>: <span class="math-container">$f(x+y)=f(x)f(y)$</span></p> <p>Differentiating w.r.t. <span class="math-container">$y$</span>,</p> <p><span class="math-container">$$f'(x+y)=f(x)f'(y)$$</span></p> <p>Putting <span class="math-container">$y=0$</span>,</p> <p><span class="math-container">$$f'(x)=f(x)f'(0)=kf(x)$$</span></p> <p><span class="math-container">$$\frac{dy}{dx}=ky$$</span></p> <p>We know <span class="math-container">$y \neq 0$</span> because the solution is nonzero everywhere, so dividing by <span class="math-container">$y$</span> and integrating both sides,</p> <p><span class="math-container">$$\int \frac{dy}{y}= \int k \, dx$$</span></p> <p><span class="math-container">$$\ln \lvert y\rvert = kx + c$$</span></p> <p>(We can get <span class="math-container">$c=0$</span> by putting <span class="math-container">$x=y=0$</span> in the first equation)</p> <p><span class="math-container">$$y = \pm {e}^{kx}$$</span></p> <p><strong>Doubt</strong>: Shouldn't we get <span class="math-container">$y = \pm {a}^{kx}$</span> as solution where <span class="math-container">$a$</span> <span class="math-container">$\in$</span> <span class="math-container">$R$</span> except <span class="math-container">$0$</span>?</p>
Elias Costa
19,266
<p>An alternative way. First find <span class="math-container">$f(0)$</span>. Note that, for <span class="math-container">$y=0$</span> we have <span class="math-container">$$ f(x)=f(0)\cdot f(x), \mbox{ for all } x\in \mathbb{R} $$</span></p> <ul> <li><p>If there is <span class="math-container">$x_{0}\in \mathbb{R}$</span> such that <span class="math-container">$f(x_{0})\neq 0$</span> then <span class="math-container">$$ f(0)=1. $$</span> Otherwise <span class="math-container">$ f $</span> is identically zero.</p> </li> <li><p>We have <span class="math-container">$$f(x)\cdot f(-x)=1,\quad\mbox{ for all } x\in \mathbb{R}-\{0\}$$</span></p> </li> <li><p>If <span class="math-container">$n\in \mathbb{N}$</span> then <span class="math-container">$$f(n\cdot x) = f(x)^n,\quad \mbox{ for all } x\in \mathbb{R}-\{0\}$$</span></p> </li> <li><p>If <span class="math-container">$m\in \mathbb{Z}$</span> then <span class="math-container">$$f(m\cdot x) = f(x)^m,\quad \mbox{ for all } x\in \mathbb{R}-\{0\}$$</span></p> </li> <li><p>If <span class="math-container">$q\in \mathbb{Z}-\{0\} $</span> then <span class="math-container">$$f\left(\frac{1}{q}\cdot x\right) = f(x)^{\frac{1}{q}},\quad \mbox{ for all } x\in \mathbb{R}-\{0\}$$</span></p> </li> <li><p>If <span class="math-container">$q\in \mathbb{Z}-\{0\} $</span> and <span class="math-container">$p\in\mathbb{Z}$</span> then <span class="math-container">$$f\left(\frac{p}{q}\cdot x\right) = f(x)^{\frac{p}{q}},\quad \mbox{ for all } x\in \mathbb{R}-\{0\}$$</span></p> </li> <li><p>If <span class="math-container">$r\in\mathbb{Q}$</span> then <span class="math-container">$$f\left(r\right) = f(1)^{r}$$</span></p> </li> <li><p>We can choose <span class="math-container">$f(1)$</span> to be any positive real number. 
As any <span class="math-container">$x\in \mathbb{R}$</span> is the limit of a sequence of rational numbers <span class="math-container">$x=\lim_{n\to \infty}r_n$</span>, and any exponential function is continuous, so that <span class="math-container">$\lim_{n\to \infty}f(1)^{r_{n}}=f(1)^{\lim_{n\to \infty}r_{n}}$</span>, we have (assuming <span class="math-container">$f$</span> continuous) <span class="math-container">$$ f(x)=f(1)^x $$</span></p> </li> </ul>
633,482
<p>Consider a cube that exactly fills a certain cubical box. As in Examples 8.7 and 8.10, the ways in which the cube can be placed into the box correspond to a certain group of permutations of the vertices of the cube. This is the <strong>group of rigid motions (or rotations) of the cube</strong>. (It should not be confused with the <em>group of symmetries of the figure</em>, which will be discussed in the exercises of Section 12.) How many elements does this group have? Argue geometrically that this group has at least three different subgroups of order 4 and at least four different subgroups of order 3.</p> <p><strong>Fraleigh Solution</strong> The group has $24$ elements. </p> <p>The first subgroup of order $4$: any one of the $6$ faces can be on top, and for each such face on top, the cube can be rotated into $4$ different positions leaving that face on top. The $4$ such rotations, leaving the top face on top and the bottom face on the bottom, form a cyclic subgroup of order $4$. </p> <p>The second rotation group of order $4$ is formed by the rotations leaving the front and back faces in those positions.</p> <p>The third rotation group of order $4$ is formed by the rotations leaving the side faces in those positions. </p> <p>One exhibits a subgroup of order $3$ by $\color{red}{\text{taking hold of a pair of diagonally opposite vertices and rotating through the three possible positions}}$, corresponding to the three edges emanating from each vertex.<br> There are $4$ such diagonally opposite pairs of vertices, giving the desired $4$ groups of order $3$.</p> <p>Question 1. I feel this is easier than <a href="https://math.stackexchange.com/a/632954/54547">my other post</a>. But I can't see how 'taking hold of a pair of diagonally opposite vertices' causes 'three possible positions'? I tried the animation at that other post but no luck. </p> <p>Question 2. How do you decide on the classification of the rotations of a shape? These two solutions just say what the subgroups are; they never reveal how one would foresee that this group has at least three different subgroups of order 4 and at least four different subgroups of order 3. I don't mean just playing around with the shape. I tried that and it didn't help me here. </p> <p>Question 3. Does this solution break down the types of rotations differently than the other post? That solution talks about "a line joining the centers of opposite faces" and "a line joining diagonally opposite vertices". This one doesn't. </p> <p>This is from John B. Fraleigh, page 86, exercise 8.45, A First Course in Abstract Algebra.</p>
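The counts in the quoted solution can also be verified computationally by realizing the rotation group of the cube as the $24$ signed permutation matrices of determinant $+1$ (a cross-check, not the geometric argument the exercise asks for): the cyclic subgroups of order $4$ come from the $3$ face axes, those of order $3$ from the $4$ body diagonals.

```python
# Enumerate the rotation group of the cube and count its cyclic subgroups
# of orders 4 and 3.
from itertools import permutations, product

def matmul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def det(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

I = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

# Signed permutation matrices with det +1 = rotations of the cube.
group = []
for perm in permutations(range(3)):
    for signs in product((1, -1), repeat=3):
        M = tuple(tuple(signs[i] if j == perm[i] else 0 for j in range(3))
                  for i in range(3))
        if det(M) == 1:
            group.append(M)
assert len(group) == 24

def cyclic(M):
    # The cyclic subgroup generated by M.
    subgroup, P = {I}, M
    while P != I:
        subgroup.add(P)
        P = matmul(P, M)
    return frozenset(subgroup)

order4 = {cyclic(M) for M in group if len(cyclic(M)) == 4}
order3 = {cyclic(M) for M in group if len(cyclic(M)) == 3}
assert len(order4) == 3   # one per pair of opposite faces
assert len(order3) == 4   # one per pair of diagonally opposite vertices
```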
Christian Blatter
1,303
<p>Look at the following figure (from an analysis textbook). For more background, in particular in connection with Lie algebras/groups see the Wikipedia article about the <a href="http://en.wikipedia.org/wiki/Exponential_map" rel="noreferrer">exponential map</a>. The exponential series does indeed appear there.</p> <p><img src="https://i.stack.imgur.com/s3AvO.jpg" alt="enter image description here"></p>
463,814
<p>I am struggling to find the values of these integrals; after trying many substitutions, nothing worked for me.</p> <p>1) $$ \int_{0}^{a}\frac{dx}{\sqrt{ax-x^2}} $$ 2) $$ \int_{}^{}\frac{3x^5dx}{1+x^{12}} $$</p>
Ron Gordon
53,268
<p>For the first integral, write</p> <p>$$ax - x^2 = \left (\frac{a}{2}\right)^2 - \left ( \frac{a}{2}-x\right)^2$$</p> <p>and substitute $y = a/2-x$, then $y=(a/2) \sin{\theta}$. The result I get is $\pi$, independent of $a$.</p> <p>For the second integral, substitute $y=x^6$, and use </p> <p>$$\int \frac{dy}{1+y^2} = \arctan{y}+C$$</p>
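Both hints can be verified: numerically for the first integral (shown with $a=2$; the value $\pi$ is independent of $a$), symbolically for the second:

```python
# Check both hints: the first numerically with mpmath, the second with sympy.
from mpmath import mp, quad, sqrt as msqrt, pi as mpi
from sympy import symbols, atan, diff, simplify

mp.dps = 25

# 1) int_0^a dx / sqrt(a x - x^2) = pi, tried here with a = 2.
a = 2
val = quad(lambda x: 1/msqrt(a*x - x**2), [0, a])
assert abs(val - mpi) < 1e-8

# 2) With y = x^6, an antiderivative of 3x^5/(1 + x^12) is arctan(x^6)/2.
x = symbols('x')
assert simplify(diff(atan(x**6)/2, x) - 3*x**5/(1 + x**12)) == 0
```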
21,257
<p>I am interested in examples where the <a href="http://en.wikipedia.org/wiki/Shooting_method" rel="nofollow">Shooting Method</a> has been used to find solutions to systems of ordinary differential equations that are either </p> <ul> <li>reasonably large systems, or </li> <li>the search algorithm in the shooting parameters is somewhat prohibitive because of the nature of the solutions, or</li> <li>both of the above.</li> </ul> <p>Any references, descriptions, recent progress, folklore, in the ballpark would be of interest. Feel free to interpret "reasonably large" subjectively if necessary. </p>
Yossi Farjoun
3,578
<p>I don't know about state-of-the-art and I'm not sure if this is the kind of thing you were looking for... however, in my first two papers I've used the shooting method in a parameter space that was originally too big (4 and 6 dimensions if I recall correctly), and the problem was that with randomly chosen parameters the numerical solver would not reach the other end of the domain, and so I could not use a root-finding algorithm to search for the correct initial conditions.</p> <p>The problem was there were unstable directions in the ODE, and thus even with the correct initial conditions, the numerical noise would grow so large that you would not reach the other side.</p> <p>My solution was to find more natural variables to use (using an algebraic similarity solution that satisfies the boundary conditions) and to rewrite the system in terms of the new variables. In the new variables the similarity solution is a fixed point, and one can reach this fixed point only via its <b>stable manifold</b>, which had a lower dimension than the original space (in my case...). This allowed the root-finding algorithm to kick in and find a solution. </p> <p>OK, this was a little vague. Here are the two papers (shameless plug): <p> <a href="http://arxiv.org/abs/0711.0730" rel="nofollow">http://arxiv.org/abs/0711.0730</a> <p> <a href="http://arxiv.org/abs/0711.0734" rel="nofollow">http://arxiv.org/abs/0711.0734</a></p> <p>(Added later:)</p> <p>Recently I've been working on another problem that has highly unstable directions, and there I use the <em>collocation</em> method, which (AFAIK) basically amounts to splitting the domain into many smaller parts, doing shooting on each part, and trying to get the pieces to match up. If the problem is linear, this is a simple linear problem; if the problem isn't linear, you need a non-linear root finder. I didn't write the code for the collocation; Matlab does it for me... look up BVP4C or BVP5C. 
</p> <p>In writing this answer, I looked for "collocation method" online and found very little that seemed relevant. So I can only refer you to the Matlab function. Perhaps someone else can find a reference that is relevant here.</p>
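<p>For readers who haven't seen the method in code, here is a minimal single-shooting sketch (my own toy example, unrelated to the papers above) for the linear boundary-value problem $y'' = -y$, $y(0) = 0$, $y(1) = 1$: integrate the ODE with a guessed slope $y'(0) = s$ and bisect on $s$ until the far boundary condition is hit.</p>

```python
import math

def integrate(s, n=1000):
    """RK4 for y'' = -y on [0, 1] with y(0) = 0, y'(0) = s; returns y(1)."""
    h = 1.0 / n
    y, v = 0.0, s
    deriv = lambda y, v: (v, -y)
    for _ in range(n):
        k1y, k1v = deriv(y, v)
        k2y, k2v = deriv(y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = deriv(y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = deriv(y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y

# "Shoot": bisect on the unknown slope s = y'(0) until y(1) = 1.
# For this linear problem y(1; s) = s sin(1), so the map is monotone in s.
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if integrate(mid) < 1.0:
        lo = mid
    else:
        hi = mid
s_star = (lo + hi) / 2
exact = 1 / math.sin(1)   # the true solution is y = sin(x) / sin(1)
```

<p>Of course, the whole point of my answer above is that real problems are rarely this tame; this only shows the mechanics.</p>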
3,579,739
<p>It's also mentioned that this question is an example of <strong>Sampling without replacement</strong>.</p> <p>My question is, by the general method of how I do these kinds of problems, I would assume that there are two possibilities:</p> <ol> <li><p>You choose a defective bottle and then a non-defective one, which has probability $$\frac{1}{5} \times \frac{4}{4} = \frac{4}{20},$$ or</p></li> <li><p>You choose a non-defective one first and then a defective one, which has probability $$\frac{4}{5} \times \frac{1}{4} = \frac{4}{20}.$$</p></li> </ol> <p>The total probability is $8/20 = 2/5$, which doesn't make sense at all, since there is just one bottle to begin with. But the problem is I can't find a fault in my logic. Though I feel both the options are redundant, aren't both just two different ways of selecting the pair? Or is the fact that the phrase "one after the other" is absent a reason why this might be a wrong approach?</p> <p>Thank you in advance.</p>
Haran
438,557
<p>Your logic is perfectly sound. The answer is <span class="math-container">$\frac{2}{5}$</span>. I think you find it odd that this is more than the probability of choosing one bottle and it being fractured, but this is actually what you must expect. </p> <p>Think of the bottles trying to 'clear an exam'. If the exam is easy, more students tend to pass. Thus, each student has a higher chance of passing. You allowing more bottles to 'pass' (choosing more bottles) gives more chance for the fractured bottle to pass too! In the extreme case where all bottles pass, this becomes obvious.</p>
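<p>If enumeration is more convincing than the argument, here is a brute-force check (my own sketch, assuming the setup implied by the fractions: $5$ bottles, exactly one fractured, two drawn in order without replacement):</p>

```python
from itertools import permutations
from fractions import Fraction

bottles = ["fractured", "ok1", "ok2", "ok3", "ok4"]
draws = list(permutations(bottles, 2))   # all ordered pairs, no replacement
p = Fraction(sum("fractured" in d for d in draws), len(draws))
# 8 of the 20 ordered draws contain the fractured bottle, so p = 2/5
```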
3,287,067
<p>I'm reading a solution to the following exercise:</p> <p>"Assume that <span class="math-container">$\lim_{x\to c}f\left(x\right)=L$</span>, where <span class="math-container">$L\ne0$</span>, and assume <span class="math-container">$\lim_{x\to c}g\left(x\right)=0.$</span> Show that <span class="math-container">$\lim_{x\to c}\left|\frac{f\left(x\right)}{g\left(x\right)}\right|=\infty.$</span>" </p> <p>And at some point in the proof the following step appears:</p> <p>"Choose <span class="math-container">$\delta_1$</span> so that <span class="math-container">$0&lt;\left|x-c\right|&lt;\delta _1$</span> implies <span class="math-container">$\left|f\left(x\right)-L\right|&lt;\frac{|L|}{2}$</span>. <strong>Then we have <span class="math-container">$\left|f\left(x\right)\right|\ge\frac{\left|L\right|}{2}$</span></strong>."</p> <p>It's precisely the implication in bold that I'm struggling to understand. How does the writer go from <span class="math-container">$\left|f\left(x\right)-L\right|&lt;\frac{\left|L\right|}{2}$</span> to <span class="math-container">$\left|f\left(x\right)\right|\ge\frac{\left|L\right|}{2}$</span>? </p> <p>I'm probably failing to see something that may be very clear, but I've been attempting unsuccessfully to reach the conclusion algebraically long enough, and can't quite see why it is true either! </p> <p>Here is the rest of the solution, if necessary. </p> <p>"Let <span class="math-container">$M&gt;0\ $</span> be arbitrary. [...]. Because <span class="math-container">$\lim_{x\to c}g\left(x\right)=0$</span>, we can choose <span class="math-container">$\delta_2$</span> such that <span class="math-container">$\left|g\left(x\right)\right|&lt;\frac{\left|L\right|}{2M}\ $</span>provided <span class="math-container">$0&lt;\left|x-c\right|&lt;\delta_2$</span>. 
</p> <p>Let <span class="math-container">$\delta=\min\left\{\delta_1,\delta_2\right\}.\ $</span> Then we have </p> <p><span class="math-container">$\left|\frac{f\left(x\right)}{g\left(x\right)}\right|\ge\left|\frac{\frac{\left|L\right|}{2}}{\frac{\left|L\right|}{2M}}\right|=M$</span> provided <span class="math-container">$0&lt;\left|x-c\right|&lt;\delta$</span>, as desired." </p>
azif00
680,927
<p>Should be <span class="math-container">$$\int_{-a}^a \cos \left( \frac{\pi x}{a}\right) dx =0$$</span> for all <span class="math-container">$a\neq 0$</span>.</p>
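<p>A numerical spot check at one arbitrary value of $a$, using the midpoint rule (my own sketch):</p>

```python
import math

a = 2.5
n = 100000
h = 2 * a / n
# midpoint rule for the integral of cos(pi x / a) over [-a, a]
val = sum(math.cos(math.pi * (-a + (i + 0.5) * h) / a) for i in range(n)) * h
```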
4,207,823
<p>I came across the following question in a set of lecture notes:</p> <p>If <span class="math-container">$(G, *)$</span> is a group, when, in general, is it true that there exists <span class="math-container">$X \subset \mathbb{R}^n$</span> such that <span class="math-container">$G \cong \text{Sym}(X)$</span>? Or, less formally, when does an abstract group have a geometric realization?</p> <p>In this context, <span class="math-container">$\text{Sym}(X)$</span> is the group of isometries of <span class="math-container">$\mathbb{R}^n$</span> that permute <span class="math-container">$X$</span>.</p> <p>I'm quite confused about this. As far as I understand, the elements of the group act on the vertices by some group action. I suppose in order for <span class="math-container">$G$</span> to be identified with the group of symmetries, the action should be faithful, and transitive. But I know far too little about groups in order to be able to say anything more general or detailed about what we can deduce about the group structure. I also don't understand the importance of the <span class="math-container">$X$</span> being elements of <span class="math-container">$\mathbb{R}^n$</span>, although this is a key part of the question.</p> <p>So, summing up: what conditions does such a group need to satisfy, and how would the answer differ if we drop the condition that <span class="math-container">$X \subset \mathbb{R}^n$</span>?</p>
Moishe Kohan
84,907
<p>Even the question about existence of <span class="math-container">$n$</span> and <span class="math-container">$X\subset {\mathbb R}^n$</span> such that <span class="math-container">$G&lt; Sym(X)$</span> has a negative answer. The reason is that <span class="math-container">$Sym(X)$</span> is a subgroup of the group of Euclidean isometries, which, in turn, is isomorphic to a subgroup of the general linear group <span class="math-container">$GL(n+1, {\mathbb R})$</span> consisting of matrices of the block form <span class="math-container">$$ \left[\begin{array}{cc} A&amp; b\\ 0 &amp; 1\end{array}\right] $$</span> where <span class="math-container">$A$</span> is a square <span class="math-container">$n\times n$</span> orthogonal matrix and <span class="math-container">$b$</span> is a column-vector. Here the corresponding Euclidean isometry is <span class="math-container">$$ {\mathbf x}\mapsto A{\mathbf x} + b. $$</span></p> <p>At the same time, there are many examples of groups (even finitely generated ones) which are not isomorphic to subgroups of <span class="math-container">$GL(N, {\mathbb R})$</span> for any <span class="math-container">$N$</span>, see for instance <a href="https://math.stackexchange.com/questions/85308/can-every-group-be-represented-by-a-group-of-matrices?noredirect=1&amp;lq=1">here</a>. My personal favorite is BS(2,3) from the answer given by user1729.</p>
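<p>The block-matrix encoding can be checked concretely. Here is a small sketch in the plane ($n=2$), with arbitrarily chosen example isometries: composing the maps $\mathbf{x}\mapsto A\mathbf{x}+b$ directly agrees with multiplying their $(n+1)\times(n+1)$ block matrices.</p>

```python
import math

def apply_iso(A, b, x):
    """Apply the isometry x -> A x + b in the plane."""
    return [sum(A[i][j] * x[j] for j in range(2)) + b[i] for i in range(2)]

def block(A, b):
    """The 3 x 3 matrix [[A, b], [0, 1]] encoding the isometry."""
    return [A[0] + [b[0]], A[1] + [b[1]], [0.0, 0.0, 1.0]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

t = math.pi / 3
A1, b1 = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]], [1.0, -2.0]
A2, b2 = [[0.0, 1.0], [1.0, 0.0]], [0.5, 3.0]   # a reflection is orthogonal too

x = [0.7, -1.3]
direct = apply_iso(A1, b1, apply_iso(A2, b2, x))   # iso1 after iso2
M = matmul(block(A1, b1), block(A2, b2))           # product of the block matrices
via_block = [M[0][0] * x[0] + M[0][1] * x[1] + M[0][2],
             M[1][0] * x[0] + M[1][1] * x[1] + M[1][2]]
```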
3,740,501
<p>This question builds upon a previous question I asked <a href="https://math.stackexchange.com/questions/3548807/is-at-least-1-of-4-non-concyclic-points-contained-in-the-circle-through-the-othe">here</a></p> <p>The answers helped me understand that for <span class="math-container">$4$</span> points that are not concyclic, and for which no <span class="math-container">$3$</span> fall on a straight line, of the <span class="math-container">$4$</span> circles that can be drawn through triples of the points, exactly <span class="math-container">$1$</span> or <span class="math-container">$2$</span> of the circles will contain the point they don't pass through.</p> <p>I'm looking for an elementary proof (i.e. one that a high school geometry student could understand) as to why a point must be contained in a circle through <span class="math-container">$3$</span> given points. Consider the diagram below:</p> <p><a href="https://i.stack.imgur.com/3Rnhj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Rnhj.jpg" alt="enter image description here" /></a></p> <p>DLeMeur's answer to my original question helped me understand that circle <span class="math-container">$ABD$</span> will contain <span class="math-container">$C$</span> if and only if <span class="math-container">$D$</span> is placed in one of the gray areas. The arguments that I can make for this are only sort of convincing, but not really &quot;air tight&quot;.</p> <p>Case 1: <span class="math-container">$D$</span> is in the circular segment cut off by chord <span class="math-container">$\overline{AB}$</span>. 
Then circle <span class="math-container">$ABD$</span> has a greater radius than circle <span class="math-container">$ABC$</span>, and since <span class="math-container">$D$</span> and <span class="math-container">$C$</span> are on opposite sides of <span class="math-container">$\overleftrightarrow{AB}$</span>, <span class="math-container">$C$</span> must be contained in circle <span class="math-container">$ABD$</span>.</p> <p>Case 2: <span class="math-container">$D$</span> is outside circle <span class="math-container">$ABC$</span>, on the same side of <span class="math-container">$\overleftrightarrow{AB}$</span> as point <span class="math-container">$C$</span>. Again, circle <span class="math-container">$ABD$</span> has greater radius than circle <span class="math-container">$ABC$</span>, thus the entire portion of circle <span class="math-container">$ABC$</span> below <span class="math-container">$\overleftrightarrow{AB}$</span> is contained in circle <span class="math-container">$ABD$</span>.</p> <p>These arguments seem like they are missing some details. For instance, if someone asked, &quot;How do you know circle <span class="math-container">$ABD$</span> has greater radius than circle <span class="math-container">$ABC$</span>?&quot; I do not have a good answer. I would appreciate any input!</p>
Antonio Hernandez
796,809
<p>Another way to do it, using only the formal definition of the Laplace transform: <br /> <span class="math-container">$\mathcal{L}\lbrace f\rbrace=\displaystyle\int_{0}^{4} 0\,dt+\int_{4}^{\infty}e^{-st}(t^{2}-8t+23)dt$</span>.</p>
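<p>As a numerical sanity check (my own sketch, at the single point $s=1$): substituting $u=t-4$ gives $t^2-8t+23=u^2+7$, so the transform should equal $e^{-4s}\left(\frac{2}{s^3}+\frac{7}{s}\right)$, and the truncated integral agrees.</p>

```python
import math

s = 1.0
n = 200000
h = (60.0 - 4.0) / n   # truncate at t = 60; e^{-60} makes the tail negligible
# midpoint rule for the integral of e^{-s t} (t^2 - 8t + 23) over [4, 60]
numeric = sum(math.exp(-s * t) * (t * t - 8 * t + 23)
              for t in ((4.0 + (i + 0.5) * h) for i in range(n))) * h
closed = math.exp(-4 * s) * (2 / s ** 3 + 7 / s)
```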
159,948
<p>If I have this function (just an example)</p> <pre><code> Plot3D[-Abs[Cos[y] Sqrt[x] Sqrt[y] Cos[x]], {x, 0, 20}, {y, 0, 20}] </code></pre> <p>I am interested in finding the locations of the 6 smallest (minimum) values of this function. How can I do that? Apparently, FindMinimum and its likes give me one value.</p> <p>Thanks for the help.</p>
aardvark2012
45,411
<p>Depending on how similar your actual function is to your example, you could try giving <a href="http://reference.wolfram.com/language/ref/FindMinimum.html" rel="nofollow noreferrer"><code>FindMinimum</code></a> (see below) or <a href="http://reference.wolfram.com/language/ref/FindArgMin.html" rel="nofollow noreferrer"><code>FindArgMin</code></a> a range of reasonable starting values and then just pick the ones you need.</p> <pre><code>f[x_, y_] := -Abs[Cos[y] Sqrt[x] Sqrt[y] Cos[x]] sol = FindArgMin[{f[x, y], {x, y} ∈ Rectangle[{0, 0}, {20, 20}]}, {{x, #1}, {y, #2}}] &amp; @@@ Tuples[Range[0, 20, π], 2] Show[Plot3D[f[x, y], {x, 0, 20}, {y, 0, 20}], Graphics3D[{PointSize[Large], Red, Point[{#1, #2, f[##]} &amp; @@@ sol]}], ViewPoint -&gt; {-1.5, -2, 0}] </code></pre> <p><a href="https://i.stack.imgur.com/zaBNx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zaBNx.png" alt="enter image description here"></a></p> <p>Then</p> <pre><code>SortBy[{#1, #2, f[##]} &amp; @@@ sol, Last][[1 ;; 6]] (* {{18.876, 18.876, -18.8628}, {15.7397, 18.876, -17.222}, {18.876, 15.7397, -17.222}, {15.7397, 15.7397, -15.7239}, {12.606, 18.876, -15.4082}, {18.876, 12.606, -15.4082}} *) </code></pre> <p>gives you the six minimum values and their locations.</p> <p>Or alternatively:</p> <pre><code>sol = FindMinimum[{f[x, y], {x, y} ∈ Rectangle[{0, 0}, {20, 20}]}, {{x, #1}, {y, #2}}] &amp; @@@ Tuples[Range[0, 20, π], 2]; SortBy[sol, First][[1 ;; 6]] (* {{-18.8628, {x -&gt; 18.876, y -&gt; 18.876}}, {-17.222, {x -&gt; 18.876, y -&gt; 15.7397}}, {-17.222, {x -&gt; 15.7397, y -&gt; 18.876}}, {-15.7239, {x -&gt; 15.7397, y -&gt; 15.7397}}, {-15.4082, {x -&gt; 12.606, y -&gt; 18.876}}, {-15.4082, {x -&gt; 18.876, y -&gt; 12.606}}} *) </code></pre> <p>and </p> <pre><code>Show[Plot3D[f[x, y], {x, 0, 20}, {y, 0, 20}], Graphics3D[{PointSize[Large], Red, Point[{x, y, First@#} /. Last@# &amp; /@ sol]}], ViewPoint -&gt; {-1.5, -2, 0}] </code></pre> <p>gives the same plot. 
(But I find the <code>Rule</code> format can be a little unwieldy.)</p>
159,948
<p>If I have this function (just an example)</p> <pre><code> Plot3D[-Abs[Cos[y] Sqrt[x] Sqrt[y] Cos[x]], {x, 0, 20}, {y, 0, 20}] </code></pre> <p>I am interested in finding the locations of the 6 smallest (minimum) values of this function. How can I do that? Apparently, FindMinimum and its likes give me one value.</p> <p>Thanks for the help.</p>
José Antonio Díaz Navas
1,309
<p>Mathematically, it's simple, I think. The function is separable in both axes, so by finding the minima along one axis, let's say along the $x$-axis, you will have the corresponding ones along the other axis. All the minima in the $xy$-plane will be the combinations of both.</p> <p>Now, along one axis, one function is a monotonic function ($\sqrt{x}$), and the other has the minima, so by finding the minima of the <code>-Abs[Cos[x]]</code> function (except for $x=0$, as $\cos(0)\sqrt{0}=0$) you will have those you are looking for:</p> <pre><code>Reduce[Cos[x] == 1 || Cos[x] == -1 &amp;&amp; x &gt;= 0, x] (* (C[1] ∈ Integers &amp;&amp; x == 2 π C[1]) || (C[1] ∈ Integers &amp;&amp; ((C[1] &gt;= 1 &amp;&amp; x == -π + 2 π C[1]) || (C[1] &gt;= 0 &amp;&amp; x == π + 2 π C[1]))) *) </code></pre> <p>The same for the $y$-axis.</p> <pre><code>Reduce[Cos[y] == 1 || Cos[y] == -1 &amp;&amp; y &gt;= 0, y] (* (C[1] ∈ Integers &amp;&amp; y == 2 π C[1]) || (C[1] ∈ Integers &amp;&amp; ((C[1] &gt;= 1 &amp;&amp; y == -π + 2 π C[1]) || (C[1] &gt;= 0 &amp;&amp; y == π + 2 π C[1]))) *) </code></pre> <p>All your minima are all the combinations of the <code>C[1]</code> and <code>C[2]</code> values:</p> <pre><code>data = Flatten[ Table[{π + i*π, π + j π, -Abs[ Cos[π + i* π] Sqrt[π + i* π] Sqrt[π + j* π] Cos[ π + j* π]]}, {i, 0, 6}, {j, 0, i}], 1] // N; pts = ListPointPlot3D[data, PlotStyle -&gt; Directive[Red, PointSize[Large]]]; (* pts was missing from the original post; presumably it was defined like this *) Show[{Plot3D[-Abs[Cos[y] Sqrt[x] Sqrt[y] Cos[x]], {x, 0, 20}, {y, 0, 20}, PlotTheme -&gt; "Classic"], pts}] </code></pre> <p><a href="https://i.stack.imgur.com/fpcED.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fpcED.jpg" alt="enter image description here"></a></p>
1,426,264
<p>Let $H$, $K$ be groups, and suppose that $H \cong K \times H$. Does it necessarily follow that $K$ is trivial?</p>
joriki
6,622
<p>I'll answer the first question. (Asking more than one question per post is discouraged.)</p> <p>Since each edge is incident at two vertices and adjacent to two faces, your conditions imply</p> <p>$$e\ge\frac{4v}2=2v\;,$$ $$e\ge\frac{4f}2=2f\;,$$</p> <p>where $v$, $e$, $f$ are the numbers of vertices, edges and faces, respectively. Adding the two inequalities yields $e\ge v+f$, which contradicts <a href="https://en.wikipedia.org/wiki/Euler_characteristic#Polyhedra" rel="nofollow">Euler's polyhedron formula</a>, $v-e+f=2$.</p>
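<p>The two counting relations used here ($pf=2e$ for $p$-gonal faces and $qv=2e$ for vertices of degree $q$) can be spot-checked on the five Platonic solids, none of which has $p\ge4$ and $q\ge4$ simultaneously, exactly as the contradiction predicts. A quick sketch:</p>

```python
# (v, e, f, sides per face p, degree per vertex q) for the five Platonic solids
platonic = {
    "tetrahedron":  (4, 6, 4, 3, 3),
    "cube":         (8, 12, 6, 4, 3),
    "octahedron":   (6, 12, 8, 3, 4),
    "dodecahedron": (20, 30, 12, 5, 3),
    "icosahedron":  (12, 30, 20, 3, 5),
}
checks = []
for v, e, f, p, q in platonic.values():
    checks.append(v - e + f == 2            # Euler's polyhedron formula
                  and p * f == 2 * e        # each edge borders two faces
                  and q * v == 2 * e        # each edge has two endpoints
                  and not (p >= 4 and q >= 4))
```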
3,397,606
<p>Find <span class="math-container">$[z^N]$</span> for <span class="math-container">$\frac{1}{1-z} \left(\ln \frac{1}{1-z}\right)^2$</span>. Here <em>generalized harmonic numbers</em> should be used. </p> <p><span class="math-container">$$H^{(2)}_n = 1^2 + \frac{1}{2}^2 + \dots + \frac{1}{N}^2.$$</span></p> <p>For now, I was able to find <span class="math-container">$[z^N]$</span> for <span class="math-container">$\left(\ln \frac{1}{1-z}\right)^2$</span> which is the following convolution for <span class="math-container">$N \ge 2$</span>:</p> <p><span class="math-container">$$\sum_{1\le k \le N-1} \frac{1}{N-k}\frac{1}{k}$$</span></p> <p>Next step would be partial sum but I don't see how all this leads me to <em>generalized harmonic numbers</em>.</p>
Ali Shadhar
432,085
<p>Using the generalization:</p> <blockquote> <p><span class="math-container">$$\sum_{n=1}^\infty a_nx^n=\frac1{1-x}\sum_{n=1}^\infty (a_n-a_{n-1})x^n,\quad a_{0}=0$$</span></p> </blockquote> <p>Let <span class="math-container">$a_{n}=H_n^2$</span> to have</p> <p><span class="math-container">\begin{align} \sum_{n=1}^\infty H_n^2x^n&amp;=\frac1{1-x}\sum_{n=1}^\infty \left(H_n^2-H_{n-1}^2\right)x^n\\ &amp;=\frac1{1-x}\sum_{n=1}^\infty \left(\frac{2H_n}{n}-\frac1{n^2}\right)x^n\\ &amp;=\frac1{1-x}\cdot 2\sum_{n=1}^\infty\frac{H_n}{n}x^n-\frac{\operatorname{Li}_2(x)}{1-x}\\ &amp;=\frac1{1-x}\cdot 2\left(\operatorname{Li}_2(x)+\frac12\ln^2(1-x)\right)-\frac{\operatorname{Li}_2(x)}{1-x}\\ &amp;=\frac{\ln^2(1-x)}{1-x}+\frac{\operatorname{Li}_2(x)}{1-x}\\ &amp;=\frac{\ln^2(1-x)}{1-x}+\sum_{n=1}^\infty H_n^{(2)}x^n \end{align}</span></p> <p>Thus </p> <blockquote> <p><span class="math-container">$$\frac{\ln^2(1-x)}{1-x}=\sum_{n=1}^\infty (H_n^2-H_n^{(2)})x^{n}$$</span></p> </blockquote> <hr> <p>The proof of the generalization together with other identities can be found <a href="https://math.stackexchange.com/q/3366416">here</a>.</p>
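<p>The resulting identity, $[x^n]\,\frac{\ln^2(1-x)}{1-x}=H_n^2-H_n^{(2)}$, is easy to confirm with exact rational arithmetic; a quick sketch:</p>

```python
from fractions import Fraction

N = 12
# Taylor coefficients of L(x) = log(1/(1-x)) = sum_{k>=1} x^k / k
L = [Fraction(0)] + [Fraction(1, k) for k in range(1, N + 1)]
# square it by convolution to get the coefficients of log^2(1/(1-x))
L2 = [sum(L[j] * L[n - j] for j in range(n + 1)) for n in range(N + 1)]
# multiplying by 1/(1-x) takes partial sums of the coefficients
coeff = [sum(L2[:n + 1]) for n in range(N + 1)]

H, H2 = [Fraction(0)], [Fraction(0)]
for n in range(1, N + 1):
    H.append(H[-1] + Fraction(1, n))
    H2.append(H2[-1] + Fraction(1, n * n))

expected = [H[n] ** 2 - H2[n] for n in range(N + 1)]
```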
783
<p>In a category I have two objects $a$ and $b$ and a morphism $m$ from $a$ to $b$ and one $n$ from $b$ to $a$. Is this always an isomorphism? Why is it emphasized that this has to be true, too: $m \circ n = \mathrm{id}_b$ and $n \circ m = \mathrm{id}_a$?</p> <p>I am looking for an example in which the id-part is not true and therefore $m$ and $n$ are not isomorphic.</p>
Doug Spoonwood
11,300
<p>To make this perhaps even clearer, say we have the objects $a=\{x, y\}$ and $b=\{z\}$. Define $m\colon a\to b$ by $x\mapsto z$, $y\mapsto z$ (the only possible morphism here). Define $n\colon b\to a$ by $z\mapsto x$. But, of course, $n\circ m$ doesn't equal $\mathrm{id}_a$, since $(n\circ m)(y)=n(m(y))=n(z)=x$, $(n\circ m)(x)=n(m(x))=n(z)=x$, so we have $n\circ m\colon a\to a$ is $x\mapsto x$, $y\mapsto x$. </p> <p>Note that merely having morphisms in both directions is not enough even when $a$ and $b$ have the same finite cardinality. What is true is that if additionally $n\circ m=\mathrm{id}_a$ and $a$, $b$ are finite sets of equal cardinality, then $m$ is injective, hence bijective, and $n=m^{-1}$, so both $m$ and $n$ qualify as isomorphisms. </p>
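<p>The same counterexample, written out as dictionaries just to make the failure of $n\circ m=\mathrm{id}_a$ explicit:</p>

```python
# a = {x, y}, b = {z}; morphisms in Set as dictionaries
m = {"x": "z", "y": "z"}             # m : a -> b
n = {"z": "x"}                       # n : b -> a
n_after_m = {p: n[m[p]] for p in m}  # n . m : a -> a
m_after_n = {q: m[n[q]] for q in n}  # m . n : b -> b
# m . n happens to be the identity on b, but n . m collapses y onto x
```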
1,660,231
<blockquote> <p>Show that the function $g(x)=\sqrt{x^2+x+2}$ is defined and is continuous on $\mathbb{R}$.</p> </blockquote> <p>I have tried completion of square for $$x^2+x+2=\left(x+\frac{1}{2}\right)^2+\frac{7}{4}$$ This means that range, $r\ge 7/4$ in domain $\mathbb{R}$.</p> <p>Cannot find any more logic in it, please help.</p>
Kamil Jarosz
183,840
<p>You have already shown that the inner function $x^2+x+2\geqslant 7/4$. The function $\sqrt t$ is defined for $t\geqslant 0$ and continuous for $t&gt;0$. If you have $$t=x^2+x+2$$ then the function $\sqrt t$ is defined and continuous as $t$ is continuous and $t\geqslant 7/4$ (here $t$ is a function of $x$).</p>
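<p>A quick numerical confirmation of the lower bound on the radicand (my own sketch, sampling a grid of points):</p>

```python
import math

radicand = lambda x: x * x + x + 2
g = lambda x: math.sqrt(radicand(x))
xs = [i / 100 for i in range(-1000, 1001)]   # grid on [-10, 10]
inner_min = min(radicand(x) for x in xs)     # attained at x = -1/2, value 7/4
g_min = min(g(x) for x in xs)
```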
1,660,231
<blockquote> <p>Show that the function $g(x)=\sqrt{x^2+x+2}$ is defined and is continuous on $\mathbb{R}$.</p> </blockquote> <p>I have tried completion of square for $$x^2+x+2=\left(x+\frac{1}{2}\right)^2+\frac{7}{4}$$ This means that range, $r\ge 7/4$ in domain $\mathbb{R}$.</p> <p>Cannot find any more logic in it, please help.</p>
DonAntonio
31,254
<p>You need the expression within the square root is non-negative, and you already showed this since $\;x^2+x+2=\left(x+\frac12\right)^2+\frac74\;\leftarrow$ this is the sum of two non-negative expressions and thus is always non-negative, so your function's defined in all of $\;\Bbb R\;$.</p> <p>And it is continuous in all of $\;\Bbb R\;$ because it is the composition of two continuous functions: square root and a polynomial.</p>
664,347
<p>For each of the following values of ($a,b$), find the largest number that is not of the form $ax+by$ with $x\geq 0$ and $y \geq 0$.</p> <p>$(i) (a,b) = (3,7)$</p> <p>$(ii) (a,b) = (5,7)$</p> <p>$(iii) (a,b) = (4,11)$</p> <p>I know how to compute numbers of these forms (clear), but have no idea how to generate one that is not of that form. Then, of those, which is the largest? More importantly, how do I prove its the largest?</p>
D.L.
95,150
<p>For the first one, for example, you can see that if $n$ is an integer, then $n$, $n-7$ or $n-14$ is a multiple of 3 (by looking at what happens modulo 3). So, if $n\geq 14$, you can write $n-7x=3y$ with $x=0, 1,$ or $2$ and $y\geq 0$. Then, the number you look for is between $1$ and $13$, and you can check them one by one and take the largest.</p> <p>For the other cases, you can try the same method; there will just be more cases to consider, I think (because $a$ is larger).</p> <p>I have another idea; I'll write it in a few minutes.</p> <p>(After verification, the idea is correct, but it is too complicated to be interesting for this question.)</p>
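<p>The "check them one by one" step is easy to automate. A brute-force sketch (the cutoff $200$ is comfortably above all three answers); the results also agree with the well-known Frobenius/Chicken McNugget formula $ab-a-b$ for coprime $a,b$:</p>

```python
def representable(n, a, b):
    """Is n = a*x + b*y for some integers x, y >= 0?"""
    return any((n - a * x) % b == 0 for x in range(n // a + 1))

def largest_gap(a, b, limit=200):
    return max(n for n in range(1, limit) if not representable(n, a, b))

pairs = [(3, 7), (5, 7), (4, 11)]
results = {(a, b): largest_gap(a, b) for (a, b) in pairs}
```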
553,040
<p>The question is: A card is drawn from an ordinary pack (52 cards) and a gambler bets that either a spade or an ace is going to appear. What is the probability of his winning? I think the answer is $\frac{16}{52} = \frac{4}{13}$. Did I "probably" go wrong somewhere?</p>
Cheyne H
54,816
<p>There are 16 `good' cards (the 13 spades and the three other aces), out of 52 total, so you're correct. </p>
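<p>Counting the favourable cards explicitly, if you want to see it spelled out:</p>

```python
from fractions import Fraction

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = [(r, s) for r in ranks for s in suits]
# 13 spades plus the 3 non-spade aces: 16 favourable cards out of 52
good = [(r, s) for (r, s) in deck if s == "spades" or r == "A"]
p = Fraction(len(good), len(deck))
```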
4,562,006
<p>Prove that <span class="math-container">$\text{Ker}(A)=\text{Ker}(A^2)$</span> if <span class="math-container">$A$</span> is a normal operator.</p> <p>I know that that <span class="math-container">$\text{Ker}(A)\subseteq \text{Ker}(A^2)$</span>, which I proved as follows: <span class="math-container">$x\in \text{Ker}(A) \implies Ax=0\implies(AA)x=A(Ax)=0\implies x\in \text{Ker}(A^2)$</span></p>
Just a user
977,740
<p>While this should be clear from the spectral theorem, it can be established directly.</p> <p>If <span class="math-container">$A^2x=0$</span>, then <span class="math-container">$$0=(A^2x, A^2x)=(Ax, A^*A^2x)=(Ax, AA^*Ax)=(A^*Ax, A^*Ax)$$</span> (the middle equality uses normality, <span class="math-container">$A^*A=AA^*$</span>).</p> <p>Thus <span class="math-container">$A^*Ax=0$</span>, hence <span class="math-container">$$0=(A^*Ax, x)=(Ax, Ax)$$</span></p> <p>So <span class="math-container">$Ax=0$</span>.</p>
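<p>A small finite-dimensional illustration of the contrast (my own sketch, not part of the proof): a normal matrix whose kernel is unchanged by squaring, next to a non-normal nilpotent matrix where the kernel grows.</p>

```python
def matmul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def matvec(A, x):
    return tuple(sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A)))

def adj(A):  # conjugate transpose (entries here are real integers)
    n = len(A)
    return tuple(tuple(A[j][i] for j in range(n)) for i in range(n))

# A is normal (A A* = A* A); both Ker(A) and Ker(A^2) are spanned by e3.
A = ((0, 1, 0), (-1, 0, 0), (0, 0, 0))
A_normal = matmul(A, adj(A)) == matmul(adj(A), A)
A2 = matmul(A, A)

# N is NOT normal; N^2 = 0, so e2 lies in Ker(N^2) but not in Ker(N).
N = ((0, 1), (0, 0))
N_normal = matmul(N, adj(N)) == matmul(adj(N), N)
N2 = matmul(N, N)
```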
479,851
<p>I toss a coin many times, each time noting down the result of the toss. If at any time I have tossed more heads than tails, I stop. I.e., if I get heads on the first toss I stop. Or if I toss T-T-H-H-T-H-H I will stop. If I decide to toss the coin at most 2n+1 times, what is the probability that I will get more heads than tails before I have to give up?</p>
Arthur
15,500
<p>Let $C_n$ be the $n$-th Catalan number, defined by $\frac{1}{n+1}\binom{2n}{n}$. Let's also say that you keep tossing even after you had more heads than tails, for good measure. Then there are $2^{2n + 1}$ different sequences of tosses you could've made. How many of them will at some point have more heads than tails?</p> <p>We will divide this by cases in the following way: Any sequence that does at some point have more heads than tails are grouped together according to the number of tosses it took to get more heads for the first time. That means, for instance, that half of all the sequences (the ones starting with head) are in group $0$. The ones that took $3$ coin tosses are in group $1$ and so on. The ones that made it to the more-heads-than-tails-side only on the last coin flip are in group $n$.</p> <p>How many sequences are in each group? Say we have the group labelled $i$. That means that for any sequence in this group, after $2i$ throws they have exactly as many heads as tails, and at no point before have they had more heads. There are $C_i\cdot 2^{2n-2i}$ tossing sequences like that. $C_i$ because that's how many ways you can reach the spot, and $2^{2n-2i}$ because that's how many ways it can go on after this, provided that the $(2i + 1)$th throw is a heads (which it is; that was the definition of group $i$).</p> <p>So we're left with calculating the following sum: $$ \sum_{i = 0}^n 2^{2n-2i}C_i = \sum_{i = 0}^n \frac{2^{2n-2i}}{i+1}\binom{2i}{i} $$ and then divide it by $2^{2n + 1}$, and you have your probability.</p> <p>PS. I don't know how to calculate the sum, but <a href="http://www.wolframalpha.com/input/?i=%5Csum_%7Bi+%3D+0%7D%5En+2%5E%7B2n-2i%7D%2acatalannumber%5Bi%5D" rel="nofollow">WolframAlpha</a> says it's equal to $$ 2^{2 n+1}-\frac{1}{2} \binom{2(n+1)}{n+1} $$ which suggests there are $\frac{1}{2}\binom{2(n+1)}{n+1}$ sequences of coin tosses that are $2n + 1$ long and that never have more heads than tails in any initial segment.</p>
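<p>Both the counting sum and WolframAlpha's closed form can be checked against direct enumeration for small $n$; a quick sketch (divide any of the three values by $2^{2n+1}$ to get the probability):</p>

```python
from math import comb
from itertools import product

def catalan(i):
    return comb(2 * i, i) // (i + 1)

def by_formula(n):
    return sum(2 ** (2 * n - 2 * i) * catalan(i) for i in range(n + 1))

def closed_form(n):
    return 2 ** (2 * n + 1) - comb(2 * (n + 1), n + 1) // 2

def brute(n):
    """Count length-(2n+1) toss sequences with some prefix having more heads than tails."""
    count = 0
    for seq in product((1, -1), repeat=2 * n + 1):   # +1 = head, -1 = tail
        total = 0
        for step in seq:
            total += step
            if total > 0:
                count += 1
                break
    return count

results = [(by_formula(n), closed_form(n), brute(n)) for n in range(6)]
```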
18,136
<p>I introduced the hypercube (to undergraduate students in the U.S.) in the context of generalizations of the Platonic solids, explained its structure, showed it rotating. I mentioned <a href="https://matheducators.stackexchange.com/a/1824/511">Alicia Stott</a>, who discovered the <span class="math-container">$6$</span> regular polytopes in <span class="math-container">$\mathbb{R}^4$</span> (discovered after Schläfli). I sense they largely did not grasp what is the hypercube, let alone the other regular polytopes.</p> <p>I'd appreciate hearing of techniques for getting students to "grok" the fourth dimension.</p>
Pyrhos
13,764
<p>The best explanation of the general concept I've encountered so far is the introduction to a 4D game, Miegakure.</p> <p>The idea of extra dimensions is described in the jump from 2D to 3D first, which makes it much easier to visualize and extrapolate.</p> <p><a href="https://www.youtube.com/watch?v=9yW--eQaA2I" rel="noreferrer">https://www.youtube.com/watch?v=9yW--eQaA2I</a></p>
18,136
<p>I introduced the hypercube (to undergraduate students in the U.S.) in the context of generalizations of the Platonic solids, explained its structure, showed it rotating. I mentioned <a href="https://matheducators.stackexchange.com/a/1824/511">Alicia Stott</a>, who discovered the <span class="math-container">$6$</span> regular polytopes in <span class="math-container">$\mathbb{R}^4$</span> (discovered after Schläfli). I sense they largely did not grasp what is the hypercube, let alone the other regular polytopes.</p> <p>I'd appreciate hearing of techniques for getting students to "grok" the fourth dimension.</p>
Helder
5,848
<p>For more than four dimensions, I would consider treating each pixel in a gray scale image as a "dimension" and its brightness as the value of the corresponding "coordinate". Then, ℝ<sup><i>m</i> x <i>n</i></sup> is just the set of all images (including photos) of <i>m</i> pixels by <i>n</i> pixels.</p> <p>There is a video on YouTube which explores this approach (and a few geometric interesting aspects of these high dimension spaces): <a href="https://www.youtube.com/watch?v=BePQBWPnYuE" rel="nofollow noreferrer">My understanding of the Manifold Hypothesis | Machine learning</a></p>
18,136
<p>I introduced the hypercube (to undergraduate students in the U.S.) in the context of generalizations of the Platonic solids, explained its structure, showed it rotating. I mentioned <a href="https://matheducators.stackexchange.com/a/1824/511">Alicia Stott</a>, who discovered the <span class="math-container">$6$</span> regular polytopes in <span class="math-container">$\mathbb{R}^4$</span> (discovered after Schläfli). I sense they largely did not grasp what is the hypercube, let alone the other regular polytopes.</p> <p>I'd appreciate hearing of techniques for getting students to "grok" the fourth dimension.</p>
Ari
13,775
<p>Here's how I understood it; I'm 18, so correct me if I'm wrong.</p> <p>This site was cool --> <a href="https://4dtoys.com/" rel="nofollow noreferrer">https://4dtoys.com/</a></p> <p>I think it's important to have a physical understanding/visualization of the dimensions.</p> <p>A 1D point would be like a bead on a string. A 2D shape is like a hockey puck on a hockey table, the plane. A 3D shape is something like an apple or a cube in a room.</p> <p>Note how a 3D shape like an apple is made up of many 2D slices/apple cross-sections, and how a 2D shape is made out of many lines, and that lines are a bunch of points.</p> <p>A super important part of these dimensions is that when a 1D shape is stuck along one dimension, another shape cannot exist at the same point in that one dimension without overlap. Basically, two beads cannot both be at the 1 inch mark, 2 hockey pucks cannot both lie in the same place while still lying flat on their plane, and two people cannot take up the same space.</p> <p>The only way to have two objects lie at the same coordinates is to add another dimension. Like put the bead on another string, or stack the puck on top of the other. From a certain perspective, these two objects are now totally overlapping, when in reality they are separate thanks to the additional dimension.</p> <p>But what about 3D objects? Like the apple and the cube?</p> <p>Simply pick up the cube, and put the apple where the cube was. Now they have taken up the same overlapping space without actually overlapping. The 4th dimension here is time. If you "remove" that dimension then the apple and cube would overlap.</p> <p>Oh, and stuff can disappear if it escapes your observable dimension. Just like the apple slices get bigger and smaller, 3D stuff can get bigger, smaller, and disappear in this dimension, but really still exist in another. 
</p> <p>And in the same way you can slice a cube in different ways to create shapes other than more rectangular prisms, these 3D slices of 4D things can be more than cubes.</p> <p>Idk, this stuff really is wack. But the idea is there. </p> <p>I think stuff like the Klein bottle needs this 4th dimension to exist since it overlaps itself in the 3rd dimension, and basically the parts that would've overlapped exist in different dimensions?</p>
4,038,685
<p>Suppose that <span class="math-container">$u(x)$</span> is continuous and satisfies the integral equation</p> <p><span class="math-container">\begin{equation}\label{1.4.1} u(x) = \int_0^x \sin(u(t))u(t)^p dt \end{equation}</span></p> <p>on the interval <span class="math-container">$0 \leq x \leq 1$</span>. Show that <span class="math-container">$u(x) = 0$</span> on this interval if <span class="math-container">$p \geq 0$</span>.</p> <p>This is what I have:</p> <p>Since <span class="math-container">$\sin(u(t))u(t)^p$</span> is continuous, it follows from the integral definition that <span class="math-container">$u(x)$</span> is differentiable. Let us differentiate both sides of the equation above with respect to <span class="math-container">$x$</span>. This yields:</p> <p><span class="math-container">\begin{equation} u'(x) = \sin(u(x))u(x)^p \end{equation}</span></p> <p>This ODE is separable and becomes:</p> <p><span class="math-container">\begin{equation} \frac{1}{\sin(u(x))u(x)^p}du(x) = dx \end{equation}</span></p> <p>However, this doesn't seem easily solvable, so I'm not sure how to show that <span class="math-container">$u(x) = 0$</span> from this.</p>
xpaul
66,420
<p>Use @Matthew Pilling's setting. Assume there exists <span class="math-container">$t\in(0,1)$</span> such that <span class="math-container">$u(t)&gt;0$</span>. Because <span class="math-container">$u$</span> is continuous and <span class="math-container">$u(0)=0$</span>, we can identify <span class="math-container">$a,b\in [0,1]$</span> such that <span class="math-container">$u(a)=0,a&lt;t&lt;b,$</span> and <span class="math-container">$u&gt;0$</span> on <span class="math-container">$(a,b)$</span>. Since <span class="math-container">$\sin\big(u(t)\big)$</span> is also continuous, we can find <span class="math-container">$c\in (a,b)$</span> such that <span class="math-container">$\sin \big(u(t)\big)&gt;0$</span> on <span class="math-container">$(a,c)$</span>. Now consider the IVP: <span class="math-container">$$\frac{dy}{dx}=y^p\sin(y),x\in(a,c).$$</span> So <span class="math-container">$$ y'=y^p\sin(y)\le y^{p+1}, x\in(a,c)$$</span> and hence <span class="math-container">$$ y^{-p-1}y'\le 1. \tag1$$</span> Integrating (1) from <span class="math-container">$x_0$</span> to <span class="math-container">$x$</span> (<span class="math-container">$x_0,x\in(a,c)$</span>, <span class="math-container">$x_0&lt;x$</span>), one has <span class="math-container">$$ -\frac1p\bigg[y^{-p}(x)-y^{-p}(x_0)\bigg]\le x-x_0\le1 $$</span> from which <span class="math-container">$$ y^{-p}(x)\ge y^{-p}(x_0)-p. \tag2$$</span> Choose <span class="math-container">$x_0$</span> close to <span class="math-container">$a$</span> such that <span class="math-container">$$ y^{-p}(x_0)&gt; p $$</span> and then from (2), one has <span class="math-container">$$ 0&lt; y^p(x)\le \frac{y^p(x_0)}{1-py^p(x_0)}. \tag3$$</span> Letting <span class="math-container">$x_0\to a^+$</span> in (3) gives <span class="math-container">$$ 0&lt;y^p(x)\le0, x\in(a,c)$$</span> which is absurd for <span class="math-container">$x\in(a,c)$</span>.</p>
648,066
<p>Let $\gamma(z_0,R)$ denote the circular contour $z_0+Re^{it}$ for $0\leq t \leq 2\pi$. Evaluate $$\int_{\gamma(0,1)}\frac{\sin(z)}{z^4}dz.$$</p> <p>I know that the integrand expands as \begin{equation} \frac{\sin(z)}{z^4} = \frac{1}{z^4}\left(z-\frac{z^3}{3!}+\frac{z^5}{5!}-\cdots\right) = \frac{1}{z^3}-\frac{1}{6z}+\cdots \end{equation} but I'm not sure whether I should calculate the residues and poles or use Cauchy's formula.</p> <p>Using Cauchy's formula would give $$ \frac{2\pi i}{1!} \frac{d}{dz}\sin(z),$$ which evaluated at $0$ gives $2\pi i$? I'm not sure though; any help will be greatly appreciated.</p>
mkl314
123,304
<p>The function $f(z)=\frac{\sin{z}}{z^4}$ has a third-order pole $z=0$ inside $\gamma$. By Cauchy's residue theorem, the integral $$ \int_{\gamma}f(z)\,dz=2{\pi}i\cdot\underset{z=0}{\rm res}\, f(z),$$ where the residue at the third-order pole $z=0$ can be calculated using the formula $$\underset{z=0}{\rm res}\, f(z)=\frac{1}{(3-1)!}\cdot\lim_{z\to 0}[z^3\cdot f(z)]^{(3-1)}=\frac{1}{2}\cdot\biggl(\frac{\sin{z}}{z}\biggr)''\biggr|_{z=0}=-\frac{1}{6}. $$ Alternatively, though in fact by definition, the residue at $z=0$ can be found as the $c_{-1}$ coefficient of the term $z^{-1}$ of the Laurent series in the ring $0&lt;|z|&lt;\infty$ for $f(z)$ having two isolated singularities on the complex plane: a pole at $z=0$ and an essential singularity at $z=\infty$. More precisely, $$f(z)=\frac{1}{z^4}\cdot (z-\frac{z^3}{3!}+\frac{z^5}{5!}-\dots)=\frac{1}{z^3}-\frac{1}{6z}+\frac{z}{5!}-\dots\,,\quad 0&lt;|z|&lt;\infty, $$ whence readily follows $$\underset{z=0}{\rm res}\, f(z)\overset{def}{=}c_{-1}=-\frac{1}{6}\,.$$ Hence, by Cauchy's residue theorem, the integral $$ \int_{\gamma}f(z)\,dz=-\frac{2{\pi}i}{6}=-\frac{{\pi}i}{3}\,.$$</p>
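A quick numerical sanity check (my addition, not part of the original answer): parametrizing the contour as $z=e^{it}$ turns the contour integral into an ordinary integral over $[0,2\pi]$, which a Riemann sum approximates very accurately for a smooth periodic integrand. A sketch in Python:

```python
import cmath
import math

def f(z):
    return cmath.sin(z) / z**4

# Parametrize the unit circle: z = e^{it}, dz = i e^{it} dt, t in [0, 2*pi].
# The Riemann sum converges very fast for a periodic analytic integrand.
N = 4096
dt = 2 * math.pi / N
integral = 0j
for k in range(N):
    z = cmath.exp(1j * dt * k)
    integral += f(z) * 1j * z * dt

print(integral)  # close to -pi*i/3, i.e. about -1.0472j
```

The numerical value agrees with the residue computation $-\frac{\pi i}{3}$ above.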
2,254,025
<p><strong>Please verify my answer to the following differential equation:</strong> $$y''-xy'+y=0$$ Let $y = {\sum_{n=0}^\infty}C_nx^n$, then $y' = {\sum_{n=1}^\infty}nC_nx^{n-1}$ and $y''={\sum_{n=2}^\infty}n(n-1)C_nx^{n-2}$</p> <p>Substituting this into the equation we get</p> <p>$${\sum_{n=2}^\infty}n(n-1)C_nx^{n-2}-x{\sum_{n=1}^\infty}nC_nx^{n-1}+{\sum_{n=0}^\infty}C_nx^n = 0$$ $${\sum_{n=2}^\infty}n(n-1)C_nx^{n-2}-{\sum_{n=1}^\infty}nC_nx^n+{\sum_{n=0}^\infty}C_nx^n = 0$$</p> <p>Getting the $x^n$ term on all the terms $${\sum_{n=0}^\infty}(n+2)(n+1)C_{n+2}x^{n}-{\sum_{n=1}^\infty}nC_nx^n+{\sum_{n=0}^\infty}C_nx^n = 0$$</p> <p>Getting the $0$th term from the first and the third summations we get $$2C_2+C_0 + {\sum_{n=1}^\infty}(n+2)(n+1)C_{n+2}x^{n}-{\sum_{n=1}^\infty}nC_nx^n+{\sum_{n=1}^\infty}C_nx^n = 0$$</p> <p>Factoring $x^n$ we get $$2C_2+C_0 + {\sum_{n=1}^\infty}[(n+2)(n+1)C_{n+2}-nC_n+C_n]x^n= 0$$</p> <p><strong>i.</strong>$$2C_2+C_0 = 0 =&gt; C_2 = \frac{-C_0}{2}$$</p> <p><strong>ii.</strong>$$(n+2)(n+1)C_{n+2}-nC_n+C_n = 0$$</p> <p>Therefore solving <strong>ii.</strong> for $C_{n+2}$ $$C_{n+2}=\frac{(n-1)C_n}{(n+2)(n+1)}, n=0,1,2,3,...$$</p> <p>If $n = 0$,</p> <p>$$C_2 = \frac{-C_0}{2!}$$</p> <p>If $n=1$,</p> <p>$$C_3 = 0$$</p> <p>If $n=2$,</p> <p>$$C_4 = \frac{C_2}{3*4} = \frac{-C_0}{4!}$$</p> <p>If $n=3$,</p> <p>$$C_5 = \frac{2C_3}{4*5}=0$$</p> <p>If $n=4$, $$C_6 = \frac{3C_4}{5*6} = \frac{-C_0}{6!}$$</p> <p>Upon seeing the pattern we realize that if $n=2m$ then $$C_{2m} = \frac{-C_0}{(2m)!}$$</p> <p>And if $n=2m+1$ then $$C_{2m+1} = 0$$</p> <p>So the final answer would be $$y = {\sum_{n=0}^\infty}C_nx^n =&gt; {\sum_{m=0}^\infty}\frac{-C_0*x^{2m}}{(2m)!}$$</p>
TurlocTheRed
397,318
<p>You should be able to do a change of variable to transform this equation into Hermite's equation, then compare the resulting coefficients to Hermite's. </p> <p>Alternatively, you can determine the Rodrigues-operator formulation of solutions of the equation and verify the operators have the desired effect on your series. </p>
391,509
<p>We have $$\dfrac{1+2+3+...+ \space n}{n^2}$$</p> <p>What is the limit of this function as $n \rightarrow \infty$?</p> <p>My idea:</p> <p>$$\dfrac{1+2+3+...+ \space n}{n^2} = \dfrac{1}{n^2} + \dfrac{2}{n^2} + ... + \dfrac{n}{n^2} = 0$$</p> <p>Is this correct?</p>
amWhy
9,003
<p>Recall that $$1 + 2 + \cdots + n =\sum_{i = 1}^n i = \frac{n(n+1)}{2}$$</p> <p>So you want $$\lim_{n\to \infty}\dfrac{n(n+1)}{2n^2} = \lim_{n\to\infty}\frac{n^2+ n}{2n^2} = \lim_{n\to \infty} \left(\frac 12 + \frac 1{2n}\right) = \frac 12$$</p>
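A small numerical illustration (my addition, not part of the original answer): computing the quotient directly for growing $n$ shows it approaching $\frac12$, not $0$.

```python
def s(n):
    # (1 + 2 + ... + n) / n^2, summed directly
    return sum(range(1, n + 1)) / n**2

print(s(10), s(1000), s(100000))  # 0.55 0.5005 0.500005
```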
391,509
<p>We have $$\dfrac{1+2+3+...+ \space n}{n^2}$$</p> <p>What is the limit of this function as $n \rightarrow \infty$?</p> <p>My idea:</p> <p>$$\dfrac{1+2+3+...+ \space n}{n^2} = \dfrac{1}{n^2} + \dfrac{2}{n^2} + ... + \dfrac{n}{n^2} = 0$$</p> <p>Is this correct?</p>
egreg
62,967
<p>Your method is wrong. Consider the first values of the expression:</p> <ul> <li>for $n=1$ you get $1$;</li> <li>for $n=2$ you get $1/4+2/4=3/4$;</li> <li>for $n=3$ you get $1/9+2/9+3/9=6/9=2/3$.</li> </ul> <p>One could make the conjecture that these terms are always bigger than $1/2$, which can be proved by induction. Let $$ f(n)=\frac{1}{n^2}\sum_{i=1}^n i $$ We have $f(1)=1&gt;1/2$. Suppose the assertion holds for $f(n)$ and consider \begin{align} f(n+1)&amp;=\frac{1}{(n+1)^2}\sum_{i=1}^{n+1} i\\ &amp;=\biggl(\frac{1}{(n+1)^2}\sum_{i=1}^{n} i\biggr)+\frac{n+1}{(n+1)^2}\\ &amp;=\frac{n^2}{(n+1)^2}\biggl(\frac{1}{n^2}\sum_{i=1}^n i\biggr)+\frac{1}{n+1}\\ \text{(by induction hypothesis)}\qquad&amp;&gt;\frac{1}{2}\frac{n^2}{(n+1)^2}+\frac{1}{n+1}\\ &amp;=\frac{1}{2}\frac{n^2+2n+2}{(n+1)^2}\\ &amp;=\frac{1}{2}\biggl(\frac{n^2+2n+1}{(n+1)^2}+\frac{1}{(n+1)^2}\biggr)\\ &amp;=\frac{1}{2}\biggl(1+\frac{1}{(n+1)^2}\biggr)\\ &amp;&gt;\frac{1}{2} \end{align} Therefore your conclusion that the limit is $0$ cannot be true.</p>
3,249,391
<p>By Mathematica, we find <span class="math-container">$$\sum_{n=1}^\infty \frac{4^n}{n^3\binom{2n}{n}}=\pi^2\log(2)-\frac{7}{2}\zeta(3).$$</span></p> <blockquote> <p>How to find the closed form for general series: <span class="math-container">$$\sum_{n=1}^\infty \frac{4^n}{n^p\binom{2n}{n}}? \ \ (p\ge 3)$$</span></p> </blockquote>
Zacky
515,527
<p>We can make use of the following representation <span class="math-container">$$\sf 2\arcsin^2z=\sum\limits_{n\geq1}\frac {(2z)^{2n}}{n^2\binom {2n}n}, \ z\in[-1,1]$$</span> Integrating once with respect to <span class="math-container">$\sf z$</span> from <span class="math-container">$\sf 0$</span> to <span class="math-container">$\sf x$</span> gives: <span class="math-container">$$\sf 4\int_0^x \frac{\arcsin^2 z}{z}dz =\sum_{n=1}^\infty \frac{(2x)^{2n}}{n^3 \binom{2n}{n}}$$</span> So the sum can be written as <span class="math-container">$$\sf S_3=\sum_{n=1}^\infty \frac{4^n}{n^3\binom{2n}{n}}=4\int_0^1 \frac{\arcsin^2 t}{t}dt$$</span> Now let <span class="math-container">$\sf t=\sin x$</span> and integrate by parts in order to get: <span class="math-container">$$\sf S_3=4\int_0^\frac{\pi}{2} x^2\cot xdx=-8\int_0^\frac{\pi}{2} x\ln(\sin x)dx$$</span> Also we can use the Fourier series of the log-sine: <span class="math-container">$$\sf S_3=8\ln 2 \int_0^\frac{\pi}{2} xdx+8\sum_{n=1}^\infty \frac{1}{n}\int_0^\frac{\pi}{2}x\cos(2nx)dx$$</span> The second integral is easily done by integrating by parts, thus: <span class="math-container">$$\sf S_3=\pi^2 \ln 2+2\sum_{n=1}^\infty \frac{(-1)^n-1}{n^3} = \boxed{\pi^2\ln 2 -\frac72\zeta(3)}$$</span></p> <hr> <p>For higher <span class="math-container">$p$</span> things will get quite complicated, but the approach is the same. 
For case <span class="math-container">$p=4$</span> we have: <span class="math-container">$$\sf \frac{4}{x}\int_0^x \frac{\arcsin^2 z}{z}dz =\sum_{n=1}^\infty \frac{4^{n}x^{2n-1}}{n^3 \binom{2n}{n}}$$</span> And integrating once again produces <span class="math-container">$$\sf 8\int_0^t\frac{1}{x}\int_0^x \frac{\arcsin^2 z}{z}dzdx =\sum_{n=1}^\infty \frac{4^{n}t^{2n}}{n^4 \binom{2n}{n}}$$</span> <span class="math-container">$$\sf \Rightarrow S_4=\sum_{n=1}^\infty \frac{4^{n}}{n^4 \binom{2n}{n}}=8\int_0^1\frac{1}{x}\int_0^x \frac{\arcsin^2 z}{z}dzdx$$</span> <span class="math-container">$$\sf =8\int_0^1\int_z^1 \frac{1}{x}\frac{\arcsin^2 z}{z}dx dz=-8\int_0^1 \frac{\arcsin^2 z \ln z}{z}dz$$</span> Set <span class="math-container">$z=\sin x$</span> and integrate by parts to get <span class="math-container">$$\sf S_4=-8\int_0^\frac{\pi}{2} x^2 \ln(\sin x)\cot x dx=8\int_0^\frac{\pi}{2} x\ln^2(\sin x)dx$$</span> <span class="math-container">$$=\boxed{8\operatorname{Li}_2\left(\frac12\right)+\frac13\ln^42 +4\zeta(2)\ln^2 2-\frac{19}{4}\zeta(4)}$$</span> See <a href="https://artofproblemsolving.com/community/c7h1874534p12729513" rel="nofollow noreferrer">here</a> for the above integral.</p> <hr> <p>Or for <span class="math-container">$p=5$</span> we have by the same approach: <span class="math-container">$$\sf 8\int_0^y\frac{1}{t}\int_0^t\frac{1}{x}\int_0^x \frac{\arcsin^2 z}{z}dzdxdt =\sum_{n=1}^\infty \frac{4^{n}y^{2n}}{n^5 \binom{2n}{n}}$$</span> <span class="math-container">$$\sf \sum_{n=1}^\infty \frac{4^{n}}{n^5 \binom{2n}{n}}=8\int_0^1 \int_z^1\int_z^1 \frac{\arcsin^2 z}{xtz}dxdtdz=8\int_0^1 \frac{\arcsin^2 z\ln^2 z}{z}dz$$</span> <span class="math-container">$$\sf \overset{z=\sin x}=8\int_0^\frac{\pi}{2}x^2\ln^2(\sin x)\cot x dx \overset{IBP}=-\frac{16}3\int_0^\frac{\pi}{2} x\ln^3(\sin x)dx$$</span> Furthermore <a href="https://arxiv.org/pdf/1705.04723.pdf" rel="nofollow noreferrer">this</a> paper may be useful.</p>
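As a numerical sanity check on the $p=3$ closed form (my addition; the term ratio $\frac{4^n}{\binom{2n}{n}} = \prod_{k=1}^{n}\frac{2k}{2k-1}$ used below is an elementary identity derived separately, not taken from the answer above):

```python
import math

# Check  sum_{n>=1} 4^n / (n^3 * binom(2n, n))  ==  pi^2 ln 2 - (7/2) zeta(3).
# r_n = 4^n / binom(2n, n) satisfies r_n = r_{n-1} * 2n/(2n-1),
# which avoids huge intermediate integers.
zeta3 = 1.2020569031595943  # Apery's constant, zeta(3)
closed_form = math.pi**2 * math.log(2) - 3.5 * zeta3

partial_sum, r = 0.0, 1.0
for n in range(1, 200_000):
    r *= 2 * n / (2 * n - 1)
    partial_sum += r / n**3

print(partial_sum, closed_form)  # both approximately 2.6339
```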
3,249,391
<p>By Mathematica, we find <span class="math-container">$$\sum_{n=1}^\infty \frac{4^n}{n^3\binom{2n}{n}}=\pi^2\log(2)-\frac{7}{2}\zeta(3).$$</span></p> <blockquote> <p>How to find the closed form for general series: <span class="math-container">$$\sum_{n=1}^\infty \frac{4^n}{n^p\binom{2n}{n}}? \ \ (p\ge 3)$$</span></p> </blockquote>
Dr. Wolfgang Hintze
198,592
<p>Setting <span class="math-container">$$n^{-p} = \frac{1}{\Gamma(p)} \int_{0}^{\infty} t^{p-1} e^{- n t}\, dt$$</span> </p> <p>I find for the sum</p> <p><span class="math-container">$$s(p) = \zeta(p) + \frac{1}{\Gamma(p)} \int_0^\infty t^{p-1}\frac{e^{- t/2} }{( 1-e^{-t} )^{\frac{3}{2}}} \arcsin(e^{-t/2})\, dt$$</span> </p>
2,821,308
<p>Consider the complex function below: $$H(\omega) = \dfrac{1}{i\omega} \left(e^{i\omega} - e^{-i\omega}\right)$$</p> <p>If I replace $i$ by $-i$ in $H(\omega)$, I get back the same $H(\omega)$.<br> It is easy to see that $H(\omega) = \dfrac{2}{\omega}\sin(\omega)$. </p> <p>Long back I heard somebody claim this, but I couldn't pursue it further... Now in another topic (signals and systems), this exact property is being used again. </p> <p>I feel this has something to do with a flip transform. Like, f(-x) flips the graph of f(x) around the y axis. Since the functions see the opposite x values, the graphs of them flip around the y axis. If f(-x) = f(x), then the function is symmetrical around the y axis and we call it an even function. Hmm... I couldn't connect this to the complex domain. Help appreciated.</p> <p>EDIT : Here $\omega$ is a real number (angular frequency)</p>
Rodrigo de Azevedo
339,790
<p>Using <strong>polar</strong> coordinates,</p> <p>$$\begin{aligned} x_1 &amp;= \cos (\theta)\\ x_2 &amp;= \sin (\theta)\end{aligned}$$</p> <p>we obtain the <strong>unconstrained</strong> $1$-dimensional maximization problem in $\theta$</p> <p>$$\text{maximize} \quad 1 + 2 \sin (2\theta)$$</p> <p>Differentiating the objective and finding where the derivative vanishes, we obtain $\cos(2\theta) = 0$, whose solution set is</p> <p>$$\left\{ \frac{\pi}{4} + k \frac{\pi}{2} : k \in \{0,1,2,3\} \right\}$$</p> <p>Thus, in terms of $x_1$ and $x_2$, the solution set is</p> <p>$$\left\{ \pm \left(\frac{\sqrt 2}{2},\frac{\sqrt 2}{2}\right), \pm \left(\frac{\sqrt 2}{2},-\frac{\sqrt 2}{2}\right)\right\}$$</p> <p>which is the same you obtained via other means.</p>
2,821,308
<p>Consider below complex function $$H(\omega) = \dfrac{1}{i\omega} \left(e^{i\omega} - e^{-i\omega}\right)$$</p> <p>If I replace $i$ by $-i$ in $H(\omega)$, I get back the same $H(\omega)$.<br> Easy to see that $H(\omega) = \dfrac{2}{\omega}\sin(\omega t)$. </p> <p>Long back I heard somebody claim this, but I couldn't pursue further... Now in another topic(signals and systems), this exact property is being used again. </p> <p>I feel this has something to do with flip transform. Like, f(-x) flips the graph of f(x) around y axis. Since the functions see the opposite x values, so the graphs of them flip around y axis. If f(-x) = f(x), then the function is symmetrical around y axis and we call it an even function. Hmm... I couldn't connect this to the complex domain. Help appreciated..</p> <p>EDIT : Here $\omega$ is a real number (angular frequency)</p>
Michael Rozenberg
190,319
<p>$(x_1-x_2)^2\geq0$ gives $$x_1x_2\leq\frac{x_1^2+x_2^2}{2}=\frac{1}{2}.$$ The equality occurs for $x_1=x_2$.</p> <p>Thus, $$-x_1^2-4x_1x_2-x_2^2=-1-4x_1x_2\geq-1-4\cdot\frac{1}{2}=-3.$$ The equality occurs for $$(x_1,x_2)\in\left\{\left(\frac{1}{\sqrt2},\frac{1}{\sqrt2}\right),\left(-\frac{1}{\sqrt2},-\frac{1}{\sqrt2}\right)\right\}$$ only.</p>
4,621,227
<p>This is a very soft and potentially naive question, but I've always wondered about this seemingly common phenomenon where a theorem has some method of proof which makes the statement easy to prove, but where other methods of proof are incredibly difficult.</p> <p>For example, proving that every vector space has a basis (this may be a bad example). This is almost always done via an existence proof with Zorn's lemma applied to the poset of linearly independent subsets ordered on set inclusion. However, if one were to suppose there exists a vector space <span class="math-container">$V$</span> with no basis, it seems (to me) that coming up with a contradiction given so few assumptions would be incredibly challenging.</p> <p>With that said, I had a few questions:</p> <ol> <li>Are there any other examples of theorems like this?</li> <li>Is this phenomenon simply due to the logical structure of the statements themselves, or is it something deeper? Is this something one can quantize in some way? That is, is there any formal way to study the structure of a statement, and determine which method of proof is ideal, and which is not ideal?</li> <li>With (1) in mind, are there ever any efforts to come up with proofs of the same theorem using multiple methods for the sake of interest?</li> </ol>
Dietrich Burde
83,966
<ol> <li><p>Yes, there are many other examples of this. More or less every famous result (well, you wanted a &quot;simple proof&quot; for it) will be &quot;incredibly challenging&quot;, or even impossible with a different proof. I think of Fermat's last theorem, the Poincare conjecture, or the weak Goldbach conjecture, just to name a few. Of course, the word &quot;simple proof&quot; depends on the context. Perhaps one day the proof for FLT is considered to be &quot;simple&quot; in comparison to other proofs.</p> </li> <li><p>No, I don't think this is apparent from the statement alone. Take Fermat's last theorem. How could one &quot;quantise&quot; this beforehand, that a solution without elliptic curves and without modular forms will be (probably) extremely challenging, and much more difficult than the proof we have?</p> </li> <li><p>Yes, there are famous theorems where people have tried to find as many proofs as possible. Three examples are the Pythagorean theorem, the quadratic reciprocity law, and the fundamental theorem of algebra. The wikipedia article <a href="https://en.wikipedia.org/wiki/Proofs_of_quadratic_reciprocity" rel="nofollow noreferrer">here</a> mentions that &quot;Several hundred proofs of the law of quadratic reciprocity have been published.&quot;</p> </li> </ol>
4,311,731
<p>I was thinking about solutions of the equation <span class="math-container">$x^2=i$</span>. The first thought coming to my mind was <span class="math-container">$x=\pm \sqrt{i}$</span> (I know it's wrong). Then I thought, if we solve this equation like a real problem, then <span class="math-container">$|x|=\sqrt i$</span> (again wrong).</p> <p>But it got me thinking that we define |z| as the distance of the complex number z from the origin. But what if this distance is not real? So let's consider complex numbers z with <span class="math-container">$|z|=i$</span>.</p> <blockquote> <p>Does this question make sense? Do such numbers exist in the complex x-y plane? Do we need to include other dimensions for such numbers?</p> </blockquote> <p>Sorry if this question seems silly, and thanks for the help!</p>
Rebellos
335,894
<p>As squid correctly pointed out in the comments, the distance is by definition <strong>real</strong>. We induce and construct norms over spaces that help us measure distance for this exact reason, regardless of their elements.</p> <p>As such, the expression <span class="math-container">$|z| = i$</span> doesn't stand.</p> <p>The standard norm over <span class="math-container">$\mathbb C$</span> that helps us measure distance is: <span class="math-container">$$z = a + ib: \quad a,b \in \mathbb R$$</span></p> <p><span class="math-container">$$|\cdot|_\mathbb C : \mathbb C \to \mathbb R \quad \text{such that} \quad |z| = \sqrt{a^2 + b^2}.$$</span></p> <p>Finally, as you correctly stated, the expression <span class="math-container">$|z|$</span> means the distance of the complex number <span class="math-container">$z \in \mathbb C$</span> from the complex origin.</p>
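To illustrate in code (my addition): programming languages make the same choice — the modulus of a complex number is always a nonnegative real, so nothing can satisfy $|z| = i$.

```python
z = 3 + 4j
modulus = abs(z)  # |3 + 4i| = sqrt(3^2 + 4^2) = 5, a real number
print(modulus, type(modulus).__name__)  # 5.0 float
```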
237,960
<p>How could I solve this problem?</p> <blockquote> <p>Find the first digit of $2^{4242}$ without using a calculator.</p> </blockquote> <p>I know how to find the last digit with modular arithmetic, but I can't use that here.</p>
Old John
32,441
<p>This is probably not the answer you are looking for, and will probably only be appreciated by people of my age ...</p> <p>I can still remember from school days that $\log_{10} 2 = 0.30102999$ (I always thought it was noteworthy that it is so close to $0.30103$) - people who went to school in the 1950's can probably recall using logs to base 10 for lots of tedious calculations.</p> <p>You can then do the multiplication by 4242 without a calculator, and get the fractional part ($=x$, say), but you are likely to need a calculator to find out the first digit of $10^x$, unless you have also memorised $\log 2, \log3, \dots, \log 9$ (I can't!)</p> <p>Edit:</p> <p>With a bit more digging in the recesses of my memory, I can just recall that $\log 3$ is something like $0.477$, so $\log 9 = 2 \log 3 = 0.954$, so that should do it ...</p>
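To make the method above concrete (a sketch of my own, not from the original answer): take the fractional part of $4242\log_{10} 2$, exponentiate, and read off the leading digit; Python's exact big integers provide an independent cross-check.

```python
import math

# 2**4242 = 10**(m + x) with m an integer and x in [0, 1),
# so its leading digit is floor(10**x).
x = (4242 * math.log10(2)) % 1
first_digit = int(10 ** x)
print(first_digit)  # 9

# Exact cross-check using arbitrary-precision integers.
assert first_digit == int(str(2 ** 4242)[0])
```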
1,626,840
<blockquote> <p>If I've played the lottery a certain number $N$ of times (and I didn't look into the results of each game), what must $N$ be to give me a $50\%$ chance of having already won at least one of my games? (Imagine I'll look at all the results at once, after achieving this chance.) The chance of winning this particular lottery is $1$ in $1$ million.</p> </blockquote> <p>I was discussing this with my colleague and I particularly think that there's no exact number $N$ that gives a $50\%$ chance of already having won. I think that when you've played zero games the chance is $0\%$ and if you play an infinite number of games the chance is $100\%$ that you have won, but there's no ascending curve.</p>
Jimmy R.
128,037
<p>The number $X$ of successes in $N$ trials is binomially distributed with parameters $N$ and $p=\frac{1}{1000000}$. So, by complementarity the probability of at least one win, i.e. $P(X\ge 1)$, is equal to $1$ minus the probability of $0$ wins, i.e. $P(X=0)$. By the formula of the binomial distribution we get $$P(X\ge 1)=1-P(X=0)=1-\dbinom{N}{0}p^0(1-p)^N=1-(1-p)^N$$ Therefore, if you want at least $50\%$ chance to have won the lottery, then solve $$0.5\le P(X\ge 1)=1-(1-p)^N \iff (1-p)^N\le 0.5 \iff N \ln(1-p)\le \ln 0.5 \iff N \ge \frac{\ln 0.5}{\ln(1-p)}$$ For $p=\frac{1}{1000000}$ this gives $$N\ge 693146.834 $$ and since $N$ is an integer it must be at least equal to $693147$ trials.</p>
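A short computational check of the final step (my addition, not part of the original answer):

```python
import math

p = 1 / 1_000_000
# Smallest integer N with 1 - (1 - p)**N >= 0.5:
N = math.ceil(math.log(0.5) / math.log(1 - p))
print(N, 1 - (1 - p) ** N)  # 693147, and a win probability just above 0.5
```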
3,783,186
<p>I am trying to prove that <span class="math-container">$$2≤\int_{-1}^1 \sqrt{1+x^6} \,dx ≤ 2\sqrt{2} $$</span> I learned that the equation <span class="math-container">$${d\over dx}\int_{g(x)}^{h(x)} f(t)\,dt = f(h(x))h'(x) - f(g(x))g'(x) $$</span> is true due to the Fundamental Theorem of Calculus and the Chain Rule, and I was thinking about taking the derivative of all sides of the inequality, but I am not sure that it is the correct way to prove this. Can I ask for help to prove the inequality correctly? Any help would be appreciated! Thanks!</p>
Mojbn
764,562
<p><span class="math-container">$\int_{-1}^1 \sqrt{1+x^6}\,dx=2\int_{0}^1 \sqrt{1+x^6}\, dx$</span> because <span class="math-container">$\sqrt{1+x^6}$</span> is an even function, so we must show: <span class="math-container">$$2≤2\int_{0}^1 \sqrt{1+x^6} dx ≤ 2\sqrt{2}$$</span> or, equivalently: <span class="math-container">$$1≤\int_{0}^1 \sqrt{1+x^6} dx ≤ \sqrt{2}$$</span></p> <p>Since <span class="math-container">$1≤\sqrt{1+x^6}$</span>, we have <span class="math-container">$\int_{0}^11\,dx\leq\int_{0}^1 \sqrt{1+x^6}\, dx$</span>, that is, <span class="math-container">$1≤\int_{0}^1 \sqrt{1+x^6}\, dx$</span>.</p> <p>On the other hand, <span class="math-container">$1+x^6\leq2$</span> if <span class="math-container">$0\leq x \leq1$</span>, and then <span class="math-container">$\sqrt{1+x^6}\leq \sqrt2$</span> if <span class="math-container">$0\leq x \leq1$</span>; therefore: <span class="math-container">$$\int_{0}^1\sqrt{1+x^6}\,dx\leq \int_{0}^1\sqrt2\,dx=\sqrt2$$</span></p>
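A numerical cross-check (my addition, using a plain midpoint rule rather than anything from the proof):

```python
import math

# Midpoint-rule approximation of the integral of sqrt(1 + x^6) over [-1, 1].
n = 100_000
h = 2 / n
approx = sum(math.sqrt(1 + (-1 + (k + 0.5) * h) ** 6) * h for k in range(n))
print(approx)  # about 2.13, safely between 2 and 2*sqrt(2) ~ 2.828
```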
3,048,381
<p>We found the orthonormal basis for the eigenspaces.</p> <p>We got <span class="math-container">$C$</span> to be the matrix</p> <pre><code>[ 1/sqrt(2)   1/sqrt(6)   1/sqrt(3)
 -1/sqrt(2)   1/sqrt(6)   1/sqrt(3)
      0      -2/sqrt(6)   1/sqrt(3) ]
</code></pre> <p>And the original matrix <span class="math-container">$A$</span> is</p> <pre><code>[4 2 2
 2 4 2
 2 2 4]
</code></pre> <p>After finding <span class="math-container">$C$</span>, my notes jump to: therefore <span class="math-container">$C^{-1} A C =$</span></p> <pre><code>[2 0 0
 0 2 0
 0 0 8]
</code></pre> <p>They do not show any steps on how to calculate the inverse of <span class="math-container">$C$</span>. Is there an easy way of calculating it? How would I start off reducing it to RREF? How would I get rid of the square roots? (Normally, I'm used to just dealing with regular integers.)</p> <p>Thanks in advance!</p>
Digitalis
515,828
<p>Notice that <span class="math-container">$ A = \begin{bmatrix}4 &amp; 2 &amp; 2 \\ 2 &amp; 4 &amp; 2 \\ 2 &amp; 2 &amp; 4 \end{bmatrix}$</span> has eigenvalues <span class="math-container">$2$</span> and <span class="math-container">$8$</span>. Since there exists a basis of eigenvectors <span class="math-container">$v_1,v_2,v_3$</span> define the change of basis matrix <span class="math-container">$C = \begin{bmatrix} \vec{v_1} &amp; \vec{v}_2 &amp;\vec{v}_3\\\end{bmatrix}$</span>. Then <span class="math-container">$$ C^{-1} A C$$</span> is the expression of <span class="math-container">$A$</span> in the basis <span class="math-container">$ \{v_1, v_2,v_3\}$</span>. Since this is a basis of eigenvectors <span class="math-container">$C^{-1} AC$</span> is a diagonal matrix with the eigenvalues on the diagonal</p> <p><span class="math-container">$$ \begin{bmatrix} \lambda_1 &amp; 0&amp; 0 \\ 0 &amp; \lambda_2 &amp; 0 \\ 0 &amp; 0 &amp; \lambda_3 \end{bmatrix}$$</span> </p> <p>Where <span class="math-container">$\lambda_i$</span> is the eigenvalue corresponding to the eigenvector <span class="math-container">$v_i$</span>. It is not necessary to explicitly calculate the inverse of <span class="math-container">$C$</span> and multiply the three matrices together. If you still wish to do it check out nicomezi's answer. </p>
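One shortcut worth spelling out (my addition, consistent with the answer above): because the columns of $C$ are orthonormal, $C$ is an orthogonal matrix, so $C^{-1}=C^T$ and no row reduction or square-root arithmetic is needed. A sketch:

```python
import math

s2, s3, s6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)
C = [[ 1 / s2,  1 / s6, 1 / s3],
     [-1 / s2,  1 / s6, 1 / s3],
     [ 0.0,    -2 / s6, 1 / s3]]
A = [[4, 2, 2],
     [2, 4, 2],
     [2, 2, 4]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(X):
    return [list(row) for row in zip(*X)]

# C has orthonormal columns, so C^{-1} = C^T (its transpose).
D = matmul(matmul(transpose(C), A), C)
print(D)  # approximately diag(2, 2, 8), up to floating-point round-off
```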
3,176,593
<p>I'm having a hard time understanding how to find all solutions of the form <span class="math-container">$a_n = a^{(h)}_n+a_n^{(p)}$</span></p> <p>I show that <span class="math-container">$a_n=n2^n \to a_n=2(n-1)2^{n-1} +2^n=2^n(n-1+1)=n2^n$</span>.</p> <p>I can show that <span class="math-container">$a_n^{(h)}$</span> characteristic equation <span class="math-container">$r-2=0 \to a_n^{(h)}=\alpha2^n$</span></p> <p>But I'm stuck on <span class="math-container">$a_n^{(p)}$</span> characteristic equation <span class="math-container">$C2^n=2C\cdot2^{n-1}+2^n$</span></p> <p>Simplifies to <span class="math-container">$C \neq C+1$</span>, Looking online I saw that the solution is <span class="math-container">$a_n=c\cdot2^n+n2^n$</span>, but I'm not sure how to get there. </p>
lhf
589
<p>Here is another take.</p> <p>Let <span class="math-container">$b_n=2^n$</span>. Then <span class="math-container">$$ a_n=2a_{n-1}+2b_{n-1}, \quad b_n=2b_{n-1}, \quad b_0=1 $$</span> and so <span class="math-container">$$ \pmatrix{ a_n \\ b_n } = \pmatrix{ 2 &amp; 2 \\ 0 &amp; 2 } \pmatrix{ a_{n-1} \\ b_{n-1} } = 2 \pmatrix{ 1 &amp; 1 \\ 0 &amp; 1 } \pmatrix{ a_{n-1} \\ b_{n-1} }$$</span> which gives <span class="math-container">$$ \pmatrix{ a_n \\ b_n } = 2^n \pmatrix{ 1 &amp; 1 \\ 0 &amp; 1 }^n \pmatrix{ a_0 \\ b_0 } = 2^n \pmatrix{ 1 &amp; n \\ 0 &amp; 1 } \pmatrix{ a_0 \\ b_0 } $$</span> Therefore, <span class="math-container">$$ a_n = a_0 2^n + n 2^n $$</span></p>
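A small check of the closed form against the recurrence (my addition; $a_0 = 5$ is an arbitrary test value):

```python
def a_closed(n, a0):
    # Closed form derived above: a_n = a_0 * 2^n + n * 2^n
    return a0 * 2**n + n * 2**n

a0 = 5
a = a0
for n in range(1, 20):
    a = 2 * a + 2**n          # the recurrence a_n = 2 a_{n-1} + 2^n
    assert a == a_closed(n, a0)
print(a_closed(10, a0))  # 15 * 2**10 = 15360
```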
2,537,311
<p>Question is, from finding a basis of e-vectors, determine an invertible matrix <strong>P</strong> such that: <img src="https://latex.codecogs.com/gif.latex?F=P%5E%7B-1%7DLP" title="F=L^{-1}AL" /> is diagonal. (write down the matrix F)</p> <p><img src="https://latex.codecogs.com/gif.latex?L=%5Cbegin%7Bpmatrix%7D&amp;space;%5C&amp;space;%5C1&amp;&amp;space;-2&amp;&amp;space;-1%5C%5C&amp;space;%5C&amp;space;%5C2&amp;&amp;space;%5C&amp;space;%5C6&amp;&amp;space;%5C&amp;space;%5C2%5C%5C&amp;space;-1&amp;&amp;space;-2&amp;&amp;space;%5C&amp;space;%5C1&amp;space;%5Cend%7Bpmatrix%7D" title="L=\begin{pmatrix} \ \1&amp; -2&amp; -1\\ \ \2&amp; \ \6&amp; \ \2\\ -1&amp; -2&amp; \ \1 \end{pmatrix}" /></p> <p>For my e-values using: </p> <p><img src="https://latex.codecogs.com/gif.latex?det(L-%5Clambda&space;I)" title="det(L-\lambda I)" /></p> <p>I obtained <img src="https://latex.codecogs.com/gif.latex?%5Clambda&space;=2%5C&space;and%5C&space;%5Clambda&space;=&space;4" title="\lambda =2\ and\ \lambda = 4" /></p> <p>Now for my e-vectors I'm slightly confused since I dont know what to do after plugging in <img src="https://latex.codecogs.com/gif.latex?%5Clambda&space;=2" title="\lambda =2" /> I got:</p> <p><img src="https://latex.codecogs.com/gif.latex?x&plus;2y&plus;z=0" title="x+2y+z=0" /></p> <p>what do I do from here?</p>
user284331
284,331
<p>Mean value Theorem: $|f(x)-f(0)|=|f'(\xi_{x})||x|\leq|x|$, then...</p>
1,063,154
<p>I ran into this question and I can't find the right way to approach it.</p> <p>We have $n$ different wine bottles numbered $i=1...n$. The first is 1 year old, the second is 2 years old ... the $n$'th bottle is $n$ years old.</p> <p>Each bottle is still good with probability $1/i$.</p> <p>We pick out a random bottle and it is good. What is the expected value of the age of the bottle?</p> <p>I'm really not sure what the random variable here is and how to approach the question. I'd be grateful for a lead.</p> <p>Thanks,</p> <p>Yaron.</p>
Henry
6,460
<p>You are interested in the age of the bottle (the random variable), conditioned on it being good. </p> <p>The probability the bottle selected is of age $i$ given it is good is $\dfrac{\frac{1}{i}}{\sum_1^n \frac{1}{j}} = \dfrac{1}{iH(n)}$ where $H(n)$ is a harmonic number.</p> <p>So the expected age given it is good is $\displaystyle\sum_1^n i\frac{1}{iH(n)} = \dfrac{n}{H(n)}.$ </p>
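An exact-arithmetic confirmation (my addition), computing the conditional expectation directly with rationals:

```python
from fractions import Fraction

def expected_age(n):
    # P(age = i | good) = (1/i) / H(n); the expectation collapses to n / H(n).
    H = sum(Fraction(1, j) for j in range(1, n + 1))
    direct = sum(i * (Fraction(1, i) / H) for i in range(1, n + 1))
    assert direct == n / H
    return direct

print(expected_age(3))  # 18/11
```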
416,407
<blockquote> <p>What examples are there of habitual but unnecessary uses of the axiom of choice, in any area of mathematics except topology?</p> </blockquote> <p>I'm interested in standard proofs that use the axiom of choice, but where choice can be eliminated via some judicious and maybe not quite obvious rephrasing. I'm less interested in proofs that were originally proved using choice and where it took some significant new idea to remove the dependence on choice.</p> <p>I exclude topology because I already know lots of topological examples. For instance, Andrej Bauer's <a href="https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-01556-4/" rel="noreferrer">Five stages of accepting constructive mathematics</a> gives choicey and choice-free proofs of a standard result (Theorem 1.4): every open cover of a compact metric space has a Lebesgue number. Todd Trimble told me about some other topological examples, e.g. a compact subspace of a Hausdorff space is closed, or the product of two compact spaces is compact. There are more besides.</p> <p>One example per answer, please. And please sketch both the habitual proof using choice and the alternative proof that doesn't use choice.</p> <p>To show what I'm looking for, here's an example taken from that paper of Andrej Bauer. 
It would qualify as an answer except that it comes from topology.</p> <p><strong>Statement</strong> Every open cover <span class="math-container">$\mathcal{U}$</span> of a compact metric space <span class="math-container">$X$</span> has a Lebesgue number <span class="math-container">$\varepsilon$</span> (meaning that for all <span class="math-container">$x \in X$</span>, the ball <span class="math-container">$B(x, \varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>).</p> <p><strong>Habitual proof using choice</strong> For each <span class="math-container">$x \in X$</span>, choose some <span class="math-container">$\varepsilon_x &gt; 0$</span> such that <span class="math-container">$B(x, 2\varepsilon_x)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. Then <span class="math-container">$\{B(x, \varepsilon_x): x \in X\}$</span> is a cover of <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_{x_1}), \ldots, B(x_n, \varepsilon_{x_n})\}$</span>. Put <span class="math-container">$\varepsilon = \min_i \varepsilon_{x_i}$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p> <p><strong>Proof without choice</strong> Consider the set of balls <span class="math-container">$B(x, \varepsilon)$</span> such that <span class="math-container">$x \in X$</span>, <span class="math-container">$\varepsilon &gt; 0$</span> and <span class="math-container">$B(x, 2\varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. This set covers <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_1), \ldots, B(x_n, \varepsilon_n)\}$</span>. 
Put <span class="math-container">$\varepsilon = \min_i \varepsilon_i$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
Pace Nielsen
3,199
<p>Many uses of Zorn's lemma really only need transfinite recursion, without any use of AC. Sometimes you don't even need transfinite recursion, but just normal recursion, or even less.</p> <p>This is especially applicable in specific examples. For instance, you don't need AC to get an algebraic closure of <span class="math-container">$\mathbb{Q}$</span>.</p>
416,407
<blockquote> <p>What examples are there of habitual but unnecessary uses of the axiom of choice, in any area of mathematics except topology?</p> </blockquote> <p>I'm interested in standard proofs that use the axiom of choice, but where choice can be eliminated via some judicious and maybe not quite obvious rephrasing. I'm less interested in proofs that were originally proved using choice and where it took some significant new idea to remove the dependence on choice.</p> <p>I exclude topology because I already know lots of topological examples. For instance, Andrej Bauer's <a href="https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-01556-4/" rel="noreferrer">Five stages of accepting constructive mathematics</a> gives choicey and choice-free proofs of a standard result (Theorem 1.4): every open cover of a compact metric space has a Lebesgue number. Todd Trimble told me about some other topological examples, e.g. a compact subspace of a Hausdorff space is closed, or the product of two compact spaces is compact. There are more besides.</p> <p>One example per answer, please. And please sketch both the habitual proof using choice and the alternative proof that doesn't use choice.</p> <p>To show what I'm looking for, here's an example taken from that paper of Andrej Bauer. 
It would qualify as an answer except that it comes from topology.</p> <p><strong>Statement</strong> Every open cover <span class="math-container">$\mathcal{U}$</span> of a compact metric space <span class="math-container">$X$</span> has a Lebesgue number <span class="math-container">$\varepsilon$</span> (meaning that for all <span class="math-container">$x \in X$</span>, the ball <span class="math-container">$B(x, \varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>).</p> <p><strong>Habitual proof using choice</strong> For each <span class="math-container">$x \in X$</span>, choose some <span class="math-container">$\varepsilon_x &gt; 0$</span> such that <span class="math-container">$B(x, 2\varepsilon_x)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. Then <span class="math-container">$\{B(x, \varepsilon_x): x \in X\}$</span> is a cover of <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_{x_1}), \ldots, B(x_n, \varepsilon_{x_n})\}$</span>. Put <span class="math-container">$\varepsilon = \min_i \varepsilon_{x_i}$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p> <p><strong>Proof without choice</strong> Consider the set of balls <span class="math-container">$B(x, \varepsilon)$</span> such that <span class="math-container">$x \in X$</span>, <span class="math-container">$\varepsilon &gt; 0$</span> and <span class="math-container">$B(x, 2\varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. This set covers <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_1), \ldots, B(x_n, \varepsilon_n)\}$</span>. 
Put <span class="math-container">$\varepsilon = \min_i \varepsilon_i$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
Ege Erdil
131,052
<p>It's common to use the axiom of choice to prove that nonzero commutative rings have the invariant basis number property: in other words, that for a nonzero commutative ring <span class="math-container">$ R $</span>, the <span class="math-container">$ R $</span>-modules <span class="math-container">$ R^m $</span> and <span class="math-container">$ R^n $</span> are isomorphic if and only if <span class="math-container">$ m = n $</span>.</p> <p>The most common proof of this uses Zorn's lemma to find a maximal ideal <span class="math-container">$ \mathfrak m $</span> of <span class="math-container">$ R $</span>. We can then tensor any isomorphism <span class="math-container">$ R^m \to R^n $</span> with <span class="math-container">$ R/\mathfrak m $</span> to get an isomorphism of <span class="math-container">$ R/\mathfrak m $</span>-vector spaces <span class="math-container">$ (R/\mathfrak m)^m \to (R/\mathfrak m)^n $</span>, which implies <span class="math-container">$ m = n $</span> by linear algebra since <span class="math-container">$ R/\mathfrak m $</span> is a field.</p> <p>In fact, however, using Zorn's lemma is unnecessary. One way to see this is by looking at the exterior powers of the modules <span class="math-container">$ R^n $</span>. The exterior power <span class="math-container">$ {\bigwedge}^n R^n $</span> is nonzero because the determinant <span class="math-container">$ (R^n)^n \to R $</span> is a surjective map that factors through <span class="math-container">$ {\bigwedge}^n R^n $</span>, while <span class="math-container">$ {\bigwedge}^m R^n $</span> is obviously zero for <span class="math-container">$ m &gt; n $</span>. Therefore the rank of a free module over a nonzero commutative ring corresponds to its highest order exterior power that doesn't vanish, proving the difficult part of the claim that <span class="math-container">$ m \neq n $</span> implies <span class="math-container">$ R^m \ncong R^n $</span>.</p>
416,407
<blockquote> <p>What examples are there of habitual but unnecessary uses of the axiom of choice, in any area of mathematics except topology?</p> </blockquote> <p>I'm interested in standard proofs that use the axiom of choice, but where choice can be eliminated via some judicious and maybe not quite obvious rephrasing. I'm less interested in proofs that were originally proved using choice and where it took some significant new idea to remove the dependence on choice.</p> <p>I exclude topology because I already know lots of topological examples. For instance, Andrej Bauer's <a href="https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-01556-4/" rel="noreferrer">Five stages of accepting constructive mathematics</a> gives choicey and choice-free proofs of a standard result (Theorem 1.4): every open cover of a compact metric space has a Lebesgue number. Todd Trimble told me about some other topological examples, e.g. a compact subspace of a Hausdorff space is closed, or the product of two compact spaces is compact. There are more besides.</p> <p>One example per answer, please. And please sketch both the habitual proof using choice and the alternative proof that doesn't use choice.</p> <p>To show what I'm looking for, here's an example taken from that paper of Andrej Bauer. 
It would qualify as an answer except that it comes from topology.</p> <p><strong>Statement</strong> Every open cover <span class="math-container">$\mathcal{U}$</span> of a compact metric space <span class="math-container">$X$</span> has a Lebesgue number <span class="math-container">$\varepsilon$</span> (meaning that for all <span class="math-container">$x \in X$</span>, the ball <span class="math-container">$B(x, \varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>).</p> <p><strong>Habitual proof using choice</strong> For each <span class="math-container">$x \in X$</span>, choose some <span class="math-container">$\varepsilon_x &gt; 0$</span> such that <span class="math-container">$B(x, 2\varepsilon_x)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. Then <span class="math-container">$\{B(x, \varepsilon_x): x \in X\}$</span> is a cover of <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_{x_1}), \ldots, B(x_n, \varepsilon_{x_n})\}$</span>. Put <span class="math-container">$\varepsilon = \min_i \varepsilon_{x_i}$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p> <p><strong>Proof without choice</strong> Consider the set of balls <span class="math-container">$B(x, \varepsilon)$</span> such that <span class="math-container">$x \in X$</span>, <span class="math-container">$\varepsilon &gt; 0$</span> and <span class="math-container">$B(x, 2\varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. This set covers <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_1), \ldots, B(x_n, \varepsilon_n)\}$</span>. 
Put <span class="math-container">$\varepsilon = \min_i \varepsilon_i$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
daw
48,485
<p>The supremum of an arbitrary set of measurable functions from a <span class="math-container">$\sigma$</span>-finite measure space into <span class="math-container">$\mathbb R\cup \{\pm\infty\}$</span> exists in the following sense:</p> <p>Let <span class="math-container">$F$</span> be a set of such measurable functions. Then there is measurable <span class="math-container">$g$</span> such that <span class="math-container">$f\le g$</span> a.e. for all <span class="math-container">$f\in F$</span>. And if <span class="math-container">$h$</span> is such that <span class="math-container">$f\le h$</span> a.e. for all <span class="math-container">$f\in F$</span>, then <span class="math-container">$g\le h$</span>.</p> <p>The trick is that the inequalities are required in the a.e. sense. I have seen proofs that use Zorn's lemma (which is tempting), but there is a proof without it (see, e.g., Bogachev's monograph on measure theory, it uses monotone convergence). The result is also surprising because many properties in measure/integration theory have countability built-in.</p>
416,407
<blockquote> <p>What examples are there of habitual but unnecessary uses of the axiom of choice, in any area of mathematics except topology?</p> </blockquote> <p>I'm interested in standard proofs that use the axiom of choice, but where choice can be eliminated via some judicious and maybe not quite obvious rephrasing. I'm less interested in proofs that were originally proved using choice and where it took some significant new idea to remove the dependence on choice.</p> <p>I exclude topology because I already know lots of topological examples. For instance, Andrej Bauer's <a href="https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-01556-4/" rel="noreferrer">Five stages of accepting constructive mathematics</a> gives choicey and choice-free proofs of a standard result (Theorem 1.4): every open cover of a compact metric space has a Lebesgue number. Todd Trimble told me about some other topological examples, e.g. a compact subspace of a Hausdorff space is closed, or the product of two compact spaces is compact. There are more besides.</p> <p>One example per answer, please. And please sketch both the habitual proof using choice and the alternative proof that doesn't use choice.</p> <p>To show what I'm looking for, here's an example taken from that paper of Andrej Bauer. 
It would qualify as an answer except that it comes from topology.</p> <p><strong>Statement</strong> Every open cover <span class="math-container">$\mathcal{U}$</span> of a compact metric space <span class="math-container">$X$</span> has a Lebesgue number <span class="math-container">$\varepsilon$</span> (meaning that for all <span class="math-container">$x \in X$</span>, the ball <span class="math-container">$B(x, \varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>).</p> <p><strong>Habitual proof using choice</strong> For each <span class="math-container">$x \in X$</span>, choose some <span class="math-container">$\varepsilon_x &gt; 0$</span> such that <span class="math-container">$B(x, 2\varepsilon_x)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. Then <span class="math-container">$\{B(x, \varepsilon_x): x \in X\}$</span> is a cover of <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_{x_1}), \ldots, B(x_n, \varepsilon_{x_n})\}$</span>. Put <span class="math-container">$\varepsilon = \min_i \varepsilon_{x_i}$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p> <p><strong>Proof without choice</strong> Consider the set of balls <span class="math-container">$B(x, \varepsilon)$</span> such that <span class="math-container">$x \in X$</span>, <span class="math-container">$\varepsilon &gt; 0$</span> and <span class="math-container">$B(x, 2\varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. This set covers <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_1), \ldots, B(x_n, \varepsilon_n)\}$</span>. 
Put <span class="math-container">$\varepsilon = \min_i \varepsilon_i$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
Gro-Tsen
17,064
<p>It is a theorem of ZF that every sequentially continuous function <span class="math-container">$\mathbb{R}\to\mathbb{R}$</span> is continuous. The proof is usually given in ZFC (and indeed, Choice <em>is</em> necessary to assert that sequential continuity <em>at a point</em> implies continuity at that point), but a proof can be given in ZF that sequential continuity everywhere implies continuity everywhere: see Herrlich, <em>The Axiom of Choice</em> (2006), <a href="https://books.google.com/books?id=JXIiGGmq4ZAC&amp;pg=PA29" rel="noreferrer">theorem 3.15</a> and subsequent remarks on <a href="https://books.google.com/books?id=JXIiGGmq4ZAC&amp;pg=PA30" rel="noreferrer">page 30</a>.</p> <p>(The proof in ZF is bizarre and somewhat counterintuitive, and since it only works for continuity <em>everywhere</em>, it seems quite defensible to use Choice to prove this.)</p>
836,753
<p>$\{X_1, X_2, \ldots, X_{121}\}$ are independent and identically distributed random variables such that $E(X_i)= 3$ and $\mathrm{Var}(X_i)= 25$. What is the standard deviation of their average? In other words, what is the standard deviation of $\bar X= {X_1+ X_2+ \cdots + X_{121} \over 121}$?</p>
beep-boop
127,192
<p>Let $\bar X=\frac{1}{n}\sum\limits_{i=1}^{n}X_i$.</p> <p>We want $\rm{SD}(\bar X)$ (standard deviation of $\bar X$).</p> <p>Hints:</p> <ul> <li><p>$\rm{Var}[\bar X]=\frac{\sigma^2}{n}$</p></li> <li><p>$\rm{SD}(\bar X)=\sqrt{\rm{Var}[\bar X]}$</p></li> </ul> <p>where $n=$ number of random variables, $\sigma^2=$ population variance (25 in this case).</p>
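<p>Plugging in the numbers from the question ($n=121$, $\sigma^2=25$), a quick check (my own sketch):</p>

```python
import math

n = 121
var = 25.0
# Var(mean) = sigma^2 / n, and SD(mean) = sqrt(Var(mean))
sd_mean = math.sqrt(var / n)  # sqrt(25/121) = 5/11
print(sd_mean)  # 0.4545...
```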
3,921,259
<p>I need to calculate the operator norm of the linear operator defined as: <span class="math-container">$$ T:(C([0,1]),||\cdot||_\infty)\rightarrow\mathbb{R} \text{ where } Tf=\sum_{k=1}^n a_kf(t_k)$$</span> for <span class="math-container">$0\leq t_1&lt;t_2&lt;...&lt;t_n\leq 1$</span> and <span class="math-container">$a_1,...,a_n\in \mathbb{R}$</span>.</p> <p>I have been able to show that <span class="math-container">$||T||\geq\left|\sum_{k=1}^n a_k\right|$</span> and <span class="math-container">$||T||\leq \sum_{k=1}^n|a_k|$</span> but I don't seem to be able to bound it further than that. Would appreciate any help. Thank you.</p>
Tito Eliatron
84,972
<p>WLOG, one can assume that <span class="math-container">$a_k\ne0$</span> for all <span class="math-container">$k$</span> (if not, just drop all the zero-values, and in the case that all are zero, the problem is trivial).</p> <p>Take <span class="math-container">$f(x)$</span> a continuous function with these properties:</p> <ul> <li><span class="math-container">$f(t_k)=\frac{|a_k|}{a_k}$</span>.</li> <li><span class="math-container">$|f(x)|\le 1$</span> for all <span class="math-container">$x\in[0,1]$</span>.</li> </ul> <p>For example, you can take the polygonal through the points <span class="math-container">$\{(t_k,f(t_k)),\,1\le k\le n\}\cup\{(s_k,0),1\le k\le n-1\}$</span>, where <span class="math-container">$s_k=\frac{t_k+t_{k+1}}{2}$</span> (the mid point of the interval <span class="math-container">$[t_k,t_{k+1}]$</span>).</p> <p>Then it is clear that <span class="math-container">$\|f\|_\infty=1$</span> (observe that <span class="math-container">$|f(t_k)|=1$</span>) and <span class="math-container">$Tf=\sum_{k=1}^n |a_k|$</span>.</p> <p>This together with your calculations leads to <span class="math-container">$\|T\|=\sum_{k=1}^n|a_k|$</span>.</p>
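<p>A numeric sketch of this construction (my own illustration; the values of $t_k$ and $a_k$ below are made up): build the polygonal $f$ through the points $(t_k,\operatorname{sign}(a_k))$ with zeros at the midpoints $s_k$, then check that $\|f\|_\infty=1$ and $Tf=\sum_k|a_k|$.</p>

```python
# Made-up data with all a_k nonzero (matching the WLOG in the answer).
t = [0.1, 0.4, 0.7, 0.9]
a = [2.0, -3.0, 0.5, -1.0]

# Knots of the polygonal: value sign(a_k) at each t_k, value 0 at each midpoint s_k.
knots = []
for k in range(len(t)):
    knots.append((t[k], 1.0 if a[k] > 0 else -1.0))
    if k + 1 < len(t):
        knots.append(((t[k] + t[k + 1]) / 2.0, 0.0))

def f(x):
    # constant extension outside [t_1, t_n], linear interpolation between knots
    if x <= knots[0][0]:
        return knots[0][1]
    if x >= knots[-1][0]:
        return knots[-1][1]
    for (x0, y0), (x1, y1) in zip(knots, knots[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

Tf = sum(ak * f(tk) for ak, tk in zip(a, t))
sup = max(abs(f(i / 1000.0)) for i in range(1001))  # grid estimate of ||f||_inf
print(Tf, sup)  # 6.5 1.0, and sum(|a_k|) = 6.5
```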
2,306,122
<p>Show $X=\{n \in \mathbb{N}: n \text{ is odd and } n = k(k+1) \text{ for some } k \in \mathbb{N}\}=\emptyset$.</p> <p>My proof is as follows; please point out any mistakes I have made.</p> <p><strong>Proof:</strong></p> <p>We have $\emptyset \subseteq X$. Suppose $X\neq\emptyset$ and pick $n \in X$. Then $n$ is odd and $n=k(k+1)$ for some $k \in \mathbb{N}$, and there are two cases.</p> <p>Case 1: $k$ is odd. Then $k+1$ is even, so $n=k(k+1)$ is even.</p> <p>Case 2: $k$ is even. Then $k+1$ is odd, and $n=k(k+1)$ is still even.</p> <p>In both cases $n$ is even, contradicting the fact that $n$ is odd. Therefore no such $n$ exists, so $X= \emptyset$.</p> <p>Q.E.D</p>
JMP
210,189
<p>You are taking a very long-winded approach.</p> <p>Your $X$ can be expressed as:</p> <p>$$X=\{n\in\mathbb N|\text{n is odd} \land \text{n is even}\}$$</p> <p>and we need to prove $X=\emptyset$.</p>
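<p>A brute-force sanity check (my own, and of course not a proof): every number of the form $k(k+1)$ is a product of consecutive integers and hence even, so no odd values show up.</p>

```python
# Collect k(k+1) for a range of k and look for odd members.
products = {k * (k + 1) for k in range(1, 1000)}
odd_members = sorted(n for n in products if n % 2 == 1)
print(odd_members)  # []
```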
3,841,535
<p>I'm having a lot of trouble solving this question via the differentiating with respect to a parameter method. I can get the correct result for the integral containing sine, but I'm totally lost when it comes to evaluating the integral containing cosine. Here's the problem statement:</p> <p>Given:</p> <p><span class="math-container">$$\int_{0}^{\infty} e^{-ax} \sin(kx) \ dx = \frac{k}{a^2+k^2}$$</span></p> <p>evaluate <span class="math-container">$\int_{0}^{\infty} xe^{-ax}\sin(kx) \ dx$</span> and <span class="math-container">$\int_{0}^{\infty} xe^{-ax} \cos(kx) \ dx$</span>.</p> <p>This is the last question in the 2nd chapter of 'Basic Training in Mathematics' by Shankar. Any help would be appreciated, I've been tearing my hair out all day with this.</p>
Felix Marin
85,343
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span> Let <span class="math-container">$\ds{\on{u}\pars{a,k} \equiv \int_{0}^{\infty}\expo{-ax}\sin\pars{kx}\,\dd x}$</span>, which satisfies <span class="math-container">$\ds{\on{u}_{aa}\pars{a,k} + \on{u}_{kk}\pars{a,k} = 0}$</span>.</p> <p>It has the general solution <span class="math-container">$$\on{u}\pars{a,k} = \on{f}\pars{a + k\,\ic} + \on{g}\pars{a - k\,\ic} $$</span></p> <p>Then, <span class="math-container">\begin{align} 0 &amp; = \on{u}\pars{a,0} = \on{f}\pars{a} + \on{g}\pars{a} \implies\on{g}\pars{a} = -\on{f}\pars{a} \end{align}</span> The general solution is reduced to <span class="math-container">\begin{equation} \on{u}\pars{a,k} = \on{f}\pars{a + k\,\ic} - \on{f}\pars{a - k\ic} \label{1}\tag{1} \end{equation}</span> Moreover, <span class="math-container">$\ds{{1 \over a^{2}} = \on{u}_{k}\pars{a,0} = \on{f}'\pars{a}\ic - \bracks{\on{f}'\pars{a}\pars{-\ic}} = 2\on{f}'\pars{a}\ic}$</span> <span class="math-container">$\ds{\implies \on{f}\pars{a} = {\ic \over 2a} + \underline{\mbox{a constant}}}$</span></p> <p>and ( see (\ref{1}) ) <span class="math-container">$$ \on{u}\pars{a,k} = {\ic \over 2\pars{a + k\,\ic}} - {\ic \over 2\pars{a - k\,\ic}} = \bbx{k \over a^{2} + k^{2}} \\ $$</span></p>
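<p>For the record, the two integrals the question actually asks for follow by differentiating the given identity under the integral sign: $\partial_a$ gives $\int_0^\infty xe^{-ax}\sin(kx)\,dx = \frac{2ak}{(a^2+k^2)^2}$ and $\partial_k$ gives $\int_0^\infty xe^{-ax}\cos(kx)\,dx = \frac{a^2-k^2}{(a^2+k^2)^2}$. A quick numeric sanity check of both formulas (my own sketch, not part of the answer), using composite Simpson's rule on a truncated interval:</p>

```python
import math

def simpson(f, lo, hi, n=20000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

a, k = 1.3, 0.7  # arbitrary test values with a > 0
# The e^{-ax} decay makes the tail beyond x = 60 negligible here.
sin_int = simpson(lambda x: x * math.exp(-a * x) * math.sin(k * x), 0.0, 60.0)
cos_int = simpson(lambda x: x * math.exp(-a * x) * math.cos(k * x), 0.0, 60.0)
print(sin_int, 2 * a * k / (a * a + k * k) ** 2)        # both approx 0.3830
print(cos_int, (a * a - k * k) / (a * a + k * k) ** 2)  # both approx 0.2525
```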
674,621
<p>I am trying to figure out what the three possibilities of $z$ are such that </p> <p>$$ z^3=i $$</p> <p>but I am stuck on how to proceed. I tried algebraically but ran into rather tedious polynomials. Could you solve this geometrically? Any help would be greatly appreciated.</p>
EpicMochi
95,992
<p>Using Euler's formula, which states $$ e^{i \theta} = \cos \theta + i \sin \theta $$ we will see that $$ i = 0 + i \cdot 1 = \cos \left( \frac{\pi}{2} + 2n \pi \right) + i \sin \left( \frac{\pi}{2} + 2n \pi \right) = e^{i \left(\frac{ \pi}{2} + 2n \pi \right)} $$ for all integers $n$. Thus, if $z^3 = i$, then $$z = \exp\left[ i \left(\frac{\pi}{6}+\frac{2n\pi}{3}\right)\right]$$ for all integers $n$.</p>
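<p>A quick numeric check of the three distinct roots (my own addition), taking $n=0,1,2$:</p>

```python
import cmath

# z_n = exp(i(pi/6 + 2n*pi/3)) for n = 0, 1, 2; note n = 2 gives z = -i
roots = [cmath.exp(1j * (cmath.pi / 6 + 2 * n * cmath.pi / 3)) for n in range(3)]
for z in roots:
    print(z, abs(z ** 3 - 1j))  # each z**3 equals i up to rounding error
```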
530,605
<blockquote> <p>Let <span class="math-container">$A$</span> be open in <span class="math-container">$\mathbb{R}^m$</span>; let <span class="math-container">$g:A\rightarrow\mathbb{R}^n$</span>. If <span class="math-container">$S\subseteq A$</span>, we say that <span class="math-container">$g$</span> satisfies the <strong>Lipschitz condition</strong> on <span class="math-container">$S$</span> if the function <span class="math-container">$\lambda(x,y)=|g(x)-g(y)|/|x-y|$</span> is bounded for <span class="math-container">$x\neq y\in S$</span>. We say that <span class="math-container">$g$</span> is <strong>locally Lipschitz</strong> if each point of <span class="math-container">$A$</span> has a neighborhood on which <span class="math-container">$g$</span> satisfies the Lipschitz condition.</p> <p>Show that if <span class="math-container">$g$</span> is locally Lipschitz, then <span class="math-container">$g$</span> is continuous. Does the converse hold?</p> </blockquote> <p>For the first part, suppose <span class="math-container">$g$</span> is locally Lipschitz. So for each point <span class="math-container">$r\in A$</span>, there exists a neighborhood for which <span class="math-container">$|g(x)-g(y)|/|x-y|$</span> is bounded. Suppose <span class="math-container">$|g(x)-g(y)|/|x-y|&lt;M$</span> in that neighborhood. Then <span class="math-container">$|g(x)-g(r)|&lt;M|x-r|$</span> in that neighborhood of <span class="math-container">$r$</span>. Therefore <span class="math-container">$g(x)\rightarrow g(r)$</span> as <span class="math-container">$x\rightarrow r$</span>, and so <span class="math-container">$g$</span> is continuous at <span class="math-container">$r$</span>. This means <span class="math-container">$g$</span> is continuous everywhere in <span class="math-container">$A$</span>.</p> <p>What about the converse? I don't think it holds, but can't come up with a counterexample.</p>
copper.hat
27,978
<p>Or $x \mapsto \sqrt{|x|}$: ${}{}{}{}$</p> <p><img src="https://i.stack.imgur.com/5u0xa.png" alt="enter image description here"></p>
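<p>A numeric illustration (my own, not part of the answer) of why $g(x)=\sqrt{|x|}$ works: it is continuous everywhere, but the difference quotient $\lambda(x,0)=|g(x)-g(0)|/|x|=1/\sqrt{|x|}$ is unbounded on every neighborhood of $0$, so no Lipschitz bound exists there.</p>

```python
import math

g = lambda x: math.sqrt(abs(x))  # continuous on all of R, including at 0

# The ratio lambda(x, 0) = 1/sqrt(|x|) grows without bound as x -> 0.
for k in range(1, 6):
    x = 10.0 ** (-2 * k)
    print(x, abs(g(x) - g(0.0)) / abs(x))  # ratio = 10**k
```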
1,443,471
<p>I have this conjecture: Let <em>a</em> and <em>b</em> be integers and <em>n</em> and <em>m</em> natural numbers. </p> <p>$$ a \equiv b \bmod n \Rightarrow a^m \equiv b^m \bmod n$$</p> <p>I think I got the induction proof, but I'm having difficulty with how to prove this using the well-ordering principle.</p>
Kulisty
170,765
<p>Notice that</p> <p>$a^m-b^m=(a-b)\sum\limits_{k=0}^{m-1}a^{m-k-1}b^{k}=(a-b)(a^{m-1}+a^{m-2}b+\ldots+ab^{m-2}+b^{m-1})$</p> <p>So </p> <p>$a-b \mid a^m-b^m$</p> <p>In particular, if $a \equiv b \bmod n$ then $n \mid a-b$, hence $n \mid a^m-b^m$, i.e. $a^m \equiv b^m \bmod n$.</p>
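<p>A quick spot-check of the factorization and the resulting congruence (my own addition; the numbers are arbitrary):</p>

```python
# Check a^m - b^m = (a - b) * sum_{k=0}^{m-1} a^(m-k-1) b^k on a few integer triples.
def cofactor(a, b, m):
    return sum(a ** (m - k - 1) * b ** k for k in range(m))

for a, b, m in [(7, 3, 5), (10, -4, 6), (2, 9, 3)]:
    assert a ** m - b ** m == (a - b) * cofactor(a, b, m)

# The congruence it yields: 17 = 5 (mod 12) forces 17^4 = 5^4 (mod 12).
print((17 ** 4 - 5 ** 4) % 12)  # 0
```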
729,441
<p>This question comes up after going over the Arzela-Ascoli theorems.</p> <p>Let $F$ be an equicontinuous set of continuous functions from $\mathbb R$ to $\mathbb R$. How do I show that if $\sup\{|f(0)|:f \in F\} &lt; \infty$, then $F$ is pointwise bounded?</p> <p>I know I need to show that if $\sup\{|f(0)|:f \in F\} = M_0 &lt; \infty$, then for each $x \in \mathbb R$, $\sup\{|f(x)|:f \in F\} = M_x &lt; \infty$.</p> <p>What I have gotten so far is that I should fix $\epsilon$ and use the fact that the domain is $\mathbb R$.</p> <p>I think I just need some help using equicontinuity and the supremum together.</p>
Brian Fitzpatrick
56,960
<p>If you don't know what an eigenvalue is, and if you're not worried about elegance, then here is a more direct approach (assuming you're working over a field not of characteristic two).</p> <p>There exist scalars $\lambda_{ij}$ for $1\leq i,j\leq n$ such that for every basis $\{v_1,\dotsc,v_n\}$ of $V$ we have $$ \begin{array}{ccccccc} T(v_1) &amp; = &amp; \lambda_{11}v_1 &amp; + &amp; \dotsb &amp; + &amp; \lambda_{n1}v_n \\ \vdots &amp; \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \vdots &amp; \vdots \\ T(v_n) &amp; = &amp; \lambda_{1n}v_1 &amp; + &amp; \dotsb &amp; + &amp; \lambda_{nn}v_n \end{array}\tag{1} $$ Now, fix $v\in V$ and note that there exists a basis $\{v_1,\dotsc,v_i,\dotsc,v_n\}$ of $V$ such that $v_i=v$. Equation $(1)$ then implies $$ T(v)=\lambda_{1i}v_1+\dotsb+\lambda_{ii}v_i+\dotsb+\lambda_{ni}v_n\tag{2} $$ Next, since $$\{-v_1,\dotsc,-v_{i-1},v_i,-v_{i+1},\dotsc,-v_n\}$$ is also a basis for $V$, equation $(1)$ also implies $$ T(v)=-\lambda_{1i}v_1-\dotsb-\lambda_{i-1,i}\cdot v_{i-1}+\lambda_{ii}v_i-\lambda_{i+1,i}\cdot v_{i+1}-\dotsb-\lambda_{ni}v_n\tag{3} $$ Subtracting equation $(3)$ from equation $(2)$ gives $$ \mathbf{0}=2\lambda_{1i}v_1+\dotsb+2\lambda_{i-1,i}\cdot v_{i-1}+2\lambda_{i+1,i}\cdot v_{i+1}+\dotsb+2\lambda_{ni}v_{n}\tag{4} $$ Since $\{v_1,\dotsc,v_n\}$ are linearly independent, $(4)$ implies $$ \lambda_{1i}=\dotsb=\lambda_{i-1,i}=\lambda_{i+1,i}=\dotsb=\lambda_{ni}=0\tag{5} $$ Since our choice of $i$ was arbitrary, equation $(5)$ implies $$ \lambda_{kl}=0\tag{6} $$ whenever $k\neq l$. Moreover, equations $(2)$ and $(6)$ imply that $T(v)=\lambda_{kk}v=\lambda_{ll}v$ for all $k$ and $l$ so that $$ \lambda_{kk}=\lambda_{ll} $$ for all $k$ and $l$. That is, there exists a scalar $\lambda$ such that $$ \lambda_{kl}= \begin{cases} 0 &amp; k\neq l \\ \lambda &amp; k=l \end{cases}\tag{7} $$</p> <p>Finally, we wish to show that there exists a scalar $\lambda$ such that $T(v)=\lambda v$ for every $v\in V$. 
To do so, let $v\in V$ and note that $(2)$ and $(7)$ imply $T(v)=\lambda_{ii} v=\lambda v$.</p>
615,396
<p>What is considered a good format for writing problem sets in mathematics? Are there any good examples of problem sets that are well-written and formatted that you can show me?</p>
nomen
62,468
<p>There are a few common styles. Personally, I like ink on printer paper, formatted either in two columns or one, depending on how wide the wide formulas get. I cross out "mistakes" instead of erasing.</p> <p>Some people like using graph (grid) paper. Most people probably use pencils, but I find the ease of using a pen worthwhile.</p> <p>The basic format for a problem is the following:</p> <ol> <li>The problem number</li> <li>A statement of the problem. You should summarize it, but only once you've had enough practice that you won't make any mistakes.</li> <li>(optional) A long dash or turnstile symbol (in this context, it would be a long dash with a pip on the bottom, like $\vdash$ turned $-\frac{\pi}{2}$ radians.</li> <li>The solution/proof</li> <li>If they ask for a numerical or formulaic answer, put it in a box.</li> </ol>
767,474
<blockquote> <p>If $a,b,c\in\mathbb R^+$ prove that: $$\frac{a^2+1}{b+c}+\frac{b^2+1}{a+c}+\frac{c^2+1}{a+b}\ge 3$$</p> </blockquote>
Calvin Lin
54,563
<p>Apply AM-GM to the numerator of each fraction: $a^2+1\ge 2a$, so $\frac{a^2+1}{b+c}\ge\frac{2a}{b+c}$, and similarly for the other two terms. </p> <p>What remains is the statement of Nesbitt's inequality, $\frac{a}{b+c}+\frac{b}{a+c}+\frac{c}{a+b}\ge\frac{3}{2}$. </p>
767,474
<blockquote> <p>If $a,b,c\in\mathbb R^+$ prove that: $$\frac{a^2+1}{b+c}+\frac{b^2+1}{a+c}+\frac{c^2+1}{a+b}\ge 3$$</p> </blockquote>
Michael Rozenberg
190,319
<p>By C-S and AM-GM we obtain $$\sum_{cyc}\frac{a^2+1}{b+c}=\sum_{cyc}\frac{a^2}{b+c}+\sum_{cyc}\frac{1}{b+c}\geq$$ $$\geq\frac{(a+b+c)^2}{2(a+b+c)}+\frac{9}{2(a+b+c)}=\frac{1}{2}\left(a+b+c+\frac{9}{a+b+c}\right)\geq$$ $$\geq\frac{1}{2}\cdot2\sqrt{\frac{9(a+b+c)}{a+b+c}}=3.$$ Done!</p>
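<p>A quick numeric sanity check of the inequality (my own addition, not part of the proof), sampling random positive triples and confirming the value never drops below $3$, with equality at $a=b=c=1$:</p>

```python
import random

def lhs(a, b, c):
    return (a * a + 1) / (b + c) + (b * b + 1) / (a + c) + (c * c + 1) / (a + b)

random.seed(0)
worst = min(lhs(random.uniform(0.01, 10), random.uniform(0.01, 10), random.uniform(0.01, 10))
            for _ in range(100000))
print(worst >= 3 - 1e-9, lhs(1, 1, 1))  # True 3.0 (equality at a = b = c = 1)
```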
238,809
<p><a href="https://i.stack.imgur.com/W5ILn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W5ILn.jpg" alt="enter image description here" /></a> How can I construct a tree like <a href="https://i.stack.imgur.com/XKYl9.png" rel="nofollow noreferrer">this</a>? I looked at <code>CompleteKaryTree</code> initially; there are some similarities overall, but it's still different.</p> <pre><code>CompleteKaryTree[5, 2, GraphLayout -&gt; &quot;LayeredEmbedding&quot;, AspectRatio -&gt; 1/4] </code></pre> <p><a href="https://i.stack.imgur.com/xrW63.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xrW63.png" alt="enter image description here" /></a></p> <p>Alternatively, I've generated the coordinates of all the points, but I don't know how to connect them:</p> <pre><code>n=4; pts=Join @@ Table[{1/2 (1+(n-j)!)+(i-1) (n-j)!,n-j-1},{j,0,n},{i,FactorialPower[n,j]}]; Graphics[{Point@pts}, ImageSize-&gt;Large] </code></pre> <p><a href="https://i.stack.imgur.com/L3hGw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L3hGw.png" alt="enter image description here" /></a></p>
kglr
125
<p>You can use <a href="https://reference.wolfram.com/language/ref/ExpressionGraph.html" rel="nofollow noreferrer"><code>ExpressionGraph</code></a> to draw the tree:</p> <pre><code>expr = ConstantArray[x, Reverse @ Range[4]]; ExpressionGraph[expr, GraphLayout -&gt; &quot;LayeredEmbedding&quot;, ImageSize -&gt; Large] </code></pre> <p><a href="https://i.stack.imgur.com/0BtSI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0BtSI.png" alt="enter image description here" /></a></p> <pre><code>expr2 = ConstantArray[x, Reverse @ Range[5]]; ExpressionGraph[expr2, GraphLayout -&gt; &quot;LayeredEmbedding&quot;, ImageSize -&gt; 700, VertexSize -&gt; Medium, AspectRatio -&gt; 1/2] </code></pre> <p><a href="https://i.stack.imgur.com/hQdAM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hQdAM.png" alt="enter image description here" /></a></p> <p>Define a function that constructs a permutation tree with edge labels:</p> <pre><code>ClearAll[rule, permutationTree] rule = # /. 
x : {___Integer} /; Length[x] &gt; 1 :&gt; (Reverse /@ Subsets[Reverse@x, {Length[x] - 1}]) &amp;; permutationTree[n_, opts : OptionsPattern[Graph]] := Module[{eg = ExpressionGraph[ConstantArray[x, Reverse@Range[n]], opts, GraphLayout -&gt; &quot;LayeredEmbedding&quot;, ImageSize -&gt; 700, VertexSize -&gt; Medium, AspectRatio -&gt; 1/2], edgelabels}, edgelabels = Thread[First @ Last @ Reap@ BreadthFirstScan[eg, 1, {&quot;FrontierEdge&quot; -&gt; Sow}] -&gt; Flatten@NestList[rule, Range[n], n - 1]] ; SetProperty[eg, EdgeLabels -&gt; edgelabels]] </code></pre> <p><em><strong>Examples:</strong></em></p> <pre><code>permutationTree[3] </code></pre> <p><a href="https://i.stack.imgur.com/3QfNc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3QfNc.png" alt="enter image description here" /></a></p> <pre><code>permutationTree[4] </code></pre> <p><a href="https://i.stack.imgur.com/WznkD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WznkD.png" alt="enter image description here" /></a></p> <pre><code>permutationTree[4, GraphLayout -&gt; &quot;RadialEmbedding&quot;, AspectRatio -&gt; 1, EdgeLabelStyle -&gt; Large] </code></pre> <p><a href="https://i.stack.imgur.com/I9y2G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I9y2G.png" alt="enter image description here" /></a></p> <pre><code>permutationTree[5, ImageSize -&gt; 900] </code></pre> <p><a href="https://i.stack.imgur.com/pebdp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pebdp.png" alt="enter image description here" /></a></p> <p>Alternatively, you can use <code>TreeForm</code>:</p> <pre><code>TreeForm[expr, ImageSize -&gt; Large, VertexLabeling -&gt; False] </code></pre> <p><a href="https://i.stack.imgur.com/81tlO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/81tlO.png" alt="enter image description here" /></a></p> <p><em><strong>Note:</strong></em> For versions older than v12.0, replace <code>ExpressionGraph</code> with 
<code>GraphComputation`ExpressionGraph</code>. (See also <a href="https://mathematica.stackexchange.com/a/187074/125">this answer</a>.)</p>
238,809
<p><a href="https://i.stack.imgur.com/W5ILn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W5ILn.jpg" alt="enter image description here" /></a> How to construct a tree like <a href="https://i.stack.imgur.com/XKYl9.png" rel="nofollow noreferrer">this</a>? I looked at <code>CompleteKaryTree</code> initially; there are some similarities overall, but it's still different.</p> <pre><code>CompleteKaryTree[5, 2, GraphLayout -&gt; &quot;LayeredEmbedding&quot;, AspectRatio -&gt; 1/4] </code></pre> <p><a href="https://i.stack.imgur.com/xrW63.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xrW63.png" alt="enter image description here" /></a></p> <p>Alternatively, I've generated the coordinates of all the points, but I don't know how to connect them:</p> <pre><code>n=4; pts=Join @@ Table[{1/2 (1+(n-j)!)+(i-1) (n-j)!,n-j-1},{j,0,n},{i,FactorialPower[n,j]}]; Graphics[{Point@pts}, ImageSize-&gt;Large] </code></pre> <p><a href="https://i.stack.imgur.com/L3hGw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L3hGw.png" alt="enter image description here" /></a></p>
kglr
125
<p>Using a slight modification of the code in <a href="https://demonstrations.wolfram.com/PermutationTree/" rel="nofollow noreferrer">Wolfram Demonstrations &gt;&gt; Permutation Tree</a> (linked by George Varnavides in comments) and adding edge labels:</p> <pre><code>ClearAll[permTree] permTree[n_, opts : OptionsPattern[Graph]] := Module[{el = Union @@ Map[Rule @@@ Partition[FoldList[Append, {}, #], 2, 1] &amp;, Permutations @ Range @ n]}, Graph[el, opts, DirectedEdges -&gt; False, GraphLayout -&gt; &quot;LayeredEmbedding&quot;, EdgeLabels -&gt; {e_ :&gt; e[[2, -1]]}]] </code></pre> <p><em><strong>Examples:</strong></em></p> <pre><code>permTree[3, ImageSize -&gt; Large, VertexLabels -&gt; {v_ /; Length[v] == 3 :&gt; Placed[Column @ v, Below]}] </code></pre> <p><a href="https://i.stack.imgur.com/2q4IA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2q4IA.png" alt="enter image description here" /></a></p> <pre><code>permTree[4, ImageSize -&gt; 800, VertexLabels -&gt; {v_ /; Length[v] == 4 :&gt; Placed[Column @ v, Below]}] </code></pre> <p><a href="https://i.stack.imgur.com/8Bb0X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Bb0X.png" alt="enter image description here" /></a></p> <pre><code>permTree[4, ImageSize -&gt; Large, GraphLayout -&gt; &quot;RadialEmbedding&quot;] </code></pre> <p><a href="https://i.stack.imgur.com/lQ7y9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lQ7y9.png" alt="enter image description here" /></a></p>
878,816
<p>Solve inequality: $-5 &lt; \frac{1}{x} &lt; 0$ </p> <p>I thought about how I can solve this. If I multiply all sides by $x$ I'm afraid I'm removing the answer, because $\frac{x}{x}=1$. And when $x$ 'leaves' the inequality I'm left with no letter. </p> <p>How do I get just $x$ in the middle without adding $x$ to other sides or removing $x$?<br> I then saw that: $\frac{1}{x} = -x$. So can I multiply all sides with $-1$? This also changes the signs. So I'm left with: $5&gt; x &gt; 0$. </p> <p>Is this correct? If not, what did I do wrong?</p>
amWhy
9,003
<p>Note that given $-5\lt \frac 1x \lt 0$, we know that $$\frac 1x &lt; 0 \implies x&lt; 0.$$ So when you multiply by $x$ to remove it from the denominator, you need to reverse the directions of the inequalities:</p> <p>$$-5 \lt \frac 1x \iff -5x \gt 1\iff x \lt -\frac 15 $$ and $$\frac 1x &lt;0 \iff x\lt 0.$$ The second inequality we already know, so the first is the stricter of the two; both hold exactly when $x \lt -\dfrac{1}{5}$.</p>
878,816
<p>Solve inequality: $-5 &lt; \frac{1}{x} &lt; 0$ </p> <p>I thought about how I can solve this. If I multiply all sides by $x$ I'm afraid I'm removing the answer, because $\frac{x}{x}=1$. And when $x$ 'leaves' the inequality I'm left with no letter. </p> <p>How do I get just $x$ in the middle without adding $x$ to other sides or removing $x$?<br> I then saw that: $\frac{1}{x} = -x$. So can I multiply all sides with $-1$? This also changes the signs. So I'm left with: $5&gt; x &gt; 0$. </p> <p>Is this correct? If not, what did I do wrong?</p>
Ruslan
64,206
<p>You don't have to necessarily put $x$ to the center directly. One way to solve such double inequality is to split it to a system of two inequalities:</p> <p>$$\begin{cases}-5&lt;\frac1x\\ \frac1x&lt;0\end{cases}$$</p> <p>From the second one you get $x&lt;0$, and from the first one you can obtain $x&lt;-\frac15$ (not forgetting that $x&lt;0$).</p> <p>Now the answer is intersection of the two intervals: $x\in(-\infty,0)\cap(-\infty,-\frac15)=(-\infty,-\frac15)$.</p>
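As a quick cross-check of the interval found above (a small Python sketch, not part of the original answer), one can test the original double inequality on a grid of exact rationals:

```python
from fractions import Fraction

def satisfies(x):
    """Does x satisfy -5 < 1/x < 0?"""
    return x != 0 and Fraction(-5) < 1 / x < 0

# Exact rational grid from -10 to 10 in steps of 1/100, skipping 0.
grid = [Fraction(k, 100) for k in range(-1000, 1001) if k != 0]
solution = [x for x in grid if satisfies(x)]

# Every sampled solution lies strictly below -1/5, and the boundary -1/5 fails.
assert all(x < Fraction(-1, 5) for x in solution)
assert not satisfies(Fraction(-1, 5))
print(float(max(solution)))  # -0.21, the grid point just below -1/5
```

Using `Fraction` avoids floating-point trouble at the boundary point $x=-\frac15$, where $1/x=-5$ exactly.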
2,619,344
<p>I was just struck with a doubt today.</p> <blockquote> <p>Why do most of the standard inequalities require the variables to be positive?</p> </blockquote> <p>For example, if we want to find the minimum value of a certain expression, say <span class="math-container">$a+b+c$</span>, the very first thought that comes to mind is the AM-GM inequality, but the question must satisfy the condition <span class="math-container">$\mathbf {a, b, c\ge 0}$</span>.</p> <p>So I want to ask why that is so.</p> <blockquote> <p>Even some very useful inequalities like the Muirhead inequality, Hölder's inequality, Minkowski's inequality, etc. need the condition that the variables used must be non-negative or positive.</p> <p>Meanwhile, there are also some inequalities, like Chebyshev's inequality, the Rearrangement inequality, and the Cauchy-Schwarz inequality, which do not restrict the variables or terms to be positive.</p> </blockquote> <p>I want to know why such a condition is needed to make the inequalities true. Is there a mathematical reason for it? Does this have anything to do with geometry? (I saw proofs of the AM-GM inequality using geometry, and as the variables used were the lengths of some segments, they were confined to be non-negative.)</p> <p>If someone has an idea please share.</p>
Guy Fsone
385,707
<p>Note that $f$ is bounded, say $|f|\le M$, since it is continuous on $[0,1]$. Since $$\frac{1}{n}\int_{0}^{1}e^{-nx}dx = \frac{1}{n^2}(1-e^{-n})\le \frac{1}{n^2},$$ we have $$\sum_{n=1}^{\infty}\left|\frac{1}{n}\int_{0}^{1}f(x)e^{-nx}dx\right| \le M\sum_{n=1}^{\infty}\frac{1}{n}\int_{0}^{1}e^{-nx}dx \le M\sum_{n=1}^{\infty}\frac{1}{n^2}&lt;\infty.$$</p>
269,062
<p>Consider the following integral $$ pv\int_0^{\infty}e^{N(-2Ax+A\log x)}\frac{e^{-B\log x}}{1-2x}dx $$ where $A,B&gt;0$ and we take the Cauchy principal value at $x=1/2$. I am interested in obtaining the asymptotics when $N$ is very big. The first thing I thought of was some variant of Laplace's method but I am unsure if I can proceed here, because of the singularity at $x=1/2$. So, my question is, is some version of the Laplace's method applicable here to obtain the big $N$ asymptotics? and if so, how should I proceed?</p>
Igor Rivin
11,142
<p>Mathematica tells us that the principal value can be evaluated in closed form thus:</p> <p>$$e^{-A} 2^{-A N+B-1} \left((-1)^{A N-B+1} \Gamma (-B+A N+1) \Gamma (B-A N,-A)-i \pi \right).$$</p> <p>(OK, the $i \pi$ is a little suspicious, but the appearance of incomplete gammas is clearly correct, once you replace terms like $\exp(C \log x)$ by $x^C.$).</p>
1,524,109
<p>Could anyone help me with this proof without using determinant? I tried two ways. </p> <blockquote> <p>Let $A$ be a matrix. If $A$ has the property that each row sums to zero, then there does not exist any matrix $X$ such that $AX=I$, where $I$ denotes the identity matrix. </p> </blockquote> <p>I then get stuck. The other way was to prove by contradiction, and I failed too. </p>
Ilmari Karonen
9,602
<p>Assuming $A$ is an $n \times n$ matrix, let $v_0 = (0,0,\dots,0)$ and $v_1 = (1,1,\dots,1)$ be $n$-element column vectors. Since each row of $A$ sums to zero, it follows that $$A v_1 = (0,0,\dots,0) = A v_0,$$ showing that $A$ cannot have a (left) inverse.</p>
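To see this concretely, here is a small Python sketch (the example matrix is my own, not from the answer) applying a row-sum-zero matrix to the all-ones vector:

```python
# A 3x3 matrix in which every row sums to zero.
A = [[ 1,  2, -3],
     [ 4, -4,  0],
     [-1, -1,  2]]

def matvec(M, v):
    return [sum(row[i] * v[i] for i in range(len(v))) for row in M]

v1 = [1, 1, 1]   # the all-ones vector
v0 = [0, 0, 0]   # the zero vector

# Both vectors map to zero, so A is not injective and cannot have an inverse.
assert matvec(A, v1) == matvec(A, v0) == [0, 0, 0]
print(matvec(A, v1))  # [0, 0, 0]
```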
1,260,722
<blockquote> <p>Prove that <span class="math-container">$f=x^4-4x^2+16\in\mathbb{Q}[x]$</span> is irreducible.</p> </blockquote> <p>I am trying to prove it with Eisenstein's criterion but without success: for <strong>p=2</strong>, it divides <strong>-4</strong> and the constant coefficient 16, don't divide the leading coeficient 1, but its square 4 divides the constant coefficient 16, so doesn't work. Therefore I tried to find <span class="math-container">$f(x\pm c)$</span> which is irreducible:</p> <blockquote> <p><span class="math-container">$f(x+1)=x^4+4x^3+2x^2-4x+13$</span>, but 13 has the divisors: <strong>1 and 13</strong>, so don't exist a prime number <strong>p</strong> such that to apply the first condition: <span class="math-container">$p|a_i, i\ne n$</span>; the same problem for <span class="math-container">$f(x-1)=x^4+...+13$</span></p> <p>For <span class="math-container">$f(x+2)=x^4+8x^3+20x^2+16x+16$</span> is the same problem from where we go, if we set <strong>p=2</strong>, that means <span class="math-container">$2|8, 2|20, 2|16$</span>, not divide the leading coefficient 1, but its square 4 divide the constant coefficient 16; again, doesn't work.. is same problem for <strong>x-2</strong></p> </blockquote> <p>Now I'll verify for <span class="math-container">$f(x\pm3)$</span>, but I think it will be fall... I think if I verify all constant <span class="math-container">$f(x\pm c)$</span> it doesn't work with this method... so have any idea how we can prove that <span class="math-container">$f$</span> is irreducible?</p>
Barry Cipra
86,747
<p>As Jyrki Lahtonen observed in his answer, $p(x)=x^4-4x^2+16=(x^2-2)^2+12$ has no real roots, so the only possible factorization is of the form</p> <p>$$(x^2+ax+b)(x^2+cx+d)$$</p> <p>which expands to</p> <p>$$x^4+(a+c)x^3+(ac+b+d)x^2+(ad+bc)x+bd$$</p> <p>We conclude that $c=-a$ (from the $x^3$ term), hence* $d=b$ (from the $x$ term), which means $2b-a^2=-4$ and $b^2=16$. Plugging the two possibilities for $b$ into $a^2=2b+4$ gives $a^2=12$ or $-4$, neither of which corresponds to an integer value for $a$.</p> <p>*Zarrax in comments astutely observes my "hence" is mistaken. It ignores the possibility $a=c=0$. To complete the proof that $p(x)$ is irreducible, we need to note that if $a=c=0$ then, letting $u=x^2$, we would have a factorization</p> <p>$$(u+b)(u+d)=u^2-4u+16=(u-2)^2+12$$</p> <p>My thanks to Zarrax for pointing out the error.</p>
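One can confirm the coefficient analysis by brute force; the following Python sketch (mine, not part of the answer) searches for an integer factorization into monic quadratics, which, by Gauss's lemma, is the only thing left to rule out:

```python
# Search (x^2 + a x + b)(x^2 + c x + d) = x^4 - 4x^2 + 16 over integers.
# Matching coefficients: a + c = 0, ac + b + d = -4, ad + bc = 0, bd = 16.
R = range(-16, 17)
hits = [(a, b, c, d)
        for a in R for b in R for c in R for d in R
        if a + c == 0 and a * c + b + d == -4
        and a * d + b * c == 0 and b * d == 16]

# Also rule out linear factors: a rational root would be an integer dividing 16.
p = lambda x: x ** 4 - 4 * x ** 2 + 16
roots = [r for r in (1, 2, 4, 8, 16, -1, -2, -4, -8, -16) if p(r) == 0]

print(hits, roots)  # [] [] -- no factorization, so p is irreducible over Q
```

The range suffices: $bd=16$ forces $|b|,|d|\le 16$, and the constraints force $a^2=2b+4\le 12$ in any candidate solution.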
1,036,907
<p>For <span class="math-container">$n$</span> in the natural numbers let</p> <p><span class="math-container">$$a_n = \int_{1}^{n} \frac{\cos(x)}{x^2} dx$$</span></p> <p>Prove, for <span class="math-container">$m ≥ n ≥ 1$</span> that <span class="math-container">$|a_m - a_n| ≤ \frac{1}{n}$</span> and deduce <span class="math-container">$a_n$</span> converges.</p> <p>I am totally stuck on how to even go about approaching this. All help would be very gratefully received!</p>
Yao Zhao
889,365
<p>We only need to show <span class="math-container">$\{b_j\}$</span> is Cauchy, where <span class="math-container">$b_j=\int_{1}^{j} \frac{\cos x}{x^2} dx$</span></p> <p>Proof: <span class="math-container">$b_j-b_k=\int_{1}^{j} \frac{\cos x}{x^2} dx-\int_{1}^{k} \frac{\cos x}{x^2} dx=\int_{k}^{j} \frac{\cos x}{x^2} dx$</span></p> <p>That is, <span class="math-container">$$|b_j-b_k|=|\int_{k}^{j} \frac{\cos x}{x^2} dx| \leq \int_{k}^{j} |\frac{\cos x}{x^2}| dx$$</span></p> <p>(Recall that <span class="math-container">$|\int_{a}^{b}f(x) dx| \leq \int_{a}^{b}|f(x)| dx$</span>)</p> <p>We continue our above analysis, <span class="math-container">$$\leq \int_{k}^{j}\frac{1}{x^2} dx=-\frac{1}{x}\Biggr|_{k}^{j}= -\frac{1}{j}+\frac{1}{k}&lt;\frac{1}{k}$$</span></p> <p>Thus, <span class="math-container">$$|b_j-b_k|&lt;\frac{1}{k}&lt;\epsilon$$</span>,whenever <span class="math-container">$k&gt;\frac{1}{\epsilon}$</span>, viz., <span class="math-container">$k&gt;N,N=[\frac{1}{\epsilon}]+1$</span></p> <p>To summarize, <span class="math-container">$\forall \epsilon&gt;0$</span>, let <span class="math-container">$N=[\frac{1}{\epsilon}]+1$</span>, then whenever <span class="math-container">$j&gt;k&gt;N$</span>, we have <span class="math-container">$|b_j-b_k|&lt;\frac{1}{k}&lt;\epsilon$</span>.</p>
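The bound can also be checked numerically; this Python sketch (mine, not from the answer) approximates the integrals with a midpoint rule:

```python
import math

def a(n, steps=200000):
    """Midpoint-rule approximation of the integral from 1 to n of cos(x)/x^2 dx."""
    h = (n - 1) / steps
    total = 0.0
    for i in range(steps):
        x = 1 + (i + 0.5) * h
        total += math.cos(x) / (x * x)
    return total * h

for n, m in [(2, 5), (3, 10), (5, 50)]:
    assert abs(a(m) - a(n)) <= 1 / n   # the bound |a_m - a_n| <= 1/n
print("bound verified on sampled (n, m) pairs")
```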
43,382
<p>Let <span class="math-container">$C$</span> be a coalgebra and <span class="math-container">$\Delta: C \to C\otimes C$</span> a co-multiplication map. Then, due the co-associative property we can consider <span class="math-container">$\Delta^m$</span>. But how is defined <span class="math-container">$\Delta^{m}: C \to C^{\otimes m}$</span>?</p> <p>Given <span class="math-container">$f,g \in C$</span> and <span class="math-container">$1\leq k \leq m$</span> can we have</p> <p><span class="math-container">$$\begin{align*}\Delta^{m-1}(fg)&amp;=\Delta^{k-1}(fg) \otimes \operatorname{id}^{m-k} + \Delta^{k-1}(f)\otimes \Delta^{m-k-1}(g) \\ &amp;+\Delta^{k-1}(g)\otimes \Delta^{m-k-1}(f)+\operatorname{id}^{\otimes k} \otimes \Delta^{m-k-1}(fg)?\end{align*}$$</span></p> <p>Thanks.</p>
Ralph
10,194
<p>When working with coalgebras I often find it helpful to dualize and to consider the corresponding situation for multiplication $\mu: A \otimes A \to A$ with an algebra $A$. </p> <p>From this point of view $\Delta^m$ corresponds to $\mu^m: A^{\otimes(m+1)} \to A$, given by $$\mu^m(a_0 \otimes ... \otimes a_m) = (a_0 ... a_{m-1}) a_m.$$ It's easy to see that this formula is equivalent to $$\mu^m = \mu \circ (\mu^{m-1} \otimes id_A).$$ Dualizing again yields $$\Delta^m = (\Delta^{m-1} \otimes id_C) \circ \Delta.$$ BTW: If $\Delta$ is coassociative this equals Mariano's formula $$\Delta^m = (id_C^{\otimes m-1} \otimes \Delta) \circ \Delta^{m-1}.$$</p>
3,719,575
<p>For <span class="math-container">$\lambda =0$</span> and <span class="math-container">$\lambda &lt;0$</span> the solution is the trivial solution <span class="math-container">$x\left(t\right)=0$</span></p> <p>So we have to calculate for <span class="math-container">$\lambda &gt;0$</span></p> <p>The general solution here is</p> <p><span class="math-container">$x\left(t\right)=C_1\cos\left(\sqrt{\lambda }t\right)+C_2\sin\left(\sqrt{\lambda }t\right)$</span></p> <p>Because <span class="math-container">$0=C_1\cdot \cos\left(0\right)+C_2\cdot \sin\left(0\right)=C_1$</span> we know that</p> <p><span class="math-container">$x\left(t\right)=C_2\sin\left(\sqrt{\lambda }t\right)$</span></p> <p><span class="math-container">$\sqrt{\lambda }t=n\pi$</span></p> <p><span class="math-container">$\sqrt{\lambda }=\frac{n\pi }{t}$</span></p> <p>But does there exist a solution for <span class="math-container">$\lambda$</span> which does not depend on <span class="math-container">$t$</span>?</p>
Alessio K
702,692
<p><span class="math-container">$\lambda &gt;0$</span> the general solution is <span class="math-container">$x(t)=C_{1}cos(\sqrt{\lambda}t)+C_{2}sin(\sqrt{\lambda}t)$</span> then the condition <span class="math-container">$x(0)=0$</span> gives <span class="math-container">$C_{1}=0$</span> as you've computed.</p> <p>Then the condition <span class="math-container">$x(L)=0$</span> gives <span class="math-container">$C_{2}sin(\sqrt{\lambda}L)=0$</span> and since we are looking for a non-trivial solution, we have <span class="math-container">$\sqrt{\lambda}L=n\pi$</span> so this gives <span class="math-container">$\lambda=(\frac{n\pi}{L})^{2}$</span> for some integer n which is independent of t.</p>
1,453,682
<p>I'm having trouble classifying the solution set of the systems $Ax = b$ and $Ax = 0$.</p> <p>Let A be an ($m \times n$) matrix.</p> <p><strong>Case I:</strong></p> <p><em>For $Ax = 0$, the system has a unique solution (the trivial one) when A is invertible, and infinitely many solutions when A is not.</em> </p> <ul> <li>We can scratch off the "no solution" case because there is always the zero matrix solution correct?</li> </ul> <p><strong>Case II:</strong> </p> <p><em>For $Ax = b$, if A is invertible, then for all ($n \times 1$) vector b, the matrix equation has a unique solution given by $x = A^{-1}b$. Else, there are two remaining cases: infinitely many solutions or no solutions.</em> </p> <ul> <li>How would I know which it is? Can we not tell until we have row reduced the augmented matrix?</li> </ul> <p>Finally is the following true or false?</p> <p><strong>If y and z are solutions of the system Ax = b then any linear combination of y and z is also a solution.</strong></p> <p>My thoughts are that it is correct but I am not too sure.</p>
Ant
66,711
<p>1) Yes, there is always the zero-vector solution $x = 0$.</p> <p>2) Yes, you need to compute the rank of the augmented matrix.</p> <p>3) No. If $Ay = b$ and $Az=b$ then $$A(y+z) = Ay + Az = b + b = 2b \neq b$$</p> <p>except in the case $b= 0$. If $b=0$ then it is true, and that is why $\ker(A)$ is a vector space.</p> <p>Is there a "certain" combination that works? Let's see: we want</p> <p>$$A(\alpha y + \beta z) = \alpha Ay + \beta Az = (\alpha + \beta)b = b,$$ and with $b \neq 0$ this forces $\alpha + \beta = 1$.</p> <p>So you still have a solution provided that the "weights" sum up to $1$.</p>
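A concrete Python check of point 3 (the singular matrix below is my own example, not from the answer): combinations of two solutions solve the system exactly when the weights sum to 1.

```python
A = [[1, 2],
     [2, 4]]          # singular, so Ax = b has many solutions
b = [3, 6]

def matvec(M, v):
    return [sum(row[i] * v[i] for i in range(len(v))) for row in M]

y, z = [1, 1], [3, 0]          # two different solutions of Ax = b
assert matvec(A, y) == b and matvec(A, z) == b

def comb(alpha, beta):
    return [alpha * y[i] + beta * z[i] for i in range(2)]

print(matvec(A, comb(0.5, 0.5)))  # weights sum to 1 -> [3.0, 6.0], a solution
print(matvec(A, comb(1, 1)))      # plain sum y + z  -> [6, 12] = 2b, not a solution
```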
742,160
<p>Answer true or false to each of the following questions. If a statement is true, prove it. If a statement is false, give a counterexample.</p> <ol> <li>For all sets $A$,$B$ and $C$: IF $A ⊆ B$ and $A ⊆ C$, Then $A ⊆ (B ∩ C)$</li> <li>For all sets $A$ and $B$, if $|A| \le |B|$, then $A ⊆ B$</li> </ol>
fgp
42,986
<p>An <em>injection</em> $A \to B$ maps $A$ <em>into</em> $B$, i.e. it allows you to find a copy of $A$ inside $B$.</p> <p>A <em>surjection</em> $A \to B$ maps $A$ <em>over</em> $B$, in the sense that the image covers the whole of $B$. The syllable "sur" has latin origin, and means "over" or "above", as for example in the word "surplus" or "survey".</p>
35,877
<p>I got stuck while I was working out whether $$\frac{4n-3}{n+43}$$ converges. I would be pleased if I could get a hint on the above question.</p>
lhf
589
<p><em>Hint</em>: $$\frac{4n-3}{n+43} = 4-\frac{175}{n+43} $$</p>
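To see why this rewriting of the question's sequence settles convergence, one can verify the identity exactly and watch the terms approach 4 (a Python sketch of mine, not part of the answer):

```python
from fractions import Fraction

# The identity (4n - 3)/(n + 43) = 4 - 175/(n + 43), checked exactly.
for n in [1, 2, 10, 100, 10**6]:
    n = Fraction(n)
    assert (4 * n - 3) / (n + 43) == 4 - 175 / (n + 43)

# The correction term 175/(n + 43) shrinks to 0, so the sequence tends to 4.
for n in [10, 1000, 10**6]:
    print(n, float(Fraction(4 * n - 3, n + 43)))
```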
2,095,349
<p>I test my skills in statistics and probabilities and I decided to work with distributions. So, I tried to solve the below problem </p> <p><strong>Problem</strong></p> <p>Suppose that a hospital serves in average $80$ citizens daily from a city with $11000$ citizens. In a random day, what is the probability that the hospital serves at most $8$ citizens?</p> <p><strong>My solution</strong></p> <p>I defined a random variable $X$ {number of citizens who will be served in one day }. </p> <p>$X \sim b(x;n=11000,p)$, where \begin{align} p &amp;= \frac{E(X)}{n} = \frac{80}{11000} = 0.07 \end{align}</p> <p>Provided that $npq = 76.4 &gt; 10$:</p> <p>$b(x;n=11000,p) \sim N(pq,npq)$ </p> <p>According to the central limit theorem, \begin{align} Z = \frac{X - np}{\sqrt{npq}} = \frac{8-80}{8.74} = -8.23 \end{align} So $P(Z\le -8.23) = 0$. </p> <p>Where is my fault? I think my reasoning is not correct. </p>
BruceET
221,800
<p><strong>Binomial:</strong> If $X \sim Binom(n, p),$ then $E(X) = np = 80,$ and $n = 11000.$ So $p = 80/11000 = 0.00727.$ You seek $P(X \le 8).$ In R statistical software this is about $6.6 \times 10^{-25}.$</p> <pre><code>n = 11000; p = 80/n pbinom(8, n, p) ## 6.572574e-25 </code></pre> <p><strong>Poisson:</strong> If $Y \sim Pois(\lambda = 80),$ then $P(Y \le 8) = 8.3 \times 10^{-25}.$ </p> <pre><code>ppois(8, 80) ## 8.331982e-25 </code></pre> <p><strong>Normal approximation:</strong> The mean is $\mu = 80.$ According to the Poisson distribution, the standard deviation is $\sigma = \sqrt{80} = 8.944.$ According to the binomial distribution, the SD is $\sigma = \sqrt{np(1-p)} = 8.912.$ Then $P(W \le 8) = P(W &lt; 8.5) \approx 0.$ The three answers below are from binomial, Poisson, and the standardized normal in your Question, respectively.</p> <pre><code> pnorm(8.5, 80, sqrt(80)) ## 6.534509e-16 pnorm(8.5, 80, 8.912) ## 5.164271e-16 pnorm(-8.23) ## 9.360672e-17 </code></pre> <p>From a printed table of the standard normal CDF, you can tell only that the integral in @SeanRobertson's Answer (posted while I am typing this), is very nearly 0. Using R (or other statistical software) you can get 'exact' values for any of the normal integrals. But they are all essentially 0 for practical purposes, and they are all approximations.</p> <p><em>Note:</em> Unless this problem is intended to explore normal probabilities 'off the table', I'm wondering if the intention was to find $P(X \le 80)$ or $P(X \ge 80).$ </p>
3,703,011
<p>So, I am an absolute beginner in mathematics, only knowledgeable in some basic ideas of the subject. My interest in math started only recently, while reading about set theory and cardinality (particularly the concept of higher infinities) in some other forums. Can you recommend any fairly accessible books or other material which I could use to understand those topics? Or do I need to study some other areas of mathematics before I am able to comprehend set theory or cardinals? </p>
Doug
436,926
<p>I was in a similar boat! I really liked Daniel Cunningham's <em>Set Theory: A First Course</em> (<a href="https://rads.stackoverflow.com/amzn/click/com/1107120322" rel="nofollow noreferrer" rel="nofollow noreferrer">https://www.amazon.com/Set-Theory-Cambridge-Mathematical-Textbooks/dp/1107120322</a>). The text starts very simply with the ZF axioms, then builds up to transfinite recursion, ordinals, and cardinals. I found the last couple chapters challenging, but intriguing.</p> <p>The book was very readable and interesting, and the proofs are easy to follow. I self-studied the book, having only had limited exposure to formal set theory as a computer science undergraduate.</p> <p>You probably already realize this, but learning mathematics requires doing as many exercises yourself as possible. I wrote solutions to most of the problems in Cunningham's text, which you can find here: <a href="https://sites.google.com/view/dougchartier/mathematics/solutions" rel="nofollow noreferrer">https://sites.google.com/view/dougchartier/mathematics/solutions</a>. Note that I can't vouch for the accuracy of the solutions, but please let me know if you have any corrections or feedback. I know the importance of being able to check your work or get hints if you're stuck (or even just read the solution entirely if you've banged your head against the problem for too long!). I hope they're helpful in your endeavor.</p>
4,260,645
<p>I can't find any insights online on how useful the graph of a function <span class="math-container">$f(x)$</span> (on the <span class="math-container">$y$</span> axis) versus its derivative <span class="math-container">$f'(x)$</span> (on the <span class="math-container">$x$</span> axis) is. Does it provide any useful information?</p> <p>For example, I see that when I plot <span class="math-container">$\sin(x)$</span> against <span class="math-container">$\cos(x)$</span> the plot is a circle, which is reminiscent of parametric equations.</p> <p>My question is: assuming the function is nice (nice in the common sense that it is continuous), does that kind of graph provide any useful information? What about this graph where <span class="math-container">$x(t) = f'(t)$</span>:</p> <p><a href="https://i.stack.imgur.com/DIYa9.png" rel="nofollow noreferrer">f(t) versus x(t) where x(t) = f'(t)</a></p>
emacs drives me nuts
746,312
<p>There is this really good popular video from <em>MinutePhysics</em> titled <a href="https://youtu.be/54XLXg4fYsc" rel="nofollow noreferrer">How To Tell If We're Beating COVID-19</a>.</p> <p>It plots &quot;total number of cases&quot; against &quot;rate of change&quot; for different countries and over time.</p> <p>Even if you are completely annoyed by pandemic stuff by now (and the video is 1 1/2 years old and from March '20), it's still interesting to see how different graphing can give more insight into a topic.</p>
2,518,213
<p>We work in the vector space $\mathbb{R}^k$ over $\mathbb{R}$. We define the (usual) infinity norm on $\mathbb{R}^k$:</p> <p>$$\|x\|_{\infty} = \max\{|x_1|,|x_2|,\dots,|x_k|\}$$ where $x_1, x_2, \dots, x_k $ are the coordinates of $x$ in $\mathbb{R}^k$. </p> <p>We would like to prove the following fact: $$\exists c&gt; 0, \forall x\in \mathbb{R}^k, \|x\|_{\infty} \le c \|x\|$$</p> <p>where $\|\cdot \|$ is any norm. I am also told that if this isn't true, then I can construct a sequence $(x_n)$ such that $\|x_n\|$ is bounded but $\|x_n\|_{\infty}$ diverges to infinity.</p> <p>I am slightly puzzled by that hint, and I'm stuck trying to prove the existence of such a sequence. Here is what I've done: first negate the statement:</p> <p>$$\forall c&gt; 0, \exists x\in \mathbb{R}^k, \|x\|_{\infty} &gt; c \|x\|$$</p> <p>Then I thought, why not construct the obvious sequence and see where that takes us? So we define $(x_n)$ such that $$\|x_n\|_{\infty} &gt; n \|x_n\|$$ This at least tells us that $\frac{\|x_n\|}{\|x_n\|_{\infty}}$ tends to $0$, but I haven't gotten much further. What bugs me the most is trying to somehow bound $\|x_n\|$. </p>
DanielWainfleet
254,665
<p>Suppose no such $c$ exists. Then for every $r&gt;0$ there exists $x_r$ such that $\|x_r\|_{\infty}&gt;r\|x_r\|.$ </p> <p>So for $n\in \Bbb N$ let $\|x_{n^2}\|_{\infty}&gt;n^2\|x_{n^2}\|.$ This requires $\|x_{n^2}\| \ne 0.$ </p> <p>Let $y_n=\frac {x_{n^2}}{n\|x_{n^2}\|}.$ $$\text { Then }\quad \|y_n\|=\frac {1}{n}$$ $$\text { and }\quad \|y_n\|_{\infty}=\frac {\|x_{n^2}\|_{\infty}}{n\|x_{n^2}\|}&gt; \frac {n^2\|x_{n^2}\|}{n\|x_{n^2}\|}=n.$$</p> <p>BTW we can also observe that for $x=(x_1,...,x_k)$ there exists $j$ with $1\leq j\leq k$ and $|x_j|=\|x\|_{\infty}$ so $|x_i|\leq |x_j|$ for $i=1,...,k .\quad$ Therefore $$\|x\|=\left(\sum_{i=1}^kx_i^2\right)^{1/2}\geq (x_j^2)^{1/2}=|x_j|=\|x\|_{\infty}$$ and also $$\|x\|=\left(\sum_{i=1}^kx_i^2\right)^{1/2} \leq \left(\sum_{i=1}^kx_j^2\right)^{1/2}=(kx_j^2)^{1/2}=\sqrt k\;|x_j|=\sqrt k\; \|x\|_{\infty}.$$ </p>
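The closing observation, that the Euclidean norm is sandwiched between $\|x\|_\infty$ and $\sqrt k\,\|x\|_\infty$, is easy to spot-check numerically (a Python sketch, my addition):

```python
import math
import random

random.seed(0)
k = 5
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(k)]
    norm_inf = max(abs(c) for c in x)
    norm_2 = math.sqrt(sum(c * c for c in x))
    # The sandwich inequality, with a tiny tolerance for floating point.
    assert norm_inf <= norm_2 + 1e-12
    assert norm_2 <= math.sqrt(k) * norm_inf + 1e-12
print("norm_inf <= norm_2 <= sqrt(k) * norm_inf holds on 1000 random vectors")
```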
401,967
<p>This question is about logical complexity of sentences in third order arithmetic. See <a href="https://en.wikipedia.org/wiki/Arithmetical_hierarchy" rel="nofollow noreferrer">Wikipedia</a> for the basic concepts.</p> <p>Recall that the Continuum Hypothesis is a <span class="math-container">$\Sigma^2_1$</span> sentence. Furthermore (loosely speaking) it can't be reduced to a <span class="math-container">$\Pi^2_1$</span> sentence, as stated in <a href="https://mathoverflow.net/a/218649/170446">Emil Jeřábek's answer to <em>Can we find CH in the analytical hierarchy?</em></a>.</p> <p>Is there an example of a <span class="math-container">$\Sigma^2_2$</span> sentence with no known reduction to a <span class="math-container">$\Pi^2_2$</span> sentence? (Equivalently, a <span class="math-container">$\Pi^2_2$</span> sentence with no known reduction to a <span class="math-container">$\Sigma^2_2$</span> sentence.) I mean that there should be no known reduction even under large cardinal assumptions.</p> <p>I'd prefer an example that's either famous or easy to state. But to begin, any example will do.</p> <p><em>Update:</em> Sentences such as &quot;<span class="math-container">$\mathfrak{c} \leqslant \aleph_2$</span>&quot; and &quot;<span class="math-container">$\mathfrak{c}$</span> is a successor cardinal&quot; are <span class="math-container">$\Delta^2_2$</span>, meaning that they're simultaneously <span class="math-container">$\Sigma^2_2$</span> and <span class="math-container">$\Pi^2_2$</span>. The reason is that each such sentence (and also its negation) can be expressed in the form &quot;<span class="math-container">$\mathbb{R}$</span> has a well-ordering <span class="math-container">$W$</span> such that <span class="math-container">$\phi(W)$</span>&quot; where <span class="math-container">$\phi$</span> is <span class="math-container">$\Sigma^2_2$</span>.</p>
Elliot Glazer
109,573
<p>The Suslin hypothesis is <span class="math-container">$\Pi^2_2,$</span> and <span class="math-container">$T = ZFC + GCH + LC$</span> (LC an arbitrary large cardinal axiom) does not prove it to be equivalent to any <span class="math-container">$\Sigma^2_2$</span> sentence. Suppose toward contradiction <span class="math-container">$T$</span> proves SH to be equivalent to <span class="math-container">$\exists A \subset \mathbb{R} \varphi(A),$</span> where <span class="math-container">$\varphi$</span> is <span class="math-container">$\Pi^2_1.$</span> Assume <span class="math-container">$V \models T.$</span></p> <p>We'll use several results from Chapters VIII and X of Devlin and Johnsbraten's <em>The Souslin Problem.</em> There are generic extensions <span class="math-container">$V[G] \models T+\diamondsuit^*$</span> and <span class="math-container">$V[G][H] \models T+SH$</span> which do not add reals to or collapse cardinals of <span class="math-container">$V.$</span> In <span class="math-container">$V[G][H],$</span> there is <span class="math-container">$A \subset \mathbb{R}$</span> such that <span class="math-container">$\varphi(A)$</span> holds and <span class="math-container">$A' \subset \omega_1$</span> which codes a bijection between <span class="math-container">$\mathbb{R}$</span> and <span class="math-container">$\omega_1$</span> as well as <span class="math-container">$A.$</span> By downwards absoluteness of <span class="math-container">$\varphi,$</span> <span class="math-container">$A$</span> witnesses that <span class="math-container">$V[G][A'] \models SH.$</span> But we also have <span class="math-container">$V[G][A'] \models \diamondsuit^*$</span> by Lemma 4 (pg. 79), which is a contradiction since <span class="math-container">$\diamondsuit$</span> negates SH.</p>
1,251,334
<blockquote> <p>If $||x-2|-3| &gt;1$, then $x$ belongs to:<BR> (a) $(-\infty, -2) \cup (0,4) \cup(6,\infty)$<br> (b) $(-1,1)$<br> (c) $(-\infty, 1)\cup (1, \infty)$ <br> (d) $(-2,2)$</p> </blockquote> <p><strong><em>Answer: (a)</em></strong></p> <p>My solution:</p> <p>Let $|x-2| = p,$<br> so $|p-3|&gt;1$, and finally I got four inequalities after more calculations:<br></p> <p>$x&gt;6, x&lt;-2,x&gt;0, x&lt;4$</p> <p>I think I am doing this question wrong, but I don't know how to do it either way. How do I do this?</p>
Prahlad Vaidyanathan
89,789
<p>Just do it with two simultaneous inequalities: $$ ||x-2| - 3| &gt; 1 \Leftrightarrow |x-2|-3 &gt; 1 \text{ or } |x-2|-3 &lt; -1 $$ $$ \Leftrightarrow |x-2| &gt; 4 \text{ or } |x-2| &lt; 2 $$ $$ \Leftrightarrow x-2 &gt; 4 \text{ or } x-2 &lt; -4 \text{ or } -2 &lt; x-2 &lt; 2 $$ $$ \Leftrightarrow x&gt;6 \text{ or } x &lt; -2 \text{ or } 0 &lt; x &lt; 4 $$ So your answer is $$ (6,\infty)\cup (-\infty,-2)\cup (0,4) $$</p>
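A quick numerical sanity check of the final answer (my own addition, not part of the original solution): compare the inequality directly against the derived solution set on a grid of sample points.

```python
def satisfies(x):
    # the original inequality ||x - 2| - 3| > 1
    return abs(abs(x - 2) - 3) > 1

def in_solution_set(x):
    # the derived answer (-inf, -2) U (0, 4) U (6, inf)
    return x < -2 or 0 < x < 4 or x > 6

# check agreement on a fine grid, skipping the interval endpoints themselves
xs = [i / 10 for i in range(-100, 101) if i not in (-20, 0, 40, 60)]
assert all(satisfies(x) == in_solution_set(x) for x in xs)
```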
1,315,922
<blockquote> <p>Show that $$\sum_{k=2}^{n}\left(\dfrac{2}{k}+\dfrac{H_{k}-\frac{2}{k}}{2^{k-1}}\right)\le 1+2\ln{n}$$ where $n\ge 2$ and $H_{k}=1+\dfrac{1}{2}+\cdots+\dfrac{1}{k}$.</p> </blockquote> <p>Maybe the bound $\ln{k}&lt;H_{k}&lt;1+\ln{k}$ helps?</p>
Henry
106,067
<p>It seems that it doesn't have anything to do with primes. </p> <p>Let $0=r_0 &lt; r_1 &lt; \cdots &lt;r_n$ be integers. I claim that $P_n(x)=\sum_{i=0}^n a_i x^{r_i}$ has at most $n$ positive zeros. </p> <p>Let's proceed by induction. When $n=0$, it's obvious. </p> <p>Suppose the statement holds for $n=k$. </p> <p>If $P_{k+1}(x)$ has $k+2$ positive zeros, then its derivative $P_{k+1}'(x)=\sum_{i=1}^{k+1} r_i a_i x^{r_i-1}$ must have at least $k+1$ positive zeros. This means that $\sum_{i=1}^{k+1}r_i a_i x^{r_i-r_1}$ also has at least $k+1$ positive zeros, which contradicts the induction hypothesis. </p> <p>$\therefore P_n(x)$ has at most $n$ positive zeros. </p> <p>This proves the statement in your question.</p>
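A small numerical illustration of the claim (my own addition, not part of the proof): a polynomial with three terms should have at most two positive zeros. Counting sign changes of a sample three-term polynomial on a fine grid gives a lower bound on its number of positive zeros, which by the claim cannot exceed two.

```python
def count_sign_changes(p, hi=10.0, steps=100_000):
    # count sign changes of p on (0, hi] -- a lower bound on the
    # number of positive zeros, good enough for this illustration
    changes = 0
    prev = p(hi / steps)
    for i in range(2, steps + 1):
        cur = p(i * hi / steps)
        if prev * cur < 0:
            changes += 1
        if cur != 0:
            prev = cur
    return changes

# x^7 - 5x^3 + 1 has 3 terms, so by the claim at most 2 positive zeros
roots = count_sign_changes(lambda x: x**7 - 5 * x**3 + 1)
assert roots <= 2
```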
108,890
<p>Consider $V_{(n-1, 1)}$, the $n-1$ dimensional irreducible representation of $S_n$, i.e. the "standard" or "defining" representation. Is there a nice formula for how the $k$-th tensor power of $V_{(n-1, 1)}$ decomposes into irreps?</p>
Bruce Westbury
3,992
<p>You want the "partition algebras". Some references to get you started are:</p> <p>MR1317365 (97b:82023) Jones, V. F. R. The Potts model and the symmetric group. Subfactors (Kyuzeso, 1993), 259--267, World Sci. Publ., River Edge, NJ, 1994. </p> <p>MR1399030 (98g:05152) Martin, Paul . The structure of the partition algebras. J. Algebra 183 (1996), no. 2, 319--358.</p> <p>MR2143201 (2006g:05228) Halverson, Tom ; Ram, Arun . Partition algebras. European J. Combin. 26 (2005), no. 6, 869--921.</p> <p>The partition algebras are the endomorphism algebras of the tensor powers of, $V$, the natural representation of $S_n$. As has been mentioned in the comments this decomposes as the sum of the trivial representation and the representation you are interested in.</p> <p>You can recover information about the representation you are interested in from the partition algebras. For example, instead of looking at all set partitions, you only consider set partitions with no singleton.</p>
440,534
<p>Suppose we are given two <em>origin-symmetric convex</em> polytopes <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span> (that is, <span class="math-container">$P_i=-P_i$</span>) with the same edge-graph, but potentially of different dimensions and combinatorial types. Let <span class="math-container">$\phi: G_{P_1}\to G_{P_2}$</span> be an isomorphism between their edge-graphs.</p> <blockquote> <p><strong>Question:</strong> Does <span class="math-container">$\phi(-v)=-\phi(v)$</span> hold for each vertex <span class="math-container">$v\in P_1$</span>?</p> </blockquote> <p>Intuitively, I am asking whether the edge-graph already determines which vertices form an antipodal pair.</p> <p>The following example shows that for a vertex (black) its antipodal vertex (white) is not necessarily a vertex of maximal graph-distance (gray).</p> <img src="https://i.stack.imgur.com/Dm541.png" width="250"/> <hr /> <p>This question is a more precise formulation of <a href="https://mathoverflow.net/q/439787/108884">this older question</a>.</p>
David E Speyer
297
<p><span class="math-container">$\def\RR{\mathbb{R}}$</span> The answer is no! I will give an example of a centrally symmetric polytope in <span class="math-container">$\RR^4$</span> with <span class="math-container">$12$</span> vertices where there is a symmetry of the edge graph interchanging two non-antipodal vertices, and fixing the other ten vertices.</p> <p>Define the function <span class="math-container">$f : \RR \to \RR^4$</span> by <span class="math-container">$$f(\theta) = (\cos \theta, \sin \theta, \cos (3 \theta), \sin (3 \theta)).$$</span> Observe that <span class="math-container">$f(\theta+\pi) = -f(\theta)$</span>, so the convex hull of <span class="math-container">$f(\theta_1)$</span>, <span class="math-container">$f(\theta_2)$</span>, ..., <span class="math-container">$f(\theta_n)$</span>, <span class="math-container">$f(\theta_1+\pi)$</span>, <span class="math-container">$f(\theta_2+\pi)$</span>, ..., <span class="math-container">$f(\theta_n+\pi)$</span> is always a centrally symmetric polytope. I'll write <span class="math-container">$P(\theta_1, \theta_2, \ldots, \theta_n)$</span> for the convex hull of <span class="math-container">$f(\theta_1)$</span>, <span class="math-container">$f(\theta_2)$</span>, ..., <span class="math-container">$f(\theta_n)$</span>, <span class="math-container">$f(\theta_1+\pi)$</span>, <span class="math-container">$f(\theta_2+\pi)$</span>, ..., <span class="math-container">$f(\theta_n+\pi)$</span>.</p> <p>We need two lemmas:</p> <p>Lemma 1: Let <span class="math-container">$|\theta_1 - \theta_2| &lt; 2 \pi/3$</span> and let <span class="math-container">$\theta_3$</span>, <span class="math-container">$\theta_4$</span>, ..., <span class="math-container">$\theta_n$</span> be any other angles. 
Then <span class="math-container">$(f(\theta_1), f(\theta_2))$</span> is an edge of <span class="math-container">$P(\theta_1, \theta_2, \theta_3, \theta_4, \ldots, \theta_n)$</span>.</p> <p>Lemma 2: Let <span class="math-container">$0 &lt; \alpha &lt; \beta &lt; \pi/2$</span>. Then there is <span class="math-container">$\delta&gt;0$</span> (dependent on <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>) such that, for <span class="math-container">$|\gamma-\pi/2| &lt; \delta$</span>, the line segement <span class="math-container">$(f(\gamma), f(- \gamma))$</span> is NOT an edge of <span class="math-container">$P(-\gamma, -\beta, - \alpha, \alpha, \beta, \gamma)$</span>.</p> <p>Once we have these lemmas, our construction will be to choose <span class="math-container">$0 &lt; \alpha &lt; \beta &lt; \pi/6$</span> and then <span class="math-container">$\gamma$</span> extremely close to <span class="math-container">$\pi/2$</span>. Our polytope will be <span class="math-container">$P(-\gamma, - \beta, - \alpha, \alpha, \beta, \gamma)$</span>. If we have chosen <span class="math-container">$\gamma$</span> close enough to <span class="math-container">$\pi/2$</span>, then the above lemmas guarantee that both <span class="math-container">$f(\gamma)$</span> and <span class="math-container">$f(\pi - \gamma)$</span> will NOT neighbor <span class="math-container">$f(- \gamma)$</span> and <span class="math-container">$f(-\pi+\gamma)$</span>, but will neighbor the other eight vertices (and each other). So switching <span class="math-container">$f(\gamma)$</span> and <span class="math-container">$f(\pi-\gamma)$</span> will be an symmetry of the edge graph which does not preserve the antipodal pairing.</p> <p>We now prove the lemmas.</p> <p>Proof of Lemma 1: Rotating the circle, we may assume that <span class="math-container">$\theta_1 = - \theta_2$</span> and <span class="math-container">$0 &lt; \theta_1 &lt; \pi/3$</span>. 
Put <span class="math-container">$a = \cos \theta_1 &gt; 1/2$</span> and consider the function <span class="math-container">$g(\theta) = 3 a^2 \cos \theta - \cos^3 \theta$</span>. Basic calculus shows that this is maximized at <span class="math-container">$\theta = \pm \theta_1$</span>. (We need that <span class="math-container">$a&gt;1/2$</span> in order to make sure that the value at <span class="math-container">$\theta_1$</span>, namely <span class="math-container">$2 a^3$</span>, beats the other local maximum at <span class="math-container">$\pi$</span>, namely <span class="math-container">$1-3a^2$</span>.) Expanding <span class="math-container">$\cos^3 \theta = (3/4) \cos \theta + (1/4) \cos (3 \theta)$</span>, we have <span class="math-container">$g(\theta) = (3 a^2-3/4) \cos \theta - (1/4) \cos (3 \theta)$</span>. Writing <span class="math-container">$(x_1, x_2, x_3, x_4)$</span> for the coordinates on <span class="math-container">$\RR^4$</span>, the linear functional <span class="math-container">$(3a^2 - 3/4) x_1 - (1/4) x_3$</span> is larger at <span class="math-container">$f(\pm \theta_1)$</span> than at any other <span class="math-container">$f(\theta)$</span>, so <span class="math-container">$((f(\theta_1), f(-\theta_1))$</span> is an edge of <span class="math-container">$P(\theta_1, -\theta_1, \theta_3, \theta_4, \ldots, \theta_n)$</span> as desired. <span class="math-container">$\square$</span></p> <p>Proof of Lemma 2: It is enough to show that some point on the line segment from <span class="math-container">$f(\gamma)$</span> to <span class="math-container">$f(- \gamma)$</span> is in the convex hull of <span class="math-container">$f(\pm \alpha)$</span>, <span class="math-container">$f(\pm \beta)$</span>, <span class="math-container">$f(\pi \pm \alpha)$</span> and <span class="math-container">$f(\pi \pm \beta)$</span>. 
Putting <span class="math-container">$h(\theta) = ((f(\theta)+f(-\theta))/2$</span>, we will show that <span class="math-container">$h(\gamma)$</span> is in the convex hull of <span class="math-container">$h(\alpha)$</span>, <span class="math-container">$h(\beta)$</span>, <span class="math-container">$h(\pi-\alpha)$</span> and <span class="math-container">$h(\pi - \beta)$</span>. Explicitly, <span class="math-container">$h(\theta) = (\cos \theta, 0, \cos (3 \theta), 0)$</span>, so all of these points are in <span class="math-container">$2$</span> dimensions. The points <span class="math-container">$h(\alpha)$</span>, <span class="math-container">$h(\beta)$</span>, <span class="math-container">$h(\pi-\alpha)$</span> and <span class="math-container">$h(\pi - \beta)$</span> are the vertices of a parallelogram with center at <span class="math-container">$(0,0)$</span>. As <span class="math-container">$\gamma$</span> approaches <span class="math-container">$\pi/2$</span>, the point <span class="math-container">$h(\gamma)$</span> approaches <span class="math-container">$(0,0)$</span> so, for <span class="math-container">$\gamma$</span> close enough to <span class="math-container">$\pi/2$</span>, the point <span class="math-container">$h(\gamma)$</span> will be inside this parallelogram. <span class="math-container">$\square$</span></p>
2,458,613
<p>Say that we have $k$ red balls and $n-k$ black balls, for $n$ balls total. Then, say we partition the balls into equal-sized groups of size $m$. What is the expected number of groups with a red ball?</p> <p>It seems clear that I should use linearity of expectation of some sort. I tried calculating the probability that any one group has at least one red ball, but I can't seem to get my equation to match my code simulation result.</p> <p>Any help would be appreciated.</p>
Marko Riedel
44,883
<p>If I understand this correctly, we take one of the ${n\choose k}$ arrangements of these balls in a line, partition it into groups of $m$ consecutive balls starting at the left, and then ask about the expected number of groups containing at least one red ball. Alternatively, we may start the computation by asking for the expectation of the number of groups containing no red ball. We thus use a marked generating function as in</p> <p>$$\left. \frac{\partial}{\partial u} (uB^m - B^m + (R+B)^m)^{n/m}\right|_{u=1}$$</p> <p>and get</p> <p>$$\left. \frac{n}{m} (uB^m - B^m + (R+B)^m)^{n/m-1} B^m\right|_{u=1} \\ = \left. \frac{n}{m} ((R+B)^m)^{n/m-1} B^m\right|_{u=1} \\ = \frac{n}{m} B^m (R+B)^{n-m}.$$</p> <p>Extracting coefficients we find</p> <p>$$\frac{n}{m} [R^k] [B^{n-k}] B^m (R+B)^{n-m} = \frac{n}{m} [R^k] [B^{n-k-m}] (R+B)^{n-m} = \frac{n}{m} {n-m\choose k}.$$</p> <p>Now to count the groups with at least one red ball, we subtract this (normalized by ${n\choose k}$) from the number of groups, which is $n/m$, and the result becomes</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \frac{n}{m} \left(1 - {n\choose k}^{-1} {n-m\choose k}\right).}$$</p> <p>This matches the answer that was first to appear.</p> <p>We may also check this by enumeration as shown below.</p>

<pre>
with(combinat);

ENUM := proc(n, k, m)
option remember;
local src, d, grp, run, reds, gf;

    if n mod m &lt;&gt; 0 then return FAIL fi;

    gf := 0;
    src := [seq(R, idx=1..k), seq(B, idx=k+1..n)];

    for d in permute(src) do
        reds := 0;

        for grp from 0 to n/m-1 do
            run := d[1+grp*m..m+grp*m];

            if numboccur(run, R) &gt; 0 then
                reds := reds + 1;
            fi;
        od;

        gf := gf + u^reds;
    od;

    gf;
end;

EXENUM := (n, k, m) -&gt; subs(u=1, diff(ENUM(n, k, m), u))/binomial(n,k);

EX := (n, k, m) -&gt; n/m*(1 - binomial(n-m,k)/binomial(n,k));
</pre>
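For readers without Maple, here is the same enumeration check sketched in Python (my own rewrite, not the original code): instead of permuting all the balls, it suffices to enumerate the positions of the $k$ red balls.

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def expected_red_groups(n, k, m):
    # exact expectation by enumerating the positions of the k red balls;
    # ball at position p belongs to group p // m
    total = 0
    for reds in combinations(range(n), k):
        total += len({pos // m for pos in reds})
    return Fraction(total, comb(n, k))

def formula(n, k, m):
    # the boxed closed form derived above
    return Fraction(n, m) * (1 - Fraction(comb(n - m, k), comb(n, k)))

for (n, k, m) in [(6, 2, 2), (6, 3, 3), (8, 3, 2), (9, 4, 3)]:
    assert expected_red_groups(n, k, m) == formula(n, k, m)
```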
3,183,931
<p>I approached it this way:</p> <p>For equal sides <span class="math-container">$1,2,3,4$</span>, the possible <span class="math-container">$3$</span>-digit numbers are <span class="math-container">$111,112,221,222,223,...,448$</span>.</p> <p>Then the total number of <span class="math-container">$3$</span>-digit numbers possible is</p> <p><span class="math-container">$2(1+2+3+4)=20$</span></p> <p><span class="math-container">$20-4=16$</span></p> <p><span class="math-container">$16\cdot3=48$</span> </p> <p>Now for equal sides <span class="math-container">$5,6,7,8,9$</span>:</p> <p><span class="math-container">$5\cdot9=45$</span>, <span class="math-container">$45-5=40,40\cdot3=120$</span></p> <p><span class="math-container">$120+48=168$</span></p> <p>But the solution given is <span class="math-container">$165$</span>. Even after checking many times, I couldn't find where I went wrong.</p>
Reinhard Meier
407,833
<p>I can only find <span class="math-container">$1+3+5+7 = 16$</span> triangles with equal sides <span class="math-container">$a$</span> and <span class="math-container">$b$</span> of lengths <span class="math-container">$1,\,2,\,3,\,4$</span> which respect the triangle inequality. From those <span class="math-container">$16$</span> triangles, I subtract the four equilateral ones, permute the remaining ones and add the equilateral triangles again, which gives <span class="math-container">$$ (16-4)\cdot 3 + 4 = 40 $$</span> triangles with equal sides <span class="math-container">$1,\,2,\,3,\,4.$</span></p> <p>There are <span class="math-container">$45$</span> triangles with equal sides <span class="math-container">$a$</span> and <span class="math-container">$b$</span> of lengths <span class="math-container">$5,\,6,\,7,\,8,\,9$</span> which respect the triangle inequality. From those <span class="math-container">$45$</span> triangles, I subtract the five equilateral ones, permute the remaining ones and add the equilateral triangles again, which gives <span class="math-container">$$ (45-5)\cdot 3 + 5 = 125 $$</span> triangles with equal sides <span class="math-container">$5,\,6,\,7,\,8,\,9.$</span></p>
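A brute-force count (my own addition, mirroring the breakdown above) confirms the total of $165$:

```python
from itertools import product

count = 0
for a, b, c in product(range(1, 10), repeat=3):
    isosceles = len({a, b, c}) <= 2             # at least two equal sides
    triangle = a < b + c and b < a + c and c < a + b
    if isosceles and triangle:
        count += 1
assert count == 165                              # 40 + 125 as computed above
```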
3,183,931
<p>I approached it this way:</p> <p>For equal sides <span class="math-container">$1,2,3,4$</span>, the possible <span class="math-container">$3$</span>-digit numbers are <span class="math-container">$111,112,221,222,223,...,448$</span>.</p> <p>Then the total number of <span class="math-container">$3$</span>-digit numbers possible is</p> <p><span class="math-container">$2(1+2+3+4)=20$</span></p> <p><span class="math-container">$20-4=16$</span></p> <p><span class="math-container">$16\cdot3=48$</span> </p> <p>Now for equal sides <span class="math-container">$5,6,7,8,9$</span>:</p> <p><span class="math-container">$5\cdot9=45$</span>, <span class="math-container">$45-5=40,40\cdot3=120$</span></p> <p><span class="math-container">$120+48=168$</span></p> <p>But the solution given is <span class="math-container">$165$</span>. Even after checking many times, I couldn't find where I went wrong.</p>
dan_fulea
550,003
<p>Using <a href="https://www.sagemath.org" rel="nofollow noreferrer">sage</a>, we have </p> <pre><code>sage: R = [1..9] sage: RRR = cartesian_product([R,R,R]) sage: S = [ (a,b,c) for (a,b,c) in RRR if len({a,b,c}) &lt;= 2 and a &lt; b+c and b &lt; a+c and c &lt; a+b ] sage: T = [ 100*a + 10*b + c for (a,b,c) in S ] sage: T.sort() sage: for k in [0..14]: ....: for n in [0..10]: ....: print T[11*k+n], ....: print ....: 111 122 133 144 155 166 177 188 199 212 221 222 223 232 233 244 255 266 277 288 299 313 322 323 331 332 333 334 335 343 344 353 355 366 377 388 399 414 424 433 434 441 442 443 444 445 446 447 454 455 464 466 474 477 488 499 515 525 533 535 544 545 551 552 553 554 555 556 557 558 559 565 566 575 577 585 588 595 599 616 626 636 644 646 655 656 661 662 663 664 665 666 667 668 669 676 677 686 688 696 699 717 727 737 744 747 755 757 766 767 771 772 773 774 775 776 777 778 779 787 788 797 799 818 828 838 848 855 858 866 868 877 878 881 882 883 884 885 886 887 888 889 898 899 919 929 939 949 955 959 966 969 977 979 988 989 991 992 993 994 995 996 997 998 999 </code></pre> <p>and there are </p> <pre><code>sage: len(S) 165 </code></pre> <p>elements in the solution set <code>S</code>.</p>
3,267,110
<p>This integral <span class="math-container">$$\int_{-\infty}^{\infty}\frac{a^2x^2\,dx}{(x^2-b^2)^2+a^2x^2}=a\pi, \quad a&gt;0,\ b \in \mathbb{R}$$</span> looks suspiciously interesting, as it is independent of the parameter <span class="math-container">$b$</span>. The question is: what is the best way of proving or disproving this?</p>
clathratus
583,016
<p>Glasser's master theorem states that for arbitrary constants <span class="math-container">$\alpha$</span>, <span class="math-container">$(\alpha_n)_{n=1}^{N}$</span>, <span class="math-container">$(\beta_n)_{n=1}^N$</span>, the function <span class="math-container">$$\phi(x)=|\alpha|x-\sum_{n=1}^{N}\frac{|\alpha_n|}{x-\beta_n},$$</span> and any integrable function <span class="math-container">$F(x)$</span>, one has <span class="math-container">$$\mathrm{PV}\int_{-\infty}^{\infty}F(\phi(x))dx=\mathrm{PV}\int_{-\infty}^\infty F(x)dx.$$</span> For your integral, set <span class="math-container">$$\phi(x)=x-\frac{b^2}{x}$$</span> and <span class="math-container">$$F(\phi(x))=\frac{a^2x^2}{(x^2-b^2)^2+a^2x^2}=\frac{1}{(\frac{x}{a}-\frac{b^2}{ax})^2+1}$$</span> to immediately yield the desired result.</p>
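A numerical sanity check of the identity (my own addition, not part of the theorem): the substitution $x=\tan u$ maps the line to a finite interval, after which a simple midpoint rule suffices.

```python
import math

def integrand(x, a, b):
    return (a * x) ** 2 / ((x * x - b * b) ** 2 + (a * x) ** 2)

def integral(a, b, n=200_000):
    # substitute x = tan(u), dx = du / cos(u)^2, with u in (-pi/2, pi/2);
    # the transformed integrand stays bounded, so a midpoint rule works
    h = math.pi / n
    total = 0.0
    for i in range(n):
        u = -math.pi / 2 + (i + 0.5) * h
        total += integrand(math.tan(u), a, b) / math.cos(u) ** 2 * h
    return total

# independent of b, equal to a*pi for a > 0
assert abs(integral(1.0, 2.0) - math.pi) < 1e-3
assert abs(integral(2.0, 0.5) - 2 * math.pi) < 1e-3
```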
4,380,559
<p><span class="math-container">$P$</span> is a <span class="math-container">$\mathbb{R}^{2\times2}$</span> positive semidefinite matrix satisfying <span class="math-container">$P=P^2$</span>, <span class="math-container">$P \neq 0$</span>, and <span class="math-container">$P \neq I$</span>. Show that <span class="math-container">$$P=\begin{bmatrix} \cos t \\ \sin t \end{bmatrix} \begin{bmatrix} \cos t &amp; \sin t \end{bmatrix}$$</span> for some <span class="math-container">$t$</span>.</p>
Glorious Nathalie
948,761
<p>Since we're talking about positive semidefinite matrices, it is implicitly assumed that <span class="math-container">$P$</span> is symmetric. So take</p> <p><span class="math-container">$P = \begin{bmatrix} a &amp; b \\ b &amp; c \end{bmatrix} $</span></p> <p>From <span class="math-container">$P = P^2 $</span> it follows that <span class="math-container">$ P (P - I) = 0 $</span>, thus</p> <p><span class="math-container">$ \begin{bmatrix} a &amp; b \\ b &amp; c \end{bmatrix} \begin{bmatrix} a - 1 &amp; b \\ b &amp; c - 1 \end{bmatrix} = 0 $</span></p> <p>Hence, we have the following three equations:</p> <p><span class="math-container">$ a(a - 1) + b^2 = 0 $</span></p> <p><span class="math-container">$ a b + b (c - 1) = 0 $</span></p> <p><span class="math-container">$ b^2 + c (c - 1) = 0 $</span></p> <p><strong>Case I:</strong> <span class="math-container">$b = 0$</span>, then <span class="math-container">$ a$</span> is either <span class="math-container">$0$</span> or <span class="math-container">$1$</span> and <span class="math-container">$c$</span> is either <span class="math-container">$0 $</span> or <span class="math-container">$1$</span>.</p> <p><strong>Case II:</strong> <span class="math-container">$ a + c - 1 = 0 $</span>, then we can take <span class="math-container">$a = \cos^2 t , c = \sin^2 t $</span> and it would follow that <span class="math-container">$ b = \pm \cos t \sin t $</span>.</p> <p>Case I results in two possible matrices (being <span class="math-container">$\ne 0 $</span> and <span class="math-container">$\ne I$</span>):</p> <p><span class="math-container">$ \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{bmatrix} $</span> and <span class="math-container">$ \begin{bmatrix} 0 &amp; 0 \\ 0 &amp; 1 \end{bmatrix} $</span></p> <p>Case II results in <span class="math-container">$ P = \begin{bmatrix} \cos^2 t &amp; \pm \cos t \sin t \\ \pm \cos t \sin t &amp; \sin^2 t \end{bmatrix} $</span> 
which can be factored as</p> <p><span class="math-container">$ P = \begin{bmatrix} \cos t_0 \\ \sin t_0 \end{bmatrix} \begin{bmatrix} \cos t_0 &amp; \sin t_0 \end{bmatrix} $</span></p> <p>where <span class="math-container">$t_0 = t $</span> if <span class="math-container">$b$</span> is taken with a plus sign, and <span class="math-container">$t_0 = -t $</span> if taken with a negative sign.</p> <p>Also note that Case I is a special case of Case II, by setting <span class="math-container">$ t= 0 $</span> or <span class="math-container">$ t = \dfrac{\pi}{2} $</span> respectively.</p>
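A quick numerical check of the factored form (my own addition): for any $t$, the matrix $vv^T$ with $v=(\cos t,\sin t)^T$ is idempotent and has trace $1$, so it is neither $0$ nor $I$.

```python
import math

def projection(t):
    # v v^T with v = (cos t, sin t)^T
    c, s = math.cos(t), math.sin(t)
    return [[c * c, c * s], [c * s, s * s]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = 0.7
P = projection(t)
P2 = matmul(P, P)
# idempotent: P^2 = P, up to floating-point error (uses cos^2 + sin^2 = 1)
assert all(abs(P2[i][j] - P[i][j]) < 1e-12 for i in range(2) for j in range(2))
# trace 1, so P is neither the zero matrix nor the identity
assert abs(P[0][0] + P[1][1] - 1) < 1e-12
```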
292,176
<p>I'm having a difficult time finding any theory on an inverse problem I've come up against. Let's say I have an unknown function $f:[0,1] \rightarrow \mathbb{R}$, and I know $\int_{a}^{b} f$ for some collection $A$ of pairs $(a,b)\in[0,1]^2$. I'm looking for pointers to any material that discusses conditions on $f$ and $A$ that are sufficient to recover (all of? some of?) the values of $f$. Google searches just keep turning up elementary-calculus-help-type pages. I'm a beginning graduate student, if it matters. Thanks.</p> <p><strong>Edit:</strong> I'm actually looking for something broader than I asked for. I know that recovering the values of $f$ is a lot to ask and is very unlikely unless $A=[0,1]^2$; I'm also looking for approximations to $f$, anything that can be said about its properties/behaviour, etc. when $A\subsetneqq [0,1]^2$.</p>
Iosif Pinelis
36,721
<p>In general, it appears that hardly anything interesting can be said. E.g., let $A=\{(1/5,3/5),(2/5,4/5)\}$; here, it will be convenient to think of $A$ as a set of (say) open intervals, rather than a set of pairs of endpoints of intervals. </p> <p>However, note first that, without loss of generality, for each open interval in $A$, all the intervals (closed, left-open, right-open) with the same endpoints may be assumed to belong to $A$. Next, let us assume that $\int_0^1|f|&lt;\infty$ and that $A$ is a semi-ring (<a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=10&amp;ved=0ahUKEwiCseDeyI3ZAhVV9mMKHWDpBEEQFghuMAk&amp;url=https%3A%2F%2Fwww.math.wustl.edu%2F~sawyer%2Fhandouts%2FMeasuresSemiRings.pdf&amp;usg=AOvVaw2f0CpK2T7FG25Ke5k8zwwE" rel="nofollow noreferrer">see measures on semi-rings</a>) and $[0,1]\in A$. Then the formula $\mu(I):=\int_I f$ for $I\in A$ defines a finite signed countably-additive measure $\mu$ on $A$, which can be uniquely extended to a signed measure $\bar\mu$ on the sigma-algebra $\Sigma$ generated by $A$. </p> <p>The measure $\bar\mu$ determines, and is determined by, the conditional expectation $E(f|\Sigma)$ (of $f$ given $\Sigma$), equal the Radon--Nikodym derivative $\dfrac{d\bar\mu}{d\lambda|_\Sigma}$ (with respect to the underlying Lebesgue measure $\lambda$ over $[0,1]$), and this conditional expectation is then precisely all that we can get from the knowledge of the map $A\ni I\mapsto \int_I f$. 
(One might note that the appearance of the Radon--Nikodym <em>derivative</em> here is in broad agreement with the comment "Recovery of function from its integral is called differentiation" by Alexandre Eremenko.)</p> <p>E.g., if $A$ consists of all intervals with endpoints in the set $\{j/n\colon j=0,\dots,n\}$, then all that we will know is, in essence, the "histogram" of the average values of $f$ over the intervals $[0,1/n],\dots,[1-1/n,1]$, and this "histogram" is the best approximation to $f$ that we can get in this case. </p> <p><strong>Extended comment:</strong> Dirk suggested an inverse-problem approach. One may note that such an approach will work perfectly well (and, generally, even better) within the above framework of the conditional expectation. Indeed, for a space $X$ of (say) real-valued integrable functions on $[0,1]$ we have the map $X\overset K\to\mathbb R^A$ defined by the formula $Kf:=(\int_I f)_{I\in A}$ for $f\in X$. This map can be factored as follows: \begin{equation} X\overset{E(\cdot|\Sigma)}\longrightarrow X_\Sigma\overset{K_\Sigma}\longrightarrow\mathbb R^A, \end{equation} where $X_\Sigma$ is the set of all integrable $\Sigma$-measurable functions in $X$ and $K_\Sigma$ is the restriction of $K$ to $X_\Sigma$; indeed, by the definition of the conditional expectation/Radon--Nikodym derivative, we have<br> $K_\Sigma E(f|\Sigma)=Kf$ for all $f\in X$. Thus, instead of $K$, one can deal with its restriction $K_\Sigma$, with the same (or greater) degree of success. In particular, if $A$ is finite, then we have to deal with the finite-dimensional space $X_\Sigma$ instead of the possibly infinite-dimensional space $X$. </p> <p>This comment may be viewed as an illustration of what was said previously: that the conditional expectation $E(f|\Sigma)$ is precisely all that we can get from the knowledge of the map $A\ni I\mapsto \int_I f$. </p>
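To make the closing histogram example concrete (my own illustration, not from the answer): if we only know the integrals of $f$ over the intervals $[j/n,(j+1)/n]$, the best reconstruction of $f$ is the step function taking the average value on each interval, i.e. $E(f|\Sigma)$.

```python
def bin_integrals(f, n, steps=10_000):
    # numerically integrate f over each interval [j/n, (j+1)/n] (midpoint rule)
    vals = []
    for j in range(n):
        a = j / n
        h = 1 / (n * steps)
        vals.append(sum(f(a + (i + 0.5) * h) * h for i in range(steps)))
    return vals

def conditional_expectation(integrals, n):
    # the "histogram": constant on each interval, equal to the average of f there
    avgs = [v * n for v in integrals]
    return lambda x: avgs[min(int(x * n), n - 1)]

f = lambda x: x * x
n = 4
g = conditional_expectation(bin_integrals(f, n), n)
# g has the same integral as f over every set of the sigma-algebra;
# in particular over [0,1], where the integral of x^2 is 1/3
assert abs(sum(bin_integrals(g, n)) - 1 / 3) < 1e-6
```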
292,176
<p>I'm having a difficult time finding any theory on an inverse problem I've come up against. Let's say I have an unknown function $f:[0,1] \rightarrow \mathbb{R}$, and I know $\int_{a}^{b} f$ for some collection $A$ of pairs $(a,b)\in[0,1]^2$. I'm looking for pointers to any material that discusses conditions on $f$ and $A$ that are sufficient to recover (all of? some of?) the values of $f$. Google searches just keep turning up elementary-calculus-help-type pages. I'm a beginning graduate student, if it matters. Thanks.</p> <p><strong>Edit:</strong> I'm actually looking for something broader than I asked for. I know that recovering the values of $f$ is a lot to ask and is very unlikely unless $A=[0,1]^2$; I'm also looking for approximations to $f$, anything that can be said about its properties/behaviour, etc. when $A\subsetneqq [0,1]^2$.</p>
Aaron Meyerowitz
8,008
<p>If $A$ is infinite and dense, then you can find the function exactly by taking limits. I'll look at the finite case. If the function is a polynomial of degree $n$, then knowing the integral over $n+1$ internally disjoint intervals is enough for recovery. Without knowing that the function is a polynomial, given $n+1$ integrals one could still find the unique polynomial of degree $n$ that matches the data, or the polynomial of degree $n-m$ that best fits.</p>
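A sketch of the polynomial case (my own illustration): the integral of $x^k$ over $[a,b]$ is $(b^{k+1}-a^{k+1})/(k+1)$, so $n+1$ interval integrals give an $(n+1)\times(n+1)$ linear system for the coefficients, solvable exactly over the rationals.

```python
from fractions import Fraction as F

def solve(M, y):
    # Gauss-Jordan elimination over the rationals
    n = len(M)
    A = [row[:] + [y[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [v - A[r][col] * w for v, w in zip(A[r], A[col])]
    return [A[i][n] for i in range(n)]

def recover(intervals, integrals):
    # coefficients c_0..c_n of the unique polynomial matching the data;
    # row (a,b), column k of the moment matrix is the integral of x^k over [a,b]
    n = len(intervals)
    M = [[(F(b) ** (k + 1) - F(a) ** (k + 1)) / (k + 1) for k in range(n)]
         for a, b in intervals]
    return solve(M, integrals)

# hidden polynomial p(x) = 1 - 3x + 2x^2, integrated over three intervals
p = [F(1), F(-3), F(2)]
intervals = [(F(0), F(1, 3)), (F(1, 3), F(2, 3)), (F(2, 3), F(1))]
data = [sum(c * (F(b) ** (k + 1) - F(a) ** (k + 1)) / (k + 1)
            for k, c in enumerate(p)) for a, b in intervals]
assert recover(intervals, data) == p
```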