172,080 | <p>Here is a fun integral I am trying to evaluate:</p>
<p>$$\int_{0}^{\infty}\frac{\sin^{2n+1}(x)}{x} \ dx=\frac{\pi \binom{2n}{n}}{2^{2n+1}}.$$</p>
<p>I thought about integrating by parts $2n$ times and then using the binomial theorem for $\sin(x)$, that is, using the $\dfrac{e^{ix}-e^{-ix}}{2i}$ form in the binomial series.</p>
<p>But, I am having a rough time getting it set up correctly. Then, again, there is probably a better approach. </p>
<p>$$\frac{1}{(2n)!}\int_{0}^{\infty}\frac{1}{(2i)^{2n}}\sum_{k=0}^{n}(-1)^{2n+1-k}\binom{2n}{k}\frac{d^{2n}}{dx^{2n}}(e^{i(2k-2n-1)x})\frac{dx}{x^{1-2n}}$$</p>
<p>or something like that. I doubt that is anywhere close, but is my initial idea of using the binomial series for $\sin$ valid, or is there a better way?</p>
<p>Thanks everyone.</p>
| Graham Hesketh | 66,912 | <p>One more just for luck... </p>
<p>Use the evenness of the integrand, the binomial expansion of $\sin(x)^{2n}$ in terms of exponentials, and the Fourier transform representation of the <a href="http://en.wikipedia.org/wiki/Rectangular_function">rectangular function</a> and you have:</p>
<p>\begin{aligned}
\frac{1}{2}\int _{-\infty}^{\infty }\!{\frac { \sin \left( x \right) ^{
2\,n+1}}{x}}{dx}&=\frac{1}{{2}^{2n+1}}\sum _{k=0}^{2\,n} {2\,n\choose k} \left( -1
\right) ^{n-k}\int _{-\infty }^{\infty }\!{\frac {\sin \left( x
\right) {{\rm e}^{-2ix \left( n-k \right) }}}{x}}{dx}\\
&=\frac {\pi
}{{2}^{2n+1}}\sum _{k=0}^{2\,n}{2\,n\choose k} \left( -1 \right) ^{n-k}
\cases{1 &$ \left| n-k \right| <1/2$\cr 1/2 &$ \left| n-k \right| =1/2$\cr 0&$ \left| n-k \right|>1/2 $\cr}\\
&=\frac{\pi}{{2}^{2n+1}}{2\,n\choose n}
\end{aligned}
The rectangular function advantageously shows us that the only non-zero-weighted term in the sum is the $k=n$ term and we are spared any further manipulation or evaluation of sums.</p>
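<p>The collapse of the sum can be machine-checked (an added verification sketch): the rectangular-function weights kill every term except $k=n$.</p>

```python
from math import comb

def rect_weight(m):
    # Value of the rectangular function at m: 1 inside |m| < 1/2,
    # 1/2 on the boundary, 0 outside.
    a = abs(m)
    if a < 0.5:
        return 1.0
    if a == 0.5:
        return 0.5
    return 0.0

def weighted_sum(n):
    # The alternating binomial sum from the answer; only k = n survives.
    return sum(comb(2 * n, k) * (-1) ** (n - k) * rect_weight(n - k)
               for k in range(2 * n + 1))

for n in range(7):
    assert weighted_sum(n) == comb(2 * n, n)
```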
|
159,761 | <p>I have two lists:</p>
<pre><code>list1 = {"a", "b"};
list2 = {{{1, 2}, {3, 4}}, {{1, 2}}};
</code></pre>
<p>My goal is to create a new list which would be:</p>
<pre><code>{"a u 1:2","a u 3:4","b u 1:2"}
</code></pre>
<p>In other words, the first element in <code>list1</code> would be distributed before each subelement of the first element in <code>list2</code>, etc.</p>
<p>There are some answers using <code>MapThread</code>, e.g. <a href="https://mathematica.stackexchange.com/questions/79943/add-elements-of-one-list-to-sublists-of-another-list">here</a>. But that is not satisfactory; actually, it does not work. Just try, e.g.:</p>
<pre><code>subl = {{{1, 2}, {3, 4}}, {{5, 6}}};
list = {11, 12};
MapThread[Append, {subl, list}]
</code></pre>
<p>As it returns:</p>
<pre><code>{{{1, 2}, {3, 4}, 11}, {{5, 6}, 12}}
</code></pre>
<p>while the result I am seeking should look like:</p>
<pre><code>{{{1,2,11},{3,4,11}},{{5,6,12}}}
</code></pre>
<p>And level specification returns errors:</p>
<pre><code>MapThread::mptd: Object {{{1,2},{3,4}},{{5,6}}} at position {2, 1} in MapThread[Append,{{{{1,2},{3,4}},{{5,6}}},{11,12}},2] has only 1 of required 2 dimensions.
</code></pre>
<p>or</p>
<pre><code>MapThread::intnm: Non-negative machine-sized integer expected at position 3 in MapThread[Append,{{{{1,2},{3,4}},{{5,6}}},{11,12}},{2}].
</code></pre>
<p>Thus I do not think this is a duplicate; at least, I have not found an answer that works in this case.</p>
<p>I have:</p>
<pre><code>Map[Function[u,
StringRiffle[ToString /@ u, {"u ", ":", ""}]], list2, {2}]
</code></pre>
<p>Producing</p>
<pre><code>{{"u 1:2", "u 3:4"}, {"u 1:2"}}
</code></pre>
<p>so I had thought simply:</p>
<pre><code>MapThread[
StringJoin[#2, #1] &, {Map[
Function[u, StringRiffle[ToString /@ u, {"u ", ":", ""}]],
list2, {2}], list1},{2}]
</code></pre>
<p>but that gives an error, and </p>
<pre><code>MapThread[
StringJoin[#2, #1] &, {Map[
Function[u, StringRiffle[ToString /@ u, {"u ", ":", ""}]],
list2, {2}], list1}]
</code></pre>
<p>on the first level gives:</p>
<pre><code>{"a u 1:2u 3:4", "b u 1:2"}
</code></pre>
<p>I tried to repartition the lists so that they are similar in size but that did not work. The solution that works is:</p>
<pre><code>listC = Map[Function[u, StringRiffle[ToString /@ u, {"u ", ":", ""}]],
list2, {2}]
MapThread[Function[{u, v}, StringJoin[u, #] & /@ v], {list1, listC}]
</code></pre>
<p>But I do not like it due to the <code>/@v</code> part. I would really like to find a general solution to this problem: redistribute (prepend, append, join strings) elements of one list across an arbitrary dimension of another list, which my solution does not permit; it exploits the fact that in this particular case I only need to apply the function one level deeper.</p>
| user1066 | 106 | <p>For the second part of your question, <code>ArrayFlatten</code> and <code>Thread</code> may be combined:</p>
<pre><code>ArrayFlatten[{#}] & /@ Thread[{list1, list2}]
</code></pre>
<blockquote>
<p>{{{a, 1, 2}, {a, 3, 4}}, {{b, 1, 2}}}</p>
</blockquote>
<p>But perhaps more useful is simply the following: </p>
<pre><code>ArrayFlatten@Thread[{list1, list2}]
</code></pre>
<blockquote>
<p>{{a, 1, 2}, {a, 3, 4}, {b, 1, 2}}</p>
</blockquote>
<p>And, after WReach's <a href="https://mathematica.stackexchange.com/a/159769/106">answer</a>:</p>
<pre><code>StringRiffle[{#1, " u ", #2, ":", #3}, ""] & @@@ArrayFlatten@Thread[{list1, list2}]
</code></pre>
<blockquote>
<p>{"a u 1:2", "a u 3:4", "b u 1:2"}</p>
</blockquote>
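<p>For readers outside the Wolfram Language, the desired redistribution is a flat zip-then-map; a Python translation of the same idea (an illustrative sketch, not part of the original answer):</p>

```python
list1 = ["a", "b"]
list2 = [[[1, 2], [3, 4]], [[1, 2]]]

# Pair each label with its block of sublists, then flatten while
# prepending the label to every "u a:b" fragment.
result = [f"{label} u {a}:{b}"
          for label, block in zip(list1, list2)
          for a, b in block]
# result == ["a u 1:2", "a u 3:4", "b u 1:2"]
```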
|
1,896,024 | <p><span class="math-container">$f(n) = 2n^2 + n$</span></p>
<p><span class="math-container">$g(n) = O(n^2)$</span></p>
<p>The question is to find the mistake in the following process:</p>
<blockquote>
<p><span class="math-container">$f(n) = O(n^2) + O(n)$</span></p>
<p><span class="math-container">$f(n) - g(n) = O(n^2) + O(n) - O(n^2)$</span></p>
<p><span class="math-container">$f(n)-g(n) = O(n)$</span></p>
</blockquote>
<p>From how I understand it, Big-Oh represents an upper bound on the number of operations (when <span class="math-container">$n$</span> tends to a very large value). So, the difference between an order of <span class="math-container">$n^2$</span> minus an order of <span class="math-container">$n^2$</span> should be negligible if <span class="math-container">$n$</span> is very large.</p>
<p>But the individual steps seem correct. It seems to me that the mistake is that, when subtracting, the <span class="math-container">$O(n)$</span> term also gets absorbed.</p>
<p>I need clarification on whether I'm correct. If I'm not, then where is the mistake?</p>
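<p>The mistake can be made concrete with a specific witness (an added sketch): take $g(n)=n^2$, which certainly is $O(n^2)$; then $f(n)-g(n)=n^2+n$, which is quadratic rather than $O(n)$. The point is that $O(n^2)-O(n^2)$ is again $O(n^2)$, not $0$.</p>

```python
def f(n):
    return 2 * n**2 + n

def g(n):
    # one perfectly valid choice of a function that is O(n^2)
    return n**2

# (f - g)(n) = n^2 + n: the ratio against n grows without bound,
# so f(n) - g(n) is not O(n) for this choice of g.
for n in [10, 100, 1000, 10**6]:
    assert (f(n) - g(n)) // n == n + 1
```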
| Mithlesh Upadhyay | 234,055 | <p>According to the definition of big-O notation:</p>
<p>$f(x)=O(g(x))$ as $x\to a$
if and only if</p>
<p>${\displaystyle \limsup _{x\to a}\left|{\frac {f(x)}{g(x)}}\right|<\infty }$</p>
<p>$\lim_{x\to\infty}\left|{\cfrac {f(n)-g(n)}{n^2}}\right|=0$ </p>
<p>If $g(x)$ is nonzero, or at least becomes nonzero beyond a certain point, the relation $f(x) = o(g(x))$ is equivalent to</p>
<p>$\lim _{x\to \infty }{\cfrac {f(x)}{g(x)}}=0$.</p>
<hr>
<p>Do you know that,</p>
<p>$f(n)+g(n)=(2n^2+n)+O(n^2)=\max((2n^2+n), O(n^2))=O(n^2)$</p>
<p>Can we conclude that:</p>
<p>$f(n)=g(n)+O(n^2)=O(n^2)$</p>
<p>$f(n)-g(n)=O(n^2)$</p>
|
1,077,594 | <p>Let $C[a,b]$ be the space of continuous functions on $[a,b]$ with the norm
$$
\left\Vert{f}\right\Vert=\max_{a \leq t \leq b}\left| f(t)\right|
$$</p>
<p>Then $C[a,b]$ is a Banach space. </p>
<p>Let's view $C^1[a,b]$ as a subspace of it. My question is, is this $C^1[a,b]$ a Banach space?</p>
<p>I think it is, since every Cauchy sequence $\{f_n\}$ in $C^1[a,b]$ is also a Cauchy sequence in $C[a,b]$, so it converges to a function $f$ in $C[a,b]$. But convergence in $C[a,b]$ is uniform, so $f$ is in $C[a,b]$ too, from which it follows that $C^1[a,b]$ is complete, i.e. a Banach space.</p>
<p>However, I just read a theorem named <a href="http://en.wikipedia.org/wiki/Closed_graph_theorem" rel="nofollow">Closed Graph Theorem</a>, stating that</p>
<blockquote>
<p>(Closed Graph Theorem) Let $X$ and $Y$ be two Banach space, and $T$ a closed linear operator from $A\subset X$ to $Y$. If $A$ is closed in $X$, then $T$ is continuous.</p>
</blockquote>
<p>Apply this theorem to the above case, let $X=C^1[a,b]$, $Y=C[a,b]$ and $T=\frac{d}{dt}$ from $X$ to $Y$. We can prove that $T$ is a closed linear operator. Note that $X$ is closed in $X$, so by the above theorem $T$ is continuous.</p>
<p>However, it is easy to prove that the differential operator is NOT continuous.</p>
<p>I am sure the Closed Graph Theorem and the last statement are true, so I think $C^1[a,b]$ is not Banach.</p>
<p>Could anyone tell me why?</p>
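<p>The step "so $f$ is in $C^1[a,b]$" is where the completeness argument fails: the uniform limit of $C^1$ functions need only be continuous. A standard counterexample (an added illustration, not from the thread): $f_n(x)=\sqrt{x^2+1/n}$ is $C^1$ on $[-1,1]$ and converges uniformly to $|x|$, which is not differentiable at $0$.</p>

```python
import math

def f(n, x):
    # Smooth approximations to |x|; each f(n, .) is C^1 on [-1, 1].
    return math.sqrt(x * x + 1.0 / n)

xs = [i / 1000.0 for i in range(-1000, 1001)]
for n in [10, 100, 1000]:
    sup_dist = max(abs(f(n, x) - abs(x)) for x in xs)
    # sup-norm distance to |x| is sqrt(1/n), attained at x = 0,
    # so the sequence is uniformly Cauchy yet leaves C^1.
    assert abs(sup_dist - math.sqrt(1.0 / n)) < 1e-9
```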
| sabachir | 201,840 | <p>we use $$f\left( z \right) = \frac{{e^{iz} }}{{1 + z^2 }}$$</p>
|
1,743,935 | <p>Not sure if I have done this correctly, seems too straight forward, any help is very appreciated. </p>
<blockquote>
<p>QUESTION:<br>
Find the real and imaginary parts of $f(z) = \cos(z)$.</p>
</blockquote>
<p>ATTEMPT:<br>
$\cos(z) = \cos(x+iy) = \cos x\cos(iy) − \sin x\sin(iy) =
\cos x\cosh y − i\sin x\sinh y$</p>
<p>Is that correct? </p>
| egreg | 62,967 | <p>By definition,
$$
\cos z=\frac{e^{iz}+e^{-iz}}{2},\qquad
\sin z=\frac{e^{iz}-e^{-iz}}{2i}
$$
In particular, for real $y$,
$$
\cos(iy)=\frac{e^{-y}+e^{y}}{2}=\cosh y
$$
and
$$
\sin(iy)=\frac{e^{-y}-e^{y}}{2i}=i\frac{e^{y}-e^{-y}}{2}=i\sinh y
$$</p>
<p>So, yes, you're correct.</p>
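<p>The identity can be spot-checked numerically with Python's <code>cmath</code> (an added verification sketch):</p>

```python
import cmath
import math

# cos(x + iy) should equal cos(x)cosh(y) - i sin(x)sinh(y)
for x, y in [(0.3, 0.7), (-1.2, 2.5), (2.0, -0.4)]:
    z = complex(x, y)
    expected = complex(math.cos(x) * math.cosh(y),
                       -math.sin(x) * math.sinh(y))
    assert cmath.isclose(cmath.cos(z), expected, rel_tol=1e-12)
```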
|
1,743,935 | <p>Not sure if I have done this correctly, seems too straight forward, any help is very appreciated. </p>
<blockquote>
<p>QUESTION:<br>
Find the real and imaginary parts of $f(z) = \cos(z)$.</p>
</blockquote>
<p>ATTEMPT:<br>
$\cos(z) = \cos(x+iy) = \cos x\cos(iy) − \sin x\sin(iy) =
\cos x\cosh y − i\sin x\sinh y$</p>
<p>Is that correct? </p>
| Community | -1 | <p>Using the exponential definition of the cosine,</p>
<p>$$2\cos(z)=e^{iz}+e^{-iz}=e^{-y+ix}+e^{y-ix}\\
=e^{-y}(\cos(x)+i\sin(x))+e^{y}(\cos(x)-i\sin(x))\\
=(e^y+e^{-y})\cos(x)-i(e^y-e^{-y})\sin(x).$$</p>
|
1,219,129 | <p>For any vector space $V$ over $\mathbb{C}$, let $X$ be a set whose cardinality is the dimension of $V$. Then $V \cong \bigoplus\limits_{i \in X} \mathbb{C}$ as vector spaces.</p>
<p>Is there a similar description of arbitrary Hilbert spaces? Is there something they all "look" like?</p>
| Tomasz Kania | 17,929 | <p>Every Hilbert space is isometrically isomorphic to $\ell_2(\Gamma)$ for some set $\Gamma$. This follows directly from <a href="http://en.wikipedia.org/wiki/Parseval%27s_identity" rel="nofollow">Parseval's identity</a>.</p>
|
3,073,832 | <p>I need to understand the meaning of this mathematical concept: "undecided/undecidable". </p>
<p>I know what it means in the English dictionary. But, I don't know what it means mathematically.</p>
<p>If you answer this question with possible mathematical examples, it will be very helpful for understanding this issue.</p>
<p>Thank you very much!</p>
| hunter | 108,129 | <p>Given a set of axioms, a statement is undecidable if neither it nor its negation follow from the axioms.</p>
<p>Example:
If your only axiom is:
<span class="math-container">$$
\forall z \forall x \forall y \ (y=x) \vee(y=z)\vee (x=z)
$$</span>
(in English, "for any three things, two of them are equal")</p>
<p>then it is undecidable whether</p>
<p><span class="math-container">$$
\forall x \forall y (x=y)
$$</span>
(English, "there is only one thing.")</p>
<p>By contrast, it is decidable (and false) that
<span class="math-container">$$
\exists x \exists y \exists z (x \neq y) \wedge (x\neq z) \wedge (y\neq z).
$$</span>
(English, "there exist three distinct things.")</p>
|
3,073,832 | <p>I need to understand the meaning of this mathematical concept: "undecided/undecidable". </p>
<p>I know what it means in the English dictionary. But, I don't know what it means mathematically.</p>
<p>If you answer this question with possible mathematical examples, it will be very helpful for understanding this issue.</p>
<p>Thank you very much!</p>
| user3482749 | 226,174 | <p>A statement <span class="math-container">$P$</span> is undecidable in a theory <span class="math-container">$T$</span> if <span class="math-container">$T \cup \{P\}$</span> and <span class="math-container">$T \cup \{\neg P\}$</span> are both consistent. In practice, it's usually used more broadly: <span class="math-container">$P$</span> is undecidable in <span class="math-container">$T$</span> if <span class="math-container">$T \cup\{P\}$</span> and <span class="math-container">$T\cup\{\neg P\}$</span> are equiconsistent, because Gödel's Second Incompleteness Theorem is annoying that way: specifically, for most systems that we're interested in, we can't prove that the system is consistent (or, more precisely, we can't prove it <em>in that system</em> unless the system is inconsistent). Thus, rather than requiring two systems to be consistent (which we can't prove), we instead require that at least there's no consistency reason to prefer one over the other. </p>
|
1,954,411 | <p>Let $N>0$ be a large integer, and $n<N$. How can one simplify the following sum?
$$\sum\limits_{k=1}^n\frac{N-n+k}{(N-k+1)(N-k+1)(N-k)}.$$
Thank you very much, guys.</p>
<p>Actually, for another similar sum $\sum\limits_{k=1}^n\frac{1}{(N-k+1)(N-k)}=\sum\limits_{k=1}^n\frac{1}{N-k}-\frac{1}{N-k+1}=\frac{1}{N-n}-\frac{1}{N}$, I know the trick. But after including an extra factor such as $\frac{N-n+k}{N-k+1}$, it becomes difficult. </p>
<p>So, thanks a million for any clue.</p>
| Claude Leibovici | 82,404 | <p><em>I am not sure that you will like it.</em></p>
<p>$$S_n=\sum\limits_{k=1}^n\frac{N-n+k}{(N-k+1)^2(N-k)}$$ $$S_n=\frac{n (n-2 N)}{N (n-N)}+(n-2 N-1)\, \big(\psi ^{(1)}(-N)-\psi ^{(1)}(n-N)\big)$$ where appears the first derivative of <a href="https://en.wikipedia.org/wiki/Digamma_function" rel="nofollow">the digamma function</a>. I do not think that this could be further simplified. The trouble is that $\psi ^{(1)}(m)$ is undefined for $m\leq 0$.</p>
<p>May be, you could prefer the following. Considering for large values of $N$ $$\frac{N-n+k}{(N-k+1)^2(N-k)}=\left(\frac{1}{N}\right)^2+\frac{4 k-n-2}{N^3}+\frac{9 k^2-3 k n-10 k+2
n+3}{N^4}+\frac{16 k^3-6 k^2 n-28 k^2+8 k n+18 k-3 n-4}{N^5}+\frac{25 k^4-10 k^3
n-60 k^3+20 k^2 n+60 k^2-15 k n-28 k+4
n+5}{N^6}+O\left(\frac{1}{N^7}\right)$$ and now summing from $k=1$ to $k=n$, we should get, as an <strong>approximation</strong>,
$$S_n=\frac{n \left(N \left(6 N^3-3 N+2\right)+1\right)}{6 N^6}+\frac{n^2 \left(6 N^3-6
N+5\right)}{6 N^6}+\frac{n^3 (N (9 N-2)-10)}{6 N^6}+\frac{n^4 (12 N-5)}{6
N^6}+\frac{5 n^5}{2 N^6}$$</p>
<p>For sure, we could add more terms for higher accuracy. For illustration purposes, I used $N=1000$ and varied $n$. The following table reports the decimal values of the exact sum and of the ugly approximation.
$$\left(
\begin{array}{ccc}
n & \text{exact} & \text{approximation} \\
50 & 0.00005270 & 0.00005270 \\
100 & 0.00011173 & 0.00011173 \\
150 & 0.00017880 & 0.00017876 \\
200 & 0.00025625 & 0.00025600 \\
250 & 0.00034721 & 0.00034618 \\
300 & 0.00045610 & 0.00045276 \\
350 & 0.00058916 & 0.00057993 \\
400 & 0.00075548 & 0.00073276 \\
450 & 0.00096866 & 0.00091727 \\
500 & 0.00124975 & 0.00114053
\end{array}
\right)$$</p>
|
1,954,411 | <p>Let $N>0$ be a large integer, and $n<N$. How can one simplify the following sum?
$$\sum\limits_{k=1}^n\frac{N-n+k}{(N-k+1)(N-k+1)(N-k)}.$$
Thank you very much, guys.</p>
<p>Actually, for another similar sum $\sum\limits_{k=1}^n\frac{1}{(N-k+1)(N-k)}=\sum\limits_{k=1}^n\frac{1}{N-k}-\frac{1}{N-k+1}=\frac{1}{N-n}-\frac{1}{N}$, I know the trick. But after including an extra factor such as $\frac{N-n+k}{N-k+1}$, it becomes difficult. </p>
<p>So, thanks a million for any clue.</p>
| user90369 | 332,823 | <p>With the same method which you have used above you get </p>
<p>$$\sum\limits_{k=1}^n \frac{N-n+k}{(N-k+1)^2(N-k)}=\frac{n(2N-n)}{N(N-n)}-(2N+1-n)\sum\limits_{k=1}^n \frac{1}{( N-k+1)^2}$$</p>
<p><em>Hints</em>:</p>
<p>$\enspace N-n+k=(2N+1-n)-(N-k+1)$</p>
<p>$\enspace \displaystyle \frac{1}{(N-k+1)^2(N-k)}=\frac{1}{N-k}-\frac{1}{N-k+1}-\frac{1}{(N-k+1)^2} $</p>
<p>The closed form for $\sum\limits_{k=1}^n \frac{1}{( N-k+1)^2}$ is:</p>
<p>$$\sum\limits_{k=1}^n \frac{1}{( N-k+1)^2}= \sum\limits_{k=1}^N \frac{1}{k^2} -\sum\limits_{k=1}^{N-n} \frac{1}{k^2}=$$$$((\frac{1}{N!} \begin{bmatrix} N+1 \\ 2 \end{bmatrix})^2-\frac{2}{N!} \begin{bmatrix} N+1 \\3 \end{bmatrix})-((\frac{1}{(N-n)!} \begin{bmatrix} N-n+1 \\ 2 \end{bmatrix})^2-\frac{2}{(N-n)!} \begin{bmatrix} N-n+1 \\3 \end{bmatrix}) $$</p>
<p>where $\begin{bmatrix} n \\ k \end{bmatrix}$ is called <em>unsigned Stirling number of the first kind</em> . </p>
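<p>The identity in the first display can be verified exactly with rational arithmetic (an added verification sketch):</p>

```python
from fractions import Fraction as F

def exact_sum(N, n):
    # the original sum, term by term, with exact rationals
    return sum(F(N - n + k, (N - k + 1) ** 2 * (N - k))
               for k in range(1, n + 1))

def closed_form(N, n):
    # the answer's reduction to a single reciprocal-square tail
    tail = sum(F(1, (N - k + 1) ** 2) for k in range(1, n + 1))
    return F(n * (2 * N - n), N * (N - n)) - (2 * N + 1 - n) * tail

for N in [5, 17, 50]:
    for n in range(1, N):
        assert exact_sum(N, n) == closed_form(N, n)
```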
|
4,307,016 | <p>Explore convergence of <span class="math-container">$\sum_{n=3}^{\infty}\frac{1}{n\ln n(\ln \ln n)^\alpha}$</span></p>
<p>Tried to use Cauchy integral test,so we need to find</p>
<p><span class="math-container">$$\int_{3}^\infty\frac{dx}{x\ln x(\ln \ln x)^\alpha}=\int_{\ln 3}^{\infty}\frac{dz}{z(\ln z)^\alpha}= \int_{\ln (\ln 3)}^{\infty}\frac{du}{(u^\alpha)}$$</span></p>
<p>and stuck here. How continue from here?</p>
<p>I know that <span class="math-container">$\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^\alpha}$</span> converges when <span class="math-container">$\alpha>1$</span> and diverges when <span class="math-container">$\alpha\leq1$</span></p>
<p>but here we start from <span class="math-container">$\ln (\ln 3)$</span>, which is not a natural number. Can we say the same thing here, and if so, why?</p>
| Botnakov N. | 452,350 | <p>You already almost solved the problem.</p>
<p>If <span class="math-container">$\alpha \ne 1$</span> we have
<span class="math-container">$$\int_{\ln (\ln 3)}^{\infty}\frac{du}{(u^\alpha)} = \int_{\ln (\ln 3)}^{\infty} u^{-\alpha} du = \frac{u^{-\alpha+1}}{-\alpha+1} \bigg|_{\ln (\ln 3)}^{\infty} $$</span>
It's finite iff <span class="math-container">$-\alpha+1 < 0$</span>.</p>
<p>If <span class="math-container">$\alpha = 1$</span> we have
<span class="math-container">$$\int_{\ln (\ln 3)}^{\infty}\frac{du}{u} = \ln u \bigg|_{\ln (\ln 3)}^{\infty} = +\infty.$$</span></p>
|
117,608 | <p>We know that if $G$ is a simple group with $p+1$ Sylow $p$-subgroups, then $G$ is 2-transitive. Now let $G$ be almost simple group with $p+1$ Sylow $p$-subgroups. Is $G$ 2-transitive group?</p>
| Geoff Robinson | 14,450 | <p>I think there is a direct argument. Let $M$ be the unique minimal normal subgroup of $G,$
which is non-Abelian simple. Then $M$ must act faithfully by conjugation
on the $(p+1)$ Sylow $p$-subgroups of $G$- otherwise, $M$
has a normal Sylow $p$-subgroup, which must then be trivial.
But even then, $M$ must normalize, and hence centralize, a Sylow $p$-subgroup $P$
of $G$, as $M$ and $P$ normalize each other and have trivial intersection.
Then $P$ is contained in $C_G(M)=1,$ a contradiction.
Thus $G$ is isomorphic to a subgroup the symmetric
group of degree $p+1$ and a Sylow $p$- subgroup
of $G,$ say $P,$ has order $p.$ Now $P$ fixes no other Sylow $p$- subgroup
of $G$ in the conjugation action, so permutes the remaining
$p$ such subgroups in one orbit of length $p.$ Hence $G$
is doubly transitive. </p>
<p>Later addition: Let me try to address more precisely Mart's question in the comments:
the argument is less elementary, but still avoids the classification of finite simple
groups. Let me retain my notation of $M$ for the unique minimal normal subgroup of $G,$ (called $S$ by Derek and Mart) and let $P$ be a Sylow $p$-subgroup of $G,$ which has order $p,$ as we have seen already.
The key point I will use is a Theorem of Feit and Thompson (Nagoya J. Math ~1963), which built on an earlier result of Brauer: the combined result asserts that if $X$ is a finite irreducible subgroup of ${\rm GL}(n,\mathbb{C})$ for some $n \leq \frac{p-1}{2},$ where $p$ is a prime, then either $X$ has a normal Sylow $p$-subgroup, or $X/Z(X) \cong {\rm PSL}(2,p).$</p>
<p>Our group $G$ has a transitive faithful permutation action on $p+1$ points, affording a permutation character $\chi,$ say. We are assuming that $M$ has order prime to $p,$ and aiming to derive a contradiction. The orbits of $M$ all have equal length, and are permuted by $G$. If $M$ has two or more orbits, then ${\rm Res}^{G}_{M}(\chi)$ has at least two trivial constituents, and $M$ has a faithful irreducible character of degree at most $\frac{p-1}{2},$ which extends irreducibly to $MP$ (it can't induce irreducibly by degree considerations). Now $P$ is not normal in $MP$ as $[M,P] \neq 1.$ But $MP/Z(MP)$ is not isomorphic to ${\rm PSL}(2,p),$ since ${\rm PSL}(2,p)$ has no normal $p$-complement, while $MP$ does have a normal $p$-complement (note that we do need $p >3$ here, but $S_{4}$ is solvable and $G$ is not, so we do indeed have $ p >3$). This contradicts the result of Brauer, Feit and Thompson. Hence $M$ is transitive.
Now $M$ is not doubly transitive, as $p$ does not divide $|M|,$ so that ${\rm Res}^{G}_{M}(\chi)$ is a sum of at least $3$ irreducible characters (allowing multiplicities). However, the trivial character only occurs once, and $M$ has no other linear character. Hence ${\rm Res}^{G}_{M}(\chi)$ has a faithful irreducible constituent $\mu$ of degree at most $\frac{p-1}{2},$ which once more extends irreducibly to $MP,$ and we obtain the same contradiction as above.</p>
<p>Third edit: Actually, there is a simpler argument using less sophisticated representation theory to obtain $p$ divides $|M|$. Suppose otherwise, and retain the notation above. Note that ${\rm Res}^{G}_{MP}(\chi)$ can't have an irreducible constituent of degree $p$ (but does have a trivial constituent): for if $\mu$ were such constituent, then Clifford's theorem would force $\mu$ to restrict to a sum of non-trivial linear characters of $M$, contrary to the fact that $M$ is perfect. Hence $MP$ has a non-trivial complex irreducible character $\theta$ say, of degree less than $p$ (and $\theta$ is faithful using the simplicity of $M$). Let $r$ be an odd prime divisor of $|M|$, and let $R$ be a $P$-invariant Sylow $r$-subgroup of $M$ (which exists). Then by the theorem of Hall-Higman-Shult, we have $[M,R] \leq {\rm ker} \theta = 1.$ Let $Q$ be a $P$-invariant Sylow $2$-subgroup of $M$. Then as $r$ was arbitrary, we have $M = QC_{M}(P).$ Hence $[M,P] \leq Q.$
But $[M,P] \lhd M$ and $M$ is non-Abelian simple, so $[M,P] = 1$ and $P \leq C_{G}(M) = 1,$ a contradiction.</p>
|
1,134,854 | <blockquote>
<p>In complex analysis, let $a, b>0$ in $\mathbb R$, $f(s)=\int^{b}_{a}1/t^s dt$, then $f$ is holomorphic for $Re(s)>0$.</p>
</blockquote>
<p>If $s\neq 1$, then $f(s)=\frac{a^{1-s}}{(1-s)}-\frac{b^{1-s}}{(1-s)}$, but if $s=1$, then $f(s)=\ln\big(\frac{b}{a}\big)$; they seem quite different in form, so how does one prove that $f$ is holomorphic?</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>For problems like this, you can use <a href="http://en.wikipedia.org/wiki/Morera%27s_theorem" rel="nofollow">Morera's theorem</a> (and <a href="http://en.wikipedia.org/wiki/Fubini%27s_theorem" rel="nofollow">Fubini</a>).</p>
<p>For your particular case, check that the discontinuity of $\frac{a^{1-s}}{(1-s)}-\frac{b^{1-s}}{(1-s)}$ at $s=1$ is <a href="http://en.wikipedia.org/wiki/Removable_singularity" rel="nofollow">removable</a> and $\lim_{s\to 1}\cdots=\ln(b/a)$.</p>
|
416,153 | <p>Show that $x^2+y^2=p$ has a solution in $\mathbb{Z}$ if and only if $p\equiv 1 \pmod 4$, where $p$ is an odd prime. Thanks if someone can help.</p>
| DonAntonio | 31,254 | <p>Hints: </p>
<p>$$\begin{align*}\bullet&\;\;\;\Bbb Z_p:=\Bbb Z/p\Bbb Z\;\;\text{is a field whenever $\;p\;$ is a prime}\\
\bullet&\;\;\;\text{Doing arithmetic modulo $\;p\;$ :}\;x^2+y^2=p=0\;\wedge\;\;xy\neq 0\iff\left(\frac xy\right)^2=-1\\
\bullet&\;\;\;\left|\;\Bbb Z_p^*\;\right|=p-1\\{}\\
\bullet&\;\;\;\exists\; a\in \Bbb Z_p^*\;\;s.t.\;\;a^2=-1\iff p=1\pmod 4\end{align*}$$</p>
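<p>A brute-force check of the claim for odd primes (an added verification sketch; note the statement concerns odd primes, since $2=1^2+1^2$ while $2\not\equiv 1 \pmod 4$):</p>

```python
from math import isqrt

def is_prime(m):
    # trial division, adequate for small ranges
    return m >= 2 and all(m % d for d in range(2, isqrt(m) + 1))

def is_sum_of_two_squares(p):
    # does p = a^2 + b^2 have a solution with a, b >= 1?
    return any(isqrt(p - a * a) ** 2 == p - a * a
               for a in range(1, isqrt(p) + 1))

for p in range(3, 500, 2):
    if is_prime(p):
        assert (p % 4 == 1) == is_sum_of_two_squares(p)
```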
|
187,974 | <p>If $ \cot a + \frac 1 {\cot a} = 1 $, then what is $ \cot^2 a + \frac 1{\cot^2 a}$? </p>
<p>the answer is given as $-1$ in my book, but how do you arrive at this conclusion?</p>
| N. S. | 9,176 | <p><strong>Hint</strong> What do you get if you square $\cot(a)+\frac{1}{\cot(a)}$?</p>
|
914,936 | <p>Does anyone know where I can find the posthumously published (I think) chapter 8 of Gauss's Disquisitiones Arithmaticae?</p>
| Community | -1 | <p>Maser's 1889 German translation has on the title page: "the last third of the text contains Gauss's published papers on The Theory of Numbers, followed by his posthumous writings on that subject". It's about 230 pp., tacked onto books I-VII of Disquisitiones. It's available as a reprint (mine is Chelsea press). I don't know if this is considered to be the missing book or not ... is there consensus about what the proper content of book VIII would have been?</p>
|
3,393,244 | <p>My homework is to transform this formula </p>
<p><span class="math-container">$$(A \wedge \neg B) \wedge (A \vee \neg C)$$</span> into this equivalent form: <span class="math-container">$A \wedge \neg B$</span>. Do you have any ideas?</p>
| J.G. | 56,861 | <p>Note that <span class="math-container">$A\land\neg B\to A$</span> and <span class="math-container">$A\to A\lor\neg C$</span>, so your original statement is equivalent to <span class="math-container">$A\land\neg B$</span> by repeated use of <span class="math-container">$(p\to q)\to((p\land q)\equiv p)$</span>.</p>
|
1,943,328 | <p>I know about $S_n$, $D_n$ and $A_n$. And from my limited understanding there seem to be many more. I would like to know whether there is some kind of relation that links a small set of non Abelian groups to create the other ones. Something like with the Abelian groups and the Fundamental Theorem of Abelian Groups.</p>
| Mees de Vries | 75,429 | <p>The "official" answer is the <a href="https://en.wikipedia.org/wiki/Classification_of_finite_simple_groups">classification of simple finite groups</a>. In some sense, <a href="https://en.wikipedia.org/wiki/Composition_series">all finite groups are built from simple finite groups</a>, so understanding those is a great help in understanding all finite groups.</p>
<p>However, this is much less tangible and accessible than the classification of finite abelian groups. Perhaps more useful for a beginner is <a href="https://en.wikipedia.org/wiki/Cayley%27s_theorem">Cayley's theorem</a>, which states that every group is isomorphic to a subgroup of $S_n$ for some $n$. Thus, if you understand all subgroups of $S_n$, you understand all finite groups.</p>
<p>In general, your question "ought" to be difficult to answer; finite groups are very complex objects (as opposed to e.g. finite dimensional vector spaces), and the fact that abelian finite groups are so "easy" to understand tells you that this complexity lies in the non-abelian groups.</p>
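<p>A toy illustration of Cayley's theorem (an added sketch with a hypothetical choice of group): embed $\mathbb{Z}_4$ into $S_4$ by letting each element act by left translation.</p>

```python
from itertools import product

# Cayley embedding: element g maps to the permutation h -> g + h (mod 4).
elements = [0, 1, 2, 3]

def perm(g):
    return tuple((g + h) % 4 for h in elements)

perms = {g: perm(g) for g in elements}

def compose(p, q):
    # composition of permutations given as lookup tables
    return tuple(p[q[i]] for i in range(4))

# The map is a homomorphism: perm(g1 + g2) = perm(g1) o perm(g2)
for g1, g2 in product(elements, repeat=2):
    assert perms[(g1 + g2) % 4] == compose(perms[g1], perms[g2])
```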
|
557,543 | <p>Does there exists a positive decreasing sequence $\{a_i\}$ with $\sum_{i\in\mathbb{N}} a_i$ convergent, such that $\forall I\subset\mathbb{N},\sum_{i\in I}a_i$ is an irrational number?</p>
<p>Such an example would give rise to a <strong>closed perfect set containing no rationals</strong>. I can only do it for infinite $I$ (for example, let $a_i=10^{-p_i}$, where $p_i$ is the $i$th prime), but the set of infinite sums is not closed.</p>
| André Nicolas | 6,312 | <p>Let
$$a_n=\frac{\sqrt{2}}{10^{n!}}.$$
The sum of any finite (non-zero!) number of the $a_i$ is irrational. The sum of an infinite number of the $a_i$ is transcendental, since $\sum_{i=1}^\infty \frac{1}{10^{n_i!}}$ is a <a href="http://en.wikipedia.org/wiki/Liouville_number">Liouville number.</a> </p>
|
793,693 | <p>Since I was interested in maths, I have a question. Is infinity a real or complex quantity? Or it isn't real or complex?</p>
| Jack M | 30,481 | <p>The question is a bit meaningless. "The infinite" is a philosophical concept. There are a <em>wide</em> variety of very different mathematical objects that are used to represent "the infinite", and now that we're in the realm of mathematics and not philosophy, I can make the concrete mathematical claim that <em>no</em>, those objects are neither real numbers nor complex numbers.</p>
<p>For a rundown on what different mathematical objects can represent infinity, I think the linked questions in Asaf's comment under your question are a fine place to start.</p>
|
1,946,824 | <p>In his book "Analysis 1", Terence Tao writes:</p>
<blockquote>
<p>A logical argument should not contain any ill-formed
statements, thus for instance if an argument uses a statement such
as x/y = z, it needs to first ensure that y is not equal to zero.
Many purported proofs of “0=1” or other false statements rely on
overlooking this “statements must be well-formed” criterion.</p>
</blockquote>
<p>Can you give an example of such a proof of "0=1"?</p>
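<p>A classic instance (an illustrative addition, not from Tao's text) divides by an expression that is secretly zero. Start from $a=b=1$:</p>
<p>$$\begin{align*}
a &= b \\
a^2 &= ab \\
a^2-b^2 &= ab-b^2 \\
(a+b)(a-b) &= b(a-b) \\
a+b &= b
\end{align*}$$</p>
<p>With $a=b=1$ the last line reads $2=1$, i.e. $1=0$. The ill-formed step is the passage from the fourth line to the fifth: it divides both sides by $a-b$, which equals $0$.</p>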
| Andreas Blass | 48,510 | <p>The essential fact needed here is that a recursive definition like that of $F_n$ can be replaced by an equivalent explicit definition. Specifically, the Fibonacci sequence $W=\{\langle k,F_k\rangle:k\in\mathbb N\}$ can be defined as the set of all those ordered pairs $\langle k,x\rangle$ such that there exists a function $f$ with the following properties: (1) the domain of $f$ is $\{0,1,\dots,k\}$ (known in set theory as $k+1$), (2) $f(0)=1$, (3) if $k\geq1$ then $f(1)=1$, (4) for all $j$ such that $2\leq j\leq k$, $f(j)=f(j-1)+f(j-2)$, and (5) $f(k)=x$. In view of this explicit definition, we can prove the existence of $W$ by applying the separation axiom (also called subset axiom, Aussonderung, and comprehension axiom) of ZF to the set $\mathbb N\times\mathbb N$. And once we have $W$, we get the desired $S$ as the range of $W$, i.e., as the set of second components of the ordered pairs in $W$.</p>
|
777,535 | <p>I need to find the full Taylor expansion of $$f(x)=\frac{1+x}{1-2x-x^2}$$</p>
<p>Any help would be appreciated. I'd prefer hints/advice before a full answer is given. I have tried partial fractions/reductions. I separated the two in hopes of finding a known geometric sum, but I could not.</p>
<p>Edit: I guess you could say that I did not have the... insight to take the path with the partial decomposition mentioned. I have done some work (I had to go to the gym, which is why it took a while):</p>
<p>$$\frac{1+x}{1-2x-x^2}=\frac{1}{2(\sqrt{2}-x-1)}-\frac{1}{2(\sqrt{2}+x+1)}$$ I am going to work with this to go further.</p>
<p>I got to this:</p>
<p>$$\frac{1}{2}\left(\sum_{n=0}^\infty\frac{x^n}{(\sqrt{2}-1)^{n+1}}+\sum_{n=0}^\infty\frac{x^n}{(-\sqrt{2}-1)^{n+1}}\right) $$ But I think this is wrong for some reason.</p>
<p>Edit: Figured it out.</p>
<p>$$\begin{align*}
\implies\frac{1+x}{1-2x-x^2}&=\frac{1}{2(\sqrt{2}-x-1)}-\frac{1}{2(\sqrt{2}+x+1)} \\[2mm]
&=\frac{1}{2}\left(\frac{1}{a-x}-\frac{1}{b+x}\right) \mbox{where $a=\sqrt{2}-1$ and $b=\sqrt{2}+1$}. \\[2mm]
&=\frac{1}{2}\left(\frac{1}{a} \frac{1}{1-\frac{x}{a}}-\frac{1}{b}
\frac{1}{1-\frac{x}{-b}}\right) \\[2mm]
&=\frac{1}{2}\left(\frac{1}{a}\sum_{n=0}^\infty \frac{1}{a^n}x^n-\frac{1}{b}\sum_{n=0}^\infty\frac{1}{(-b)^n}x^n\right) \\[2mm]
&=\frac{1}{2}\left(\frac{1}{\sqrt{2}-1}\sum_{n=0}^\infty \frac{1}{(\sqrt{2}-1)^n}x^n-\frac{1}{\sqrt{2}+1}\sum_{n=0}^\infty\frac{1}{(-\sqrt{2}-1)^n}x^n\right) \\[2mm]
&=\frac{1}{2}\left(\sum_{n=0}^\infty\frac{x^n}{(\sqrt{2}-1)^{n+1}}+\sum_{n=0}^\infty\frac{x^n}{(-\sqrt{2}-1)^{n+1}}\right) \\
&=1+3x+7x^2+17x^3+\ldots
\end{align*}$$</p>
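<p>The resulting coefficients $1, 3, 7, 17, \ldots$ can be cross-checked against the recurrence $c_n = 2c_{n-1} + c_{n-2}$, obtained by multiplying the series by $1-2x-x^2$ and matching coefficients (an added verification sketch):</p>

```python
import math

def series_coeffs(count):
    # From (1 - 2x - x^2) * sum(c_n x^n) = 1 + x:
    # c_0 = 1, c_1 = 3, and c_n = 2*c_{n-1} + c_{n-2} for n >= 2.
    c = [1, 3]
    while len(c) < count:
        c.append(2 * c[-1] + c[-2])
    return c[:count]

# compare with the closed form from the partial-fraction expansion
a, b = math.sqrt(2) - 1, math.sqrt(2) + 1
for n, c in enumerate(series_coeffs(20)):
    closed = 0.5 * (a ** -(n + 1) + (-b) ** -(n + 1))
    assert math.isclose(closed, c, rel_tol=1e-9)
```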
| Community | -1 | <p>The proof assumes that the dot product is linear, which is not trivial to prove without the standard algebraic definition.</p>
<p>The more straightforward proof would be as follows: Create a triangle with the two vectors $a$ and $b$ so that the third side is $a-b$. Define the dot product as $a\cdot b=a_1b_1+a_2b_2$. Then note that $$||x||^2=x_1^2+x_2^2=(x_1,x_2)\cdot (x_1,x_2)$$ so the magnitude squared of a vector equals the vector dotted with itself. Then by the law of cosines, letting $\theta$ denote the angle between $a$ and $b$ and recalling that $a-b$ is the side opposite $\theta$ we get $$||a-b||^2=||a||^2+||b||^2-2||a||\,||b||\cos\theta$$ Using the magnitude squared/dot product relationship above gives $$(a-b)\cdot (a-b)=a\cdot a+b\cdot b-2||a||\,||b||\cos\theta$$ Clearly the dot product is linear and symmetric by our algebraic definition, so the left side can be re-written as $$a\cdot a+b\cdot b-2a\cdot b=a\cdot a+b\cdot b-2||a||\,||b||\cos\theta$$ from which it follows that $$a\cdot b=||a||\,||b||\cos\theta$$ </p>
|
1,756,685 | <p>For natural numbers—that is, integers greater than or equal to 1—prove that: <br/>
$n^{2n+1}\ge(n+1)^{n+1}(n-1)^{n}$ <br/></p>
<p>Equivalently, show that $(1-1/n)^n$ is strictly increasing.</p>
| Jack D'Aurizio | 44,121 | <p>$$\left(1-\frac{1}{n}\right)^n = \left(1-\frac{1}{n}\right)^n\cdot 1\stackrel{\color{red}{AM-GM}}{\color{red}{<}}\left(\frac{n\cdot\left(1-\frac{1}{n}\right)+1}{n+1}\right)^{n+1}=\left(1-\frac{1}{n+1}\right)^{n+1}.$$</p>
|
2,801,433 | <p>I have made the following conjecture, and I do not know if this is true.</p>
<blockquote>
<blockquote>
<p><strong>Conjecture:</strong></p>
</blockquote>
<p><span class="math-container">\begin{equation*}\sum_{n=1}^k\frac{1}{\pi^{1/n}p_n}\stackrel{k\to\infty}{\longrightarrow}2,\verb| where we denote by | p_n\verb| the | n^\text{th} \verb| prime.|\end{equation*}</span></p>
</blockquote>
<p>Is my conjecture true? It seems like it, according to a plot made by Wolfram|Alpha, but if it does, then it converges.... <em>very</em>.... <em>very</em>, slowly. In fact, let <span class="math-container">$k=5000$</span>, then the sum is approximately equal to <span class="math-container">$1.97$</span>, which just proves how slow it would be.</p>
<p>Is there a way of showing whether or not this is indeed convergent? For any other higher values of <span class="math-container">$k$</span>, it seems that it is just too much for Wolfram|Alpha to calculate, and it does not give me a result when I let <span class="math-container">$k=\infty$</span>. Also, for users who might not understand the notation, we can similarly write that <span class="math-container">$$\sum_{n=1}^\infty\frac{1}{\pi^{1/n}p_n}=2\qquad\text{ or }\qquad\lim_{k\to\infty}\sum_{n=1}^k\frac{1}{\pi^{1/n}p_n}=2.$$</span> Also, without Wolfram|Alpha, I have <em>no idea</em> how to approach this problem in terms of proving it or disproving it. Does the sum even converge <em>at all</em>? If so, to what value? Any help would be much appreciated.</p>
<hr />
<p>Thank you in advance.</p>
<p><strong>Edit:</strong></p>
<p>I looked at <a href="https://math.stackexchange.com/questions/2070991/is-sum-limits-n-1-infty-frac1nk1-frac12-for-k-to-infty?rq=1">this post</a> to see if I could rewrite my conjecture as something else in order to help myself out. Consequently, I wrote that <span class="math-container">$$\sum_{n=1}^k\frac{1}{\pi^{1/n}p_n}\stackrel{k\to\infty}{\longleftrightarrow}4\sum_{n=1}^\infty\frac{1}{n^k+1}\tag{$\text{LHS}=2$}$$</span> since both sums look very similar. Could <em>this</em> be of use?</p>
| Robert Z | 299,698 | <p>Recall that
<a href="https://en.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">$\sum_{n=1}^{\infty} \frac{1}{p_n}$</a> is a divergent series. Then your series is divergent too because, for any positive number $a$,<br>
$$\lim_{n\to \infty}a^{1/n}=\lim_{n\to \infty}e^{\ln(a)/n}=1,$$ and therefore
$$\frac{1}{\pi^{1/n}p_n}\sim \frac{1}{p_n}.$$</p>
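<p>Consistent with divergence, the partial sums do pass $2$ within the first few tens of thousands of terms. A quick computational sketch (a simple Eratosthenes sieve, not part of the argument) finds the crossing point:</p>

```python
from math import pi

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [n for n, flag in enumerate(sieve) if flag]

total, crossing = 0.0, None
for n, p in enumerate(primes_up_to(200_000), start=1):
    total += 1.0 / (pi ** (1.0 / n) * p)
    if crossing is None and total > 2:
        crossing = n  # first k whose partial sum exceeds 2

print(crossing, round(total, 3))
```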
|
<p>A cone $K$, where $K \subseteq \Bbb R^n$, is pointed, which means that it contains no line (or equivalently, $(x \in K~\land~ -x\in K) ~\to~ x=\vec 0$).</p>
| Royi | 33 | <p>It means there are no 2 points inside it which creates a line and the whole line is contained by the cone.</p>
<p>For instance, take $ \mathbb{R}^{2} $ it is clearly a cone yet it is not pointed as any line in $ \mathbb{R}^{2} $ is contained by $ \mathbb{R}^{2} $.</p>
<p>Yet if you take $ \mathbb{R}^{2}_{++} $, namely only the right up quarter of it (Where each coordinate is non negative) it is a cone clearly, moreover it is a pointed cone as there is no line contained in it.</p>
<p>Remember that a line is defined by all points which are defined by $ {x}_{1}, {x}_{2} $ and $ \theta \in \mathbb{R} $ in the following way:</p>
<p>$$ \theta {x}_{1} + \left( 1 - \theta \right) {x}_{2} $$</p>
|
3,424,656 | <p>Assume <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are continuous at <span class="math-container">$x=a$</span>. Prove <span class="math-container">$h=\max\{f,g\}$</span> is continuous at <span class="math-container">$x=a$</span>.</p>
<p>My solution:</p>
<p>When <span class="math-container">$f\ge g\Rightarrow h=\max\{f,g\}=f$</span> and since <span class="math-container">$f$</span> is continuous at <span class="math-container">$x=a$</span> so is <span class="math-container">$h$</span>.</p>
<p>When <span class="math-container">$f<g\Rightarrow h=\max\{f,g\}=g$</span> and since <span class="math-container">$g$</span> is continuous at <span class="math-container">$x=a$</span> so is <span class="math-container">$h$</span>.</p>
<p>Does this seem sufficient?</p>
| copper.hat | 27,978 | <p>We always have <span class="math-container">$x_k \le y_k + |x_k - y_k| \le y_k +\|x-y\|_\infty$</span>.</p>
<p>Hence <span class="math-container">$x_k \le \max_j y_j +\|x-y\|_\infty$</span> and so <span class="math-container">$\max_k x_k \le \max_j y_j +\|x-y\|_\infty$</span>.</p>
<p>Reversing the roles of <span class="math-container">$x,y$</span> gives <span class="math-container">$|\max_k x_k - \max_k y_k| \le \|x-y\|_\infty$</span>.</p>
<p>In particular, the <span class="math-container">$\max$</span> is Lipschitz with rank one.</p>
<p>Now consider the function <span class="math-container">$x \mapsto \max(f(x),g(x))$</span>.</p>
<p><strong>Alternative</strong>:</p>
<p>Suppose <span class="math-container">$f(a) > g(a)$</span>. Then there is a neighbourhood <span class="math-container">$U$</span> of <span class="math-container">$a$</span> such that <span class="math-container">$f(x)>g(x)$</span>
for <span class="math-container">$x\in U$</span>. Hence <span class="math-container">$\max(f(x),g(x)) = f(x)$</span> for <span class="math-container">$x \in U$</span> and so <span class="math-container">$h$</span> is continuous.</p>
<p>The case <span class="math-container">$f(a) < g(a)$</span> is similar.</p>
<p>If <span class="math-container">$f(a) = g(a)$</span>, let <span class="math-container">$\epsilon>0$</span> and choose a neighbourhood <span class="math-container">$U$</span> such that
<span class="math-container">$|f(x)-f(a)| < \epsilon$</span> and <span class="math-container">$|g(x)-g(a)| < \epsilon$</span> for <span class="math-container">$x \in U$</span>.
Then
<span class="math-container">$-\epsilon+f(a) < f(x) < f(a) + \epsilon$</span> and <span class="math-container">$-\epsilon+f(a) < g(x) < f(a) + \epsilon$</span> for <span class="math-container">$x \in U$</span> and
so <span class="math-container">$-\epsilon+f(a) <\max(f(x),g(x)) < f(a) + \epsilon$</span> for <span class="math-container">$x \in U$</span>.
Hence <span class="math-container">$| h(x)-h(a) | < \epsilon$</span>.</p>
|
3,424,656 | <p>Assume <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are continuous at <span class="math-container">$x=a$</span>. Prove <span class="math-container">$h=\max\{f,g\}$</span> is continuous at <span class="math-container">$x=a$</span>.</p>
<p>My solution:</p>
<p>When <span class="math-container">$f\ge g\Rightarrow h=\max\{f,g\}=f$</span> and since <span class="math-container">$f$</span> is continuous at <span class="math-container">$x=a$</span> so is <span class="math-container">$h$</span>.</p>
<p>When <span class="math-container">$f<g\Rightarrow h=\max\{f,g\}=g$</span> and since <span class="math-container">$g$</span> is continuous at <span class="math-container">$x=a$</span> so is <span class="math-container">$h$</span>.</p>
<p>Does this seem sufficient?</p>
| BR Pahari | 276,873 | <p>At <span class="math-container">$x=a$</span>, use the fact that </p>
<p><span class="math-container">$$\max{(f,g)}=\frac{1}{2}(f+g+|f-g|).$$</span></p>
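<p>With this identity, continuity of $h$ follows from continuity of sums and of $|\cdot|$. The identity itself is trivial to confirm on random inputs (just a numeric sanity check):</p>

```python
import math
import random

random.seed(1)

def max_via_abs(a, b):
    """max(a, b) written as (a + b + |a - b|) / 2."""
    return (a + b + abs(a - b)) / 2

identity_ok = all(
    math.isclose(max_via_abs(a, b), max(a, b), rel_tol=1e-12, abs_tol=1e-9)
    for a, b in ((random.uniform(-100, 100), random.uniform(-100, 100))
                 for _ in range(1000))
)
print(identity_ok)
```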
|
<p>In our lecture we ran out of time, so our prof told us a few properties about measures: he said that a measure is $\sigma$-additive iff the function it creates is right-side continuous. And he was not only referring to probability measures.
After going through my lecture notes, I thought that this would imply that there can be no other measures than ones having a right-side continuous function (I think they are called Lebesgue-Stieltjes measures) as $\sigma$-additivity is a prerequisite to be a measure. So somehow, this does not fit together. Does anybody know what he could have meant here? Or was he only referring to probability measures?</p>
<p>Is anything unclear about my question?</p>
| Squirtle | 29,507 | <p>Consider the Dirac measure. </p>
<p>$$\delta_a(E)= 1\text{ if }a\in E, 0\text{ otherwise}$$</p>
<p>Or what about the measure that is zero on the empty set and infinity otherwise. </p>
|
2,958,135 | <p>A "standard" example of Bayes Theorem goes something like the following:</p>
<blockquote>
<p>In any given year, 1% of the population will get disease <em>X</em>. A particular test will detect the disease in 90% of individuals who have the disease but has a 5% false positive rate. If you have a family history of <em>X</em>, your chances of getting the disease are 10% higher than they would have been otherwise.</p>
</blockquote>
<p>Virtually all explanations I've seen of Bayes' Theorem will include all of those facts in their formulation of the probability. It makes perfect sense to me to account for patient-specific factors like family history, and it also makes perfect sense to me to include information on the overall reliability of the test. I'm struggling to understand the relevance of the fact that 1% of the population will get disease <em>X</em>, though. In particular, that fact is presumably true for all patients who receive the test; that being the case, wouldn't Bayes' Theorem imply that the <em>actual</em> probability of a false positive is much higher than 5% (and that one of the numbers is therefore wrong)?</p>
<p>Alternatively, why doesn't the 5% figure already account for that fact? Given that the 5% figure was presumably calculated directly from the data, wouldn't Bayes' Theorem effectively be contradicting the data in this case?</p>
| ryang | 21,813 | <p>Further to user856's explanation in the comments, here's a complementary answer.</p>
<p>The way to frame/interpret medical tests in general is to understand them as updating one's level of certainty that the patient has the disease:</p>
<ul>
<li>without a medical-test result, the disease prevalence (a measure of disease frequency) can
be taken as <em>the patient's probability of having the disease</em>;</li>
<li>however, in the context of a medical-test result, the aforementioned probability has changed: its <em>updated</em> value depends not just on the <strong>disease prevalence</strong> (as before), but now also on the test's sensitivity (true positive rate) and specificity (true negative rate). In other words, <em>our knowledge of said probability has been refined</em>.</li>
</ul>
<p><a href="https://i.stack.imgur.com/XzHrl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XzHrl.png" alt="https://i.stack.imgur.com/ZPmMO.png" /></a></p>
<p>p: <strong>disease prevalence and other (prior) risk factors</strong><br>
v: <strong>test sensitivity</strong><br>
f: <strong>test specificity</strong><br>
D: Diseased<br>
H: Healthy<br>
+: Positive test result<br>
-: Negative test result</p>
<p>The abovementioned probabilities are</p>
<ol>
<li>the <strong>positive predictive value</strong>, i.e., the probability that the patient is indeed Diseased given a positive test result <span class="math-container">$$P(D|+)=\frac{P(D+)}{P(D+)+P(H+)}=\frac{pv}{pv+(1-p)(1-f)},$$</span></li>
<li>the <strong>false omission rate</strong>, i.e., the probability that the patient is actually Diseased given a negative test result <span class="math-container">$$P(D|-)=\frac{P(D-)}{P(D-)+P(H-)}=\frac{p(1-v)}{p(1-v)+(1-p)f}.$$</span></li>
</ol>
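<p>To make this concrete, here are the two formulas evaluated on the numbers in the question (prevalence $p=1\%$, sensitivity $v=90\%$, and specificity $f=95\%$, i.e. the $5\%$ false-positive rate); the low prevalence is what drags the positive predictive value down to about $15\%$:</p>

```python
def positive_predictive_value(p, v, f):
    """P(D|+) = pv / (pv + (1 - p)(1 - f))."""
    return p * v / (p * v + (1 - p) * (1 - f))

def false_omission_rate(p, v, f):
    """P(D|-) = p(1 - v) / (p(1 - v) + (1 - p) f)."""
    return p * (1 - v) / (p * (1 - v) + (1 - p) * f)

ppv = positive_predictive_value(0.01, 0.90, 0.95)
fom = false_omission_rate(0.01, 0.90, 0.95)
print(round(ppv, 4), round(fom, 6))  # ~0.1538 and ~0.001062
```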
<p>Thus, a screening test's <em><strong>predictive values</strong></em> <span class="math-container">$P(D|+)\,$</span> & <span class="math-container">$\,P(H|-)$</span> and <em><strong>overall accuracy</strong></em> <span class="math-container">$$P(D+)+P(H-)=pv+(1-p)f$$</span> depend on both its technical characteristics (sensitivity and specificity) and the population that it is being used on (disease prevalence). In particular:</p>
<ul>
<li>unless the test has 100% sensitivity, its <strong>number of false-negative results is <em>proportional</em> to the disease prevalence <span class="math-container">$p;$</span></strong></li>
<li>unless the test has 100% specificity, its <strong>number of false-positive results is <em>proportional</em> to <span class="math-container">$(1-p).$</span></strong></li>
</ul>
<p>N.B. The OP mentions “test reliability”, but that’s a separate issue, since reliability typically refers to the consistency of a test's results across retakes.</p>
<p><a href="https://math.stackexchange.com/a/4319216/21813">Here</a> is a glossary.
<span class="math-container">$$\\$$</span>
Finally, here is a concrete extended example (based on actual data, and simplistically assuming that successive tests are independent of one another) to put all this in context:
<a href="https://i.stack.imgur.com/u0idE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u0idE.jpg" alt="enter image description here" /></a>
Due to the low disease prevalence,</p>
<ul>
<li>the PCR and rapid tests have a <strong>positive predictive value</strong> of only
<span class="math-container">$4\%$</span> and <span class="math-container">$17\%$</span> respectively,</li>
<li>whereas their <strong>negative predictive value</strong> are both almost <span class="math-container">$100\%;$</span></li>
</ul>
<p>the tests' <strong>overall accuracy</strong> are <span class="math-container">$95\%$</span> and <span class="math-container">$99\%$</span> respectively.</p>
|
737,689 | <p>I have to prove that in a partially ordered set, only one of </p>
<blockquote>
<p>$$x<y,x=y,x>y$$ </p>
</blockquote>
<p>can hold. </p>
<p>My book says if both $x<y$ and $x=y$ hold, then this will imply $x<x$, which is a contradiction (contradicting irreflexivity). </p>
<p>I don't understand how this conclusion was reached. Two elements $x$ and $y$ may have multiple relations existing between them. For example, $\langle 2,4\rangle_R$, where $R$ can be $2<4$, $2|4$, $2$ is the largest even number smaller than $4$, etc. These distinct relations don't interact with each other; they have completely distinct identities. </p>
<p>$<$ and $=$ are also distinct relations between $x$ and $y$. One is reflexive while the other is irreflexive. From these two <strong>distinct</strong> relations, how could one ever conclude that irreflexivity (I still don't know of what relation) is being violated? </p>
| David | 119,775 | <p><strong>Hint</strong>. Use the binomial theorem: since $a=b+kp$ we have
$$a^p=(b+kp)^p=b^p+\cdots\ ,$$
and with a bit of thought you will be able to see why all the remaining terms are divisible by $p^2$.</p>
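<p>Concretely, the hint yields: if $a \equiv b \pmod p$ then $a^p \equiv b^p \pmod{p^2}$. A brute-force check (a sanity check, not a proof) confirms this for small primes:</p>

```python
def lifts_to_p_squared(p, limit=40):
    """a ≡ b (mod p)  implies  a^p ≡ b^p (mod p^2), for 0 <= a, b < limit."""
    return all(pow(a, p, p * p) == pow(b, p, p * p)
               for a in range(limit)
               for b in range(limit)
               if (a - b) % p == 0)

print(all(lifts_to_p_squared(p) for p in (2, 3, 5, 7, 11, 13)))  # True
```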
|
1,136,278 | <p>Prove that $n(n-1)<3^n$ for all $n≥2$. By induction.
What I did: </p>
<p>Step 1- Base case:
Set $n=2$:</p>
<p>$2(2-1)<3^2$</p>
<p>$2<9$ Thus it holds.</p>
<p>Step 2- Hypothesis: </p>
<p>Assume: $k(k-1)<3^k$</p>
<p>Step 3- Induction:
We wish to prove that:</p>
<p>$(k+1)k<3^k\cdot 3^1$</p>
<p>We know that $k≥2$, so $k+1≥3$ </p>
<p>Then $3k<3^k\cdot 3^1$</p>
<p>Therefore, $k<3^k$, which is true for all value of $n≥k≥2$</p>
<p>Is that right? Or the method is wrong? Is there any other methods?</p>
| axiom | 167,868 | <p>We know that $k(k-1)<3^k$ (The induction assumption)</p>
<p>Multiply 3 both sides, and we get:</p>
<p>$3k(k-1)<3^{k + 1}$</p>
<p>Now we will be done if we prove that
$k(k+1)\le3k(k - 1)$.</p>
<p>This can be rearranged as $2k - 4 \ge 0$, which is true since $k \ge 2$.</p>
<p>Hence proved.</p>
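<p>Both the induction step and the statement itself can be double-checked by brute force in exact integer arithmetic:</p>

```python
# induction step used above: k(k+1) <= 3k(k-1), i.e. 2k - 4 >= 0, for k >= 2
step_ok = all(k * (k + 1) <= 3 * k * (k - 1) for k in range(2, 2000))

# the statement being proved: n(n-1) < 3^n for n >= 2
claim_ok = all(n * (n - 1) < 3 ** n for n in range(2, 2000))

print(step_ok, claim_ok)  # True True
```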
|
1,666,396 | <p>I can show the convergence of the following infinite product and some bounds for it:</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}=\sqrt{1+\frac{1}{2}} \sqrt[3]{1+\frac{1}{3}} \sqrt[4]{1+\frac{1}{4}} \cdots<$$</p>
<p>$$<\left(1+\frac{1}{4} \right)\left(1+\frac{1}{9} \right)\left(1+\frac{1}{16} \right)\cdots=\prod_{k \geq 2} \left(1+\frac{1}{k^2} \right)=\frac{\sinh \pi}{2 \pi}=1.83804$$</p>
<p>Here I used Euler's product for $\frac{\sin x}{x}$.</p>
<p>The next upper bound is not as easy to evaluate, but still possible, taking two more terms in Taylor's series for $\sqrt[k]{1+\frac{1}{k} }$:</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}<\prod_{k \geq 2} \left(1+\frac{1}{k^2}-\frac{k-1}{2k^4}+\frac{2k^2-3k+1}{6k^6} \right)=$$</p>
<p>$$=\prod_{k \geq 2} \left(1+\frac{1}{k^2}-\frac{1}{2k^3}+\frac{5}{6k^4}-\frac{1}{2k^5}+\frac{1}{6k^6} \right)<$$</p>
<p>$$<\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{108}+\frac{\pi^6}{5670}-1-\frac{\zeta (3)}{2}-\frac{\zeta (5)}{2} \right)=1.81654$$</p>
<p>The numerical value of the infinite product is approximately:</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}=1.758743628$$</p>
<p>The ISC found no closed form for this number.</p>
<blockquote>
<p>Is there some way to evaluate this product or find better bounds in closed form?</p>
</blockquote>
<hr>
<p><strong>Edit</strong></p>
<p>Clement C suggested taking logarithm and it was a very useful suggestion, since I get the series:</p>
<p>$$\ln \prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}= \frac{1}{2} \ln \left(1+\frac{1}{2} \right)+\frac{1}{3} \ln \left(1+\frac{1}{3} \right)+\dots$$</p>
<p>I don't know how to find the closed form, but I can certainly use it to find the boundaries (since the series for logarithm are very simple).</p>
<p>$$\frac{1}{2} \ln \left(1+\frac{1}{2} \right)+\frac{1}{3} \ln \left(1+\frac{1}{3} \right)+\dots>\sum^{\infty}_{k=2} \frac{1}{k^2}-\frac{1}{2}\sum^{\infty}_{k=2} \frac{1}{k^3}$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}>\exp \left( \frac{\pi^2}{6}-\frac{1}{2}-\frac{\zeta (3)}{2} \right)=1.72272$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}<\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}-\frac{5}{6}-\frac{\zeta (3)}{2} \right)=1.77065$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}>\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}-\frac{7}{12}-\frac{\zeta (3)}{2} -\frac{\zeta (5)}{4}\right)=1.75438$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}<\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}+\frac{\pi^6}{4725}-\frac{47}{60}-\frac{\zeta (3)}{2} -\frac{\zeta (5)}{4}\right)=1.76048$$</p>
<blockquote>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}>\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}+\frac{\pi^6}{4725}-\frac{37}{60}-\frac{\zeta (3)}{2} -\frac{\zeta (5)}{4} -\frac{\zeta (7)}{6}\right)=1.75803$$</p>
</blockquote>
<p>This method generates much better bounds than my first idea. The last two are very good approximations.</p>
<hr>
<p><strong>Edit 2</strong></p>
<p>Actually, would it be correct to write (it gives the correct value of the product):</p>
<blockquote>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}=\frac{1}{2} \exp \left( \sum_{k \geq 2} \frac{(-1)^k \zeta(k)}{k-1} \right)$$</p>
</blockquote>
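<p>A quick numeric check of this identity (a sketch: it splits off $\sum_{k\ge2}(-1)^k/(k-1)=\ln 2$ so that the remaining series in $\zeta(k)-1$ converges geometrically, and approximates $\zeta(k)-1$ by a truncated sum plus an integral tail estimate):</p>

```python
from math import exp, log

def zeta_minus_1(k, terms=20_000):
    """zeta(k) - 1, via direct summation plus an integral estimate of the tail."""
    s = sum(n ** -k for n in range(2, terms + 1))
    return s + terms ** (1 - k) / (k - 1)  # tail ~ integral_terms^inf x^(-k) dx

# sum_{k>=2} (-1)^k zeta(k)/(k-1) = ln 2 + sum_{k>=2} (-1)^k (zeta(k)-1)/(k-1)
series = log(2) + sum((-1) ** k * zeta_minus_1(k) / (k - 1) for k in range(2, 60))
closed = 0.5 * exp(series)

# compare against a long partial product of (1 + 1/k)^(1/k)
direct = 1.0
for k in range(2, 200_001):
    direct *= (1 + 1 / k) ** (1 / k)

print(closed, direct)  # both ~ 1.7587...
```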
| Yuriy S | 269,624 | <p><strong>This is not an answer</strong>, but it's important and I post it separately from the question itself.</p>
<p>I found in <a href="https://math.stackexchange.com/a/1065075/269624">this answer</a> by @RandomVariable the following series:</p>
<p>$$\sum_{k=1}^{\infty} \frac{\ln (k+1)}{k(k+1)}=\frac{\pi^2}{4}-1-4\int_{0}^{\infty} \frac{\arctan x}{1+x^{2}} \frac{dx}{e^{\pi x}+1}=\frac{\pi^2}{4}-1-4\int_{0}^{\pi/2} \frac{t~dt}{e^{\pi \tan t}+1} $$</p>
<p>They are related to $\gamma_1$ - Stieltjes constant. </p>
<p>This same series also appeared in <a href="http://www.people.fas.harvard.edu/~sfinch/csolve/kz3.pdf" rel="nofollow noreferrer">this paper</a> by Steven Finch, page 5.</p>
<p>$$\sum_{k=1}^{\infty} \frac{\ln (k+1)}{k(k+1)}=1.2577468869$$</p>
<p>This is the same numerical value as:</p>
<p>$$\sum_{k=1}^{\infty} \frac{\ln (1+\frac{1}{k})}{k}=1.2577468869$$</p>
<p>Which is confirmed in <a href="http://www.people.fas.harvard.edu/~sfinch/csolve/kz.pdf" rel="nofollow noreferrer">this paper</a> by the same author, page 3, where this form of the series is used.</p>
<p>It is connected to the integral (page 2, the same paper):</p>
<p>$$\sum_{k=1}^{\infty} \frac{\ln (1+\frac{1}{k})}{k}=-\int_{1}^{\infty} \frac{\ln (y-[y])}{y^2}dy$$</p>
<p>Where $[y]$ is the floor function, meaning $y-[y]$ is the fractional part of $y$.</p>
<p>In <a href="https://math.dartmouth.edu/~carlp/factorial.pdf" rel="nofollow noreferrer">another paper</a> this series is connected to the numer of divisors of $n!$, however slightly different integral representation is used (page 3):</p>
<p>$$\sum_{k=1}^{\infty} \frac{\ln (1+\frac{1}{k})}{k}=\int_{1}^{\infty} \frac{\ln ([y]+1)}{y^2}dy$$</p>
<p>And finally, this is slightly related to <a href="http://mathworld.wolfram.com/Alladi-GrinsteadConstant.html" rel="nofollow noreferrer">Alladi-Grinstead Constant</a>, which is given by:</p>
<p>$$e^{c-1}$$</p>
<p>$$c=\sum_{k=2}^{\infty} \frac{\ln (\frac{k}{k-1})}{k}=\sum_{k=1}^{\infty} \frac{\ln (1+\frac{1}{k})}{k+1}=0.788530566$$</p>
<p>See also the original Alladi and Grinstead paper <a href="http://www.academia.edu/17531988/On_the_decomposition_of_n_into_prime_powers" rel="nofollow noreferrer">here</a>.</p>
<p>And this is also somehow connected to the Luroth series representations of real numbers.</p>
<p>Oh, and thanks to @SteveKass for <a href="https://books.google.com/books?id=Pl5I2ZSI6uAC&pg=PA122&lpg=PA122&dq=1.2577468869&source=bl" rel="nofollow noreferrer">this useful link</a>.</p>
<hr>
<p>Comparing the convergence of three series, we find that even though they are equivalent, the convergence rate is drastically different.</p>
<p>$$\sum_{k=1}^{\infty} \frac{\ln (k+1)}{k(k+1)}=\sum_{k=1}^{\infty} \frac{\ln (1+\frac{1}{k})}{k}=\sum_{k = 2}^{\infty} \frac{(-1)^k \zeta(k)}{k-1}$$</p>
<p><a href="https://i.stack.imgur.com/NUcIO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NUcIO.png" alt="Convergence"></a></p>
<hr>
<p>We can also obtain the following interesting equality:</p>
<p>$$(1+1)\sqrt{1+\frac{1}{2}} \sqrt[3]{1+\frac{1}{3}} \sqrt[4]{1+\frac{1}{4}} \cdots=\sqrt{2} \sqrt[6]{3} \sqrt[12]{4} \sqrt[20]{5} \sqrt[30]{6} \cdots=\prod_{k=1}^{\infty}(k+1)^{\frac{1}{k(k+1)}}$$</p>
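<p>The equality of the two products is also easy to verify numerically. Note that the partial products differ by a small boundary term, $\ln(N+1)/(N+1)$ in the exponent, so they only agree in the limit:</p>

```python
from math import exp, log

N = 100_000

# 2 * prod_{k=2}^{N} (1 + 1/k)^(1/k), via logs for stability
lhs = 2 * exp(sum(log(1 + 1 / k) / k for k in range(2, N + 1)))

# prod_{k=1}^{N} (k+1)^(1/(k(k+1)))
rhs = exp(sum(log(k + 1) / (k * (k + 1)) for k in range(1, N + 1)))

print(lhs, rhs)  # both ~ 3.517 (= 2 x 1.758743628...)
```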
|
1,666,396 | <p>I can show the convergence of the following infinite product and some bounds for it:</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}=\sqrt{1+\frac{1}{2}} \sqrt[3]{1+\frac{1}{3}} \sqrt[4]{1+\frac{1}{4}} \cdots<$$</p>
<p>$$<\left(1+\frac{1}{4} \right)\left(1+\frac{1}{9} \right)\left(1+\frac{1}{16} \right)\cdots=\prod_{k \geq 2} \left(1+\frac{1}{k^2} \right)=\frac{\sinh \pi}{2 \pi}=1.83804$$</p>
<p>Here I used Euler's product for $\frac{\sin x}{x}$.</p>
<p>The next upper bound is not as easy to evaluate, but still possible, taking two more terms in Taylor's series for $\sqrt[k]{1+\frac{1}{k} }$:</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}<\prod_{k \geq 2} \left(1+\frac{1}{k^2}-\frac{k-1}{2k^4}+\frac{2k^2-3k+1}{6k^6} \right)=$$</p>
<p>$$=\prod_{k \geq 2} \left(1+\frac{1}{k^2}-\frac{1}{2k^3}+\frac{5}{6k^4}-\frac{1}{2k^5}+\frac{1}{6k^6} \right)<$$</p>
<p>$$<\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{108}+\frac{\pi^6}{5670}-1-\frac{\zeta (3)}{2}-\frac{\zeta (5)}{2} \right)=1.81654$$</p>
<p>The numerical value of the infinite product is approximately:</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}=1.758743628$$</p>
<p>The ISC found no closed form for this number.</p>
<blockquote>
<p>Is there some way to evaluate this product or find better bounds in closed form?</p>
</blockquote>
<hr>
<p><strong>Edit</strong></p>
<p>Clement C suggested taking logarithm and it was a very useful suggestion, since I get the series:</p>
<p>$$\ln \prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}= \frac{1}{2} \ln \left(1+\frac{1}{2} \right)+\frac{1}{3} \ln \left(1+\frac{1}{3} \right)+\dots$$</p>
<p>I don't know how to find the closed form, but I can certainly use it to find the boundaries (since the series for logarithm are very simple).</p>
<p>$$\frac{1}{2} \ln \left(1+\frac{1}{2} \right)+\frac{1}{3} \ln \left(1+\frac{1}{3} \right)+\dots>\sum^{\infty}_{k=2} \frac{1}{k^2}-\frac{1}{2}\sum^{\infty}_{k=2} \frac{1}{k^3}$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}>\exp \left( \frac{\pi^2}{6}-\frac{1}{2}-\frac{\zeta (3)}{2} \right)=1.72272$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}<\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}-\frac{5}{6}-\frac{\zeta (3)}{2} \right)=1.77065$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}>\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}-\frac{7}{12}-\frac{\zeta (3)}{2} -\frac{\zeta (5)}{4}\right)=1.75438$$</p>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}<\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}+\frac{\pi^6}{4725}-\frac{47}{60}-\frac{\zeta (3)}{2} -\frac{\zeta (5)}{4}\right)=1.76048$$</p>
<blockquote>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}>\exp \left( \frac{\pi^2}{6}+\frac{\pi^4}{270}+\frac{\pi^6}{4725}-\frac{37}{60}-\frac{\zeta (3)}{2} -\frac{\zeta (5)}{4} -\frac{\zeta (7)}{6}\right)=1.75803$$</p>
</blockquote>
<p>This method generates much better bounds than my first idea. The last two are very good approximations.</p>
<hr>
<p><strong>Edit 2</strong></p>
<p>Actually, would it be correct to write (it gives the correct value of the product):</p>
<blockquote>
<p>$$\prod_{k \geq 2}\sqrt[k]{1+\frac{1}{k}}=\frac{1}{2} \exp \left( \sum_{k \geq 2} \frac{(-1)^k \zeta(k)}{k-1} \right)$$</p>
</blockquote>
| Jacob | 181,986 | <p>Working off of other people's findings, you can write
$$\sum_{k=2}^{\infty} \frac{(-x)^k \zeta (k)}{k} = x \gamma + \ln (\Gamma(x+1))$$
$$\frac{d}{dx}\sum_{k=2}^{\infty} \frac{(-x)^k \zeta (k)}{k} = -\sum_{k=2}^{\infty} (-x)^{k-1} \zeta (k)=\gamma+\psi(x+1)=H_x$$
$$\sum_{k=2}^{\infty} (-x)^{k-2} \zeta (k)=\frac{H_x}{x}$$
$$\int_0^{x} \sum_{k=2}^{\infty} (-y)^{k-2}\zeta (k) dy = -\sum_{k=2}^{\infty} \frac{(-x)^{k-1}\zeta (k)}{k-1} = \int_0^x \frac{H_y}{y} dy$$
$$\sum_{k=2}^{\infty} \frac{(-x)^{k}\zeta (k)}{k-1} = x\int_0^x \frac{H_y}{y} dy$$
However, I believe $\int_0^1 \frac{H_x}{x} dx$ has no closed form, meaning that $$\prod_{k=2}^{\infty} \sqrt[k]{1+\frac{1}{k}}=\frac{1}{2} \exp \left(\sum_{k=2}^{\infty} \frac{(-1)^k \zeta(k)}{k-1}\right)=\frac{1}{2}\exp \left({\int_0^1 \frac{H_x}{x} dx} \right)$$ has no closed form either.</p>
|
1,752,021 | <blockquote>
<p>Let $G=S_3\times S_3$ where $S_3$ is the symmetric group. Let $p=
\begin{pmatrix}
1 & 2 & 3 \\
2 & 3 & 1 \\
\end{pmatrix}
$, let $L=(p)$, $K=L\times L$ and $H=\{(I_3,I_3),(p,p),(p^2,p^2)\}$. Show that $K\triangleleft G$, $H\triangleleft K$ but $H$ no is a normal subgroup of $G$.</p>
</blockquote>
<p>I wonder if there a quick way to do this exercise, without having to develop each of the products.</p>
| gregorygsimon | 148,402 | <p>Using generators, there won't be much computation needed. </p>
<p>$K$ is generated by $(1,p)$ and $(p,1)$. If $H$ is invariant under conjugation by the generators of $K$, then $H$ is normal in $K$.</p>
<p>Lagrange's theorem proves that $S_3$ is generated by $p$ and any transposition, e.g. $t=
\begin{pmatrix}
1 & 2 & 3 \\
2 & 1 & 3 \\
\end{pmatrix}$. So $G$ is generated by $(1,p),(1,t),(p,1),(t,1)$.</p>
<p>Conjugating these subgroups by the '$p$' generators will be easy, you just need to worry about $tpt^{-1}$.</p>
|
1,752,021 | <blockquote>
<p>Let $G=S_3\times S_3$ where $S_3$ is the symmetric group. Let $p=
\begin{pmatrix}
1 & 2 & 3 \\
2 & 3 & 1 \\
\end{pmatrix}
$, let $L=(p)$, $K=L\times L$ and $H=\{(I_3,I_3),(p,p),(p^2,p^2)\}$. Show that $K\triangleleft G$, $H\triangleleft K$ but $H$ no is a normal subgroup of $G$.</p>
</blockquote>
<p>I wonder if there a quick way to do this exercise, without having to develop each of the products.</p>
| Community | -1 | <p>Observe that $|L| = 3$ and $|S_3| = 6$, so the index of $L$ in $S_3$ is $2$. Therefore, $L \lhd S_3$. Consequently, $K = L \times L \lhd S_3 \times S_3 = G$.</p>
<p>Then, observe that $K$ is abelian, because it is a direct product of abelian groups, so all of its subgroups are normal. Therefore $H \lhd K$.</p>
<p>To see that $H$ is not normal in $G$, define
$$\tau = \begin{pmatrix}
1 & 2 & 3 \\
1 & 3 & 2 \\
\end{pmatrix} \quad \text{and} \quad
\mu = \begin{pmatrix}
1 & 2 & 3 \\
1 & 2 & 3 \\
\end{pmatrix}$$
You can confirm that $\tau p \tau^{-1} = p^2$ and $\mu p \mu^{-1} = p$, and therefore
$$(\tau, \mu)(p, p)(\tau^{-1}, \mu^{-1}) = (p^2, p) \not\in H$$</p>
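<p>The conjugations can also be verified mechanically, encoding a permutation as a tuple <code>s</code> with <code>s[i]</code> the image of <code>i</code> (0-indexed, so $p$ above becomes <code>(1, 2, 0)</code>):</p>

```python
def compose(s, t):
    """(s o t)(i) = s(t(i))."""
    return tuple(s[t[i]] for i in range(len(s)))

def inverse(s):
    inv = [0] * len(s)
    for i, si in enumerate(s):
        inv[si] = i
    return tuple(inv)

e   = (0, 1, 2)   # identity (the mu above)
p   = (1, 2, 0)   # the 3-cycle from the question
tau = (0, 2, 1)   # the transposition swapping the last two points
p2  = compose(p, p)

H = {(e, e), (p, p), (p2, p2)}

# conjugate (p, p) by (tau, mu)
conj = (compose(compose(tau, p), inverse(tau)),
        compose(compose(e, p), inverse(e)))
print(conj in H)  # False: H is not normal in G
```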
|
14,007 | <p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p>
<p>My question essentially boils down to: </p>
<blockquote>
<p>What are tips/tricks/techniques for creating quiz and exam questions that both</p>
<ol>
<li>test students at various levels of Bloom's hierarchy and</li>
<li>minimize the amount of work for the grader</li>
</ol>
<p>?</p>
</blockquote>
<p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p>
<p>I have some ideas:</p>
<ul>
<li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li>
<li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li>
<li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li>
</ul>
<p>I'm curious to hear what other things people have used.</p>
| Federico Poloni | 4,930 | <p>First heard it from a former classmate of mine, might be her own invention:</p>
<blockquote>
<p>When the second derivative is positive, the function is happy (i.e., its graph looks like a smile). When the second derivative is negative, the function is sad (i.e., its graph looks like a frown).</p>
</blockquote>
<p><strong>added</strong> (I hope Federico doesn't mind ... Gerald Edgar)</p>
<p>Pictorially,<br>
second derivative positve, second derivative negative:<br>
<a href="https://i.stack.imgur.com/lWeOu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lWeOu.jpg" alt="second"></a><br>
<strong>Two</strong> plus signs signify the <strong>second</strong> derivative. </p>
<p>First derivative may be added (if you somehow remember the guy faces to our left): </p>
<p>First derivative positive, first derivative negative : </p>
<p><a href="https://i.stack.imgur.com/eGcvA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eGcvA.jpg" alt="first"></a></p>
|
14,007 | <p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p>
<p>My question essentially boils down to: </p>
<blockquote>
<p>What are tips/tricks/techniques for creating quiz and exam questions that both</p>
<ol>
<li>test students at various levels of Bloom's hierarchy and</li>
<li>minimize the amount of work for the grader</li>
</ol>
<p>?</p>
</blockquote>
<p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p>
<p>I have some ideas:</p>
<ul>
<li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li>
<li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li>
<li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li>
</ul>
<p>I'm curious to hear what other things people have used.</p>
| Elle Najt | 8,029 | <p>To remember concave Up (vs concave down), I remember that the U shape is concave up. Similarly, in conVex, the V is convex. </p>
<p>(If you like, v is the only letter in the word which is the graph of a function. Sadly, concave also has a v, but this mnemonic seems to work anyway. You just have to remember which word (convex) you've assigned the mnemonic to, which seems easier than remembering which word means which.)</p>
|
1,451,745 | <p>Can someone check my logic here. </p>
<p><strong>Question:</strong> How many ways are there to choose a $k$-person committee from a group of $n$ people? </p>
<p><strong>Answer 1:</strong> there are ${n \choose k}$ ways. </p>
<p><strong>Answer 2:</strong> condition on eligibility. Assume the creator of the committee is already in the committee. This leaves us with choosing $k - 1$ people from a group of $n - 1$ potentially eligible people. If all remaining people are eligible, there are ${n - 1 \choose k - 1}$ possible committees, if there are $n - 2$ eligible people, there are ${n - 2 \choose k - 1}$ committees, if there are $n - 3$ eligible people, there are ${n - 3 \choose k - 1}$ committees,..., if there are $k - 1$ eligible people there are ${k - 1 \choose k - 1}$ committees. Therefore,$${n - 1 \choose k - 1} + {n - 2 \choose k - 1} + {n - 3 \choose k - 1} + \dots + {k - 1 \choose k - 1} = {n \choose k}$$.</p>
| pre-kidney | 34,662 | <p>Did you define what you mean by "eligible"? I don't follow the argument.</p>
<p>Here is another approach: inductively apply Pascal's identity
$$
\binom{n}{k}=\binom{n-1}{k}+\binom{n-1}{k-1}.
$$</p>
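<p>The identity in Answer 2 is a form of the hockey-stick identity, which Pascal's rule proves by induction; as a sanity check it can also be verified by brute force. A quick sketch (assuming Python ≥ 3.8 for <code>math.comb</code>):</p>

```python
from math import comb

def hockey_stick(n, k):
    # C(n-1, k-1) + C(n-2, k-1) + ... + C(k-1, k-1)
    return sum(comb(m, k - 1) for m in range(k - 1, n))

# check the identity against C(n, k) for a range of n and k
for n in range(1, 25):
    for k in range(1, n + 1):
        assert hockey_stick(n, k) == comb(n, k)
print("identity holds for all 1 <= k <= n < 25")
```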
|
4,103,366 | <p>I have <span class="math-container">$A \in \mathbb{R}^{q\times n }, B \in \mathbb{R}^{n \times p} $</span> with <span class="math-container">$\text{rank}(A)=q~$</span> and <span class="math-container">$~\text{rank}(B)=p$</span>.</p>
<p>Additionally there is the condition: <span class="math-container">$n\geq p \geq q$</span>.</p>
<p>I know that <span class="math-container">$\text{rank}(AB)\leq \min\{\text{rank}(A), \text{rank}(B)\}=q$</span>.</p>
<p>I want to know if equality (<span class="math-container">$\text{rank}(AB)=q$</span>) always holds with the additional condition <span class="math-container">$n\geq p \geq q~$</span>, or are there some constraints?</p>
| nachosemu | 846,505 | <p>Take the following example:</p>
<p><span class="math-container">$$
A = \begin{bmatrix}
0 & 0 & 0& 1 & 0 & 0\\
0 & 0 & 0& 0 & 1 & 0 \\
0 & 0 & 0& 0 & 0 & 1 \\
\end{bmatrix}, B = \begin{bmatrix}
1 & 0\\
0 & 1\\
0 & 0\\
0 & 0\\
0 & 0\\
0 & 0\\
\end{bmatrix}, AB=\begin{bmatrix}
0 & 0\\
0 & 0\\
0 & 0\\
\end{bmatrix}
$$</span></p>
<p>As you can see, <span class="math-container">$rank(A) = 3$</span>, <span class="math-container">$rank(B) = 2$</span> and <span class="math-container">$rank(AB) = 0$</span>, so equality does not always hold. Roughly speaking, equality requires the ranks of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> to be <em>compressed</em> on the same side, which fails here. Compare:</p>
<p><span class="math-container">$$
A_2 = \begin{bmatrix}
1 & 0 & 0& 0 & 0 & 0\\
0 & 1 & 0& 0 & 0 & 0 \\
0 & 0 & 1& 0 & 0 & 0 \\
\end{bmatrix}, A_2B=\begin{bmatrix}
1 & 0\\
0 & 1\\
0 & 0\\
\end{bmatrix}
$$</span></p>
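<p>Examples like these are easy to check mechanically by row-reducing with exact rational arithmetic. A minimal pure-Python sketch (standard library only; note it uses a six-row <span class="math-container">$B$</span> so that the product <span class="math-container">$AB$</span> is defined):</p>

```python
from fractions import Fraction

def matrix_rank(mat):
    """Rank via Gauss-Jordan elimination in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1]]
B = [[1, 0], [0, 1], [0, 0], [0, 0], [0, 0], [0, 0]]
print(matrix_rank(A), matrix_rank(B), matrix_rank(matmul(A, B)))  # 3 2 0
```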
|
4,351,990 | <p>I have just finished my undergrad and while I haven't studied much in representation theory I find it a very fascinating subject. My current interest is in differential equations, and I am wondering is there any ongoing research that combines these two areas?</p>
| A. Thomas Yerger | 112,357 | <p>I'll offer another view: physics, especially quantum mechanics, is essentially about the interplay between representation theory and differential equations. Representations of groups like the unitary groups and the Heisenberg group(s) encode information about symmetries of physical systems, invariances under various sort of coordinate changes. These representations are often not on finite dimensional spaces, but infinite dimensional function spaces, and the group elements are often represented as differential operators, giving rise to equations of motion for physical systems. One can study these equations both on the representation theory of Lie groups and algebras side of things, and also with the techniques of PDEs, and both have interesting things to say about the systems that are governed by these representations.</p>
<p>I am not a physicist, but I helped a very talented undergraduate in physics learn the mathematics underlying the physics based on the book by Peter Woit, "Quantum Theory, Groups, and Representations." This book is not really a PDEs book, but touches on many topics adjacent to PDEs, including some harmonic analysis, some symplectic topology and geometry, some complex geometry and the infamous path integral. It's pretty reasonable to me that someone with a background in the analysis of these topics would have quite a bit to say about PDEs.</p>
|
270,849 | <p>I am trying to show that </p>
<p>$P(E\mid E\cup F) \geq P(E \mid F)$.</p>
<p>This is intuitively clear. But when expanding I get $P(E)\ P(F)\geq P(E\cup F)\ P(E \cap F)$. How do I continue?</p>
| Davide Giraudo | 9,849 | <p><strong>Hint:</strong> the difference between these two terms is $P(E\cap F^c)P(F\cap E^c)$. </p>
|
3,301,115 | <p>I'm currently taking an introductory course in graph theory, and this problem is giving me a bit of a hard time. Where would I even start? Thanks a bunch!</p>
| Michael Rozenberg | 190,319 | <p>By C-S
<span class="math-container">$$\sum_{cyc}\frac{a}{\sqrt{1-bc}}\leq\sqrt{\sum_{cyc}a\sum_{cyc}\frac{a}{1-bc}}.$$</span>
Thus, it's enough to prove that
<span class="math-container">$$\sum_{cyc}\frac{a}{1-bc}\leq\frac{9}{2(a+b+c)},$$</span> which is true by SOS:
<span class="math-container">$$\frac{9}{2(a+b+c)}-\sum_{cyc}\frac{a}{1-bc}=\sum_{cyc}\left(\frac{3}{2(a+b+c)}-\frac{a}{1-bc}\right)=$$</span>
<span class="math-container">$$=\frac{1}{2(a+b+c)}\sum_{cyc}\frac{3(a^2+b^2+c^2-bc)-2a(a+b+c)}{1-bc}=$$</span>
<span class="math-container">$$=\frac{1}{4(a+b+c)}\sum_{cyc}\frac{2a^2+6b^2+6c^2-6bc-4ab-4ac}{1-bc}=$$</span>
<span class="math-container">$$=\frac{1}{4(a+b+c)}\sum_{cyc}\frac{(c-a)(6c-a-3b)-(a-b)(6b-a-3c)}{1-bc}=$$</span>
<span class="math-container">$$=\frac{1}{4(a+b+c)}\sum_{cyc}(a-b)\left(\frac{6a-b-3c}{1-ac}-\frac{6b-a-3c}{1-bc}\right)=$$</span>
<span class="math-container">$$=\frac{1}{4(a+b+c)}\sum_{cyc}\frac{(a-b)^2(7-c(a+b)-3c^2)}{(1-ac)(1-bc)}=$$</span>
<span class="math-container">$$=\frac{1}{4(a+b+c)}\sum_{cyc}\frac{(a-b)^2(4c^2-c(a+b)+7(a^2+b^2))}{(1-ac)(1-bc)}=$$</span>
<span class="math-container">$$=\frac{1}{4(a+b+c)}\sum_{cyc}\frac{(a-b)^2\left(\left(2c-\frac{a+b}{4}\right)^2+7(a^2+b^2)-\frac{(a+b)^2}{16}\right)}{(1-ac)(1-bc)}\geq0.$$</span></p>
|
546,276 | <p>Let $\{s_n\}$ be a sequence in $\mathbb{R}$, and assume that $s_n \rightarrow s$. Prove that $s^k_n\rightarrow s^k$ for every $k \in\mathbb{N}$</p>
<p>Ok, so we need $|s^k_n - s^k| < \varepsilon$. I rewrote this as</p>
<p>$$|s_ns^{k-1}_n - ss^{k-1}|=|(s_n-s)(s^{k-1}_n + s^{k-1}) -s_ns^{k-1}+ss_n^{k-1}|$$</p>
<p>But this seems really messy. What should I use here: $|s_n - s| < \varepsilon?$</p>
<p>Help!</p>
| DanielV | 97,045 | <p>$y = \frac 1 4 x^3 + 12x + 6$</p>
<p>$L = -24x - 32$</p>
<p>$P$ is a line that is perpendicular to $L$.</p>
<p>The slope of a perpendicular is the negative of the multiplicative inverse, that is: $$\frac{dP} {dx} = -(\frac {dL} {dx})^{-1} = -\frac{dx} {dL}$$</p>
<p>To solve the problem we want $y$ where:
$$\underbrace{\frac {dx} {dy}}_{\text{slope of the inverse}} = \underbrace{\frac {dP} {dx}}_{\text{Slope of a perpendicular}}$$
$$\frac {dx} {dy} = -\frac {dx} {dL}$$
$$\frac {dy} {dx} = -\frac {dL} {dx}$$
$$\frac {3} {4} x^2 + 12 = -(-24)$$
$$x = \pm 4$$</p>
<p>Then just find the corresponding $y$ values, which are the $x$ values of the inverse function.</p>
|
2,147,458 | <p>Evaluate the following integral:
$$
\frac{2}{\pi}\int_{-\pi}^\pi\frac{\sin\frac{9x}{2}}{\sin\frac{x}{2}}dx
$$</p>
| Jack D'Aurizio | 44,121 | <p>The integral equals:
$$\frac{4}{\pi}\int_{-\pi/2}^{\pi/2}\frac{\sin(9x)}{\sin(x)}\,dx=\frac{4}{\pi}\int_{-\pi/2}^{\pi/2}\frac{e^{9ix}-e^{-9ix}}{e^{ix}-e^{-ix}}\,dx \tag{1}$$
that is:
$$ \frac{4}{\pi}\int_{-\pi/2}^{+\pi/2}\left(e^{8ix}+e^{6ix}+\ldots+1+\ldots+e^{-6ix}+e^{-8ix}\right)\,dx=\frac{4}{\pi}\int_{-\pi/2}^{\pi/2}1\,dx=\color{red}{4}\tag{2} $$
since $\int_{-\pi/2}^{\pi/2}\cos(2nx)\,dx = 0$.</p>
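<p>Since the telescoping in <span class="math-container">$(2)$</span> shows the integrand is a trigonometric polynomial (the Dirichlet kernel), the value <span class="math-container">$4$</span> is easy to confirm numerically. A quick sketch using the midpoint rule, which is essentially exact for trigonometric polynomials over a full period:</p>

```python
from math import sin, pi

def integrand(x):
    # sin(9x/2) / sin(x/2); the singularity at x = 0 is removable (value 9)
    s = sin(0.5 * x)
    return sin(4.5 * x) / s if abs(s) > 1e-12 else 9.0

N = 200                       # even, so no midpoint lands on x = 0
h = 2 * pi / N
value = (2 / pi) * h * sum(integrand(-pi + (k + 0.5) * h) for k in range(N))
print(value)  # ≈ 4.0
```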
|
2,905,022 | <p>I recently stumbled upon the problem $3\sqrt{x-1}+\sqrt{3x+1}=2$, where I am supposed to solve the equation for x. My problem with this equation though, is that I do not know where to start in order to be able to solve it. Could you please give me a hint (or two) on what I should try first in order to solve this equation?</p>
<p><strong>Note</strong> that I only want hints.</p>
<p>Thanks for the help!</p>
| Barry Cipra | 86,747 | <p>Hint: Try the substitution $u=\sqrt{x-1}$.</p>
<p><strong>Added later</strong>: The answer hints so far, including mine (above), have all aimed at squaring away the square root symbols, leaving a quadratic equation that's easy to solve, with the caveat that the solutions to the quadratic are not necessarily solutions to the original equation. That approach works for any equation of the form $A\sqrt{ax+b}+B\sqrt{cx+d}=C$. But for this specific equation, it turns out there's an easy solution. Since the OP has asked only for hints, I'll state the key idea in the form of a question:</p>
<blockquote>
<p>What can you say about $3\sqrt{x-1}+\sqrt{3x+1}$ if $x\gt1$?</p>
</blockquote>
|
213,916 | <p>Let $ D\subset \mathbb{C}$ be open, bounded, connected and with smooth boundary. Let $f$ be a nonconstant holomorphic function in a neighborhood of the closure of $D$ , such that $|f(z)|=c \forall z\in \partial D$, show that $f$ takes on each value $a$, such that $|a| < |c| $ at least once in $D$.</p>
| Christopher A. Wong | 22,059 | <p>The underlying principle in this problem is the open mapping property for holomorphic functions. However, this problem can be cleaned up by using some more specialized results.</p>
<blockquote>
<p><strong>Claim 1</strong>. $f(z)$ must vanish somewhere on $D$.</p>
</blockquote>
<p><em>Proof</em>: As $f$ is nonconstant, by the maximum modulus principle $|f(z)| < c$ on $D$. However, if $f(z)$ doesn't vanish on $D$, then by the minimum modulus principle $|f(z)| \ge c$ on $D$, a contradiction.</p>
<blockquote>
<p><strong>Claim 2</strong>. For every $a$ such that $|a| < c$, $f(z) - a$ has a zero in $D$.</p>
</blockquote>
<p><em>Proof</em>: Notice that for all $z \in \partial D$, $|2f(z) - (f(z) - a)| = |f(z) + a| \le c + |a| < 2c = |2f(z)|$. Therefore, by Rouche's theorem, the function $2f(z)$ and the function $f(z) - a$ must share the same number of zeros in $D$. By Claim 1, $f(z)$ vanishes somewhere in $D$, and hence $f(z) - a$ vanishes somewhere in $D$.</p>
|
3,732,571 | <p>I know that there exists a bijection between <span class="math-container">$[0,1]$</span> and <span class="math-container">$[0,1]\cup\{2\}$</span>, but late last night I was not able to come up with a trivial solution. Will be glad if one can provide such example.</p>
| Aryaman Maithani | 427,810 | <p>Define <span class="math-container">$f:[0, 1]\to[0,1]\cup\{2\}$</span> as
<span class="math-container">$$f(x) = \begin{cases}
2 & x = 1\\
\dfrac{1}{n-1} & x\text{ is of the form }\dfrac{1}{n};\;n\ge 2\\
x & \text{otherwise}
\end{cases}$$</span></p>
<p>The idea basically is to take the countable subset <span class="math-container">$\left\{\dfrac{1}{n} : n \ge 1\right\}$</span> of <span class="math-container">$[0, 1]$</span> and put that in a bijection with <span class="math-container">$\left\{\dfrac{1}{n} : n \ge 1\right\} \cup \{2\}$</span> and keep everything else fixed.</p>
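<p>The definition translates directly into code; a small sketch using exact rationals (checked, necessarily, only on a finite sample of the uncountable domain):</p>

```python
from fractions import Fraction

def f(x):
    # bijection [0,1] -> [0,1] ∪ {2}: shift the sequence 1, 1/2, 1/3, ...
    if x == 1:
        return Fraction(2)
    if x != 0 and x.numerator == 1:      # x = 1/n for some integer n >= 2
        return Fraction(1, x.denominator - 1)
    return x

sample = [Fraction(1, n) for n in range(1, 8)] + [Fraction(2, 7), Fraction(0)]
images = [f(x) for x in sample]
assert len(set(images)) == len(sample)   # injective on the sample
print(images[:4])  # [Fraction(2, 1), Fraction(1, 1), Fraction(1, 2), Fraction(1, 3)]
```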
|
3,732,571 | <p>I know that there exists a bijection between <span class="math-container">$[0,1]$</span> and <span class="math-container">$[0,1]\cup\{2\}$</span>, but late last night I was not able to come up with a trivial solution. Will be glad if one can provide such example.</p>
| FiMePr | 802,801 | <p>Well, I'm not sure there is a simple formula for this.</p>
<p>The idea is to find an infinite countable subset <span class="math-container">$C$</span> of <span class="math-container">$[0,1]$</span>, add the point <span class="math-container">$2$</span> to it and define a bijection between <span class="math-container">$C$</span> and <span class="math-container">$C \cup \lbrace 2 \rbrace$</span> using the "Hilbert's hotel" trick. You then define the rest of the bijection as the identity on <span class="math-container">$[0,1] \setminus C$</span>.</p>
<p>For instance, you could take <span class="math-container">$C = \lbrace \frac{1}{n} \,| \, n \geq 1 \rbrace$</span>. You can choose the bijection between <span class="math-container">$C$</span> and <span class="math-container">$C \cup \lbrace 2 \rbrace$</span> as the one that sends <span class="math-container">$2$</span> to <span class="math-container">$1$</span>, <span class="math-container">$1$</span> to <span class="math-container">$\frac{1}{2}$</span>, <span class="math-container">$\frac{1}{2}$</span> to <span class="math-container">$\frac{1}{3}$</span>, etc.</p>
<p>If you want to be pedantic, you can say that this is due to the equality <span class="math-container">$1+\omega = \omega$</span>, i.e. "infinity of the natural numbers plus one point at the beginning is infinity of the natural numbers, because of this shifting trick".</p>
|
380,177 | <p>In mathematics, I want to know what is indeed the difference between a <strong>ring</strong> and an <strong>algebra</strong>?</p>
| mdp | 25,159 | <p>A ring $R$ has operations $+$ and $\times$ <a href="https://en.wikipedia.org/wiki/Ring_(mathematics)#Definition" rel="noreferrer">satisfying certain axioms which I won't repeat here</a>. An (associative) algebra $A$ similarly has operations $+$ and $\times$ satisfying the same axioms (it doesn't need a multiplicative identity, but this axiom isn't always assumed in rings either), plus an additional operation $\cdot\;\colon R\times A\to A$, where $R$ is some ring (often a field) that <a href="https://en.wikipedia.org/wiki/Associative_algebra#Definition" rel="noreferrer">satisfies some axioms</a> making it compatible with the multiplication and addition in $A$. You should think of this as an analogue of scalar multiplication in vector spaces.</p>
<p>Note also that there are non-associative algebras, so the axioms on multiplication can be weakened from those in rings.</p>
<p>As a vague summary, the algebraic structure of a ring is entirely internal, but in an algebra there is also structure coming from interaction with an external ring of scalars.</p>
|
380,177 | <p>In mathematics, I want to know what is indeed the difference between a <strong>ring</strong> and an <strong>algebra</strong>?</p>
| rschwieb | 29,335 | <p>One thing that complicates answering this question is that rings are almost always assumed to be associative, but algebras are frequently not assumed to be associative. (In other words, my impression is that it's more common to allow 'algebra' to name something nonassociative than it is to use 'ring' to mean something nonassociative.)</p>
<p>Nonassociative algebras are not rare: <a href="http://en.wikipedia.org/wiki/Lie_algebra">Lie algebras</a> and <a href="http://en.wikipedia.org/wiki/Jordan_algebra">Jordan algebras</a> are common nonassociative algebras.</p>
<p>Associative algebras are not rare either: Every single ring $R$ is an associative algebra over its center!</p>
<p>Both structures might or might not be defined to have an identity, so we'll just overlook that feature.</p>
<p>Here's my take, (even though I think Matt Pressland's answer is pretty good already.) $R$ is a commutative ring.</p>
<p>An associative $R$-algebra $A$ is certainly a ring, and a nonassociative algebra may still be counted as a nonassociative ring. </p>
<p>The extra ingredient is an $R$ module structure on $A$ which plays well with the multiplication in $A$. (This was well described before by Matt P: indeed, they are like "scalars".) </p>
<p>In a nutshell, that module action and compatilibity is described by a ring homomorphism from $R$ into the center of $End(A)$, the ring of additive endomorphisms of $A$. </p>
|
1,212,425 | <p>This is a homework problem that I cannot figure out. I have figured out that if $n^2 + 1$ is a perfect square it can be written as such:</p>
<p>$n^2 + 1 = k^2$.</p>
<p>and if $n$ is even it can be written as such:</p>
<p>$n = 2m$</p>
<p>I believe I'm supposed to use the fact that if $n \pmod{4} \equiv 0$ or $1$ then it's a perfect square (maybe that's wrong).</p>
<p>I cannot figure this out.</p>
| Brian M. Scott | 12,042 | <p>You have the useful fact backwards. It’s not true that if $n$ is congruent to $0$ or $1$ mod $4$, then $n$ is a perfect square: $5\equiv1\pmod4$, and $8\equiv0\pmod4$, but neither $5$ nor $8$ is a perfect square. What <em>is</em> true is that if $n$ is a perfect square, then $n\equiv0\pmod 4$ or $n\equiv1\pmod 4$. To prove this, just show that the square of an even number is always congruent to $0$ mod $4$, and the square of an odd number is always congruent to $1$ mod $4$.</p>
<p>Thus, if $n^2+1$ is a perfect square, it must be congruent to $0$ or $1$ mod $4$. So must $n^2$. The only possibility, then, is that $n^2\equiv0\pmod4$ and $n^2+1\equiv1\pmod4$. But then $n^2$ is even, so ... ?</p>
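<p>Both facts are easy to spot-check by brute force; a tiny sketch (the second assertion also confirms the upshot of the argument, that $n^2+1$ is never a perfect square for $n \ge 1$):</p>

```python
from math import isqrt

for n in range(1, 10_000):
    # squares are 0 (mod 4) for even n and 1 (mod 4) for odd n ...
    assert n * n % 4 == (0 if n % 2 == 0 else 1)
    # ... and n^2 + 1 is never a perfect square for n >= 1
    assert isqrt(n * n + 1) ** 2 != n * n + 1
print("checked n = 1 .. 9999")
```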
|
3,814,195 | <p>As an applied science student, I've been taught math as a tool. And although I've been studying <strong>a lot</strong> throughout the years, I always felt like I am missing depth. Then I read geodude's answer on this <a href="https://math.stackexchange.com/questions/721364/why-dont-taylor-series-represent-the-entire-function">post</a>, that cited these beautiful quotes:</p>
<blockquote>
<p>You might want to do calculus in <span class="math-container">$\mathbb{R}$</span>, but the functions themselves naturally live in <span class="math-container">$\mathbb{C}$</span></p>
</blockquote>
<blockquote>
<p>Even in <span class="math-container">$\mathbb{R}$</span>, and in the most practical and applied problems, you can hear distant echos of the complex behavior of the functions. It's their nature, you can't change it.</p>
</blockquote>
<p>And although pieces of complex analysis are well known even to the most applied scientist (e.g Euler's identity), these quotes really helped me understand why my math knowledge is so shallow. It seems I share the same worries with other engineers: (<a href="https://math.stackexchange.com/questions/1658577/whats-the-best-way-for-an-engineer-to-learn-real-math">What's the best way for an engineer to learn "real" math?</a>) and I've found many beautiful and informative answers about diving deeper into mathematics, but none of them (as far as I could spot) addressed complex analysis. And as I think I am lost in the labyrinth of math knowledge, I ask this question:</p>
<p>How can one with a basic knowledge of real analysis approach complex analysis? Where do I start? Are there any books you would recommend?</p>
| Lawrence Mano | 201,051 | <p>Complex Analysis by Ahlfors is a masterpiece! But whatever book you read, you must read with not only your mind but also with your heart and soul!! You must feel the subject and only then Complex Analysis will stick.</p>
|
2,991,825 | <p>I'm trying to find the general solution to this matrix
<span class="math-container">\begin{bmatrix}1&-2&1&3&0\\2&-4&4&6&4\\ -2&4&-1&-6&2\\1&-2&-3&3&-8\end{bmatrix}</span></p>
<p>Ax=<span class="math-container">$\begin{bmatrix}1&6&0&-7&\end{bmatrix}^T$</span></p>
<p>I think I'm supposed to get it in x=x*+z format, I'm still not sure if this is the correct way to do it.
But I ended up getting this matrix in row echelon form.
-2(r1)+(r2)</p>
<p>2(r1)+(r3)</p>
<p>-(r1)+(r4)
<span class="math-container">\begin{bmatrix}1&-2&1&3&0\\0&0&2&0&4\\ 0&0&1&0&2\\0&0&-4&0&-8\end{bmatrix}</span></p>
<p>And then </p>
<p>2(r2)+(r4)</p>
<p>-1/2(r2)+(r3)</p>
<p><span class="math-container">\begin{bmatrix}1&-2&1&3&0\\0&0&2&0&4\\ 0&0&0&0&0\\0&0&0&0&0\end{bmatrix}</span></p>
<p>then</p>
<p>1/2(r2)</p>
<p><span class="math-container">\begin{bmatrix}1&-2&1&3&0\\0&0&1&0&2\\ 0&0&0&0&0\\0&0&0&0&0\end{bmatrix}</span></p>
<p>and lastly -(r2) + (r1)</p>
<p><span class="math-container">\begin{bmatrix}1&-2&0&3&-2\\0&0&1&0&2\\ 0&0&0&0&0\\0&0&0&0&0\end{bmatrix}</span></p>
<p>After doing some algebra I ended up getting</p>
<p>x1 = 2(x2) - 3(x4) + 2(x5)</p>
<p>x3 = -2(x5)</p>
<p>and set x5 = x4 = x2 = 1</p>
<p>and got z = <span class="math-container">$\begin{bmatrix}1&1&-2&1&1\end{bmatrix}^T$</span></p>
<p>But when I try to solve Ax = <span class="math-container">$\begin{bmatrix}1&6&0&-7\end{bmatrix}^T$</span>,</p>
<p>the last two rows are full of zeros,</p>
<p>so I can't have 0 = -7.</p>
<p>How would I solve this?</p>
| Doug M | 317,162 | <p>use that <span class="math-container">$|\sin x| < |x|$</span></p>
<p><span class="math-container">$\sin \frac{\pi}{2^{n+1}} < \frac{\pi}{2^{n+1}}$</span></p>
<p><span class="math-container">$0<2^{n}\sin \frac{\pi}{2^{n+1}} < \frac {\pi}{2}$</span></p>
<p>The sequence is bounded... can we show that it is monotone?</p>
<p><span class="math-container">$x_n = 2^{n}\sin \frac{\pi}{2^{n+1}}
= 2^{n}\sqrt {\frac {1-\cos \frac{\pi}{2^{n}}}{2}}
= 2^{n}\frac {\sin \frac {\pi}{2^n}}{\sqrt {2(1+\cos \frac{\pi}{2^{n}})}}
= x_{n-1}\sqrt {\frac {2}{1+\cos \frac{\pi}{2^{n}}}},\qquad
\sqrt {\frac {2}{1+\cos \frac{\pi}{2^{n}}}}>1$</span></p>
<p>Hence <span class="math-container">$\frac {x_{n}}{x_{n-1}} > 1$</span>, so the sequence is indeed increasing.</p>
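<p>Numerically, both the monotonicity and the bound (and the limit $\pi/2$) are visible at once; a quick sketch:</p>

```python
from math import sin, pi

# x_n = 2^n * sin(pi / 2^(n+1)): increasing and bounded above by pi/2
terms = [2 ** n * sin(pi / 2 ** (n + 1)) for n in range(1, 21)]
assert all(a < b for a, b in zip(terms, terms[1:]))   # strictly increasing
assert all(t < pi / 2 for t in terms)                 # bounded above by pi/2
print(terms[0], terms[-1])  # 1.414..., already very close to pi/2 = 1.5707...
```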
|
2,208,943 | <p>I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.</p>
<p>One thing I feel I am lacking in is motivation. That is, the difference in rigour between the usual introduction to calculus class and real analysis seems to be quite strong. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th century Euler style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary. </p>
<p>Is there a book that provides some historical motivation for the rigorous developement of calculus? Perhaps something that gives several counterexamples that occur when one is only equipped with a non-rigorous (i.e. first year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well. </p>
| Stella Biderman | 123,230 | <p>In general, the push for rigor is usually in response to a failure to be able to demonstrate the kinds of results one wishes to. It's usually relatively easy to demonstrate that there exist objects with certain properties, but you need precise definitions to prove that no such object exists. The classic example of this is non-computable problems and Turing Machines. Until you sit down and say "this precisely and nothing else is what it means to be solved by computation" it's impossible to prove that something isn't a computation, so when people start asking "is there an algorithm that does <span class="math-container">$\ldots$</span>?" for questions where the answer "should be" no, you suddenly need a precise definition. Similar things happened with real analysis.</p>
<p>In real analysis, as mentioned in an excellent comment, there was a shift in people's conception of the notion of a function. This broadened conception of a function suddenly allows for a number of famous "counter example" functions to be constructed. These often require a reasonably rigorous understanding of the topic to construct or to analyze. The most famous is the everywhere continuous nowhere differentiable Weierstrass function. If you don't have a very precise definition of continuity and differentiability, demonstrating that the function is one and not the other is extremely hard. The quest for weird functions with unexpected properties and combinations of properties was one of the driving forces in developing precise conceptions of those properties.</p>
<p>Another topic that people were very interested in was infinite series. There are lots of weird results that can crop up if you're not careful with infinite series, as shown by the now famously cautionary theorem:</p>
<blockquote>
<p><strong>Theorem (Summation Rearrangement Theorem):</strong> Let <span class="math-container">$a_n$</span> be a sequence such that <span class="math-container">$\sum a_n$</span> converges conditionally. Then for every <span class="math-container">$x$</span> there is some <span class="math-container">$b_n$</span> that is a reordering of <span class="math-container">$a_n$</span> such that <span class="math-container">$\sum b_n=x$</span>.</p>
</blockquote>
<p>This theorem means you have to be very careful dealing with infinite sums, and for a long time people weren't and so started deriving results that made no sense. Suddenly the usual free-wheeling algebraic manipulation approach to solving infinite sums was no longer okay, because sometimes doing so changed the value of the sum. Instead, a more rigorous theory of summation manipulation, as well as concepts such as uniform and absolute convergence had to be developed.</p>
<p>Here's an example of a problem surrounding an infinite product created by Euler:</p>
<blockquote>
<p>Consider the following formula:
<span class="math-container">$$x\prod_{n=1}^\infty \left(1-\frac{x^2}{n^2\pi^2}\right)$$</span>
Does this expression even make sense? Assuming it does, does this equal <span class="math-container">$\sin(x)$</span> or <span class="math-container">$\sin(x)e^x$</span>? How can you tell (notice that both functions have the same zeros as this product, and the same relationship to their derivative)? If it doesn't equal <span class="math-container">$\sin(x)e^x$</span> (which it doesn't; it really does equal <span class="math-container">$\sin(x)$</span>) how can we modify it so that it does?
</blockquote>
<p>Questions like this were very popular in the 1800s, as mathematicians were notably obsessed with infinite products and summations. However, most questions of this form require a very sophisticated understanding of analysis to handle (and weren't handled particularly well by the tools of the previous century).</p>
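<p>For instance, truncations of Euler's product really do single out $\sin(x)$ rather than $\sin(x)e^x$, though convergence is slow (the tail of the product contributes a factor of roughly $1 - x^2/(\pi^2 N)$); a quick numerical sketch:</p>

```python
from math import sin, exp, pi

def truncated_product(x, N):
    # x * prod_{n=1}^{N} (1 - x^2 / (n^2 pi^2))
    p = x
    for n in range(1, N + 1):
        p *= 1 - x * x / (n * n * pi * pi)
    return p

approx = truncated_product(1.0, 100_000)
print(approx, sin(1.0))       # agree to several decimal places
print(sin(1.0) * exp(1.0))    # clearly a different value
```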
|
2,208,943 | <p>I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.</p>
<p>One thing I feel I am lacking in is motivation. That is, the difference in rigour between the usual introduction to calculus class and real analysis seems to be quite strong. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th century Euler style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary. </p>
<p>Is there a book that provides some historical motivation for the rigorous developement of calculus? Perhaps something that gives several counterexamples that occur when one is only equipped with a non-rigorous (i.e. first year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well. </p>
| danny | 429,130 | <p>I list here few excellent texts on Real Analysis,have a look at them. </p>
<p>1)Understanding Analysis by Stephen Abbott</p>
<p>2)Real Mathematical Analysis by Pugh</p>
<p>3)Counterexamples in analysis by Gelbaum</p>
<p>For a historically inclined yet still mathematical account, you may try <em>The Calculus Gallery</em> by William Dunham.</p>
<p>Coming to your question of why there was a need for epsilon-delta proofs, have a look at this:
<a href="https://en.m.wikipedia.org/wiki/Non-standard_analysis" rel="noreferrer">https://en.m.wikipedia.org/wiki/Non-standard_analysis</a></p>
|
1,393,265 | <p>How can I prove that $(n!)^{1/n}$ tends to infinity as $n$ tends to infinity?
I tried to do this by expanding $n!$ as $n\times (n-1)\times (n-2)\cdots 4\times3\times2\times 1$ and taking out n common from each factor so that I can have $n$ outside the radical sign, But then the last terms would be $(4/n)\times(3/n)\times(2/n)\times (1/n)$, which would tend to zero and would present indeterminate form of $0\cdot \infty$, but how should I further solve it. I would appreciate a little help.</p>
| 3d0 | 217,450 | <p>Using Stirling's bounds:</p>
<p>$$\sqrt{2\pi}\ n^{n+1/2}e^{-n} \le n! \le e\ n^{n+1/2}e^{-n} $$</p>
<p>Let's raise both sides to the power $\frac{1}{n}$:</p>
<p>$$(\sqrt{2\pi}\ n^{n+1/2}e^{-n})^{\frac{1}{n}} \le (n!)^{\frac{1}{n}} \le (e\ n^{n+1/2}e^{-n})^{\frac{1}{n}} $$</p>
<p>The left side becomes
$$\sqrt[2n]{2\pi}\ n^{1+\frac{1}{2n}}e^{-1} \le (n!)^{\frac{1}{n}}$$
So:</p>
<p>$$\sqrt[2n]{2\pi n}\ e^{-1} n \le (n!)^{\frac{1}{n}}$$</p>
<p>and</p>
<p>$$\lim_{n\to \infty} \sqrt[2n]{2\pi n}\ e^{-1} n \to \infty$$</p>
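<p>The lower bound (and the growth of $(n!)^{1/n}$ itself) can be checked numerically; computing $(n!)^{1/n}$ as $e^{\ln(n!)/n}$ via <code>math.lgamma</code> avoids overflow. A quick sketch:</p>

```python
from math import lgamma, exp, e, pi

def root_factorial(n):
    # (n!)^(1/n) = exp(log(n!) / n), with lgamma(n + 1) = log(n!)
    return exp(lgamma(n + 1) / n)

for n in (10, 100, 1000, 10_000):
    lower = (2 * pi * n) ** (1 / (2 * n)) * n / e   # the bound derived above
    assert lower <= root_factorial(n) * (1 + 1e-12)
    print(n, root_factorial(n))
# the values grow roughly like n/e, hence (n!)^(1/n) -> infinity
```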
|
275,371 | <p>I was wondering if it is possible to decompose any symmetric matrix into a positive definite and a negative definite component. I can't seem to think of a counterexample if the statement is false.</p>
| Community | -1 | <p>This answer is years late but I like this proof.
The set of positive definite matrices is open in the set of symmetric matrices and must thus contain <span class="math-container">$\frac{n(n+1)}{2}$</span> matrices which are linearly independent. Take that as a basis for the vector space of symmetric matrices. Then every element can be written as a linear combination of positive definite matrices, and we can simply group the basis elements by the positive coefficients and negative coefficients, giving us that a symmetric matrix M can be written as the sum of a positive definite matrix and the negation of a positive definite matrix.</p>
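<p>The existence argument above is non-constructive; a cruder but explicit decomposition illustrating the same fact is $M = (M + cI) + (-cI)$, where $c$ is any shift large enough to make $M + cI$ positive definite (e.g. one more than the sum of the absolute values of the entries, a Gershgorin-style bound) and $-cI$ is negative definite. A small $2\times 2$ sketch checked via Sylvester's criterion:</p>

```python
def is_pos_def_2x2(m):
    # Sylvester's criterion: both leading principal minors positive
    return m[0][0] > 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0

M = [[0.0, 3.0], [3.0, -1.0]]                      # symmetric, indefinite
c = 1 + sum(abs(x) for row in M for x in row)      # safe diagonal shift
P = [[M[i][j] + (c if i == j else 0) for j in range(2)] for i in range(2)]
N = [[-c if i == j else 0.0 for j in range(2)] for i in range(2)]

assert is_pos_def_2x2(P)                                   # P positive definite
assert is_pos_def_2x2([[-x for x in row] for row in N])    # N negative definite
assert all(P[i][j] + N[i][j] == M[i][j] for i in range(2) for j in range(2))
print("M = P + N with P positive definite, N negative definite")
```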
|
1,407,797 | <p>P is the midpoint of the median from vertex A of triangle ABC. Q is the point of intersection of lines AC and BP.</p>
<p><a href="https://i.stack.imgur.com/ka8E8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ka8E8.png" alt="enter image description here"></a></p>
| suncup224 | 61,149 | <p><strong>Hint:</strong> </p>
<p>Part 1: Apply <a href="https://en.wikipedia.org/wiki/Menelaus'_theorem" rel="nofollow">Menelaus' theorem</a> on triangle $AMC$ and line $BPQ$.</p>
<p>Part 2: Apply Menelaus' theorem on triangle $BCQ$ and line $APM$ (needs the answer form part 1).</p>
<p>I'm giving only hints but you should be able to figure it out after learning the theorem (you will learn more!). If you need further help, feel free to comment.</p>
|
364,278 | <p>Let <span class="math-container">$X$</span> be a variety over a number field <span class="math-container">$K$</span>. Then it is known that for any topological covering <span class="math-container">$X' \to X(\mathbb{C})$</span>, the topological space <span class="math-container">$X'$</span> can be given the structure of a <span class="math-container">$\overline{K}$</span>-variety in such a way so that the morphism <span class="math-container">$f: X' \to X$</span> inducing the topological map is a finite etale morphism over <span class="math-container">$\overline{K}$</span>. However, the variety <span class="math-container">$X'$</span> and the morphism <span class="math-container">$f$</span> may not descend to <span class="math-container">$K$</span>.</p>
<p>My question is as follows: does there always exist a further finite etale covering <span class="math-container">$f' : X'' \to X'$</span> such that the composition <span class="math-container">$X'' \to X$</span> may be defined over <span class="math-container">$K$</span>?</p>
<p>EDIT: Just to be clear, I'd like all the covers involved to be geometrically connected to avoid trivial solutions.</p>
| S. carmeli | 115,052 | <p>Adding on Will's and Sasha's answers, the condition of having a rational point, or at least a "1-truncated homotopy fixed point" for the action, is necessary. For example, let <span class="math-container">$C_2$</span> act on the circle <span class="math-container">$S^1$</span> by half rotation. The covers of <span class="math-container">$S^1$</span> are the standard n-fold ones, and we can ask what it takes to lift the action of <span class="math-container">$C_2$</span> to the cover, so that it is "defined over <span class="math-container">$BC_2$</span>". In particular, we need to lift the half-circle rotation to the n-fold cover, and the candidates are rotations by <span class="math-container">$1/(2n) + k/n$</span> of a full turn. For such a lift to be an involution, applying it twice must give the identity, i.e. <span class="math-container">$1/n + 2k/n$</span> must be an integer. If <span class="math-container">$n$</span> is even, this is impossible, and so the double cover of this action on <span class="math-container">$S^1$</span> has no cover definable over <span class="math-container">$BC_2$</span>. To turn this topological picture into arithmetic, take <span class="math-container">$K=\mathbb{R}$</span> and let complex conjugation act on <span class="math-container">$\mathbb{C}^\times$</span> by <span class="math-container">$z\mapsto -1/\bar{z}$</span> (which is a form of the multiplicative group with no rational points). The action on the unit circle is then half rotation, so the Galois story is realized by the topological one up to profinite completion.</p>
<p>I would add that what happens topologically is that if we have a fixed point, we can use it to define a "connected" compositum of pointed covers, by taking the component of the tuple of base-point lifts. This is essentially what is missing in this example, even though up to isomorphism all covers are actually "the same".</p>
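<p>The rotation count in the first paragraph can be checked by brute force: a lift of the half rotation to the n-fold cover is rotation by <span class="math-container">$1/(2n)+k/n$</span> of a full turn, and it is an involution iff <span class="math-container">$(1+2k)/n$</span> is an integer. A small Python sketch (an illustration of the counting only):</p>

```python
# Illustration: which n-fold covers of the circle admit an involutive lift
# of the half rotation?  Lifts rotate by 1/(2n) + k/n of a full turn; the
# square rotates by (1 + 2k)/n, which must be an integer for an involution.
from fractions import Fraction

def has_involutive_lift(n):
    return any(Fraction(1 + 2 * k, n).denominator == 1 for k in range(n))

for n in range(1, 13):
    # odd n: k = (n - 1)/2 works; even n: 1 + 2k is odd, so n never divides it
    assert has_involutive_lift(n) == (n % 2 == 1)
print("the half rotation lifts to an involution exactly for odd n")
```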
|
45,570 | <p>I'm writing a little package in Mathematica for geology where a particular stone may be approximated as a hemisphere. This is only a rough approximation, though, because a true hemisphere is as tall as its radius, whereas a reservoir stone (for a hydrocarbon) often has the form of a section of a hemisphere whose height is smaller than its radius. For example, I can have a "hemisphere" with a radius of 5 km and a height of only 3 km, and I can plot it like this:</p>
<pre><code>semisfera[x_, y_, raggio_] := Sqrt[raggio^2 - (x - raggio)^2 - (y - raggio)^2];
plotsemisfera = Plot3D[semisfera[x, y, raggioSfera], {x, 0, 2 raggioSfera}, {y, 0, 2 raggioSfera}, PlotRange -> {0, 3}, AxesLabel -> {"lunghezza km" , "larghezza km","profondità km"}, PlotLabel -> Style[Framed["Referenced Theorical Hemisphere"], 22, Black]]
</code></pre>
<p>and I get the following graphic:
<img src="https://i.stack.imgur.com/O5pSO.jpg" alt="enter image description here"></p>
<p>you'll agree with me that this is a section of a hemisphere without the top part, won't you?</p>
<p>Sometimes it may happen that the height is << radius.
In my case, my geology student worked on a stone with a radius of 5 km and a height of only 0.2 km.
If I try to plot this as I've done before, I get a very awful graphic, here:</p>
<p><img src="https://i.stack.imgur.com/Zo3eb.jpg" alt="enter image description here"></p>
<p>So, I'd just like to know if there is a way to plot a more precise graphic, without all that irregular part at the base of the hemisphere.</p>
<p>The centre of the "hemisphere" should be at $(0,0)$.</p>
<p>Maybe it could be something like that:
<a href="http://uploadpie.com/eAVvq" rel="nofollow noreferrer">http://uploadpie.com/eAVvq</a></p>
<p>but I really don't understand why for low values of the height the base of the hemisphere is so jagged!</p>
<p>How can I plot that? Thank you</p>
| Jens | 245 | <p>The ragged edges were caused by the fact that the parametrization of the surface in terms of height over the equatorial plane is singular at the equator, as you can also see in the increased distance between mesh lines on the plotted surface. So the main part of the solution is to choose a better parametrization, and the most common one is of course some variation of spherical coordinates. </p>
<p>Here is a completely different approach which also uses an angle to parametrize the surface, but should yield a more reusable and render-friendly object that can be displayed in <code>Graphics3D</code>. </p>
<p>It uses the fact that <code>Tube</code> can be provided not just with a list of points on which it is centered, but also accepts a second argument that contains a list of <em>radii</em>. I use this to draw the shell. <code>Tube</code> is therefore a quick and simple way to define rotationally symmetric shapes without using one of the <code>Plot3D</code> family commands.</p>
<pre><code>Clear[pebble];
pebble[h_, n_: 40] := {
CapForm["Butt"],
Apply[Tube,
Transpose[
Table[
{{0, 0, Sin[θ]}, Cos[θ]},
{θ, 0, ArcSin[h], Pi/(2 n)}]]]
}
Graphics3D[{Lighter[Brown], pebble[.7]}, Lighting -> "Neutral",
Boxed -> False]
</code></pre>
<p><img src="https://i.stack.imgur.com/1aTD6.png" alt="pebble1"></p>
<pre><code>Graphics3D[{Lighter[Brown], Table[
Translate[
Rotate[Scale[ pebble[.1 i], RandomReal[{.1, 1.5}]],
i Pi/10, {0, 1, 0}], {2 i, i, 0}], {i, 1, 10}]},
Lighting -> "Neutral", Boxed -> False]
</code></pre>
<p><img src="https://i.stack.imgur.com/WCK15.png" alt="pebbles"></p>
<p>The function <code>pebble</code> has only one required argument: the height from the equator, assumed to be between <code>0</code> and <code>1</code>. So it's basically the aspect ratio, and in the second plot I show how to adjust the overall size using <code>Scale</code> in <code>Graphics3D</code>. The second optional argument to <code>pebble</code> is the resolution with which the surface is drawn, given in terms of the number of latitude divisions. </p>
|
155,547 | <p>Given $X_1, \ldots, X_n$ from $\mathcal{N} (\mu, \sigma^2)$.</p>
<p>I have to compute the probability:
$$P\left(|\bar{X} - \mu| > S\right)$$
where $\bar{X}$ is the sample mean and $S^2$ is the sample variance.</p>
<p>I tried to expand:
$$P\left(\bar{X}^2 + \mu^2 - \bar{X}\mu > \frac{1}{n}\sum {X_i}^2 + \frac{1}{n}\sum\bar{X} - 2\left(\frac{1}{n}\sum X_i\right) \bar{X} \right) $$
$$P\left( \mu^2 - \bar{X}\mu > \frac{1}{n}\sum {X_i}^2 - 2\bar{X}^2 \right) $$</p>
<p>but it does not seem to be helpful.</p>
<p>Can someone help me?</p>
| Did | 6,179 | <p>The empirical mean and empirical variance of i.i.d. normal samples are independent and follow known distributions, which are respectively normal and chi-squared. This indicates that
$$
\mathrm P\left(|\bar X-\mu|\gt S\right)=\mathrm P\left((n-1)Z_1^2\gt n(Z_2^2+\cdots+Z_n^2)\right),
$$
where $(Z_k)_{1\leqslant k\leqslant n}$ is i.i.d. and standard normal. More simply, this is $\mathrm P(|T_{n-1}|\gt \sqrt{n})$, where the distribution of $T_{n-1}$ is the <a href="http://en.wikipedia.org/wiki/Student%27s_t-distribution#How_the_t-distribution_arises" rel="nofollow">Student's $t$-distribution</a> with $n-1$ degrees of freedom. Hence,
$$
\mathrm P\left(|\bar X-\mu|\gt S\right)=\mathrm P\left(|T_{n-1}|\gt\sqrt{n}\right)=I_{\frac{n-1}{2n-1}}\left(\frac{n-1}2,\frac12\right),
$$
where $I$ denotes the regularized incomplete beta function. The cases $n=2$, $3$, $4$ and $\infty$ are <a href="http://en.wikipedia.org/wiki/Student%27s_t-distribution#Special_cases" rel="nofollow">somewhat explicit</a>.</p>
<p><strong>Edit:</strong> Recall that the empirical mean $\bar X$ and the empirical variance $S^2$ of the sample $(X_k)_{1\leqslant k\leqslant n}$ are defined as
$$
\bar X=\frac1n\sum\limits_{k=1}^nX_k,\qquad\qquad S^2=\frac1{n-1}\sum\limits_{k=1}^n(X_k-\bar X)^2.
$$</p>
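<p>For $n=2$ the first displayed identity reads $\mathrm P(Z_1^2\gt 2Z_2^2)=\mathrm P(|Z_1/Z_2|\gt\sqrt2)$, and $Z_1/Z_2$ is standard Cauchy, so the probability is $1-\frac2\pi\arctan\sqrt2\approx0.392$. A quick Monte Carlo sanity check of this case in Python (an illustration only):</p>

```python
# Monte Carlo check of P((n-1) Z1^2 > n (Z2^2 + ... + Zn^2)) for n = 2,
# where the probability is the Cauchy tail 1 - (2/pi) arctan(sqrt 2).
import math
import random

random.seed(0)
N = 200_000
hits = sum(random.gauss(0, 1) ** 2 > 2 * random.gauss(0, 1) ** 2
           for _ in range(N))
est = hits / N

exact = 1 - (2 / math.pi) * math.atan(math.sqrt(2))    # ~ 0.3918
assert abs(est - exact) < 0.01
print(f"Monte Carlo: {est:.4f}, exact: {exact:.4f}")
```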
|
329,513 | <p>$$
\int \frac{\sqrt{\frac{x+1}{x-2}}}{x-2}dx
$$</p>
<p>I tried:
$$
t =x-2
$$
$$
dt = dx
$$
but it didn't work.
Do you have any other ideas?</p>
| littleO | 40,119 | <p>The phenomenon in convex optimization that the dual of the dual problem is (usually) the same as the primal problem is seemingly a total surprise, and it is only rarely explained. But there's a nice, enlightening explanation that I learned from reading Ekeland and Temam. This material can also be found in the book Variational Analysis by Rockafellar and Wets, starting on p. 502.</p>
<p>The ideas are most clear when we work at an appropriate level of generality. We don't obtain a dual problem until we specify how to <strong>perturb</strong> the primal problem. </p>
<p>Suppose that the primal problem is
$$
\operatorname{minimize}_x \,\phi(x,0),
$$ where $\phi:\mathbb R^m \times \mathbb R^n \to \mathbb R \cup \{ \infty \}$ is a convex function. For a given $y$, the problem of minimizing $\phi(x,y)$ with respect to $x$ can be viewed as a "perturbed" version of the primal problem. Let's introduce the "value function" $h(y) = \inf_x \, \phi(x,y)$. So, the primal problem is to evaluate $h(0)$. If we have a basic understanding of the Fenchel conjugate, then we know that $h(0) \geq h^{**}(0)$, and typically $h(0) = h^{**}(0)$. <strong>The dual problem is simply to evaluate $h^{**}(0)$.</strong></p>
<p>Let's try to write the dual problem more explicitly.
First of all,
\begin{align}
h^*(z) &= \sup_y \, \langle y, z \rangle - h(y) \\
&= \sup_y \, \langle y, z \rangle - \inf_x \, \phi(x,y) \\
&= \sup_y \, \langle y, z \rangle + \sup_x - \phi(x,y) \\
&= \sup_{x,y} \, \langle x, 0 \rangle + \langle y, z \rangle - \phi(x,y) \\
&= \phi^*(0,z).
\end{align}
It follows that
\begin{align}
h^{**}(0) &= \sup_z \, \langle 0, z \rangle - \phi^*(0,z) \\
&= - \inf_z \, \phi^*(0,z).
\end{align}
So the dual problem, written as a minimization problem, is
$$
\operatorname{minimize}_z \, \phi^*(0,z).
$$
Look at the <strong>beautiful similarity</strong> between the primal and dual problems.</p>
<p>We did not obtain a dual problem until we specified how to perturb the primal problem. So, what if we now perturb the dual problem in the obvious way? A perturbed dual problem is
$$
\operatorname{minimize}_z \, \phi^*(w,z).
$$
Now that we have specified how to perturb the dual problem, we can obtain a dual for the dual problem, in exactly the same manner as above. <em>And you can see immediately what the dual of the dual problem will be</em>, without doing any work.
The dual of the dual problem is:
$$
\operatorname{minimize}_x \, \phi^{**}(x,0).
$$
But typically we have $\phi^{**} = \phi$, in which case the dual of the dual problem is exactly the primal problem.</p>
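<p>A tiny numeric illustration of $h(0)=h^{**}(0)$ (a made-up example, not from the discussion above): take $\phi(x,y)=(x-1)^2+(x-y)^2$, for which minimizing over $x$ (at $x=(1+y)/2$) gives the value function $h(y)=(1-y)^2/2$, and approximate the two Legendre transforms on a grid:</p>

```python
# Illustration: phi(x, y) = (x - 1)^2 + (x - y)^2, so minimizing over x
# (at x = (1 + y)/2) gives the value function h(y) = (1 - y)^2 / 2.
# Discrete Legendre transforms should recover h**(0) = h(0) = 1/2.

step = 0.02
grid = [i * step for i in range(-250, 251)]            # the interval [-5, 5]

def h(y):
    return (1 - y) ** 2 / 2

h_star = {z: max(y * z - h(y) for y in grid) for z in grid}   # h*(z)
h_star_star_0 = max(-h_star[z] for z in grid)                 # h**(0)

assert abs(h_star_star_0 - h(0)) < 1e-9
print(f"h(0) = {h(0)}, h**(0) ~ {h_star_star_0:.6f}")
```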
<hr>
<p>You might wonder how this dual problem construction connects to the standard dual problem construction (where you first form the Lagrangian, etc.). Suppose the primal problem is
\begin{align}
\text{minimize} & \quad f(x) \\
\text{subject to} & \quad g(x) \leq 0.
\end{align}
A perturbed problem is
\begin{align}
\text{minimize} & \quad f(x) \\
\text{subject to} & \quad g(x) + y\leq 0.
\end{align}
Having specified how to perturb the primal problem, we now obtain a dual problem, and if you work out the details it turns out to be exactly the dual problem that you would expect. I gave more details here:</p>
<p><a href="https://math.stackexchange.com/questions/223235/please-explain-the-intuition-behind-the-dual-problem-in-optimization">Please explain the intuition behind the dual problem in optimization.</a></p>
|
2,631,220 | <p>If $z$ is a variable complex number and $a$ is a fixed complex number, is it true that if $z$, $a$ satisfy the following condition </p>
<p>$|z+a| = |z-a|$ </p>
<p>Then the locus of $z$ is the perpendicular bisector of $a$ and $-a$ ?</p>
| Dr. Sonnhard Graubner | 175,066 | <p>let $$z=x+iy$$ then $$|x+a+iy|=|x-a+iy|$$ and we get
$$\sqrt{(x+a)^2+y^2}=\sqrt{(x-a)^2+y^2}$$
Can you proceed?</p>
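<p>A quick numeric sanity check (an illustration): for any nonzero $a$, every $z$ on the line through the origin perpendicular to $a$, i.e. $z=ita$ with $t$ real, satisfies $|z+a|=|z-a|$:</p>

```python
# Numeric check: points z = i*t*a (the line through 0 perpendicular to a)
# are equidistant from a and -a, i.e. |z + a| = |z - a|.
import random

random.seed(1)
for _ in range(100):
    a = complex(random.uniform(0.5, 5), random.uniform(-5, 5))   # a != 0
    t = random.uniform(-10, 10)
    z = 1j * t * a
    assert abs(abs(z + a) - abs(z - a)) < 1e-9
print("all sampled points on the perpendicular satisfy |z+a| = |z-a|")
```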
|
92,020 | <p>Let $W_t$ be a Brownian motion with $m$ independent components on $(\Omega,F,P)$.<br>
Let $G(\omega,t)=[g_{ij}(\omega,t)]_{1\leq i\leq n,1\leq j\leq m}$ in $V^{n\times m}[S,T]$ such that<br>
$$\limsup_{\omega,t \in\Omega\times[S,T]} \sum_i^m \sum_j^n | g_{ij}(\omega,t)|<\infty$$ and<br>
$$\int_S^T E|G(\omega,t)^6|dt<\infty.$$<br>
I have to prove that:<br>
$$E\left|\int_S^T G(\omega,t)dW_t\right|^6 \le 15^3(T-S)^2\int_S^TE|G(\omega,t)^6|dt<\infty.$$<br>
I've also a hint:<br>
$$\int_S^T \int_\Omega H(\omega,t)^4 K(\omega,t)^2dtdP(\omega) \le \left\{\int_S^T \int_\Omega H(\omega,t)^6 dtdP(\omega) \right\}^{4/6} \left\{ \int_S^T \int_\Omega K(\omega,t)^6dtdP(\omega))\right\}^{2/6}.$$ </p>
<p>My idea was to use Itō's isometry in order to pass from $dW_t$ in $dt$ but I don't know if it's possible with $6$ at the exponent. Maybe a change of variable? Anyway I can't figure out where the coefficient $15^3(T-S)^2$ came from... </p>
<p>Thank you for your help</p>
<p>EDIT: I found a very interesting article from Novikov called "On moment inequalities and identities for stochastic integrals", which analyses a very similar case.
I have not had time to properly study this work, but the key was applying Itō's formula to a specific function. </p>
| Brenton | 226,184 | <p>This is an excerpt from the book "Stochastic Differential Equations and Applications" by Xuerong Mao. I've frequently used this book as a reference for SDEs. This looks like exactly what you're asking for, but specifically for $p=6$ (Theorem 5.21 they refer to is the Ito Isometry property)</p>
<p><img src="https://i.stack.imgur.com/dXeSV.png" alt="enter image description here"></p>
<p><img src="https://i.stack.imgur.com/UF7B7.png" alt="enter image description here"></p>
|
1,092,665 | <p>My question is really simple, how can I write symbolically this phrase: </p>
<blockquote>
<p>$x=\sum a_mx^m$ where $m$ ranges over
$\{1,\ldots,g\}\setminus\{t_1,\ldots,t_u\}$</p>
</blockquote>
<p>Being more specific, I would like to know how to write with mathematical symbols this part: "range over $\{1,\ldots,g\}\setminus\{t_1,\ldots,t_u\}$"</p>
<p>Thanks</p>
| Community | -1 | <p>You could put the limits of the sum behind the sum, making them more inline instead of the big displaystyle format.</p>
<p>If you are using LaTeX I do wonder why there is no space left to use the slightly better looking alternative Kez gave. (if you are not using LaTeX, why not..?)</p>
<p>$\Large x=\sum_{{m=1\ldots g,\,\, m\,\notin\, \{t_1,\,\ldots\,,\,t_u\}}} a_mx^m$</p>
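<p>If you do want the condition to stay underneath the summation sign, <code>\substack</code> from <code>amsmath</code> stacks the two constraints; one possible rendering:</p>

```latex
% requires \usepackage{amsmath}
x = \sum_{\substack{m = 1, \ldots, g \\ m \notin \{t_1, \ldots, t_u\}}} a_m x^m
```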
|
201,236 | <p>how to compute numerically the integral </p>
<pre><code>NIntegrate[6 x/(1 - x), {x, 0, 1}]
</code></pre>
<p>to give a value which is approximately equal to 891.441</p>
| David Keith | 44,700 | <p>The integral does not exist:</p>
<pre><code>Limit[Integrate[6 x/(1 - x), x], x -> 1]
(* \[Infinity] *)
out = Quiet@
Table[{wp,
NIntegrate[6 x/(1 - x), {x, 0, 1}, Exclusions -> 1,
WorkingPrecision -> wp]}, {wp, 10, 500, 5}];
ListPlot[out, Frame -> True,
 FrameLabel -> {"Working Precision", "NIntegrate Result"}]
</code></pre>
<p><a href="https://i.stack.imgur.com/a3Ca2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a3Ca2.png" alt="enter image description here"></a></p>
|
201,236 | <p>how to compute numerically the integral </p>
<pre><code>NIntegrate[6 x/(1 - x), {x, 0, 1}]
</code></pre>
<p>to give a value which is approximately equal to 891.441</p>
| bill s | 1,783 | <p>You can approach this by finding the integral from 0 to something a little less than 1:</p>
<pre><code>f[eps_] = Integrate[6 x/(1 - x), {x, 0, eps}, Assumptions -> 0 < eps < 1]
-6 (eps + Log[1 - eps])
</code></pre>
<p>Now you can see explicitly that as eps -> 1, the integral diverges to +Infinity (the integrand is positive on the interval, and -6 Log[1 - eps] grows without bound):</p>

<pre><code>Limit[f[eps], eps -> 1, Direction -> "FromBelow"]
\[Infinity]
</code></pre>
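<p>The closed form can also be checked against direct numerical quadrature on $[0, 1-\delta]$ (a Python sketch using composite Simpson's rule; an illustration only):</p>

```python
# Check f[eps] = -6 (eps + Log[1 - eps]) against Simpson quadrature, and
# note that it grows without bound as eps -> 1.
import math

def simpson(f, a, b, n=200_000):           # composite Simpson, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3

integrand = lambda x: 6 * x / (1 - x)
for delta in (1e-2, 1e-3):
    numeric = simpson(integrand, 0.0, 1.0 - delta)
    closed = -6 * ((1.0 - delta) + math.log(delta))
    assert abs(numeric - closed) < 1e-3 * closed
print("quadrature matches the closed form; it -> +Infinity as delta -> 0")
```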
|
1,512,171 | <p>I want to show that there exists a diffeomorphism $\phi$ such that the following diagram commutes:
$$
\require{AMScd}
\begin{CD}
TS^1 @>{\phi}>> S^1\times\mathbb{R}\\
@V{\pi}VV @V{\pi_1}VV \\
S^1 @>{id_{S^1}}>> S^1
\end{CD}$$
where $\pi$ is the associated projection of $TS^1$, and $\pi_1(x,y)=x$ is the standard projection function in the first component.</p>
<p>A hint was given along with the exercise that I should find a nowhere vanishing vector field on $S^1$. However, I don't know how to find one exactly, or what to do subsequent to finding such a vector field. I have seen an analogous example where $\phi$ was given without reason where $S^1$ and $\mathbb{R}$ were both instead $\mathbb{R}^n$. The definition of that $\phi$ was:$$\phi(a^i\frac{\partial}{\partial x^i}(p)) = (p,(a^1,...,a^n)).$$Perhaps the nowhere vanishing vector field on $S^1$ is used in an analogous formula?</p>
<p>Could anyone give some additional hints or a sketch of a proof?</p>
<p><strong>EDIT:</strong> Thinking about it, if I get the nowhere vanishing vector field, say, $u$, then because $S^1$ is a 1-manifold, I have that $T_pS^1$ is 1-dimensional as well. So that means that $T_pS^1$ is spanned by $u_p$. So I am thinking we use $\forall v_p\in TS^1$ the unique coefficient given by $\alpha\in\mathbb{R}$ such that $v_p = \alpha u_p$. So perhaps:$$\phi(v_p)=(p,\alpha),$$is our diffeomorphism? In that case, is there a condition that is met by $S^1$ such that it has to have a nowhere vanishing vector field (i.e. I don't have to find an exact formula for one)?</p>
| C. Falcon | 285,416 | <p>$f$ isn't Riemann-integrable but Lebesgue-integrable and indeed its integral is $1$, because $f=1$ almost everywhere on $[0,1]$, since $\mathbb{Q}$ is countable.</p>
|
1,341,440 | <p>I came across a claim in a paper on branching processes which says that the following is an <em>immediate consequence</em> of the B-C lemmas:</p>
<blockquote>
<p>Let $X, X_1, X_2, \ldots$ be nonnegative iid random variables. Then $\limsup_{n \to \infty} X_n/n = 0$ if $EX<\infty$, and $\limsup_{n \to \infty} X_n/n = \infty$ if $EX=\infty$.</p>
</blockquote>
<p>So to apply the BC lemmas to these, I want to essentially show that
$$(1) \; \textrm{If } EX<\infty, \textrm{ then } P(\limsup \{X_n/n > \epsilon\}) = 0 \quad \forall \epsilon>0$$
$$(2) \; \textrm{If } EX=\infty, \textrm{ then } P(\limsup \{X_n/n > \delta\}) = 1 \quad \forall \delta>0$$</p>
<p>But I keep getting stuck. For example if I want to apply the first BC lemma to (1), then using Markov's inequality only gives $P(X_n > n\epsilon) < EX/n\epsilon$, which isn't summable. Am I missing something right under my nose?</p>
| Bhaskar Vashishth | 101,661 | <p>Let $|G|=8$ where $x\in G$ and $|x|=4$. Now $|x^2|=2$ and denote $Z=Z(G)$</p>
<p>Consider canonical homomorphism $\eta :G \to G/Z$.</p>
<p>Case 1- $|Z|=8$, $G$ is abelian, nothing to prove.</p>
<p>Case 2- $|Z|=4$, then $\eta : G \to \Bbb{Z}_2$, so if $x \to \bar{0}$ then so does $x^2$, and if $x \to \bar{1}$, then $\eta(x^2)=\bar{1}+\bar{1}=\bar{0}$</p>
<p>Case 3- $|Z|=2$, and $|G/Z|=4$. This implies $x \notin Z$. Note as $\eta(x)=xZ \in G/Z$. But $|Z|=|xZ|=2$ (as they are distinct cosets, so have equal size). But $|xZ|=2 \implies x^2\in Z$</p>
<p>Case 4- $|Z|=1$. This is not possible by the class equation, as all nontrivial conjugacy classes have even order.</p>
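<p>The statement can also be machine-checked on a concrete nonabelian example (an illustration of case 3, not part of the proof): in the dihedral group of order $8$, realized as permutations of the vertices of a square, every element of order $4$ squares into the center. A Python sketch:</p>

```python
# Illustration: in the dihedral group of order 8 (symmetries of a square,
# as permutations of the vertices 0..3), every order-4 element squares
# into the center.

def compose(p, q):                      # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

e = (0, 1, 2, 3)
r = (1, 2, 3, 0)                        # rotation, order 4
s = (1, 0, 3, 2)                        # a reflection

G = {e}
frontier = {r, s}
while frontier:                         # generate the group by closure
    G |= frontier
    frontier = {compose(a, b) for a in G for b in G} - G
assert len(G) == 8

center = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}

def order(g):
    k, acc = 1, g
    while acc != e:
        acc, k = compose(acc, g), k + 1
    return k

for g in G:
    if order(g) == 4:
        assert compose(g, g) in center
print("every order-4 element of D4 squares into the center")
```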
|
3,476,022 | <p>I was watching this Mathologer video (<a href="https://youtu.be/YuIIjLr6vUA?t=1652" rel="noreferrer">https://youtu.be/YuIIjLr6vUA?t=1652</a>) and he says at 27:32</p>
<blockquote>
<p>First, suppose that our initial <em>chunk</em> is part of a parabola, or if you like a cubic, or any polynomial. If I then tell you that my <em>mystery function</em> is a polynomial, there's always going to be exactly one polynomial that continues our initial <em>chunk</em>. In other words, <strong>a polynomial is completely determined by any part of it.</strong> [...] Again, just relax if all this seems a little bit too much.</p>
</blockquote>
<p>So he didn't give a proof of the theorem in bold text – I think this is very important.</p>
<p>I understand that there always exists a polynomial of degree <span class="math-container">$n$</span> that passes through a set of <span class="math-container">$n+1$</span> points (i.e. there are <strong>finitely many</strong> custom points to be passed by, the <em>chunk</em> has to be discrete, like <span class="math-container">$(1,1),(2,2),(3,3),(4,5)$</span>). But there also exists some polynomial of degree <span class="math-container">$m$</span> (<span class="math-container">$m\ne n$</span>) that passes through the same set of points.</p>
<p>But how do I prove that there exists one and only one polynomial that passes through a set of <strong>infinitely many</strong> points?</p>
| ncmathsadist | 4,154 | <p>Polynomials are analytic functions. If two analytic functions agree on a set having a limit point, they must be equal by the <a href="https://en.wikipedia.org/wiki/Identity_theorem" rel="nofollow noreferrer">Identity Theorem.</a></p>
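<p>For polynomials the identity-theorem fact specializes to: a polynomial of degree at most $n$ is pinned down by its values at any $n+1$ points, since two such polynomials agreeing there differ by a polynomial with $n+1$ roots, hence zero. A Python sketch recovering a cubic exactly from $4$ samples (an illustration only; the cubic is made up):</p>

```python
# Illustration: exact Lagrange interpolation with fractions; a cubic is
# recovered exactly from any 4 of its values.
from fractions import Fraction

def interpolate(pts):
    """Coefficients (lowest degree first) of the unique polynomial of
    degree < len(pts) through the given integer points."""
    n = len(pts)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(pts):
        num = [Fraction(1)]             # running product prod_{j!=i}(x - xj)
        denom = Fraction(1)
        for j, (xj, _) in enumerate(pts):
            if j == i:
                continue
            shifted = [Fraction(0)] + num                   # x * num
            scaled = [Fraction(xj) * c for c in num] + [Fraction(0)]
            num = [a - b for a, b in zip(shifted, scaled)]  # (x - xj) * num
            denom *= Fraction(xi - xj)
        for k, c in enumerate(num):
            coeffs[k] += Fraction(yi) * c / denom
    return coeffs

p = lambda x: 2 * x**3 - x + 5              # a hypothetical "mystery" cubic
pts = [(x, p(x)) for x in (0, 1, 2, 3)]
assert interpolate(pts) == [5, -1, 0, 2]

pts2 = [(x, p(x)) for x in (-7, -1, 4, 10)]  # a different "chunk" of points
assert interpolate(pts2) == [5, -1, 0, 2]
print("the cubic is determined by any 4 of its values")
```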
|
173,112 | <blockquote>
<p>Solve for $x$. $12x^3+8x^2-x-1=0$ all solutions are rational and between $\pm 1$</p>
</blockquote>
<p>As mentioned in my previous answers, I'm guessing I have to use the Rational Root Theorem. But I've done my research and I do not understand what to plug in or anything about it at all. Can someone please dumb this theorem down so I can try to solve this equation. I also <strong>do not</strong> want anyone to solve this problem for me. Thanks!</p>
| Bill Dubuque | 242 | <p>The reciprocals of the roots are roots of the (negated) reversed polynomial $\rm\:x^3+x^2-8\,x-12.\:$ By the Rational Root Test all its roots are integers. If the roots are $\rm\:a,b,c\:$ then by Vieta's Formulas we have $\rm\:abc = 12,\ a+b+c =-1\:$ so $\rm\:a,b,c = \ldots$</p>
<p><strong>Remark</strong> $\ $ I chose to work with the reciprocals of the roots because I know they are integers by RRT, and it is more intuitive to do arithmetic with integers than with their reciprocals. Notice that this transformation makes the problem so simple that it can easily be solved <em>mentally</em>. Indeed, it took me less than $10$ seconds to do so. With a little practice, anyone can be just as proficient.</p>
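<p>The computation hinted at above is easy to verify exactly (this fills in the final step, so skip it if you want to finish the exercise yourself): the integer roots of $x^3+x^2-8x-12$ with product $12$ and sum $-1$ are $3,-2,-2$, so the original cubic has roots $1/3,-1/2,-1/2$. A Python check:</p>

```python
# Exact check: roots of the reversed cubic and their reciprocals.
from fractions import Fraction as F

reversed_poly = lambda x: x**3 + x**2 - 8 * x - 12
original_poly = lambda x: 12 * x**3 + 8 * x**2 - x - 1

roots = [3, -2, -2]                       # abc = 12, a + b + c = -1
assert all(reversed_poly(r) == 0 for r in roots)
assert roots[0] * roots[1] * roots[2] == 12 and sum(roots) == -1

# the reciprocals are exactly the roots of the original polynomial
assert all(original_poly(F(1, r)) == 0 for r in roots)
print("roots of 12x^3 + 8x^2 - x - 1:", sorted({F(1, r) for r in roots}))
```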
|
1,175,993 | <p>I want to show $T=d/dx$ is unbounded on $C^1[a,b]$ with $b>1$. Take a sequence $f(x)=x^n$, and $\|T\|=\sup_{x\in[a,b]}\frac{\|Tx\|}{\|x\|}=\frac{\|n\cdot b^{n-1}\|}{\|b\|}$. I want to claim as $n$ goes to infinity, the operator norm goes to infinity, and hence it's unbounded. But the definition of operator norm only says I can take sup w.r.t. $x$, and I'm confused about why I can take sup w.r.t. $n$ here.</p>
| pabodu | 64,543 | <p>Pedro M., I expect $||f||_1$ in $C^1$ is $||f||_C+||f'||_C$. The operator $T$ is supposed to map functions from $C^1$ into itself. Definitely, its domain is narrower than $C^1$.</p>
|
1,688,762 | <p>$$\int \sqrt{\frac{x}{2-x}}dx$$</p>
<p>can be written as:</p>
<p>$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx.$$</p>
<p>there is a formula that says that if we have the integral of the following type:</p>
<p>$$\int x^m(a+bx^n)^p dx,$$ </p>
<p>then:</p>
<ul>
<li>If $p \in \mathbb{Z}$ we simply use binomial expansion, otherwise:</li>
<li>If $\frac{m+1}{n} \in \mathbb{Z}$ we use the substitution $a+bx^n=t^s$,
where $s$ is the denominator of $p$;</li>
<li>Finally, if $\frac{m+1}{n}+p \in \mathbb{Z}$ then we use the substitution
$ax^{-n}+b=t^s$, where $s$ is the denominator of $p$.</li>
</ul>
<p>If we look at this example:</p>
<p>$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx,$$</p>
<p>we can see that $m=\frac{1}{2}$, $n=1$, and $p=-\frac{1}{2}$, which means that we have to use the third substitution, since $\frac{m+1}{n}+p = \frac{3}{2}-\frac{1}{2}=1$; but when I use that substitution I get an even more complicated integral with a square root. However, when I tried the second substitution I got this:</p>
<p>$$2-x=t^2 \Rightarrow 2-t^2=x \Rightarrow dx=-2tdt,$$ </p>
<p>so when I implement this substitution I have:</p>
<p>$$\int \sqrt{2-t^2}\frac{1}{t}(-2tdt)=-2\int \sqrt{2-t^2}dt.$$</p>
<p>This means that we should do substitution once more, this time:</p>
<p>$$t=\sqrt{2}\sin y \Rightarrow y=\arcsin\frac{t}{\sqrt{2}} \Rightarrow dt=\sqrt{2}\cos ydy.$$</p>
<p>So now we have:</p>
<p>\begin{align*}
-2\int \sqrt{2-2\sin^2y}\sqrt{2}\cos ydy={}&-4\int\cos^2ydy = -4\int \frac{1+\cos2y}{2}dy={} \\
{}={}& -2\int dy -2\int \cos2ydy = -2y -\sin2y.
\end{align*}</p>
<p>Now, we have to return to variable $x$:</p>
<p>\begin{align*}
-2\arcsin\frac{t}{\sqrt{2}} -2\sin y\cos y ={}& -2\arcsin\frac{t}{\sqrt{2}} -2\frac{t}{\sqrt{2}}\sqrt\frac{2-t^2}{2}={} \\
{}={}& -2\arcsin\frac{t}{\sqrt{2}} -\sqrt{t^2(2-t^2)}.
\end{align*}</p>
<p>Now to $x$:</p>
<p>$$-2\arcsin\sqrt{\frac{2-x}{2}} - \sqrt{2x-x^2},$$</p>
<p>which would be just fine if I haven't checked the solution to this in workbook where the right answer is:</p>
<p>$$2\arcsin\sqrt\frac{x}{2} - \sqrt{2x-x^2},$$ </p>
<p>and when I found the derivative of this, it turns out that the solution in the workbook is correct, so I made a mistake and I don't know where, and I would appreciate some help. I also have a question: why does the second substitution work better in this example, despite the theorem I mentioned above, which says that I should use the third substitution here?</p>
| 3SAT | 203,577 | <blockquote>
<p>$$\int \sqrt{\frac{x}{2-x}}dx$$</p>
</blockquote>
<p>Set $t=\frac {x} {2-x}$ and $dt=\left(\frac{x}{(2-x)^2}+\frac{1}{2-x}\right)dx$</p>
<p>$$=2\int\frac{\sqrt t}{(t+1)^2}dt$$</p>
<p>Set $\nu=\sqrt t$ and $d\nu=\frac{dt}{2\sqrt t}$</p>
<p>$$=4\int\frac{\nu^2}{(\nu^2+1)^2}d\nu\overset{\text{ partial fractions}}{=}4\int\frac{d\nu}{\nu^2+1}-4\int\frac{d\nu}{(\nu^2+1)^2}$$</p>
<p>$$=4\arctan \nu-4\int\frac{d\nu}{(\nu^2+1)^2}$$</p>
<p>Set $\nu=\tan p$ and $d\nu=\sec^2 p dp.$ Then $(\nu^2+1)^2=(\tan^2 p+1)^2=\sec^4 p$ and $p=\arctan \nu$</p>
<p>$$=4\arctan \nu-4\int \cos^2 p dp$$</p>
<p>$$=4\arctan \nu-2\int \cos(2p)dp-2\int 1dp$$</p>
<p>$$=4\arctan \nu-\sin(2p)-2p+\mathcal C$$</p>
<p>Set back $p$ and $\nu$:</p>
<p>$$=\color{red}{\sqrt{-\frac{x}{x-2}}(x-2)+2\arctan\left(\sqrt{-\frac{x}{x-2}}\right)+\mathcal C}$$</p>
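<p>The final antiderivative can be sanity-checked numerically: its difference quotient should reproduce the integrand $\sqrt{x/(2-x)}$ on $(0,2)$. A Python sketch (an illustration only):</p>

```python
# Numeric check: the derivative of the boxed antiderivative is the integrand.
import math

def F(x):                                  # valid for 0 < x < 2
    u = math.sqrt(x / (2 - x))             # = sqrt(-x/(x - 2)) there
    return u * (x - 2) + 2 * math.atan(u)

integrand = lambda x: math.sqrt(x / (2 - x))

h = 1e-6
for x in (0.3, 0.8, 1.2, 1.7):
    diff_quot = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(diff_quot - integrand(x)) < 1e-5
print("F'(x) matches sqrt(x/(2 - x)) at the sampled points")
```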
|
276,329 | <p>I have a problem, from Gelfand's "Algebra" textbook, that I've been unable to solve, here it is:</p>
<p><strong>Problem 268.</strong> </p>
<p>What is the possible number of solutions of the equation $$ax^6+bx^3+c=0\;?$$</p>
<p>Thanks in advance.</p>
| DonAntonio | 31,254 | <p>Put $\,t:=x^3\,$ , so your equation becomes</p>
<p>$$(*) at^2+bt+c=0\Longrightarrow \Delta= b^2-4ac$$</p>
<p>Now, if $\,\Delta=0\,$ then $\,(*)\,$ has a unique solution, $\,x^3=t=-b/(2a)\,$, and if $\,\Delta >0\,$ then there are two solutions for $\,t=x^3\,$.</p>
<p>Since $\,3\,$ is an odd natural number we don't care whether the solutions above are positive or negative: there are <em>always</em> real solutions as long as $\,\Delta\geq0\,$, so now you have to take care of the different cases...</p>
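<p>Over the reals, this analysis predicts $0$, $1$ or $2$ solutions according to the sign of $\Delta$ (over $\mathbb{C}$ there are always six, counted with multiplicity). A small Python experiment (an illustration with made-up sample coefficients):</p>

```python
# Illustration with sample coefficients: real solutions via t = x^3.
import math

def real_sextic_roots(a, b, c):
    disc = b * b - 4 * a * c
    if disc < 0:
        ts = []
    elif disc == 0:
        ts = [-b / (2 * a)]
    else:
        ts = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
    # each real t gives exactly one real cube root x
    roots = [math.copysign(abs(t) ** (1 / 3), t) for t in ts]
    for x in roots:
        assert abs(a * x**6 + b * x**3 + c) < 1e-9
    return roots

assert len(real_sextic_roots(1, -3, 2)) == 2    # t = 1 and t = 2
assert len(real_sextic_roots(1, -2, 1)) == 1    # t = 1 (double root)
assert len(real_sextic_roots(1, 0, 1)) == 0     # t^2 = -1, no real t
print("possible numbers of real solutions: 0, 1 or 2")
```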
|
2,451,350 | <p>Currently I am reading into functional data analysis. A common assumption is that the expected value of some random function is $0$, i.e. $\mathbb{E}(x) = 0$ where $x \in L^2$, the space of all squared integrable functions with inner product $\langle x,y \rangle = \int x(t)y(t) \text{d}t$. </p>
<p>My question might appear a little trivial to many of you, but I just want to be certain that I don't get this basic concept of zero expectation wrong: Does $\mathbb{E}(x) = 0$ mean, that $\mathbb{E}\left[x(t)\right] = 0 ~\forall t$?</p>
<p>Thanks for your help!</p>
| Kenny Lau | 328,173 | <blockquote>
<p>If $Ax = b$ has a solution $x = u$, then $u + v$ is also a solution to $Ax = b$ for all solutions $x = v$ to $Ax = 0$.</p>
</blockquote>
<p>This sentence may be a little bit difficult to understand. Allow me to rephrase it:</p>
<blockquote>
<p>If $Ax = b$ has a solution $x = u$, then let $x=v$ be a solution to $Ax = 0$: $x = u+v$ would also be a solution to $Ax = b$.</p>
</blockquote>
<hr>
<p>About your question 3:</p>
<ol>
<li><p>$Ax = b$ either has no solution or has a solution.</p></li>
<li><p>If it has a solution, then it has infinitely many solutions.</p></li>
<li><p>Therefore, $Ax = b$ either has no solution or has infinitely many solutions.</p></li>
</ol>
|
4,602,683 | <p>Let <span class="math-container">$\mathbb{F}$</span> be a field, and consider <span class="math-container">$\mathbb{F}^\mathbb{F}$</span> as an algebra over <span class="math-container">$\mathbb{F}$</span> with the standard function multiplication. Let <span class="math-container">$D$</span> be a linear transformation on a subalgebra of <span class="math-container">$\mathbb{F}^\mathbb{F}$</span> closed under function composition that satisfies the chain rule. Does <span class="math-container">$D$</span> necessarily satisfy the product rule for arbitrary <span class="math-container">$\mathbb{F}$</span>? (Inspired by a comment on <a href="https://math.stackexchange.com/questions/4602208/does-the-product-rule-imply-the-chain-rule">this question.</a>) What if the subalgebra must be unital?</p>
| Marius S.L. | 760,240 | <p>Not an answer, but a helpful heuristic.</p>
<p>The chain rule and the product rule cannot be compared. A product of functions, which you called standard, requires two functions with the same domain and range: <span class="math-container">$f,g\, : \,D\rightarrow R$</span> since we plug in the same value and multiply in a set where multiplication has to be defined:
<span class="math-container">$$(f\cdot g)(x)=f(x)\cdot g(x).$$</span>
On the other hand, a composition of functions requires that the range from one function is within the domain of another: <span class="math-container">$f\, : \,R\rightarrow S$</span> and <span class="math-container">$g\, : \,D \rightarrow R$</span> in order to make
<span class="math-container">$$
(f\circ g)(x)=f(g(x))
$$</span><br />
possible. If you simplify all these requirements by setting e.g. <span class="math-container">$D=R=S=\mathbb{R}$</span> then you disguise those subtleties. They are still there. Hence product and composition are two very different operations. It is even more obvious if we consider the derivatives:
<span class="math-container">\begin{align*}
D_p(f\circ g)&=D_{g(p)}(f)\cdot D_p(g)\\
D_p(f\cdot g)&=D_p(f)\cdot g+ f\cdot D_p(g)
\end{align*}</span>
The fact that the evaluation points differ significantly makes them incomparable.</p>
|
168,053 | <p>If g is a positive, twice differentiable function that is decreasing and has limit zero at infinity, does g have to be convex? I am sure, from drawing a graph of a function which starts off as being concave and then becomes convex from a point on, that g does not have to be convex, but can someone show me an example of an actual functional form that satisfies this property?</p>
<p>We know that since g has a limit at infinity, g cannot be concave, but I am sure that there is a functional example of a function g:[0,∞)↦(0,∞) which is decreasing, has limit zero at infinity, and is not everywhere convex, I just can't come up with it. Any ideas?</p>
<p>Thank you!</p>
| Community | -1 | <p>Since the functions mentioned so far are <strong>eventually</strong> convex, here is one more:
$$
f(x)=e^{-x}(3+2\sin x)
$$
The first derivative $$f\,'(x)=e^{-x}(2\cos x-2\sin x-3)$$ is always negative because $\cos x-\sin x\le \sqrt{2}$ for all $x$. But the second derivative $$f\,''(x)=e^{-x}(3-4\cos x)$$ changes sign infinitely many times. </p>
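<p>Both derivative claims are easy to confirm numerically (an illustration): $f'$ stays negative at every sampled point, while $f''=e^{-x}(3-4\cos x)$ changes sign each time $\cos x$ crosses $3/4$. A Python sketch:</p>

```python
# Numeric confirmation of the two derivative claims on [0, 40).
import math

f1 = lambda x: math.exp(-x) * (2 * math.cos(x) - 2 * math.sin(x) - 3)
f2 = lambda x: math.exp(-x) * (3 - 4 * math.cos(x))

xs = [k * 0.01 for k in range(4000)]
assert all(f1(x) < 0 for x in xs)            # f is strictly decreasing

sign_changes = sum(f2(a) * f2(b) < 0 for a, b in zip(xs, xs[1:]))
assert sign_changes >= 10                    # convexity flips again and again
print(f"f'' changed sign {sign_changes} times on [0, 40)")
```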
|
2,129,830 | <p>I am wondering if this is generally true for any topology. I think there might be counter examples, but I am having trouble generating them. </p>
| Nosrati | 108,128 | <p>In the Sierpinski topology $\{X,\emptyset,\{0\}\}$, the set $\{0\}$ is an open set that isn't the interior of any closed set.</p>
|
50,002 | <p>A more general version: the connected sum of closed manifolds is orientable iff both summands are orientable.
I think this can be proved by using homology theory, but I don't know how. Thanks.</p>
| Jason DeVito | 331 | <p>If the connect sum is orientable, so are both pieces:</p>
<p>Proof: We'll use the fact that an $n$-manifold is closed and orientable iff $H_n(M) = \mathbb{Z}$. Assume $M_1$ is nonorientable and consider the connect sum $M_1\sharp M_2$.</p>
<p>The pair $(M_1\sharp M_2, M_2-\{p\})$ gives rise to a long exact sequence, a portion of which is $$...\rightarrow H_n(M_2-\{p\})\rightarrow H_n(M_1\sharp M_2)\rightarrow H_n(M_1\sharp M_2, M_2-\{p\})\rightarrow...$$</p>
<p>Now, $M_2-\{p\}$ is not closed so $H_n(M_2-\{p\}) = 0$. Also, we can identify $H_n(M_1\sharp M_2, M_2-\{p\})$ with $H_n(M_1\sharp M_2/M_2-\{p\}) = H_n(M_1) = 0$, since $M_1$ is nonorientable. By exactness, the middle term $H_n(M_1\sharp M_2)$ must be 0. Since $M_1\sharp M_2$ is clearly closed, it must be nonorientable. $\square$</p>
<p>I don't know how to show the converse using homology, but one can see that the connect sum of orientable manifolds is orientable as follows. Choose orientations on $M_1$ and $M_2$. These choices induce orientations at every point of $M_1\sharp M_2$; the only issue is whether or not these orientations agree on the intersection of $M_1$ and $M_2$, i.e., on an $S^{n-1}\times (0,1)$. Since $S^{n-1}\times (0,1) $ is connected (if $n > 1$, which we may as well assume since all $1$-manifolds are orientable), these orientations either agree on every point of $S^{n-1}\times (0,1)$ or disagree on every such point. If they disagree, it's clear that choosing the reverse orientation on $M_2$ will make them agree. But then this defines an orientation on $M_1\sharp M_2$, so it's orientable.</p>
|
942,470 | <p>I am trying to count how many functions there are from a set $A$ to a set $B$. The answer to this (and many textbook explanations) are readily available and accessible; I am <strong>not</strong> looking for the answer to that question and <strong>please do not post it</strong>. Instead I want to know what fundamental mistake(s) I am making in counting the number of these functions. My reasoning is below, which I know is wrong after checking this question: <a href="https://math.stackexchange.com/questions/639326/how-many-functions-there-is-from-3-element-set-to-2-element-set">How many functions there is from 3 element set to 2 element set?</a>.</p>
<hr>
<p>For an example case, I consider counting how many functions there are from set $A = \{0,1\}$ to set $B = \{a,b\}$. My understanding of the term <em>function</em> is that it is any possible mapping between elements of set $A$ to elements of set $B$. Thus, a possible function $F: A \times B$ is the function that maps each element of $A$ to no element of $B$, i.e. $f_0(0) = \emptyset, f_0(1) = \emptyset$. Another possible function is $f_1(0) = a, f_1(1) = \{a, b\}$. </p>
<p>I notice a pattern here: for each element of the set $A$, there are $|\mathcal P (B)|$ unique combinations of elements that it can map to. In this case, $\mathcal P(B) = \{\{a,b\}, \{a\}, \{b\}, \emptyset\}$. To count these functions, then, we can use the product rule, since the choice of what each element of $A$ maps to does not affect what another element of $A$ can map to (since we consider all functions). </p>
<p>There are $4$ choices for $0$ and $4$ choices for $1$. Therefore there are $16$ unique functions $F: A \times B$. For a sanity check, I've listed out all <strong>16</strong> possible functions.</p>
<p>$f_0(0) = \emptyset, f_0(1) = \emptyset$</p>
<p>$f_1(0) = \emptyset, f_1(1) = \{a\}$</p>
<p>$f_2(0) = \emptyset, f_2(1) = \{b\}$</p>
<p>$f_3(0) = \emptyset, f_3(1) = \{a, b\}$</p>
<p>$f_4(0) = \{a\}, f_4(1) = \emptyset$</p>
<p>$f_5(0) = \{a\}, f_5(1) = \{a\}$</p>
<p>$f_6(0) = \{a\}, f_6(1) = \{b\}$</p>
<p>$f_7(0) = \{a\}, f_7(1) = \{a, b\}$</p>
<p>$f_8(0) = \{b\}, f_8(1) = \emptyset$</p>
<p>$f_9(0) = \{b\}, f_9(1) = \{a\}$</p>
<p>$f_{10}(0) = \{b\}, f_{10}(1) = \{b\}$</p>
<p>$f_{11}(0) = \{b\}, f_{11}(1) = \{a, b\}$</p>
<p>$f_{12}(0) = \{a,b\}, f_{12}(1) = \emptyset$</p>
<p>$f_{13}(0) = \{a,b\}, f_{13}(1) = \{a\}$</p>
<p>$f_{14}(0) = \{a,b\}, f_{14}(1) = \{b\}$</p>
<p>$f_{15}(0) = \{a,b\}, f_{15}(1) = \{a, b\}$</p>
<p>The generalization: The number of functions $F: A \times B$ is $|\mathcal P(B)|^{|A|}$.</p>
<hr>
<p>Now I know my reasoning is completely wrong, but why? Am I double counting? Do I misunderstand the definition of a function? </p>
| adrija | 173,185 | <p>A function $f:A\rightarrow B$ is a rule that assigns to an element of $A$ an $unique$ element of $B$. So, first of all, given $a\in A$, you can't say that it maps to nothing or to a subset of two or more elements. That won't be a function at all from $A$ to $B$, but since with each element of $A$ you are associating a subset of $B$, it will be a function from $A$ to the power set of $B$. And in that case, what you've computed is correct.</p>
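<p>The two counts can be compared by brute-force enumeration (a Python sketch using the two-element sets from the question):</p>

```python
from itertools import product

A = [0, 1]
B = ['a', 'b']

# genuine functions A -> B: each element of A is sent to exactly one element of B
functions = list(product(B, repeat=len(A)))
assert len(functions) == len(B) ** len(A)           # 2^2 = 4

# the objects counted in the question: each element of A is sent to a SUBSET
# of B, i.e. functions A -> P(B)
power_set = [(), ('a',), ('b',), ('a', 'b')]
relations = list(product(power_set, repeat=len(A)))
assert len(relations) == len(power_set) ** len(A)   # 4^2 = 16
print(len(functions), len(relations))  # 4 16
```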
|
2,542,056 | <p>Baire's Category Theorem states that a meager subset of a complete metric space has empty interior. </p>
<p>Are there examples of meager subsets of non-complete metric spaces which do not have empty interior?<br>
In particular, are the rationals numbers as a subset of themselves an example?</p>
| HBHSU | 397,029 | <p>Use $\mathbb{Q}$ as the underlying space. For each $G_n$, use $\mathbb{Q}-\{q_n\}$, where $q_n\in\mathbb{Q}$, so that the countably infinite intersection is $\emptyset$.</p>
|
4,203,906 | <p>Do there exist real numbers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> such that</p>
<p>(i) <span class="math-container">$a+b$</span> is rational and <span class="math-container">$a^n+b^n$</span> is irrational for each natural <span class="math-container">$n ≥ 2$</span>;</p>
<p>(ii) <span class="math-container">$a+b$</span> is irrational and <span class="math-container">$a^n+b^n$</span> is rational for each natural <span class="math-container">$n ≥ 2$</span>?</p>
<p>for (i), I tried to prove yes, and I was thinking of some rational <span class="math-container">$x$</span> and irrational <span class="math-container">$z$</span> such that <span class="math-container">$a = x+z, b = x-z$</span>, but I don't quite know how to show <span class="math-container">$a^n + b^n$</span> is always irrational for <span class="math-container">$n \geq 2.$</span> I tried to use induction, but since you can't say irrational + irrational = irrational, I'm at a loss as to what to do.</p>
<p>for (ii), I tried to prove no by factorizing some <span class="math-container">$a^n + b^n$</span> for some odd <span class="math-container">$n$</span>, say <span class="math-container">$a^3 + b^3 =(a+b)(a^2 -ab + b^2)$</span>, and somehow proving that <span class="math-container">$\frac{a^n + b^n}{a+b}$</span> is rational for some odd <span class="math-container">$n$</span>, but I don't know what to do next.</p>
| AAA | 627,380 | <p>For ii, I claim that if <span class="math-container">$a^n+b^n$</span> is rational for all <span class="math-container">$n\geq 2$</span>, then <span class="math-container">$a+b$</span> is rational.</p>
<p>Proof: Let <span class="math-container">$f_n=a^n+b^n$</span>. Then <span class="math-container">$f_n^2-f_{2n}=2(ab)^n$</span>. So we conclude that <span class="math-container">$2(ab)^n$</span> is rational for all <span class="math-container">$n\geq 2$</span>. But then we divide <span class="math-container">$2(ab)^3$</span> by <span class="math-container">$2(ab)^2$</span> to conclude that <span class="math-container">$ab$</span> is rational. Then we conclude that <span class="math-container">$f_2-ab=a^2-ab+b^2$</span> is rational. Finally, since <span class="math-container">$f_3=(a^2-ab+b^2)f_1$</span>, we conclude that <span class="math-container">$f_1$</span> is rational.</p>
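<p>The key identity <span class="math-container">$f_n^2-f_{2n}=2(ab)^n$</span> is easy to sanity-check numerically (a sketch; the values <span class="math-container">$a+b=3$</span>, <span class="math-container">$ab=1$</span> are arbitrary choices, not from the question):</p>

```python
import math

# roots of x^2 - 3x + 1, so a + b = 3 and ab = 1
s, q = 3.0, 1.0
d = math.sqrt(s * s - 4 * q)
a, b = (s + d) / 2, (s - d) / 2

def f(n):
    return a ** n + b ** n

for n in range(2, 8):
    # f_n^2 - f_{2n} = (a^n + b^n)^2 - (a^{2n} + b^{2n}) = 2(ab)^n
    assert abs((f(n) ** 2 - f(2 * n)) - 2 * (a * b) ** n) < 1e-6
print("identity verified for n = 2..7")
```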
|
3,255,654 | <p>In a multiple choice question, there are five different answers, of which only one is correct. The probability that a student will know the correct answer is 0.6. If a student does not know the answer, he guesses an answer at random.</p>
<p>a) What is the probability that the student gives the correct answer?</p>
<p>b) If the student gives the correct answer, what is the probability that he guessed?</p>
<p>Let <strong>A<span class="math-container">$_1$</span></strong> be: student knows the answer, <strong>A<span class="math-container">$_2$</span></strong>: student doesn't know the answer, <strong>B</strong>: student gives the correct answer</p>
<p>P(A<span class="math-container">$_1$</span>) = 0.6 and P(A<span class="math-container">$_2$</span>) = 0.4. How do I find P(B|A<span class="math-container">$_1$</span>) and P(B|A<span class="math-container">$_2$</span>)?</p>
| Henno Brandsma | 4,280 | <p><span class="math-container">$P(B|A_2)$</span> is the probability that the student gives the correct answer, while guessing (i.e. not knowing), so that is <span class="math-container">$\frac{1}{5} = 0.2$</span> </p>
<p>Clearly, by definition almost, <span class="math-container">$P(B|A_1)=1$</span>, if the student knows the answer he/she gives it (basic assumption of the problem, I suppose).</p>
<p>Now complete your plan by applying the law of total probability:</p>
<p><span class="math-container">$$P(B)=P(B|A_1)P(A_1) + P(B|A_2)P(A_2)$$</span></p>
<p>now that all values are known.</p>
<p>b) Is applying Bayes' rule, asking for <span class="math-container">$P(A_1|B)$</span> etc. </p>
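<p>Plugging the numbers in (a quick sketch):</p>

```python
# law of total probability, then Bayes' rule, with the numbers from the problem
p_A1, p_A2 = 0.6, 0.4        # student knows / does not know the answer
p_B_given_A1 = 1.0           # knows => answers correctly
p_B_given_A2 = 1 / 5         # guesses uniformly among 5 choices

p_B = p_B_given_A1 * p_A1 + p_B_given_A2 * p_A2
p_A2_given_B = p_B_given_A2 * p_A2 / p_B   # P(guessed | correct)

print(round(p_B, 4), round(p_A2_given_B, 4))  # 0.68 0.1176
```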
|
1,480,331 | <blockquote>
<p>Let $A$ be an $m \times n$ matrix with $m < n$ and $\operatorname{rank}(A) = m$. Prove that there exist infinitely many matrices $B$ such that $AB = I$.</p>
</blockquote>
<p>Stumped. How do I begin to prove this?</p>
| Robert Israel | 8,508 | <p>Note that this does exist, because with probability $1$ you will eventually get a $6$ or an odd number. Suppose the first time you get a $6$ or an odd number is on the $n$'th roll. There is one $6$ and there are $3$ odd numbers, so the conditional probability, given that it happens on the $n$'th roll, is $1/4$. And since that is the same for all $n$, the answer is again $1/4$.</p>
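<p>The value <span class="math-container">$1/4$</span> can be double-checked by summing the series explicitly (a sketch with exact rational arithmetic; the tail beyond 200 rolls is negligible):</p>

```python
from fractions import Fraction

stop = Fraction(4, 6)     # on any roll: a 6 or an odd number
six = Fraction(1, 6)

# P(game ends on roll n with a 6) = (1 - stop)^(n-1) * 1/6; sum the series
p = sum((1 - stop) ** (n - 1) * six for n in range(1, 200))
assert abs(float(p) - 0.25) < 1e-12
print(float(p))  # 0.25
```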
|
288,499 | <p>Simply stated, I've been trying for a long time to either find in the literature, or derive myself, a notion of path in Cech closure spaces, that specialises to paths in a topological space, and to graph-like paths in so-called "quasi-discrete closure spaces". </p>
<p>Let me recall the definitions:</p>
<p>A closure space is a pair <span class="math-container">$(X,C)$</span> where <span class="math-container">$C : \mathcal P (X) \to \mathcal P (X)$</span> is a function satisfying <span class="math-container">$C(\emptyset) = \emptyset$</span>, <span class="math-container">$A \subseteq C(A)$</span>, <span class="math-container">$C(A \cup B) = C(A) \cup C(B)$</span>.</p>
<p>A continuous function <span class="math-container">$f$</span> is a function between two spaces such that <span class="math-container">$f(C(A)) \subseteq C(f(A))$</span></p>
<p>A topological space is (via the Kuratowski definition) a closure space with the additional axiom <span class="math-container">$C(C(A)) = C(A)$</span> (idempotence of closure).</p>
<p>Any reflexive relation <span class="math-container">$R$</span> generates a closure space by <span class="math-container">$C(A) = \{y \in X | \exists x \in A . x R y\}$</span>. That's called a "quasi-discrete closure space". </p>
<p>Topological paths are defined as continuous functions from the unit interval. </p>
<p>Let me now make two examples. </p>
<p>Example 1: <span class="math-container">$\mathbb R^2$</span>. Topological paths work fine (indeed!). </p>
<p>Example 2: the closure space on <span class="math-container">$\mathbb N$</span> generated by the successor relation. It's a nice closure space, but topological paths exist that do not "follow the edges" of non-symmetric relations, due to "directionality" of <span class="math-container">$R$</span>; topology (e.g. the unit interval) is intrinsically symmetric; relations are not. For an example of this consider the set <span class="math-container">$\{a,b\}$</span> and the relation <span class="math-container">$R = \{ (a,b) \}$</span>. This generates a quasi-discrete closure space. Consider the function <span class="math-container">$f : [0,1] \to \{a,b\}$</span> with <span class="math-container">$f(0) = b$</span> and <span class="math-container">$f((0,1]) = a$</span>. This function is continuous but not a graph-like path in <span class="math-container">$R$</span>.</p>
<p>Further clarifications (due to comments)</p>
<p>I understand that [0,1]-paths in topological spaces can't be directional. That's absolutely the case and for good reasons. But then, is there a more general construction that becomes the "natural" notion of path in closure spaces, and in topological spaces, it is not directional, since this is very natural in topological spaces?</p>
<p>Let's say it more formally: perhaps there's a universal construction in the category of closure spaces, of which topological spaces are a full subcategory, that captures the notion of paths in such a way that in directional graph structures like quasi-discrete closure spaces, paths are directional, and in topological spaces, paths are in one-to-one-correspondence to classical, topological paths, a.k.a. <span class="math-container">$[0,1]$</span>-morphisms?</p>
| Igor Rivin | 11,142 | <p>I am not sure of the notation, but I assume this can be derived from the Schlafli formula for the volume of a tetrahedron (so this seems to indicate that Gauss knew Schlafli's formula three quarters of a century prior to Schlafli):</p>
<p>$$
dV = -\frac12 \sum_{ij}l_{ij} d \alpha_{ij},$$ where $l$ is the length of the edge, and $\alpha$ is the dihedral angle of the edge. Notice, in particular, that if the dihedral angle at an edge does not change, then that edge does not contribute to the sum. Further note that if you look at the link of the vertex $1$ (wlog), this is a spherical triangle, whose angles are the dihedral angles $12, 13, 14$ while its sides are the face angles of the three adjacent faces. By the Gauss-Bonnet theorem (note the first author), the variation of the area of the face is equal to minus the sum of the variations of the angles.</p>
<p>Put this all together, and you should get Gauss' formula. As for Schlafli's formula, there are many nice proofs, a simple geometric one by Vinberg (which appeared in the Geometry of Spaces of Constant Curvature survey), and a very pretty analytic one by Hellmuth Kneser, which appeared in Deutsche Mathematik, and thus is hard to find, but there is a more recent exposition in a paper of Feng Luo <a href="https://arxiv.org/abs/math/0412208" rel="noreferrer">https://arxiv.org/abs/math/0412208</a>.</p>
|
3,096,572 | <p>I am trying to determine whether the following is absolutely stable under the improved Euler and the Adams-Bashforth 2 schemes,
<span class="math-container">$u'=\begin{bmatrix} -20&0&0\\ 20&-1&0\\0&1&0\end{bmatrix}u=Au$</span>, where the timestep is <span class="math-container">$\frac{1}{2}$</span>.</p>
<p>From my notes we know that Euler's method for <span class="math-container">$u'=\lambda u$</span> is absolutely stable iff <span class="math-container">$|1+h\lambda|<1$</span>. But I am confused as to what the <span class="math-container">$\lambda$</span> is. Is it simply the eigenvalue(s) of the problem, which then confuses me if we are using other methods, or is it just a scalar? If it is the eigenvalues, then how do you derive the absolute stability for the system using the Adams-Bashforth 2 scheme?</p>
<p>So far I found, for Euler, that <span class="math-container">$\lambda=-20,-1,0$</span> which is not stable for our timestep. However for deriving the stability for the Adams-Bashforth I got as far as,</p>
<p><span class="math-container">$$u^{n+1}=u^nA(1-\frac{A}{2}+\frac{A^2}{4})$$</span>
but struggle as to how to proceed. Any advice would be appreciated.</p>
| Cesareo | 397,348 | <p>With the improved Euler method we have</p>
<p><span class="math-container">$$
u_{k+1}=\left(I_3+hA + \frac{h^2}{2} A^2\right)u_k = M u_k
$$</span></p>
<p>this schema is contractive if <span class="math-container">$\max|\text{eigenvalues}(M)| < 1$</span> but calculating we get</p>
<p><span class="math-container">$$
M = \left(
\begin{array}{ccc}
200 h^2-20 h+1 & 0 & 0 \\
20 h-210 h^2 & \frac{h^2}{2}-h+1 & 0 \\
10 h^2 & h-\frac{h^2}{2} & 1 \\
\end{array}
\right)
$$</span></p>
<p>and <span class="math-container">$\text{eigenvalues}(M) = \left\{1,\frac{1}{2} \left(h^2-2 h+2\right),200 h^2-20 h+1\right\}$</span> so no stability chance. This is due mainly to the null last column in matrix <span class="math-container">$A$</span>.</p>
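<p>These eigenvalues are easy to confirm numerically (a NumPy sketch with <span class="math-container">$h=1/2$</span> as in the question, building <span class="math-container">$M$</span> with the <span class="math-container">$\frac{h^2}{2}A^2$</span> term that matches the matrix entries above):</p>

```python
import numpy as np

A = np.array([[-20.0,  0.0, 0.0],
              [ 20.0, -1.0, 0.0],
              [  0.0,  1.0, 0.0]])
h = 0.5
M = np.eye(3) + h * A + (h * h / 2) * (A @ A)    # amplification matrix

eigs = np.linalg.eigvals(M)
predicted = sorted([1.0, (h * h - 2 * h + 2) / 2, 200 * h * h - 20 * h + 1])
assert np.allclose(sorted(eigs.real), predicted)
assert max(abs(eigs)) > 1                        # not absolutely stable at h = 1/2
print(sorted(float(x) for x in eigs.real))
```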
|
4,507,155 | <p>In a <a href="https://math.stackexchange.com/questions/4454551/are-fracp212-and-fracp5np5-12-are-coprime-to-each-other">previous post</a>, I got the answer that <span class="math-container">$\gcd \left(\frac{p^2+1}{2}, \frac{p^5-1}{2} \right)=1$</span>, where <span class="math-container">$p$</span> is a prime number.</p>
<p>I am looking for more general case, that is for <span class="math-container">$p$</span> prime,</p>
<blockquote>
<p>When is <span class="math-container">$\gcd \left(\frac{p^r+1}{2}, \frac{p^t-1}{2} \right)=1$</span> ?</p>
</blockquote>
<p>where <span class="math-container">$t=r+s$</span> such that <span class="math-container">$\gcd(r,s)=1$</span>.</p>
<p>I am excluding the cases <span class="math-container">$r=1=s$</span>.</p>
<hr />
<p>In the <a href="https://math.stackexchange.com/questions/4454551/are-fracp212-and-fracp5np5-12-are-coprime-to-each-other">previous post</a>, it was <span class="math-container">$r=2,~t=5$</span>. So <span class="math-container">$t=5=2+3=r+s$</span> with <span class="math-container">$s=3$</span> so that <span class="math-container">$\gcd(r,s)=1$</span>.</p>
<p>In this current question:</p>
<p>Since <span class="math-container">$\gcd(r,s)=1$</span>, we also have <span class="math-container">$\gcd(r,t)=1$</span>.
I have the following intuition:</p>
<p><strong>Case-I</strong>:</p>
<p>Assume <span class="math-container">$r$</span>=even and <span class="math-container">$s$</span>=odd so that <span class="math-container">$\gcd(r,s)=1$</span> as well as <span class="math-container">$\gcd(r,t)=1$</span>. We can also assume <span class="math-container">$r$</span>=odd and <span class="math-container">$s$</span>=even.</p>
<p>I think the same strategies of <a href="https://math.stackexchange.com/questions/4454551/are-fracp212-and-fracp5np5-12-are-coprime-to-each-other">previous post</a> can be applied to show that <span class="math-container">$$\gcd \left(\frac{p^r+1}{2}, \frac{p^t-1}{2} \right)=1.$$</span></p>
<p><strong>Case II:</strong></p>
<p>The problem arises when both <span class="math-container">$r$</span> and <span class="math-container">$s$</span> are odd numbers so that <span class="math-container">$t=r+s$</span> is even.</p>
<p>If I take <span class="math-container">$r=3, ~s=5$</span>, then <span class="math-container">$t=8$</span>.</p>
<p>For prime <span class="math-container">$p=3$</span>, <span class="math-container">$\frac{p^r+1}{2}=\frac{3^3+1}{2}=14$</span> and <span class="math-container">$\frac{p^t-1}{2}=\frac{3^8-1}{2}=3280$</span> so that the gcd is <span class="math-container">$2$</span> at least.</p>
<p>For other primes also we can find gcd is not <span class="math-container">$1$</span>.</p>
<hr />
<p>So I think it is possible only for Case I, where among <span class="math-container">$r$</span> and <span class="math-container">$s$</span>, one is odd and another is even so that <span class="math-container">$t$</span> is odd.</p>
<p>In other word, <span class="math-container">$t$</span> can not be even number.</p>
<p>But I need to be ensured with a general method.</p>
<p>So the question reduces to</p>
<blockquote>
<p>How to prove <span class="math-container">$\gcd \left(\frac{p^m+1}{2}, \frac{p^n-1}{2} \right)=1$</span> ?</p>
</blockquote>
<p>provided <span class="math-container">$\gcd(m,n)=1$</span> and <span class="math-container">$n$</span> is an odd number and <span class="math-container">$p$</span> is a prime number.</p>
<p>Thanks</p>
| Keith Backman | 29,783 | <p>For primes of the form <span class="math-container">$6k-1$</span>, and choosing <span class="math-container">$m$</span> odd and <span class="math-container">$n$</span> even, you will obtain <span class="math-container">$p^m=6a-1$</span> and <span class="math-container">$p^n=6b+1$</span>. Hence <span class="math-container">$p^m+1=6a$</span> and <span class="math-container">$p^n-1=6b$</span> with <span class="math-container">$\gcd(6a,6b) \ge 6$</span>. Your final conjecture <span class="math-container">$\gcd \left( \frac{p^m+1}{2}, \frac{p^n-1}{2}\right)=1$</span> is never true in the circumstances considered here.</p>
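<p>A small numerical probe of both observations (a Python sketch; the primes and exponents are arbitrary choices):</p>

```python
from math import gcd

def g(p, m, n):
    return gcd((p ** m + 1) // 2, (p ** n - 1) // 2)

# p = 5 is of the form 6k - 1: with m odd and n even the gcd is divisible by 3
assert g(5, 3, 4) == 3

# with n odd and gcd(m, n) = 1, small cases all give gcd 1
for p in (3, 5, 7, 11, 13):
    for m, n in ((2, 5), (2, 7), (4, 9), (3, 5)):
        assert g(p, m, n) == 1
print("all sampled cases consistent with the conjecture")
```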
|
431,236 | <p>I have a cylinder of radius 4 and height 10 that is at a 30 degree angle. I need to find the volume.</p>
<p>I have no clue how to do this, I have spent quite a while on it and went through many ideas but I think my best idea was this.</p>
<p>I know that the radius is 4 so if I cut the cylinder in half from corner to corner I will have two side lengths giving me a third side length. So this gives</p>
<p>$$\sqrt{116} = height$$ </p>
<p>Or the length of the tall sides.</p>
<p>Now I just plug this into my formula</p>
<p>$$\pi r^2 h$$</p>
<p>$$\pi *16*\sqrt116$$</p>
<p>This is about $34\pi$ which is way off. What did I do wrong?</p>
| apnorton | 23,353 | <p>Picture for reference:<br>
<img src="https://i.stack.imgur.com/VIfvF.jpg" alt="Graphic"></p>
<p>Let's get our terms straight here. $h$ is the height of the cylinder; $\ell$ is the side length, and $r$ is the radius. This cylinder is tilted at $30^\circ$.</p>
<p>The volume of a cylinder like this is given by the formula:
$$V = \pi r^2 h$$</p>
<p>For your problem, when you say "height of $10$," I'm assuming you actually mean $\ell=10$. From some trig, we see that:
$$h = \ell\sin30^\circ = \frac \ell 2 = 5$$</p>
<p>Thus, our volume is:
$$V = \pi (4)^2(5) = 80\pi = 251.3\;\text{cubic units}$$</p>
<p><strong>EDIT:</strong><br>
In the comments, it was mentioned that the vertical length is known. Thus, the solution is much simpler:</p>
<p>$$V=\pi r^2 h = \pi(4)^2(10) = 160\pi = 502.6$$</p>
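<p>Both interpretations in one quick check (a sketch):</p>

```python
import math

r = 4.0
# if 10 is the slant length l, the vertical height is l * sin(30°) = 5
V_slant = math.pi * r ** 2 * (10 * math.sin(math.radians(30)))
# if 10 is already the vertical height (as clarified in the comments)
V_vertical = math.pi * r ** 2 * 10

assert abs(V_slant - 80 * math.pi) < 1e-9
assert abs(V_vertical - 160 * math.pi) < 1e-9
print(round(V_slant, 1), round(V_vertical, 1))  # 251.3 502.7
```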
|
1,998,244 | <p>Given the equation of a damped pendulum:</p>
<p>$$\frac{d^2\theta}{dt^2}+\frac{1}{2}\left(\frac{d\theta}{dt}\right)^2+\sin\theta=0$$</p>
<p>with the pendulum starting with $0$ velocity, apparently we can derive:</p>
<p>$$\frac{dt}{d\theta}=\frac{1}{\sqrt{\sqrt2\left[\cos\left(\frac{\pi}{4}+\theta\right)-e^{-(\theta+\phi)}\cos\left(\frac{\pi}{4}-\phi\right)\right]}}$$</p>
<p>where $\phi$ is the initial angle from the vertical. How can we derive that? Obviously $\frac{dt}{d\theta}$ is the reciprocal of $\frac{d\theta}{dt}$, but I don't see how to deal with the second derivative.</p>
<p>I've found a similar derivation at <a href="https://en.wikipedia.org/wiki/Pendulum_(mathematics)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Pendulum_(mathematics)</a>, where the formula</p>
<p>$${\frac {d\theta }{dt}}={\sqrt {{\frac {2g}{\ell }}(\cos \theta -\cos \theta _{0})}}$$</p>
<p>is derived in the "Energy derivation of Eq. 1" section. However, that uses a conservation of energy argument which is not applicable for a damped pendulum.</p>
<p>So how can I derive that equation?</p>
| Simply Beautiful Art | 272,831 | <p>Out of the 17 who had at least one brother, 11 had no sisters. Combined with the 18 who had at least one sister, this gives us 29 students. Adding in the 5 who had no brothers or sisters, we get 34. So there is exactly 34 students.</p>
<p>$$\text{no sister/brother $+$ no sisters some brothers $+$ sisters, with or w/out brothers}=34$$</p>
|
649,502 | <p>What do we mean when we talk about a topological <em>space</em> or a metric <em>space</em>? I see some people calling metric topologies metric spaces and I wonder if there is some synonymity between a topology and a space? What is it that the word means, and if there are multiple meanings how can one distinguish them?</p>
| ncmathsadist | 4,154 | <p>I think of a "space" as the conceptually smallest place in which a given abstraction makes sense. For example, in a metric space, we have distilled the notion of distance. In a topological space, we are in the minimal setting for continuity. </p>
|
66,570 | <pre><code>tmp = {x, y, z}^{1, 2, 3}
Times @@ tmp
Length[%]
</code></pre>
<p>This gives a length of 3. But I was expecting 1.</p>
<p>What exactly is this "length" of x*y^2*z^3 called?
I would have thought of this as a scalar of length 1.</p>
<p>Thanks!</p>
| Kellen Myers | 9,482 | <p>The <code>Length</code> operator will operate on lists, but if your object is not a list, it is not automatically considered to be length 1. When you apply <code>Times@@</code> to your <code>tmp</code>, it is no longer a list.</p>
<p><code>Length</code> will apply to many other expressions. For example:</p>
<pre><code>Length[p+q+r]
(*Output: 3*)
Length[a*b+c]
(*Output: 2*)
Length[5+2]
(*Output: 0*)
</code></pre>
<p>In the first case, the sum has three parts, so it is a sum of length three. Note that if you did <code>Length[List@@(p+q+r)]</code> you'd get 3 also.</p>
<p>The next one is only length 2 because the "outermost" operation has length 2. It is a sum of two things. You can even do something like <code>(a*b+c)[[1]]</code> or even <code>Length[(a*b+c)[[1]]]</code>.</p>
<p>The last one will produce 7 before attempting to evaluate <code>Length</code>, and because 7 is an "atomic" thing (it is not a list or other compound expression like a symbolic sum, product, etc.) it returns length 0.</p>
<p>Or try:</p>
<pre><code>tmp2 = Times @@ tmp
Map[Length, List @@ tmp2]
</code></pre>
<p>Note that you <strong>are</strong> required to <code>List@@</code> the expression. I don't think <code>Map</code> will map itself over products or sums, so you have to convert <code>tmp2</code> back to a list, which would be <code>{x,y^2,z^3}</code>. Notice that <code>x</code> is not a compound expression so it has 0 length, just like 7.</p>
|
66,570 | <pre><code>tmp = {x, y, z}^{1, 2, 3}
Times @@ tmp
Length[%]
</code></pre>
<p>This gives a length of 3. But I was expecting 1.</p>
<p>What exactly is this "length" of x*y^2*z^3 called?
I would have thought of this as a scalar of length 1.</p>
<p>Thanks!</p>
| Basheer Algohi | 13,548 | <p>According to the documentation:</p>
<pre><code>Length[expr]
</code></pre>
<p>gives the number of elements in expr.</p>
<p>In your case</p>
<pre><code>tmp = {x, y, z}^{1, 2, 3}
r=Times @@ tmp
(*x y^2 z^3*)
</code></pre>
<p><code>r</code> is an expression with three elements. To see this you do:</p>
<pre><code>FullForm[r]
(*Times[x, Power[y, 2], Power[z, 3]]*)
</code></pre>
<p>the expression here is <code>Times</code> and the elements of <code>Times</code> are 3 (<code>x</code>, <code>Power[y, 2],</code> and <code>Power[z, 3]</code>)</p>
|
3,745,273 | <p>I am looking for a way to solve :</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty} \frac{x\sin(3x)}{x^4+1}\,dx $$</span></p>
<p>without making use of complex integration.</p>
<p>What I tried was making use of integration by parts, but that didn't reach any conclusive result. (i.e. I integrated <span class="math-container">$\sin(3x)$</span> and differentiated the rest)</p>
<p>I can't see a clear starting point to solve this question. Any help appreciated.</p>
<p>This problem was posted by Vilakshan Gupta on <a href="https://brilliant.org/problems/integrate-it-8/?ref_id=1591957" rel="nofollow noreferrer">Brilliant</a>.</p>
| Claude Leibovici | 82,404 | <p><em>Too long for a comment</em></p>
<p>The more general problem
<span class="math-container">$$I=\int_{-\infty}^\infty \frac {x^m\,\sin(px)}{P_{2n}(x)} \,dx \qquad\text{where}\qquad m <2n\qquad p > 0$$</span> is not too bad if you feel comfortable with the manipulation of complex numbers. For sure, the condition is that <span class="math-container">$P_{2n}(x)$</span> has no real root (<span class="math-container">$P_{2n}(x)$</span> has <span class="math-container">$n$</span> pairs of complex conjugate roots).</p>
<p>So, let us write
<span class="math-container">$$P_{2n}(x)=\sum_{k=0}^{2n} a_k\,x^k=a_{2n} \prod_{i=1}^{2n}(x-r_i)$$</span> Now partial fraction decomposition leads to
<span class="math-container">$$I=\frac 1 {a_{2n}}\sum_{i=1}^{2n} b_i \int_{-\infty}^\infty \frac {\sin(px)}{x-r_i}\,dx$$</span> The antiderivative involves sine and cosine integrals but at the end
<span class="math-container">$$J_r=\int_{-\infty}^\infty \frac {\sin(px)}{x-r}\,dx=\pi\, e^{i p r}$$</span></p>
<p><strong>Edit</strong></p>
<p>For the specific case of
<span class="math-container">$$I=\int_{-\infty}^\infty \frac {x\,\sin(px)}{x^4+1} \,dx \qquad\text{where}\qquad p > 0$$</span> we then have
<span class="math-container">$$\frac {x}{x^4+1}=\frac i 4\left(\frac{1}{ x+\frac{1-i}{\sqrt{2}}}+\frac{1}{
x-\frac{1-i}{\sqrt{2}}}-\frac{1}{ x+\frac{1+i}{\sqrt{2}}}+\frac{1}{
x-\frac{1+i}{\sqrt{2}}}\right)$$</span>
<span class="math-container">$$I=\frac i 4\left(-4 i \pi e^{-\frac{p}{\sqrt{2}}} \sin \left(\frac{p}{\sqrt{2}}\right)\right)=\pi \, e^{-\frac{p}{\sqrt{2}}}\, \sin \left(\frac{p}{\sqrt{2}}\right)$$</span></p>
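<p>The closed form can be checked against numerical quadrature (a Python sketch using composite Simpson's rule; <span class="math-container">$p=1$</span> is an arbitrary choice):</p>

```python
import math

def integrand(x, p):
    return x * math.sin(p * x) / (x ** 4 + 1)

def simpson(f, a, b, n):                 # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

p = 1.0
# the integrand is even, so integrate over [0, 60] and double; the tail beyond
# 60 is bounded by 2 * integral of x^{-3}, i.e. 1/60^2, well under the tolerance
numeric = 2 * simpson(lambda x: integrand(x, p), 0.0, 60.0, 60000)
closed = math.pi * math.exp(-p / math.sqrt(2)) * math.sin(p / math.sqrt(2))
assert abs(numeric - closed) < 1e-3
print(closed)
```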
|
2,994,900 | <p>Prove that <span class="math-container">$$\sum_{d\mid q}\frac{\mu(d)\log d}{d}=-\frac{\phi(q)}{q}\sum_{p\mid q}\frac{\log p}{p-1},$$</span>
where <span class="math-container">$\mu$</span> is Möbius function, <span class="math-container">$\phi$</span> is Euler's totient function, and <span class="math-container">$q$</span> is a positive integer.</p>
<p>I can get
<span class="math-container">\begin{align}
\sum_{d\mid q} \frac{\mu(d)\log d}{d}& = \sum_{d\mid q}\frac{\mu(d)}{d}\sum_{p\mid d}\log p \\
& = \sum_{p\mid q} \log p \sum_{\substack{d\mid q \\ p\mid d}} \frac{\mu(d)}{d}
= \sum_{p\mid q} \log p \sum_{\substack{d \\ p\mid d \mid q}} \frac{\mu(d)}{d},
\end{align}</span>
Let <span class="math-container">$d=pr$</span>, then <span class="math-container">$\mu(d)=\mu(p)\mu(r)=-\mu(r)$</span>,
<span class="math-container">$$ \sum_{p\mid q} \log p \sum_{\substack{d \\ p\mid d \mid q}} \frac{\mu(d)}{d}= - \sum_{p\mid q} \frac{\log p}{p} \sum_{\substack{r\mid q \\ p \nmid r}} \frac{\mu(r)}{r}.$$</span>
But I don't know why
<span class="math-container">$$- \sum_{p\mid q} \frac{\log p}{p} \sum_{\substack{r\mid q \\ p \nmid r}} \frac{\mu(r)}{r}=-\frac{\phi(q)}{q} \sum_{p\mid q} \frac{\log p}{p-1}?$$</span></p>
<p>Can you help me?</p>
| Fabio Lucchini | 54,738 | <p>Let me write <span class="math-container">$n$</span> instead of <span class="math-container">$q$</span>.
We have
<span class="math-container">\begin{align}
\sum_{d|n}\frac{\mu(d)\log(d)}d
&=\sum_{d|n}\frac{\mu(d)}d\sum_{p|d}\log(p)\\
&=\sum_{p|n}\log(p)\sum_{p|d|n}\frac{\mu(d)}d\\
&=\frac 1n\sum_{p|n}\log(p)\sum_{p|d|n}\mu(d)\frac nd
\end{align}</span>
Write <span class="math-container">$n=p^em$</span> with <span class="math-container">$p\nmid m$</span>.
Then <span class="math-container">$\varphi(n)=p^{e-1}(p-1)\varphi(m)$</span> and
<span class="math-container">\begin{align}
\sum_{p|d|n}\mu(d)\frac nd
&=\sum_{d\mid m}\sum_{i=1}^e\mu(p^id)\frac{p^em}{p^id}\\
&=\sum_{d\mid m}\mu(pd)\frac{p^em}{pd}\\
&=-\sum_{d\mid m}\mu(d)\frac{p^em}{pd}\\
&=-p^{e-1}\varphi(m)\\
&=-\frac{\varphi(n)}{p-1}
\end{align}</span></p>
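<p>A brute-force check of the identity for small <span class="math-container">$n$</span> (a Python sketch; the naive <span class="math-container">$\mu$</span> and <span class="math-container">$\varphi$</span> below are for illustration only):</p>

```python
import math

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    primes = [p for p in range(2, n + 1)
              if n % p == 0 and all(p % q for q in range(2, p))]
    if any(n % (p * p) == 0 for p in primes):
        return 0
    return (-1) ** len(primes)

def phi(n):
    return sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1)

def lhs(n):   # sum over d | n of mu(d) log(d) / d
    return sum(mobius(d) * math.log(d) / d for d in divisors(n))

def rhs(n):   # -(phi(n)/n) * sum over p | n of log(p)/(p-1)
    ps = [p for p in range(2, n + 1)
          if n % p == 0 and all(p % q for q in range(2, p))]
    return -phi(n) / n * sum(math.log(p) / (p - 1) for p in ps)

for n in range(2, 60):
    assert abs(lhs(n) - rhs(n)) < 1e-12
print("identity holds for n = 2..59")
```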
|
50,362 | <p>I have a question about the basic idea of singular homology. My question is best expressed in context, so consider the 1-dimensional homology group of the real line $H_1(\mathbb{R})$. This group is zero because the real line is homotopy equivalent to a point. The chain group $C_1(\mathbb{R})$ contains all finite formal linear combinations of continuous maps from the interval $[0,1]$ into $\mathbb{R}$. One such map (call it $\mu$) maps the interval along some path that begins and ends at zero. (For my purposes it doesn't matter how exactly.) This map is a cycle, i.e. is contained in the kernel of $\partial_1:C_1 \rightarrow C_0$, because it begins and ends at the same point. It must be that it is also a boundary, i.e. contained in the image of $\partial_2:C_2 \rightarrow C_1$, because otherwise it would represent a nonzero homology class in $H_1$. My question is about exactly how and why it is a boundary.</p>
<p>I have an intuitive understanding of why it is a boundary that does not seem to work when I translate it into formal language, and a formal way to show it is a boundary that does not seem to capture the heart of the intuition. My reference on the formal definitions is Allen Hatcher's <i>Algebraic Topology</i>.</p>
<p>Intuitively, $\mu$ maps $[0,1]$ to a loop and then smooshes it into the real line (i.e. $\mu$ factors through $S^1$). The map from the loop to the line could be extended to a disc without losing continuity, since the whole thing gets smooshed anyway. A triangle could be mapped homeomorphically to the disc, and this would give us a map $\zeta: \Delta^2 \rightarrow \mathbb{R}$ of which, intuitively anyway, $\mu$ is the boundary. However, formally, $\partial_2 (\zeta)$ is the formal sum of the restriction of $\zeta$ to each of its edges; it is thus a formal sum of <i>three</i> maps from the interval to the real line, and thus is not (formally) equal to $\mu$.</p>
<p>Formally, I can define a map $\alpha : \Delta^2 \rightarrow \mathbb{R}$ from a triangle to the real line that does have $\mu$ as a boundary, but I am very unsatisfied with this construction because it involves details that feel essentially extrinsic to the intuition above. Let the vertices of $\Delta^2$ be labeled 0, 1, 2. Map $\Delta^2$ to a disc in the following way: map vertex 0 to the center of the disc; the edges $[0,1]$ and $[0,2]$ to a radius in the same way (so that the restrictions of $\alpha$ to the two edges are equal); the edge $[1,2]$ around the circumference; and extend the map to the interior of the triangle in the obvious way. Then map the disc to the real line as above; the restriction to the circumference is $\mu$. Now, the boundary map $\partial_2$ by definition maps $\alpha$ to $\alpha |_{[0,1]} +\alpha |_{[1,2]}-\alpha |_{[0,2]}$. But $\alpha |_{[0,1]}$ and $\alpha |_{[0,2]}$ are equal and $\alpha |_{[1,2]}$ is equal to $\mu$, so $\partial_2(\alpha)=\mu$.</p>
<p>My question is this: is it correct that the intuitive construction of $\zeta$ does not provide an element of $C_2$ with $\mu$ as a boundary? Is it correct that in order to get $\mu$ as a boundary one must use a construction like that of $\alpha$ above? If so, is the intuition that $\mu$ is a boundary because it is a loop that can be extended to a disc before smooshing wrong? Does the fact that $\mu$ is a boundary really hang on the sign convention in the definition of $\partial_2$? If so, can you give me a reason for why this sign convention works to guarantee that such a construction will always exist when a cycle "seems like it should be" a boundary?</p>
<p>EDIT:</p>
<p>I should add, after reading a few very helpful but somehow-unsatisfying-to-me answers, that I am not just interested in the one-dimensional case. (See my comment on MartianInvader's answer.)</p>
<p>EDIT (7/12):</p>
<p>Thanks for all the help everyone. My immediate acute sense of cognitive dissonance has been addressed, so I'm marking the question answered. I have some residual sense of not getting the whole picture, but expect this to resolve itself with slow processing of more theorems (like homotopy invariance of homology, and the Hurewicz map, thank you Matt E and Dan Ramras).</p>
| Dylan Wilson | 423 | <p>Remember that $C_2X$ does not just consist of all maps $\sigma: \Delta^2 \rightarrow X$ but also all formal sums of these. In particular, consider a map of the unit square that realizes a nullhomotopy of $\mu$. Divide the unit square into two triangles, label the vertices, orient the edges properly, and interpret the nullhomotopy as a sum of two different maps $\Delta^2 \rightarrow X$ (one might have a minus sign, actually). The boundary of this chain should be $\mu$, if all goes well.</p>
<p>I think, actually, this type of thing should work more generally to show that if two paths are homotopic, then they are homologous (i.e. differ by a boundary.)</p>
|
3,424,687 | <blockquote>
<p>Let <span class="math-container">$n$</span> be a positive integer, and suppose a complex number with unit modulus is a solution of the equation <span class="math-container">$z^n+z+1=0$</span>. Prove that <span class="math-container">$n$</span> cannot be <span class="math-container">$196$</span>. </p>
</blockquote>
<p>The above question has been bothering me for a long time. I've tried using Euler's form for <span class="math-container">$z$</span> and have obtained <span class="math-container">$\sin 2nx=-0.5$</span>, but I don't know how to use that. Would someone help me solve this problem?</p>
<p>Thanks in advance.</p>
| lhf | 589 | <p><span class="math-container">$z^n+z+1=0$</span> implies <span class="math-container">$1=|z|^n=|z^n|=|z+1|$</span>.</p>
<p>If moreover <span class="math-container">$|z|=1$</span>, then <span class="math-container">$z$</span> is a primitive cubic root of <span class="math-container">$1$</span> and so <span class="math-container">$z^2+z+1=0$</span>.</p>
<p>(Indeed, <span class="math-container">$|z+1|=1$</span> and <span class="math-container">$|z|=1$</span> define two circles which intersect at the primitive cubic roots of <span class="math-container">$1$</span>.)</p>
<p>Therefore, <span class="math-container">$z^n=z^2$</span> and so <span class="math-container">$z^{n-2}=1$</span>. Thus, <span class="math-container">$n \equiv 2 \bmod 3$</span>. However, <span class="math-container">$196 \equiv 1 \bmod 3$</span>.</p>
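<p>As a sanity check (a numeric sketch, not part of the proof), one can confirm that the primitive cube root of unity solves <span class="math-container">$z^n+z+1=0$</span> exactly when <span class="math-container">$n \equiv 2 \bmod 3$</span>:</p>

```python
import cmath

# primitive cube root of unity
z = cmath.exp(2j * cmath.pi / 3)

# z^n + z + 1 vanishes exactly when n ≡ 2 (mod 3)
for n in range(1, 31):
    vanishes = abs(z**n + z + 1) < 1e-9
    assert vanishes == (n % 3 == 2)

# 196 ≡ 1 (mod 3), so no unit-modulus root exists for n = 196
assert 196 % 3 == 1
```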
|
339,142 | <p>I'm trying to understand the difference between the sense, orientation, and direction of a vector. According to
<a href="http://www.eng.auburn.edu/users/marghdb/MECH2110/c1_2110.pdf">this</a>,
sense is specified by two points on a line parallel to a vector. Orientation is specified by the relationship between the vector and given reference lines (which I'm interpreting to be some basis).</p>
<p>However, these two definitions seem to be synonymous with direction. How do these 3 terms differ?</p>
| JohnDee | 118,544 | <p>Here is how I think of it. Let's construct a vector from scratch using just two points in space, A and B. Draw the line segment between A and B. The magnitude of the line segment is the 'length' of the vector. The 'orientation' of the line segment we can define as the angle that the segment makes with the horizontal axis; to be clear, this angle is measured counterclockwise from the positive x-axis and lies between 0 and 180 degrees. So far we just have a line segment situated somewhere in the plane whose orientation we know, but there is no arrowhead yet. The last thing you need is the 'sense', which tells you where to put the arrowhead and implies an order. We can write this vector as AB with an arrow over it, read as the vector starting at A and ending at B.</p>
<p>More generally you can define a vector by defining its magnitude (length), its orientation, and then its sense. But keeping in mind, technically a vector is an equivalence class. So there are an infinite number of vectors which are parallel to each other (have the same orientation) and have the same sense or same choice of where to put the arrowhead (there are only two possible senses, since the arrowhead can be placed on A or on B). But I don't want to confuse you. Usually we discuss vectors that are situated at the origin so we don't concern ourselves over other equivalent vectors.</p>
<p>I just checked wiki which is basically the same as what I wrote: <a href="http://en.wikibooks.org/wiki/Statics/Forces_As_Vectors#Graphically" rel="nofollow">http://en.wikibooks.org/wiki/Statics/Forces_As_Vectors#Graphically</a></p>
<p>"A vector may be represented graphically by an arrow. The magnitude of the vector corresponds to the length of the arrow, and the direction of the vector corresponds to the angle between the arrow and a coordinate axis. The sense of the direction is indicated by the arrowhead."</p>
<p>This pretty much sums up what I just wrote above. You see the 'direction' (your article uses the word orientation) just gives you an angle that the vector makes with the horizontal axis, but that creates an ambiguity since an arrow can point in two opposite directions and still make the same angle. The sense clears this ambiguity and indicates where the arrowhead actually goes. So the sense tells us the order so to speak in which to read the vector. It indicates where to start and end on the vector.</p>
<p>(Also technically you can indicate the angle of the vector however you choose, doesn't have to be a horizontal axis, could be vertical).</p>
<p>Now you are probably more familiar with a vector description that combines orientation and sense in one, for example: give me the vector that is 2 units in length and is rotated 30 degrees counterclockwise from the point (2,0). But that description mixes orientation and sense together. Technically I could have said: the vector is 2 units in length (magnitude), it makes an acute angle of 30 degrees with the horizontal axis (orientation), and it starts at (0,0) and ends at (2 cos 30, 2 sin 30) (sense).</p>
<p>Together, orientation and sense determine 'direction'.</p>
<p>In math you just have to be flexible. Different authors mean the same thing but use different words. In an ideal world every author would agree with each other's notation and terminology. Until we reach a universal mathematical language, it is better to try to get an understanding.</p>
|
96,468 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/22537/how-many-fixed-points-in-a-permutation">How many fixed points in a permutation</a> </p>
</blockquote>
<p>Suppose we have a collection of n objects, numbered from 1 to n. These objects are placed in a random order.</p>
<p>What is the probability that exactly p of the objects are in the position corresponding to their number?</p>
<p>For instance, </p>
<p>For n=3, 1-2-3 has all objects in the correct position, p = 3, and has probability P(p=3) = 1/3! = 1/6.</p>
<p>However P(p=2) = 0</p>
<p>P(p=1) = 3/3! = 1/2. (1-3-2, 3-2-1, 2-1-3)</p>
<p>P(p=0) = 2/3!. (2-3-1, 3-1-2)</p>
| André Nicolas | 6,312 | <p>We will count the number of ways to have exactly $p$ objects in the correct places. Then one only needs to divide by $n!$ to find the probability.</p>
<p>Which $p$ objects are in the correct place? These can be chosen in $\binom{n}{p}$ ways. Now for every such choice, we must arrange the remaining $n-p$ objects so that <em>none</em> of them is in the correct place.</p>
<p>For any $k$, the number of arrangements of a set of $k$ numbers so that none of them is in the correct place is called the number of <a href="http://en.wikipedia.org/wiki/Derangement" rel="nofollow">derangements</a> of $k$. The subject of derangements is extensively discussed at the preceding link. One fairly common notation for the number of derangements of an ordered set of cardinality $k$ is $!k$, though $D_k$, or $D(k)$, are also quite common. </p>
<p>So the number of ways to do the arranging is
$$\binom{n}{p}(!(n-p)).$$</p>
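<p>The resulting probability, $\binom{n}{p}\,!(n-p)/n!$, can be checked against brute force for small $n$. A minimal sketch (the function names here are mine, chosen for illustration):</p>

```python
from itertools import permutations
from math import comb, factorial

def derangements(k):
    """!k via the recurrence !k = (k-1)(!(k-1) + !(k-2)), with !0 = 1, !1 = 0."""
    if k == 0:
        return 1
    a, b = 1, 0  # !0, !1
    for i in range(2, k + 1):
        a, b = b, (i - 1) * (a + b)
    return b

def prob_exact_fixed(n, p):
    """Probability that a uniform random permutation of n objects has exactly p fixed points."""
    return comb(n, p) * derangements(n - p) / factorial(n)

# brute-force check for n = 3 against the values listed in the question
n = 3
counts = [0] * (n + 1)
for perm in permutations(range(n)):
    counts[sum(perm[i] == i for i in range(n))] += 1
for p in range(n + 1):
    assert prob_exact_fixed(n, p) == counts[p] / factorial(n)
```

<p>For $n=3$ this reproduces $P(p=3)=1/6$, $P(p=2)=0$, $P(p=1)=1/2$, and $P(p=0)=1/3$.</p>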
|
816,088 | <blockquote>
<p>The sum of two variable positive numbers is $200$.
Let $x$ be one of the numbers and let the product of these two numbers be $y$. Find the maximum value of $y$.</p>
</blockquote>
<p><em>NB</em>: I'm currently on the stationary-points part of the calculus section of a textbook. I can work this out in my head, since $100 \times 100$ gives the maximum value of $y$, but I need help turning this into an equation and differentiating it. Thanks!</p>
| Tunk-Fey | 123,277 | <p>Let the first number be $x$ and the second number be $z$. We have
$$x+z=200\quad\Rightarrow\quad z=200-x.$$
We want to maximize $$y=xz=x(200-x)=200x-x^2.$$
Setting the first derivative equals $0$ yields
\begin{align}
\frac{d}{dx}\left(200x-x^2\right)&=0\\
200-2x&=0\\
2x&=200\\
x&=100.
\end{align}
Check the second derivative
$$
\frac{d^2}{dx^2}\left(200x-x^2\right)=-2.
$$
Since the second derivative of $y$ is negative, $y$ attains its maximum at $x=100$. Thus, the value of their product is $y=200x-x^2=\color{blue}{10,000}$.</p>
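<p>A quick numeric check of the result (an illustrative sketch, not needed for the calculus):</p>

```python
# y = x(200 - x): scan integer candidates and confirm the maximum at x = 100
best_x = max(range(1, 200), key=lambda x: x * (200 - x))
assert best_x == 100
assert best_x * (200 - best_x) == 10000

# second-difference check: y is concave, so the critical point is a maximum
y = lambda x: 200 * x - x**2
assert y(101) - 2 * y(100) + y(99) == -2
```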
|