109,213
<p>In classical complex analysis it is easy to prove that a meromorphic function has at most one analytic continuation (on an open connected subset of $\mathbb C$, say).</p> <p>The problem of non-uniqueness of analytic continuation is one of the reasons why it is not possible (if one wants a good theory) to translate the complex theory to the $p$-adic case without some modifications, and so it is one of the motivations for introducing rigid analytic varieties. However, I am not able to find a precise statement that explains under which hypotheses uniqueness of analytic continuation holds for rigid analytic varieties. So the question is the following.</p> <p>Let $k$ be a non-archimedean field and let $X$ be a connected rigid analytic space over $k$. Let $f \colon X \to k$ be a rigid analytic function that vanishes on $Y$, an admissible subdomain of $X$. Is it always true that $f$ vanishes on $X$? If not, under which assumptions is it true?</p> <p>Any references would be greatly appreciated!</p>
Ramsey
12,107
<p>Let me just complement xbnv's answer with a mild generalization. If $X$ is an <em>irreducible</em> rigid space (and let's suppose we're dealing with reduced spaces from the outset), then its normalization $\tilde{X}$ is connected and normal and is equipped with a finite surjective map $\tilde{X}\to X$. Using xbnv's answer, it follows that the statement holds for $X$ as well.</p> <p>In short, if you replace "connected" by "irreducible" then you're good.</p>
3,910,345
<p>Recently a lecturer used this notation, which I assume is a sort of twisted form of Leibniz notation:</p> <p><span class="math-container">$$y\,\mathrm{d}x - x\,\mathrm{d}y \equiv -x^2\,\mathrm{d}\left(\frac{y}{x}\right)$$</span></p> <p>The logic here was that this could be used as:</p> <p><span class="math-container">$$\begin{align} -x^2\,\mathrm{d}\left(\frac{y}{x}\right) &amp;\equiv -x^2\,\left(\frac{\mathrm{d}y}{x} -\frac{y}{x^2}\,\mathrm{d}x\right)\\ &amp;\equiv y\mathrm{d}x - x\mathrm{d}y \end{align} $$</span></p> <p>Why is this legal?</p> <p>I can see some kind of differentiation going on with the second term in the above equivalence, producing the <span class="math-container">$\frac{1}{x^2}$</span>, but having the single <span class="math-container">$\mathrm{d}$</span> seems like a really weird abuse of notation, and I don't quite follow why it splits the single <span class="math-container">$\frac{y}{x}$</span> fraction into two parts.</p>
Henry Lee
541,220
<p>I believe it is being used as shorthand for: <span class="math-container">$$d\left(\frac yx\right)=dx\,\frac{d\left(\frac yx\right)}{dx}=dx\left(\frac{1}{x}\frac{dy}{dx}-\frac{y}{x^2}\right)=\frac{dy}{x}-\frac{y}{x^2}\,dx$$</span></p>
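The identity is easy to sanity-check symbolically. A minimal sympy sketch (treating $y$ as a function of $x$ and comparing the two sides per unit $dx$):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

# -x^2 * d(y/x)/dx, expanded via the quotient rule
lhs = sp.expand(-x**2 * sp.diff(y / x, x))

# (y dx - x dy) read per unit dx: y - x * dy/dx
rhs = y - x * sp.diff(y, x)

print(sp.simplify(lhs - rhs))  # 0
```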
508,790
<p>I keep seeing the term $\mathcal{F}$-measurable, but I don't really understand what it means, and I can't visualize it.</p> <p>I need some guidance on this.</p> <p>I don't really understand $\sigma(Y)$-measurable either. What is the difference?</p>
Davide Giraudo
9,849
<p>If $f\colon (X_1,\mathcal F_1)\to (X_2,\mathcal F_2)$, $f$ is $(\mathcal F_1,\mathcal F_2)$-measurable if for all $F_2\in\mathcal F_2$, $f^{-1}(F_2)\in\mathcal F_1$. </p> <p>In some contexts we consider the case where $X_2$ is the real line and $\mathcal F_2$ the Borel $\sigma$-algebra. Then for short, we say that $f\colon X\to \mathbb R$ is $\mathcal F$-measurable if $f^{-1}(B)\in\mathcal F$ for each Borel subset $B$. </p> <p>$\sigma(Y)$ is a $\sigma$-algebra, so the same definition applies. </p>
3,995,986
<p>I need help integrating: <span class="math-container">$$\int _0^{\infty }\:\:\frac{6}{\theta}xe^{-\frac{2x}{\theta }}\left(1-e^{-\frac{x}{\theta }}\right)dx$$</span></p> <p>I think I should multiply out the <span class="math-container">$$xe^{-\frac{2x}{\theta }}$$</span> and then use integration by parts, but it is not really working for me.</p>
Community
-1
<p>Hint.</p> <p>Abstractly, all you need is to find integrals of the form: <span class="math-container">$$ \int xe^{mx}dx $$</span> By the change of variables <span class="math-container">$u=mx$</span>, <span class="math-container">$$ \int xe^{mx}dx =\frac{1}{m^2}\int ue^udu= \frac{1}{m^2}(ue^u-e^u)+C $$</span></p> <hr /> <p>Up to the constant factor <span class="math-container">$\frac{6}{\theta}$</span>, the integrand in your integral is <span class="math-container">$$ xe^{\frac{-2}{\theta}\cdot x}-xe^{\frac{-3}{\theta}\cdot x} $$</span> and each term is of the form <span class="math-container">$xe^{mx}$</span> for some <span class="math-container">$m$</span>.</p>
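As a check on the final value, the whole integral can be evaluated symbolically; a short sympy sketch (assuming $\theta > 0$):

```python
import sympy as sp

x, theta = sp.symbols('x theta', positive=True)

# the original integrand, 6/theta * x * e^{-2x/theta} * (1 - e^{-x/theta})
integrand = 6 / theta * x * sp.exp(-2 * x / theta) * (1 - sp.exp(-x / theta))
value = sp.integrate(integrand, (x, 0, sp.oo))
print(sp.simplify(value))  # 5*theta/6
```

This is consistent with $\int_0^\infty xe^{-ax}\,dx = 1/a^2$ applied with $a = 2/\theta$ and $a = 3/\theta$.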
133,418
<p>Let $\langle R,0,1,+,\cdot,&lt;\rangle$ be the standard model of the reals, and let $S$ be a countable model satisfying all first-order statements true in $R$. Is it true that the set $1,\,1+1,\,1+1+1,\dots$ is bounded in $S$? My intuition says "no", but I have yet to find a counterexample. I read something about rational functions, but I cannot verify that they indeed give a non-standard model of $R$.</p>
zyx
14,120
<p>This is not what "non-standard model of ..." means. </p> <p>You are asking whether non-Archimedean real-closed ordered fields exist. An example is to add some algebraically independent transcendental elements to the real algebraic numbers, or the Puiseux series field over the real algebraic numbers. </p> <p>Archimedean real-closed ordered fields also exist, such as $R$ or any of its real-closed subfields.</p> <p>A nonstandard model is one that applies a construction on the positive integers (in this case, using them to build Z then Q and finally R) to the integers in a nonstandard model of arithmetic or set theory, but maintaining the set of sentences that are true in the model (transfer principle). The integers in a nonstandard model of arithmetic are Archimedean when "seen from inside the model" and non-Archimedean "from the outside".</p>
2,622,583
<blockquote> <p>Prove that if $f:\mathbb R \to \mathbb R$ is a measurable function and $f(x)=f(x+1)$ almost everywhere, then there exists a measurable function $g:\mathbb R \to \mathbb R$ with $f=g$ almost everywhere and $g(x)=g(x+1)$ for every $x \in \mathbb R$</p> </blockquote> <p>I'm trying to prove this by construction. We know that $A= \{x \in \mathbb R :f(x) \not = f(x+1) \}$ is measurable and $m(A)=0$, so I thought $g$ should be something like:</p> <p>$ g(x) = \left\{ \begin{array}{ll} f(x) &amp; \mathrm{if\ } x \notin A \\ ?? &amp; \mathrm{if\ } x \in A \end{array} \right.$</p> <p>And this way, we would get that $f=g$ almost everywhere, and $g$ would be measurable... But using this I haven't been able to find a way to make $g(x)=g(x+1)$ for every $x \in \mathbb R$</p>
John Dawkins
189,130
<p>Suppose, to start, that $f$ is bounded. Define $g_n(x):=n\int_{(x,x+1/n]} f(t)\,dt$. It is easy to check that $g_n(x)=g_n(x+1)$ for all $x$. Define $g(x):=\limsup_ng_n(x)$, $x\in\Bbb R$, and notice that $g(x+1)=g(x)$ for all $x$. If $x$ is a <em>Lebesgue point</em> of $f$, then $\lim_ng_n(x) =f(x)$, and in particular $g(x)=f(x)$. It follows that $g(x)=f(x)$ for all Lebesgue points of $f$, hence a.e.</p> <p>For general measurable $f$ apply the above to $F:=\arctan(f)$ to obtain $1$-periodic $G$ equal to $F$ a.e., and then define $g:=\tan G$.</p>
3,695,439
<p>So I know that we can find <span class="math-container">$dy/dx$</span> of a curve in polar coordinates by leveraging the fact that <span class="math-container">$x=r\cos\theta$</span> and <span class="math-container">$y=r\sin\theta$</span>, and since <span class="math-container">$r$</span> is a function of <span class="math-container">$\theta$</span> we can take <span class="math-container">$dy/d\theta$</span> and <span class="math-container">$dx/d\theta$</span> and divide them. But my question is: staying in polar coordinates, can we glean anything about the tangent slopes from just <span class="math-container">$dr/d\theta$</span>? For instance, if we have <span class="math-container">$r=\sin\theta$</span>, then <span class="math-container">$dr/d\theta=\cos\theta$</span>. So when <span class="math-container">$\theta$</span> is equal to <span class="math-container">$0$</span>, <span class="math-container">$r$</span> is changing with respect to <span class="math-container">$\theta$</span> at a rate of <span class="math-container">$1$</span>? I can't seem to wrap my head around what this means. Obviously in Cartesian coordinates the slope of the tangent when $x=0$ is $0$ and infinite when $y=1/2$. Is there no way to intuitively see this from just looking at <span class="math-container">$dr/d\theta$</span>?</p>
Christian Blatter
1,303
<p>If a curve <span class="math-container">$\gamma$</span> is given in <em>polar form</em> <span class="math-container">$$\gamma:\quad r=r(\theta)\qquad(\theta_0\leq\theta\leq \theta_1)$$</span> this is an abbreviation for the parametric representation <span class="math-container">$$\gamma:\quad\theta\mapsto{\bf r}(\theta)=\bigl(r(\theta)\cos\theta, r(\theta)\sin\theta\bigr)\qquad(\theta_0\leq\theta\leq \theta_1)\ .$$</span> You are asking for the geometric meaning of the derivative <span class="math-container">$r'(\theta)={dr\over d\theta}$</span>. This can be seen in the following figure. The curve <span class="math-container">$\gamma$</span> intersects the concentric circles <span class="math-container">$r={\rm const.}$</span> under a varying angle <span class="math-container">$\alpha$</span>. For a given <span class="math-container">$\theta$</span> we have <span class="math-container">$$\tan\alpha={dr\over r\,d\theta}={r'(\theta)\over r(\theta)}\ .$$</span> This shows that <span class="math-container">$r'(\theta)$</span> in the first place carries information about this <span class="math-container">$\alpha$</span>, and not about slopes <span class="math-container">${dy\over dx}$</span>, and similar.</p> <p>When <span class="math-container">$r'(\theta_0)=0$</span> for some <span class="math-container">$\theta_0$</span> then <span class="math-container">$\alpha=0$</span>. This means that the circle <span class="math-container">$r=r(\theta_0)$</span> and <span class="math-container">$\gamma$</span> are touching at the corresponding point <span class="math-container">$P$</span>, but not that they have the same curvature there. 
Often it means that the function <span class="math-container">$\theta\mapsto r(\theta)$</span> has a local extremum at <span class="math-container">$\theta_0$</span>, hence <span class="math-container">$P$</span> could be the point on <span class="math-container">$\gamma$</span> which is nearest or farthest from the origin.</p> <p><a href="https://i.stack.imgur.com/ymnBi.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ymnBi.jpg" alt="enter image description here"></a></p>
29,255
<p>Sorry! I am not clear on these questions:</p> <ol> <li><p>Why is the empty set open as well as closed?</p></li> <li><p>Why is the set of all real numbers open as well as closed?</p></li> </ol>
Adrián Barquero
900
<p>Well the definition of a <a href="http://en.wikipedia.org/wiki/Topological_space">topological space</a> $X$ specifies that both $X$ and the empty set must be open sets (if the topology is defined in terms of closed sets rather than open sets, it will stipulate that they are closed). But then it is just by definition that it must be open (or closed). </p> <p>Then a set $A$ is said to be closed if and only if its complement $X - A$ is open. So if you look at the empty set its complement is $ X - \emptyset = X$ and $X$ is open by definition. Therefore the empty set is closed.</p>
3,350,251
<p>The integral of velocity plots position and not change in position. But the definition of the integral is the area under the velocity curve and the area under the velocity curve is change in position. So why doesn't the integral of velocity plot change in position?</p>
Ethan Bolker
72,858
<p>The definite integral of the velocity from time <span class="math-container">$a$</span> to time <span class="math-container">$b$</span> is the change in position over that time interval.</p> <p>You can only specify the position of an object relative to some point. "The position" makes sense only in a coordinate system. That might be latitude and longitude or the position on a line with some fixed <span class="math-container">$0$</span> position and scale specified.</p>
717,664
<p>I need a step by step answer on how to do this. What I've been doing is converting the top to $2e^{i(\pi/4)}$ and the bottom to $\sqrt2e^{i(-\pi/4)}$. I know the answer is $2e^{i(\pi/2)}$ and the angle makes sense but obviously I'm doing something wrong with the coefficients. I suspect maybe only the real part goes into calculating the amplitude but I can't be sure.</p>
ZHN
131,755
<p>The secret is to multiply and divide by the conjugate of the denominator:</p> <p>$$\frac{2i+2}{1-i} =\frac{2i+2}{1-i}\cdot\frac{1+i}{1+i}=\frac{2(1+i)(1+i)}{1-i^2}=\frac{2(1+2i+i^2)}{2}=1+2i-1=2i.$$</p>
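Since Python has complex literals built in, the arithmetic is easy to verify numerically (a small sketch, with `1j` standing in for $i$):

```python
import cmath

# verify (2i + 2)/(1 - i) = 2i using Python's built-in complex numbers
z = (2 + 2j) / (1 - 1j)
print(z)  # 2j

# polar form: modulus 2, argument pi/2, i.e. 2 e^{i pi/2}
print(abs(z), cmath.phase(z))
```

Note also that $|2+2i| = 2\sqrt2$, not $2$, so the quotient of moduli is $2\sqrt2/\sqrt2 = 2$, matching the known answer $2e^{i\pi/2}$.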
1,109,853
<blockquote> <p><em>The below proof is incorrect. See the answers for more information.</em></p> </blockquote> <p>This question is in the context of exploring how to explain the process of developing a proof.</p> <p>When reading a proof on the irrationality of $ \sqrt{3} $, I came across the following statement, which was not proved in the irrationality proof itself.</p> <ul> <li>If $ a^2 $ is divisible by 3, then $ a $ is divisible by 3.</li> </ul> <p>I believe the following proves the above statement:</p> <ol> <li>Let $k$ be an integer, and $a$ be an integer divisible by $n$, where $ a=n(k+1) $.</li> <li>$ a = nk+n $</li> <li>$ a^2 = (nk + n)(nk + n) $</li> <li>$ a^2 = n^2(k+1)(k+1) $</li> <li>Therefore, $a^2$ is divisible by $n$.</li> </ol> <p>Although the above proof "feels" valid to me, it also seems like the proof is not complete, in a formal sense, because:</p> <ul> <li>Constraints are not placed on the variables.</li> <li>Although the leap from step 4 to step 5 seems intuitive, there is no formal explanation as to why the step is valid. (It seems like something is missing to explain how to go from divisible by $n^2$ to divisible by $n$.)</li> <li>$a$ is divisible by 3 $\implies$ $a^2$ is divisible by 3, but no justification is given for the opposite implication.</li> </ul> <p>All that said, is the above proof sufficient to justify the initial assertion about divisibility by 3? What would formally justify going from step 4 to step 5?</p> <p>And more generally: Are there objective standards for sufficiency of proof, either published or generally accepted?</p>
Bernard
202,857
<p>Congruences allow for a very simple proof of the assertion: "If $a^2$ is divisible by $3$, then $a$ is divisible by $3$."</p> <p>It suffices to draw up the list of squares modulo $3$:</p> <ul> <li>if $a\equiv 0\mod 3$, then $a^2\equiv 0^2=0 $;</li> <li>if $a\equiv \pm 1$, then $a^2\equiv 1 \mod 3$.</li> </ul> <p>Hence the only case in which $a^2$ is divisible by $3$ is when $a$ itself is.</p>
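Since squares mod $3$ only depend on $a \bmod 3$, the case list can be checked exhaustively; a tiny Python sketch:

```python
# squares modulo 3, one representative per residue class
squares_mod_3 = {a: (a * a) % 3 for a in range(3)}
print(squares_mod_3)  # {0: 0, 1: 1, 2: 1}

# so 3 | a^2 forces 3 | a; spot-check over a range
print(all(a % 3 == 0 for a in range(1, 1000) if (a * a) % 3 == 0))  # True
```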
386,649
<p>If you were working in a number system where there was a one-to-one and onto mapping from each natural to a symbol in the system, what would it mean to have a representation in the system that involved more than one digit?</p> <p>For example, if we let $a_0$ represent $0$, and $a_n$ represent the number $n$ for any $n$ in $\mathbb{N}$, would '$a_1$$a_0$' represent a number?</p> <p>Is such a system well defined or useful for anything?</p>
MJD
25,554
<p>In base 10, we represent a number $n$ as a sequence of digits $n_0, n_1, \ldots$ such that $$n = \sum_{i=0}^\infty n_i 10^i\qquad\text{where } 0\le n_i&lt;10$$</p> <p>and we require that the sequence of $n_i$ must be eventually zero.</p> <p>By changing the representation a little bit, we get the so-called <a href="http://en.wikipedia.org/wiki/Factorial_base" rel="nofollow">factorial base</a>:</p> <p>$$n = \sum_{i=1}^\infty n_i i!\qquad\text{where } 0\le n_i&lt;i+1$$</p> <p>and again the sequence $n_i$ must be eventually zero. There is no upper bound on the size of the digits $n_i$.</p> <p>In this representation, the number 718 is represented as $\langle 0,2, 3,4,5\rangle$ since $$\begin{align}5\cdot 5! + 4\cdot 4! + 3\cdot 3! + 2\cdot 2! + 0\cdot 1! &amp; = \\ 5\cdot120+4\cdot24 + 3\cdot6 + 2\cdot 2 + 0\cdot 1 &amp; =\\ 600 + 96 + 18 + 4 + 0 &amp; = 718.\end{align}$$</p> <p>This has actual applications; for example it is a useful way to represent a permutation of a list.</p>
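Digit extraction for the factorial base mirrors ordinary base conversion, except that one divides by $2, 3, 4, \dots$ in turn; a small Python sketch (the function name is mine):

```python
from math import factorial

def to_factorial_base(n):
    """Digits [n_1, n_2, ...] with n = sum_i n_i * i! and 0 <= n_i <= i."""
    digits, base = [], 2
    while n > 0:
        digits.append(n % base)
        n //= base
        base += 1
    return digits

digits = to_factorial_base(718)
print(digits)  # [0, 2, 3, 4, 5], i.e. <0, 2, 3, 4, 5> as in the answer

# reconstruct: 5*5! + 4*4! + 3*3! + 2*2! + 0*1! = 718
print(sum(d * factorial(i + 1) for i, d in enumerate(digits)))  # 718
```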
199,148
<p><a href="https://i.stack.imgur.com/9BuHp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9BuHp.png" alt="enter image description here"></a> </p> <pre><code>pbdomains = &lt;| "Overall " -&gt; Around[2.6, 0.04], "PB" -&gt; Around[4.25, 0.06] |&gt;; BarChart[pbdomains, ChartStyle -&gt; "BrightBands", LabelStyle -&gt; {FontFamily -&gt; "Times New Roman", 28, Bold, GrayLevel[0]}, Frame -&gt; True, FrameLabel -&gt; {"", " Count"}, BarSpacing -&gt; Tiny, ChartLabels -&gt; Callout[Automatic, Above, Appearance -&gt; "Balloon"]] </code></pre>
Chris Degnen
363
<p>You can use <code>PlotRangePadding</code>.</p> <pre><code>pbdomains = &lt;| "Overall " -&gt; Around[2.6, 0.04], "PB" -&gt; Around[4.25, 0.06] |&gt;; BarChart[pbdomains, ChartStyle -&gt; "BrightBands", LabelStyle -&gt; {FontFamily -&gt; "Times New Roman", 28, Bold, GrayLevel[0]}, Frame -&gt; True, FrameLabel -&gt; {"", " Count"}, BarSpacing -&gt; Tiny, ChartLabels -&gt; Callout[Automatic, Above, Appearance -&gt; "Balloon"], PlotRangePadding -&gt; {{-1.4, -1.37}, {None, Scaled[0.2]}}] </code></pre> <p><a href="https://i.stack.imgur.com/PCf12.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PCf12.png" alt="enter image description here"></a></p>
2,840,333
<p>I know that the easy way to evaluate the mean and variance of the Binomial distribution is by considering it as a sum of Bernoulli distributions.</p> <p>However, I was wondering just for fun if there is a way to evaluate them directly. I got the mean easily: it only involves some fiddling around with the binomial coefficient to absorb the 'extra' $k$ in the summation, followed by a direct application of the binomial theorem. However, in the process of evaluating the variance I need to compute a sum of the form:</p> <p>$$ \sum\limits_{k=0}^{n}k^2 \binom{n}{k} r^k $$</p> <p>The extra $k$ in the sum now doesn't let me apply my previous trick. Wolfram Alpha has no problem evaluating this sum, but it won't give me a step-by-step solution. Any leads would be appreciated.</p>
David
119,775
<p>The differentiation method is good, but you can if you want extend your method* of fiddling around with the binomial coefficient: $$\eqalign{ \sum_{k=0}^{n}k(k-1) \binom{n}{k} r^k &amp;=\sum_{k=0}^n k(k-1)\frac{n!}{k!\,(n-k)!}r^k\cr &amp;=\sum_{k=2}^n n(n-1)\frac{(n-2)!}{(k-2)!\,(n-k)!}r^k\cr &amp;=n(n-1)\sum_{m=0}^{n-2}\binom{n-2}{m}r^{m+2}\cr &amp;=n(n-1)r^2(1+r)^{n-2}\cr}$$ and now add the formula you have already for $$\sum_{k=0}^nk\binom nk r^k\ .$$</p> <blockquote> <p>* "If you use it once, it's a trick; if you use it twice, it's a method."</p> </blockquote>
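Combining the two pieces gives $\sum_k k^2\binom nk r^k = n(n-1)r^2(1+r)^{n-2} + nr(1+r)^{n-1}$, which is easy to spot-check exactly with rational arithmetic (a hedged sketch; the test values are arbitrary):

```python
from fractions import Fraction
from math import comb

def lhs(n, r):
    return sum(k**2 * comb(n, k) * r**k for k in range(n + 1))

def rhs(n, r):
    # n(n-1) r^2 (1+r)^{n-2}  +  n r (1+r)^{n-1}
    return n * (n - 1) * r**2 * (1 + r)**(n - 2) + n * r * (1 + r)**(n - 1)

checks = [(n, Fraction(p, q)) for n in range(2, 9) for (p, q) in [(1, 2), (2, 3), (5, 1)]]
print(all(lhs(n, r) == rhs(n, r) for n, r in checks))  # True
```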
1,571,099
<blockquote> <p>Consider the rectangle formed by the points $(2,7),(2,6),(4,7)$ and $(4,6)$. Is it still a rectangle after transformation by $\underline A$= $ \left( \begin{matrix} 3&amp;1 \\ 2&amp;\frac {1}{2} \\ \end{matrix} \right) $? By what factor has its area changed?</p> </blockquote> <p>I've defined the point $(2,6)$ as the origin of my vectors $\vec v $ and $\vec u$ with $\vec v = \left(\begin{matrix}0 \\1 \\\end{matrix} \right)$ and $\vec u = \left(\begin{matrix}2 \\0 \\\end{matrix} \right)$ which get transformed to $\underline A \vec v=$$\left(\begin{matrix}1 \\\frac{1}{2} \\\end{matrix} \right)$ and $\underline A \vec u=$$\left(\begin{matrix}6 \\4 \\\end{matrix} \right)$.</p> <p>So my new figure (which is not a rectangle anymore, but is now a parallelogram) has vertices $(2,6),(3,6 \frac{1}{2}),(8,10)$ and $(9,10 \frac{1}{2})$</p> <p>Now the rectangle has area equal to $2 \cdot 1=2$, and after the transformation I have that the area of the resulting parallelogram is $\underline A \vec v \times \underline A \vec u =|1\cdot 4 -\cfrac{1}{2}\cdot 6|=1$</p> <p>Now my problem is that when I calculate the area by geometric methods I have:</p> <p><a href="https://i.stack.imgur.com/JtCyj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JtCyj.jpg" alt="enter image description here"></a></p> <p>You see I get a different answer, so it's clear that I've had it all wrong since the beginning, but I don't see where.</p> <p>I now upload the image of the parallelogram where I've applied the law of cosines in the last step of the above image.</p> <p><a href="https://i.stack.imgur.com/8N3ug.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8N3ug.png" alt="enter image description here"></a></p> <p>I've tried to be as specific as possible about my steps. Can someone help me?</p> <p>Thanks in advance.</p>
Rob Arthan
23,171
<p>$[a]u_1$ means that after action $a$ you will be in state $u_1$. $[a](p \land q)$ means that after $a$ you will get into some state that satisfies both the properties $p$ and $q$. So if $u_1$ and $u_2$ are distinct states, $u_1 \land u_2$ will be false so $[a](u_1 \land u_2)$ will be false regardless of how you define $R(a)$. With your definition of $R(a)$, $[a](u_1 \land u_1)$ will be true, but that is because $[a]u_1$ is true for your $R(a)$ and $u_1 \land u_1$ is equivalent to $u_1$. An assertion like $[a](\lnot u_2 \land \lnot u_3)$ is a bit more interesting: it is true for your $R(a)$ and for other possible definitions of $R(a)$ too.</p>
466,757
<p>Suppose we have the following:</p> <p>$$ \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}a_{ij}$$</p> <p>where all the $a_{ij}$ are non-negative.</p> <p>We know that we can interchange the order of summation here. My interpretation of why this is true is that both of these iterated sums are rearrangements of the same series and hence converge to the same value, or diverge to infinity (as convergence and absolute convergence are the same here, and all rearrangements of an absolutely convergent series converge to the same value as the series).</p> <p>Is this interpretation correct? Or can someone offer some more insightful interpretation of this result?</p> <p>Please note that I am not asking for a proof but for interpretations, although an insightful proof would be appreciated.</p>
Alp Uzman
169,085
<p>The double sum you are asking about can be considered to be the sum of all terms of the infinite array of numbers $a_{ij}$:</p> <p>$$\begin{pmatrix} a_{11} &amp; a_{12} &amp; \cdots &amp; a_{1j} &amp; \cdots\\ a_{21} &amp; a_{22} &amp; \cdots &amp; a_{2j} &amp; \cdots\\ \vdots&amp; \vdots &amp; &amp;\vdots\\ a_{i1} &amp; a_{i2} &amp; \cdots &amp; a_{ij} &amp; \cdots\\ \vdots &amp; \vdots &amp; &amp; \vdots \\ \end{pmatrix},$$</p> <p>where the sum is taken in the particular order of first adding all elements of the individual rows and then adding the resulting "row totals". The result you are citing claims that if all $a_{ij}$ are nonnegative, then it does not matter in which order you add the entries of the infinite matrix, be that rows first and then row totals or columns first and then column totals.</p> <hr> <p>I can't say your interpretation is right on the mark, because the issue is that even though one adds the entries of the same infinite array, the order of summation may in actuality result in summing the terms of different series. As the other answers point out, this result holds essentially because we have no terms diminishing the total sum. Saying "both of these iterated sums are rearrangements of the same series and hence converge to the same value, or diverge to infinity" really does not place any emphasis on this advantage, which I believe is swept under the phrase "same series".</p> <hr> <p>I'd like to mention yet another (measure-theoretic) interpretation of this result, which I recently encountered in Rudin's <a href="http://rads.stackoverflow.com/amzn/click/0070542341" rel="noreferrer"><em>Real and Complex Analysis</em></a> (p. 23). In the book Rudin gives this result as a corollary of the Lebesgue Monotone Convergence Theorem applied to series of functions.</p> <p><strong>Theorem:</strong> Let $X$ be a measure space.
If $\forall n: f_n:X\to [0,\infty]$ is measurable, then </p> <p>$$\int_X \sum_n f_n d\mu= \sum_n \int_X f_n d\mu (\ast).$$</p> <p><strong>Corollary:</strong> Let $X:=\{x_1,x_2,...,x_n,...\}$ be a countable set and $\mu:\mathcal{M}_X:=\mathcal{P}(X)\to[0,\infty]$ be the counting measure:</p> <p>$$\mu(E):= \begin{cases} |E|&amp;, \mbox{ if} |E|&lt;\infty\\ \infty&amp;, \mbox{ if} |E|=\infty \end{cases}. $$</p> <p>If $\forall i,j:a_{ij}\geq0$, then</p> <p>$$\sum_i \sum_j a_{ij}=\sum_j \sum_i a_{ij}.$$</p> <p><strong>Proof:</strong> </p> <ul> <li>Set $\forall j: f_j:X\to[0,\infty], f_j(x):=\sum_i a_{ij}\chi_{\{x_i\}}(x)$; and $\forall i: \bar{f_i}:X\to[0,\infty], \bar{f_i}(x):=\sum_j a_{ij}\chi_{\{x_i\}}(x).$ Then $\sum_j f_j=\sum_i \bar{f_i}$. Indeed, let $x_{i_0}\in X$. Then</li> </ul> <p>$$\sum_j f_j(x_{i_0})= \sum_j \sum_i a_{ij} \chi_{\{x_i\}}(x_{i_0})= \sum_j a_{{i_0}j},$$</p> <p>and</p> <p>$$\sum_i \bar{f_i}(x_{i_0})=\bar{f_{i_0}}(x_{i_0})=\sum_j a_{{i_0}j}.$$</p> <ul> <li>Observe that the (inner) sum over $i$ is the integral of $f_j$ and the (inner) sum over $j$ is the integral of $\bar{f_i}$. Then we have:</li> </ul> <p>$$\sum_j\sum_i a_{ij}= \sum_j \int_X f_j d\mu\stackrel{(\ast)}{=}\int_X \sum_j f_j d\mu=\int_X \sum_i \bar{f_i}d\mu\stackrel{(\ast)}{=}\sum_i\int_X\bar{f_i}d\mu =\sum_i \sum_j a_{ij}.$$</p> <hr> <p>Now consider the interpretation induced by the above discourse, viz., the (countable) combinations of characteristic functions behave in such a way that one can concentrate nonnegative "weights" of individual points. In the above proof $f_j$ puts a single weight on each point, while $\bar{f_i}$ concentrates all the weights of the point $x_i$.</p>
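The finite analogue is immediate: for a finite nonnegative array, row-totals-then-total equals column-totals-then-total; the theorem's real content is that nonnegativity lets this survive the passage to infinite arrays. A numpy sketch of the finite case only (array size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((40, 60))  # finite nonnegative stand-in for the array (a_ij)

rows_first = a.sum(axis=1).sum()  # sum each row, then add the row totals
cols_first = a.sum(axis=0).sum()  # sum each column, then add the column totals
print(np.isclose(rows_first, cols_first))  # True
```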
2,441,894
<p>The matrix $$\pmatrix{100\sqrt{2}&amp;x&amp;0\\-x&amp;0&amp;-x\\0&amp;x&amp;100\sqrt{2}},\quad x&gt;0$$ has two equal eigenvalues. How can I find $x$? Here is what I tried. If $\lambda_1$ is doubly degenerate and $\lambda_2$ is the third eigenvalue, then the characteristic equation is $(\lambda-\lambda_1)^2(\lambda-\lambda_2)=0$. Also $2\lambda_1+\lambda_2=200\sqrt{2},\quad \lambda_1^2\lambda_2=200\sqrt{2}x^2$. I do not know how to proceed from here. </p>
Nick
27,349
<p>See the documentation for the <a href="https://www.mathworks.com/help/matlab/ref/reshape.html" rel="nofollow noreferrer">reshape</a> function. Here is how you can use it:</p> <p>First, make a $2$-by-$n$ matrix which has both rows equal to $A$ with:</p> <p><code>[A;A]</code></p> <p>Then use the <code>reshape</code> function to "flatten" this matrix:</p> <p><code>reshape([A;A], 1, [])</code></p> <p>The output will be what your are looking for.</p>
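For readers translating to Python, here is a rough numpy analogue of the same idiom. MATLAB's <code>reshape</code> reads column-major, so <code>order='F'</code> is needed to reproduce the interleaving (the sample values are made up):

```python
import numpy as np

A = np.array([1, 2, 3])

# stack two copies, then read the 2-by-n matrix column by column,
# which interleaves the copies just like reshape([A;A], 1, []) in MATLAB
result = np.vstack([A, A]).reshape(1, -1, order='F')
print(result)  # [[1 1 2 2 3 3]]

# the idiomatic numpy shortcut for "repeat each element"
print(np.repeat(A, 2))  # [1 1 2 2 3 3]
```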
2,172,399
<p>Equation of the segment: $2x + 4y-3 = 0$.<br> Equation of the hyperbola: $7x^2 - 4y^2 =14$.</p> <p>How do you find the equations of the two linear functions that are both perpendicular to the segment and tangent to the hyperbola?</p> <p>Thanks</p>
Francis Cugler
405,427
<p>Draw the graphs of both original equations, the segment and the hyperbola. Then sketch the lines that satisfy being perpendicular to the segment and tangent to the hyperbola. From these visualizations you can write the two sketched lines in vector notation, and once you have vectors you can generate points, take dot products, apply trigonometric functions, and use linear approximation to obtain the equations of these two lines in the form $y = mx + b$.</p> <p>You may need the equation $m = \frac{y_2 - y_1}{x_2 - x_1}$ to find a slope. If you understand the slope of a line as $\frac{\text{rise}}{\text{run}}$, you can use the unit circle to see that any point on it has coordinates $(x,y) = (\cos\theta, \sin\theta)$, where $\theta$ is the angle above the $x$-axis in standard position. In the right triangle formed by the $x$-axis and the unit radius vector, the slope $\frac{\text{rise}}{\text{run}}$ is $\frac{\Delta y}{\Delta x} = \frac{\sin\theta}{\cos\theta} = \tan\theta$.</p> <p>Simply put, the tangent of the angle $\theta$ above the horizontal axis in standard position is the slope of the line of any linear equation. So the linear equation $y = mx + b$ can be rewritten as $y = x\tan\theta + b$.</p> <p>This relationship between linear equations, trigonometric functions, and the structure of the circle holds because the Pythagorean Theorem $A^2 + B^2 = C^2$ and the equation of a circle $x^2 + y^2 = r^2$ are the same statement; the only difference is the variable notation and the context in which they are used.</p> <p>The Pythagorean Theorem is normally associated with triangles, lines, and vectors, while the equation of a circle represents the points of a circle in terms of their $x$ &amp; $y$ coordinates.</p> <p>The reason this relationship works is that a triangle consists of three segments with three points of intersection, called vertices, each generating an interior angle between two of the segments. Because of this, you can take a single leg of a triangle that is a unit vector and rotate it about a fixed point at either the head or the tail of the vector; the full rotation of $360°$ or $2\pi$ radians gives you the unit circle.</p> <p>Once you find the points of intersection between your original graphs and the required lines, you can easily generate any line through two points by using the form $y - y_1 = m(x - x_1)$. Once again you can substitute $\tan\theta$ for $m$.</p>
3,436,515
<p>Please help!</p> <p>How can I show that <span class="math-container">$ \lim _{n→∞} \frac{x_{n+1}}{x_n} =\frac{1+\sqrt 5}{2}$</span> for the dynamical system <span class="math-container">$$x_{n+1}=x_n + y_n\\ y_{n+1}=x_n$$</span>?</p> <p>Thank you!</p>
Dinno Koluh
519,191
<p>First, shift the second equation by <span class="math-container">$1$</span>: <span class="math-container">$$y_{n+1}=x_n \rightarrow y_n = x_{n-1}$$</span> Now substitute this into the first equation: <span class="math-container">$$x_{n+1}=x_n + y_n = x_n + x_{n-1}$$</span> Shifting by <span class="math-container">$1$</span> again gives: <span class="math-container">$$x_{n}= x_{n-1} + x_{n-2}$$</span> This is the recurrence for the Fibonacci sequence, whose closed form is: <span class="math-container">$$ x_n = \frac{\phi^n-(-\phi)^{-n}}{\sqrt{5}} $$</span> The limit now becomes: <span class="math-container">$$ \lim _{n\to\infty} \frac{x_{n+1}}{x_n} = \lim _{n\to\infty}\frac{\frac{\phi^{n+1}-(-\phi)^{-n-1}}{\sqrt{5}}}{\frac{\phi^n-(-\phi)^{-n}}{\sqrt{5}}} = \phi = \frac{1+\sqrt 5}{2} $$</span></p>
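The convergence is also quick to see numerically by iterating the system directly; a small Python sketch (the initial values are my assumption):

```python
# iterate x_{n+1} = x_n + y_n, y_{n+1} = x_n from an arbitrary start
x, y = 1, 0
for _ in range(40):
    x, y = x + y, x

# after the loop y holds the previous x, so x/y is the ratio x_{n+1}/x_n
phi = (1 + 5**0.5) / 2
print(x / y)  # ~1.618..., the golden ratio
```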
442,759
<p>I was reading a book on groups; it points out the uniqueness of the neutral element and of inverse elements. I got curious: are there algebraic structures with more than one neutral element and/or more than one inverse element?</p>
Ronnie Brown
28,586
<p>They do exist but they are algebraic structures with <strong>partial</strong> operations, i.e. the multiplication $a*b$ is not defined for all $a,b$. Typical examples are "journeys": you can compose a journey from $x$ to $y$ with a journey from $z$ to $w$ if and only if $y=z$. Standard mathematical examples are categories and <a href="http://groupoids.org.uk/gpdsweb.html" rel="nofollow">groupoids</a>. So a groupoid is sometimes thought of as a "group with many identities", and a group is a groupoid with only one identity. </p> <p>These ideas lead to double categories and groupoids, which have compositions thought of as in different directions. Double groups are just abelian groups, by what is called the Eckmann-Hilton argument, or interchange law, but double groupoids are quite complicated! </p> <p>Other interesting examples are <em>inverse semigroups</em>. Consider a set $X$ and the set $I(X)$ of all bijections between subsets of $X$. Clearly there is a composition $f \circ g$ of any $f,g \in I(X)$, but the domain of $f \circ g$ may be smaller than expected and even empty. See the <a href="http://en.wikipedia.org/wiki/Inverse_semigroup" rel="nofollow">wiki</a> entry for more information. In particular the identity $1_A$ on a subset $A$ of $X$ is associated with the domain $A$. </p> <p>Sept 21, 2016 I'd like to add the point that my definition of "higher dimensional algebra" is that it is the study of algebraic structures with partial operations whose domains are defined by geometric conditions. This allows for a combination of algebra and geometry, which is exploited in our 2011 book <a href="http://groupoids.org.uk/nonab-a-t.html" rel="nofollow">Nonabelian Algebraic Topology</a>.</p>
1,178,265
<p>I'm supposed to be able to determine <strong><em>without calculations</em></strong> the determinant, inverse matrix, and n-th power matrix of the rotation matrix:</p> <p>$\begin{pmatrix} \cos\theta &amp; \sin\theta \\ -\sin\theta &amp; \cos\theta \end{pmatrix} $</p> <p>Can someone explain to me how I can do that?</p>
Community
-1
<p><strong>HINTS</strong></p> <p>The determinant tells you by how much a linear transformation transforms areas (for $2\times 2$)/ volumes (for $3\times 3$)/ etc. So by what factor does this transformation change the area of say a square in the plane? That'll be your determinant.</p> <p>The inverse matrix is the matrix which does the inverse transformation. What exactly does this matrix do? How could that action be <em>undone</em>?</p> <p>If you applied this matrix to a vector more than once, what should happen?</p>
3,346,775
<p>Do there exist non-zero expectation, dependent, uncorrelated random variables <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>? The examples that I have found have at least one of the variables have zero expectation.</p>
Kavi Rama Murthy
142,385
<p>Let <span class="math-container">$X$</span> have standard normal distribution. Then <span class="math-container">$1+X$</span> and <span class="math-container">$1+X^{2}$</span> satisfy your requirements. </p>
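<p>A quick Monte Carlo check (my own sketch, not part of the answer) shows the sample covariance of $1+X$ and $1+X^2$ is essentially zero even though the second variable is a function of the first:</p>

```python
# Empirical check: U = 1 + X and V = 1 + X^2 with X ~ N(0,1) are uncorrelated
# (Cov(U, V) = E[X^3] = 0) but clearly dependent, and both have nonzero mean.
import random

random.seed(0)
n = 200_000
xs = [random.gauss(0, 1) for _ in range(n)]
u = [1 + x for x in xs]
v = [1 + x * x for x in xs]

mu_u = sum(u) / n            # should be near E[U] = 1
mu_v = sum(v) / n            # should be near E[V] = 2
cov = sum((a - mu_u) * (b - mu_v) for a, b in zip(u, v)) / n
print(mu_u, mu_v, cov)       # covariance close to 0
```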
144,818
<p>Let $x_1,x_2,\ldots,x_n$ be $n$ real numbers that satisfy $x_1&lt;x_2&lt;\cdots&lt;x_n$. Define \begin{equation*} A=% \begin{bmatrix} 0 &amp; x_{2}-x_{1} &amp; \cdots &amp; x_{n-1}-x_{1} &amp; x_{n}-x_{1} \\ x_{2}-x_{1} &amp; 0 &amp; \cdots &amp; x_{n-1}-x_{2} &amp; x_{n}-x_{2} \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots &amp; \vdots \\ x_{n-1}-x_{1} &amp; x_{n-1}-x_{2} &amp; \cdots &amp; 0 &amp; x_{n}-x_{n-1} \\ x_{n}-x_{1} &amp; x_{n}-x_{2} &amp; \cdots &amp; x_{n}-x_{n-1} &amp; 0% \end{bmatrix}% \end{equation*}</p> <p>Could you determine the determinant of $A$ in term of $x_1,x_2,\ldots,x_n$?</p> <p>I make a several Calculation: For $n=2$, we get</p> <p>\begin{equation*} A=% \begin{bmatrix} 0 &amp; x_{2}-x_{1} \\ x_{2}-x_{1} &amp; 0% \end{bmatrix}% \text{ and}\det (A)=-\left( x_{2}-x_{1}\right) ^{2} \end{equation*}</p> <p>For $n=3$, we get</p> <p>\begin{equation*} A=% \begin{bmatrix} 0 &amp; x_{2}-x_{1} &amp; x_{3}-x_{1} \\ x_{2}-x_{1} &amp; 0 &amp; x_{3}-x_{2} \\ x_{3}-x_{1} &amp; x_{3}-x_{2} &amp; 0% \end{bmatrix}% \text{ and}\det (A)=2\left( x_{2}-x_{1}\right) \left( x_{3}-x_{2}\right) \left( x_{3}-x_{1}\right) \end{equation*}</p> <p>For $n=4,$ we get</p> <p>\begin{equation*} A=% \begin{bmatrix} 0 &amp; x_{2}-x_{1} &amp; x_{3}-x_{1} &amp; x_{4}-x_{1} \\ x_{2}-x_{1} &amp; 0 &amp; x_{3}-x_{2} &amp; x_{4}-x_{2} \\ x_{3}-x_{1} &amp; x_{3}-x_{2} &amp; 0 &amp; x_{4}-x_{3} \\ x_{4}-x_{1} &amp; x_{4}-x_{2} &amp; x_{4}-x_{3} &amp; 0% \end{bmatrix} \\% \text{ and} \\ \det (A)=-4\left( x_{4}-x_{1}\right) \left( x_{2}-x_{1}\right) \left( x_{3}-x_{2}\right) \left( x_{4}-x_{3}\right) \end{equation*} Finally, I guess that the answer is $\det(A)=2^{n-2}\cdot (x_n-x_1)\cdot (x_2-x_1)\cdots (x_n-x_{n-1})$. But I don't know how to prove it.</p>
Robert Israel
8,508
<p>Clearly the determinant is $0$ if $x_i = x_{i+1}$ (because two adjacent rows are identical) or $x_1 = x_n$ (last row is $-$ first row). So the determinant must be a polynomial divisible by $(x_1 - x_2)(x_2 - x_3) \ldots (x_{n-1} - x_n)(x_n - x_1)$. But the determinant has degree $n$, so it is a constant times this product. To determine what the constant is, you might try a special case: $x_i = i$.</p> <p>EDIT: Thanks to J.M.'s remark, you can show that in that special case the inverse of your matrix $A_n$ looks like this:</p> <p>$$ \pmatrix{ -\frac{1}{2}+\frac{1}{2n-2} &amp; \frac{1}{2} &amp; 0 &amp; 0 &amp; \ldots &amp; 0 &amp; \frac{1}{2n-2}\cr \frac{1}{2} &amp; -1 &amp; \frac{1}{2} &amp; 0 &amp; \ldots &amp; 0 &amp; 0\cr 0 &amp; \frac{1}{2} &amp; -1 &amp; \frac{1}{2} &amp; \ldots &amp; 0 &amp; 0\cr \ldots &amp; \ldots &amp; \ldots &amp; \ldots &amp; \ldots &amp; \ldots &amp; \ldots \cr 0 &amp; 0 &amp; 0 &amp; 0 &amp; \ldots &amp; -1 &amp; \frac{1}{2}\cr \frac{1}{2n-2} &amp; 0 &amp; 0 &amp; 0 &amp; \ldots &amp; \frac{1}{2} &amp; -\frac{1}{2} + \frac{1}{2n-2}\cr}$$ where the elements on the main diagonal are all $-1$ except for the first and last, those just above and below the diagonal are all $1/2$, the top right and bottom left are $1/(2n-2)$, and everything else is $0$.</p>
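<p>Carrying this out, the constant works out to $(-1)^{n-1}2^{n-2}$, so $\det(A)=(-1)^{n-1}2^{n-2}(x_n-x_1)\prod_{i=1}^{n-1}(x_{i+1}-x_i)$, consistent with the $n=2,3,4$ computations in the question. A hedged NumPy spot-check of that closed form (my own sketch):</p>

```python
# Compare det(A), with A[i][j] = |x_j - x_i|, against
# (-1)^(n-1) * 2^(n-2) * (x_n - x_1) * prod_i (x_{i+1} - x_i)
# on random increasing sequences.
import numpy as np

rng = np.random.default_rng(1)
for n in (2, 3, 4, 5, 6):
    x = np.sort(rng.uniform(0, 10, size=n))
    A = np.abs(x[None, :] - x[:, None])
    guess = (-1) ** (n - 1) * 2 ** (n - 2) * (x[-1] - x[0]) * np.prod(np.diff(x))
    assert np.isclose(np.linalg.det(A), guess)
print("closed form matches for n = 2..6")
```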
255,252
<p>Let $\mathfrak{S}_n$ be the permutation group on an $n$-element set. For each fixed $k\in\mathbb{N}$, consider the two sets $$A_n(k)=\{\sigma\in\mathfrak{S}_n\vert\,\, \text{$\exists i,\,\, 1\leq i\leq n\,$ such that $\,\sigma(i)-i=k$}\}$$ and $$B_n(k)=\{\sigma\in\mathfrak{S}_n\vert\,\, \text{$\exists i,\,\, 1\leq i\leq n\,$ such that $\,\sigma(i+1)-\sigma(i)=k$}\}.$$ </p> <blockquote> <p><strong>QUESTION.</strong> I believe the following is true, for each $n$ and $k$: $$\# A_n(k)=\# B_n(k).$$ Is there a <strong>combinatorial</strong> proof of this? If it is known, then can you provide references?</p> </blockquote>
Martin Rubey
3,032
<p>You can use sage and www.findstat.org to find a candidate for a bijection as follows. First define the statistics you are interested in:</p> <pre><code>def A_num(s, k): return len([1 for i,e in enumerate(s,1) if e-i==k])
def B_num(s, k): return len([1 for e,f in zip(s, s[1:]) if f-e==k])
</code></pre> <p>Then ask what findstat knows about them:</p> <pre><code>sage: findstat("Permutations", lambda s: A_num(s, 2), depth=3)
0: (St000534: The number of 2-rises of a permutation., [Mp00066: inverse, Mp00087: inverse first fundamental transformation, Mp00064: reverse], 200)
sage: findstat("Permutations", lambda s: B_num(s, 2), depth=3)
0: (St000534: The number of 2-rises of a permutation., [], 200)
sage: findstat("Permutations", lambda s: A_num(s, 1), depth=3)
0: (St000237: The number of indices $i$ such that $\pi_i=i+1$., [], 200)
1: (St000214: The number of adjacencies (or small descents) of a permutation., [Mp00066: inverse, Mp00087: inverse first fundamental transformation], 200)
2: (St000441: The number of successions (or small ascents) of a permutation., [Mp00066: inverse, Mp00087: inverse first fundamental transformation, Mp00064: reverse], 200)
sage: findstat("Permutations", lambda s: B_num(s, 1), depth=3)
0: (St000441: The number of successions (or small ascents) of a permutation., [], 200)
1: (St000214: The number of adjacencies (or small descents) of a permutation., [Mp00064: reverse], 200)
2: (St000237: The number of indices $i$ such that $\pi_i=i+1$., [Mp00064: reverse, Mp00086: first fundamental transformation, Mp00066: inverse], 200)
</code></pre> <p>So, this suggests that using the composition of the maps <a href="http://www.findstat.org/MapsDatabase/Mp00066" rel="nofollow noreferrer">http://www.findstat.org/MapsDatabase/Mp00066</a>, <a href="http://www.findstat.org/MapsDatabase/Mp00087" rel="nofollow noreferrer">http://www.findstat.org/MapsDatabase/Mp00087</a> and <a href="http://www.findstat.org/MapsDatabase/Mp00064" rel="nofollow noreferrer">http://www.findstat.org/MapsDatabase/Mp00064</a> might be a good idea. No guarantee, of course.</p>
66,951
<p>I am asked to find all rows in a matrix in reduced row echelon form which contain nothing but pivots (pivot is $1$, all other entries are $0$).</p> <p>For example, in this matrix:</p> <p>$$ \begin{bmatrix} 1 &amp; 1 &amp; 1 &amp; 1 \\ 0 &amp; 1 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1 \end{bmatrix} \sim \begin{bmatrix} \color{red}1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; \color{red}1 \end{bmatrix} $$ the rows whose pivots are marked in red are such rows.</p> <p>I wrote the following code:</p> <pre><code>matrix = {{1, 1, 1, 1}, {0, 1, 1, 0}, {0, 0, 0, 1}}; (* Same example. *)
reduced = RowReduce[matrix]
For[i = 1, i &lt;= Length[reduced], ++i,
  row = reduced[[i]];
  onlyPivot = False;
  Clear[pivot];
  For[j = 1, j &lt;= Length[row], ++j,
    If[And[row[[j]] != 0, Not[ValueQ[pivot]]],
      pivot = row[[j]]; onlyPivot = True,
      If[And[row[[j]] != 0, ValueQ[pivot]],
        onlyPivot = False
      ]
    ];
  ];
  Print[onlyPivot];
];
</code></pre> <p>As you can notice, this is very... C-like (at least it works), and probably very inefficient. Is there a better way to do this in Mathematica? What should I be looking into?</p>
bill s
1,783
<p>I'm sure there are lots of things one might try -- here is an attempt using some of the built in options that are available in ListStreamPlot:</p> <pre><code>Show[Graphics[{Black, CountryData["USA", "Polygon"]}], ListStreamPlot[ Table[{{x, y}, Through[{Cos, Sin}[ WeatherData[{y, x}, "WindDirection"]]]}, {x, -140, -70, 5}, {y, 24, 50, 5}], StreamScale -&gt; Full, StreamStyle -&gt; "Toothpick", StreamColorFunction -&gt; GrayLevel, StreamColorFunctionScaling -&gt; False]] </code></pre> <p><img src="https://i.stack.imgur.com/hbVcK.png" alt="enter image description here"></p>
4,288,188
<p>I am trying to obtain a formulae for a summation problem under section (d) given in a solutions manual for &quot;Data Structures and Algorithm Analysis in C - Mark Allen Weiss&quot;, here's the screen shot</p> <p><a href="https://i.stack.imgur.com/fMie6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fMie6.png" alt="enter image description here" /></a></p> <p>As the pdf is protected i could not download it. Here's my attempt at it. Let <span class="math-container">$S_{N} = \sum_{i=0}^\infty \frac{i^N}{4^i}$</span>, then starting from <span class="math-container">$N = 4$</span> we have <span class="math-container">$S_{0} = \frac{4}{3}, S_{1} = \frac{4}{9} , S_{2} = \sum_{i=0}^\infty \frac{2i + 1}{3*4^i}, S_{3} = \sum_{i=0}^\infty \frac{3i^2 + 3i + 1}{3*4^i},S_{4} = \sum_{i=0}^\infty \frac{4i^3 + 6i^2 + 4i + 1}{3*4^i}$</span>.</p> <p>Using recursion, we have <span class="math-container">$S_{0} = \frac{4}{3}, S_{1} = \frac{1}{3}S_{0} , S_{2} = \frac{2S_{1} + S_{0}}{3} = \frac{5}{9}S_{0} , S_{3} = \frac{3S_{2} + 3S_{1} + S_{0}}{3} = \frac{11}{9}S_{0}, S_{4} = \frac{4S_{3} + 6S_{2} + 4S_{1} + S_{0}}{3} = \frac{95}{27}S_{0}$</span>.</p> <p>After cleaning up further we have <span class="math-container">$$S_{0} = \frac{4}{3}, S_{1} = \frac{1}{3}S_{0} , S_{2} = \frac{5}{9}S_{0} , S_{3} = \frac{11}{9}S_{0}, S_{4} = \frac{95}{27}S_{0}$$</span> I dont see a pattern emerging to make a formulae.I may be doing something wrong, any help is greatly appreciated.</p>
DinosaurEgg
535,606
<p>I agree with the answer by @ThomasAndrews in that this recursion is difficult to solve directly, although I have a feeling it can be solved using finite difference calculus, since there is at least one solution to it as I demonstrate below.</p> <p>Define the function</p> <p><span class="math-container">$$F_N(e^x)=\sum_{n=0}^{\infty}n^N e^{nx}=\frac{d^N}{d x^N }\left(\frac{1}{1-e^x}\right)$$</span></p> <p>We will attempt to explicitly evaluate the derivatives. We get extremely lucky because of the occurrence of <span class="math-container">$e^x$</span> which massively reduces <a href="https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno%27s_formula" rel="nofollow noreferrer">Faa di Bruno's</a> formula for <span class="math-container">$f(x)=1/(1-x), g(x)=e^x$</span> in terms of the Bell polynomial to:</p> <p><span class="math-container">$$F_N(e^x)=\sum_{k=1}^N f^{(k)}(e^x)B_{N,k}(e^x,..., e^x)$$</span></p> <p>Note that since <span class="math-container">$B_{N,k}(x,...,x)=S(N,k)x^k$</span> - where <span class="math-container">$S(N,k)$</span> are the Stirling numbers of the 2nd kind - we can express <span class="math-container">$F_N(x)$</span> concisely as follows:</p> <p><span class="math-container">$$F_N(x)=\frac{1}{1-x}\sum_{k=1}^N k!S(N,k)\left(\frac{x}{1-x}\right)^k$$</span></p> <p>Which shows that <span class="math-container">$\frac{3}{4}F_N(1/4)=\sum_{k=1}^N k!S(N,k)\left(\frac{1}{3}\right)^k$</span>.</p> <p><strong>EDIT:</strong> It turns out that there is a simple way to solve the recursion formula directly, by using generating functions. The recursion relation for arbitrary <span class="math-container">$x$</span> reads</p> <p><span class="math-container">$$\frac{1-x}{x}F_N(x)=\sum_{k=0}^{N-1}{N\choose k} F_k(x)$$</span></p> <p>Now multiply by <span class="math-container">$y^N/N!$</span> and sum for <span class="math-container">$N\geq 1$</span>. 
Define <span class="math-container">$S(y;x)=\sum_{N=1}^{\infty}F_N(x) y^N/N!$</span>. Now it is easy to show that the recursion relation as posed leads to the following generating function</p> <p><span class="math-container">$$S(y;x)=F_0(x)\frac{\frac{x}{1-x}(e^y-1)}{1-\frac{x}{1-x}(e^y-1)}~~,~~ F_0(x)=(1-x)^{-1}$$</span></p> <p>Now expand in powers of <span class="math-container">$e^y-1$</span>, use the fact that <span class="math-container">$$(e^{y}-1)^k=k!\sum_{n=k}^{\infty}\frac{y^n}{n!}S(n,k)$$</span> and exchange the order of summation to obtain the desired result.</p>
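<p>A short Python check (my own sketch; the Stirling numbers are generated by the standard recurrence $S(n,k)=k\,S(n-1,k)+S(n-1,k-1)$) confirms that the closed form matches the directly truncated series at $x=1/4$, and reproduces the asker's $S_4=\frac{95}{27}S_0=\frac{380}{81}$:</p>

```python
# Verify F_N(x) = 1/(1-x) * sum_{k=1}^N k! S(N,k) (x/(1-x))^k against
# the truncated series sum_n n^N x^n, at x = 1/4.
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def S(n, k):
    """Stirling numbers of the second kind."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

def closed_form(N, x):
    r = x / (1 - x)
    return sum(factorial(k) * S(N, k) * r ** k for k in range(1, N + 1)) / (1 - x)

def direct_sum(N, x, terms=400):
    return sum(n ** N * x ** n for n in range(terms))

for N in range(1, 6):
    assert abs(closed_form(N, 0.25) - direct_sum(N, 0.25)) < 1e-9
print(closed_form(4, 0.25))  # = 380/81
```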
1,158,489
<p>Is it the case that, as $N\to\infty$, $$\binom{2N}{N+j}_q\to (-1)^j,$$ where convergence of the $q$-binomial coefficient is seen as a convergence of formal power series in the variable $q$? </p>
Johann Cigler
25,649
<p>Let ${\left( {a;q} \right)_n}=\prod\limits_{j = 0}^{n - 1} {(1-{q^j}a} ).$</p> <p>Then $ {2n\brack {n+j}}=\frac{(q;q)_{2n}}{(q;q)_{n+j}(q;q)_{n-j}}$ converges to $\frac{1}{(q;q)_\infty}=\sum\limits_{n \ge 0} {p(n){q^n}}=1+q+2q^2+3q^3+5q^4+ \dots,$ where $p(n)$ is the number of partitions of $n$.</p>
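<p>One can watch the coefficients stabilise with a short Python sketch (my own; it builds the Gaussian binomial by the recurrence ${n\brack k}={n-1\brack k-1}+q^k{n-1\brack k}$ and compares low-order coefficients with the partition numbers $p(0),p(1),\ldots=1,1,2,3,5,7,\ldots$):</p>

```python
def qbinom(n, k):
    """Coefficient list of the Gaussian binomial [n, k]_q."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    a = qbinom(n - 1, k - 1)
    b = [0] * k + qbinom(n - 1, k)        # multiplication by q^k
    m = max(len(a), len(b))
    a = a + [0] * (m - len(a))
    b = b + [0] * (m - len(b))
    return [s + t for s, t in zip(a, b)]

def partitions(m):
    """p(0), ..., p(m) by the classic coin-change recurrence."""
    p = [1] + [0] * m
    for part in range(1, m + 1):
        for s in range(part, m + 1):
            p[s] += p[s - part]
    return p

n, j = 8, 2
coeffs = qbinom(2 * n, n + j)
print(coeffs[:6])     # low-order coefficients of [16, 10]_q
print(partitions(5))  # partition numbers p(0)..p(5)
```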
1,529,324
<p>I've read that if $\Phi$ is a Poisson point process (on $\mathbb{R}^d$, say), then conditional on there being $k$ points in some $A \subseteq \mathbb{R}^d$, the positions $X_1,\ldots,X_k$ of these points are uniformly distributed in $A$.</p> <p>I'm having trouble making sense of what this means. "Conditional on $\Phi(A)=k$" I guess means to consider the process $\Phi 1_{\Phi(A)=k}$ and then divide probabilities by $P(\Phi(A)=k)$. But probabilities of what exactly? How am I labeling the points $X_1,\ldots,X_k$? In $\mathbb{R}$, if I did so by $X_1&lt; X_2 &lt; \cdots &lt; X_k$ then clearly they are not uniformly distributed, so clearly the way that I label them matters. Hence my question: what is meant by saying $X_1,\ldots,X_k$ are uniformly distributed? </p>
Michael Hardy
11,667
<p>For every measurable set $A\subseteq\mathbb R^d$ of finite measure, and every measurable set $B\subseteq A$, let $p$ be the conditional probability that the number of sites in $B$ is $\ell$, given that the number of sites in $A$ is $k$.</p> <p>Suppose $X_1,\ldots,X_k\sim\text{i.i.d. Uniform}(A)$. Let $q$ be the probability that $|\{ X_1,\ldots,X_k \} \cap B| = \ell$.</p> <p>Then, regardless of which sets are $A$ and $B$ and which numbers are $k$ and $\ell$, we have $p=q$.</p> <p>In other words, the probability distribution of the number of points falling in $B$ given the number in $A$, is always the same in either of those two scenarios.</p>
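<p>This is easy to see in a simulation built only from the defining property of the process (independent Poisson counts on disjoint regions); the concrete choices of $A$, $B$, intensity and $k$ below are mine:</p>

```python
# Split A into two halves B and A\B of equal measure, so each carries an
# independent Poisson(2.5) count.  Conditional on the total being k = 4,
# the count in B should be Binomial(4, 1/2) -- exactly as for k i.i.d.
# uniform points in A.
import math
import random

random.seed(42)

def poisson(lam):
    """Knuth's method for a Poisson(lam) draw."""
    L, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= random.random()
        if p < L:
            return n
        n += 1

lam_half, k, wanted = 2.5, 4, 20_000
freq = [0] * (k + 1)
got = 0
while got < wanted:
    nB, nBc = poisson(lam_half), poisson(lam_half)
    if nB + nBc != k:
        continue                  # condition on Phi(A) = k
    freq[nB] += 1
    got += 1

emp = [f / wanted for f in freq]
binom = [math.comb(k, i) / 2 ** k for i in range(k + 1)]
print([round(e, 3) for e in emp])
print(binom)  # [0.0625, 0.25, 0.375, 0.25, 0.0625]
```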
3,624,662
<p><a href="https://i.stack.imgur.com/kwAMn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kwAMn.png" alt=" c"></a></p> <p>In my mind, I can think of the example below, which seems to work.</p> <p>If <span class="math-container">$(X,T) = \mathbb{R}$</span>, and <span class="math-container">$A = (0,\infty)$</span>, then as far as I know it is open in the standard (order) topology of <span class="math-container">$\mathbb{R}$</span>, but what I don't know is whether it will be a proper closed subset of <span class="math-container">$\mathbb{R}$</span>. If it works, then it will be perfect because for every positive integer <span class="math-container">$i$</span>, if I let <span class="math-container">$O_i$</span> be the open interval <span class="math-container">$(0,i)$</span>, then clearly <span class="math-container">$A \subset \bigcup_{i=1}^{\infty} O_i$</span>, </p> <p>however there do not exist <span class="math-container">$i_1, i_2,\ldots, i_n$</span> such that <span class="math-container">$A \subset (0,i_1)\cup(0,i_2)\cup\cdots\cup(0,i_n)$</span>; therefore, by the definition of compactness, we can see that we don't have a finite sub-cover, so it's not compact.</p> <p>Kindly check my proof, let me know if there is anything wrong. Also, give it better style and notation if required. </p>
José Carlos Santos
446,262
<p>The set <span class="math-container">$(0,\infty)$</span> is not closed. But <span class="math-container">$[0,\infty)$</span> will work. For instance, the sequence <span class="math-container">$1,2,3,\ldots$</span> has no convergent subsequence. Or you can say that <span class="math-container">$\{[0,n)\mid n\in\mathbb N\}$</span> is an open cover with no finite subcover.</p>
1,132,003
<blockquote> <p><strong>Problem</strong> Find the value of $$\frac{1}{\sqrt 1 + \sqrt 3} + \frac 1 {\sqrt 3 + \sqrt 5} + \dots + \frac 1 {\sqrt {1087} + \sqrt{1089}}$$</p> </blockquote> <p>I can't figure out how to solve this problem. I can't use summation.</p>
jameselmore
86,570
<p>To complement the other answer: </p> <p><strong>Hint:</strong></p> <p>$$\frac12(\sqrt{n+4} - \sqrt{n+2}) + \frac12(\sqrt{n+2} - \sqrt{n}) = \frac12(\sqrt{n+4} - \sqrt{n}) + \frac12(\sqrt{n+2} - \sqrt{n+2})$$ $$ = \frac12(\sqrt{n+4} - \sqrt{n})$$</p> <p>A rearrangement of terms shows that they collapse</p> <p>EDIT: Let $a_n = \frac{1}{\sqrt{2n-1}+\sqrt{2n+1}}$ where $n\in\{1,2,3,...,544\}$<br> (notice $n = 544 \implies2n+1 = 1089$)</p> <p>$$a_n = \frac{1}{\sqrt{2n-1}+\sqrt{2n+1}} = \frac{1}{\sqrt{2n-1}+\sqrt{2n+1}}\frac{\sqrt{2n-1}-\sqrt{2n+1}}{\sqrt{2n-1}-\sqrt{2n+1}}$$ $$=\frac{\sqrt{2n-1}-\sqrt{2n+1}}{(2n-1) - (2n + 1)} = \frac12(\sqrt{2n+1} - \sqrt{2n-1})$$</p> <p>And we try to evaluate $$\sum_{n=1}^{544} a_n = \sum_{n=1}^{544}\frac12(\sqrt{2n+1} - \sqrt{2n-1})$$ writing out a few terms we see: $$2\sum_{n=1}^{544} a_n = (\sqrt3 - \sqrt1) + (\sqrt{5} - \sqrt3) + ... +(\sqrt{1087} - \sqrt{1085})+(\sqrt{1089} - \sqrt{1087})$$ $$=\sqrt{1089} + (\sqrt{1087} - \sqrt{1087}) + (\sqrt{1085} - \sqrt{1085}) +...+(\sqrt3 -\sqrt3) - \sqrt1 $$</p> <p>So $$\sum_{n=1}^{544} a_n = \frac12(\sqrt{2(544) + 1} - 1) = \frac12(\sqrt{1089} - 1) = 16$$</p>
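<p>A one-line numerical check of the final value (my addition, in Python):</p>

```python
# Sum 1/(sqrt(2n-1) + sqrt(2n+1)) for n = 1..544; the telescoping argument
# says this equals (sqrt(1089) - 1)/2 = 16.
import math

total = sum(1 / (math.sqrt(2 * n - 1) + math.sqrt(2 * n + 1)) for n in range(1, 545))
print(total)  # approximately 16
```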
1,356,545
<p>Given a fair 6-sided die, how can we simulate a biased coin with P(H)= 1/$\pi$ and P(T) = 1 - 1/$\pi$ ?</p>
lulu
252,071
<p>Well, here's an (approximate) way to do it: write P as a base-$6$ (senary) expansion. Then toss your die repeatedly, forming a random base-$6$ expansion (if you throw a "6", record it as "0"). A Win comes if the expansion produced by the die is less than P, i.e. if the first die digit that differs from the corresponding digit of P is smaller than it.</p> <p>Of course, it is theoretically possible that this takes an arbitrarily long set of tosses (hence the "approximate").</p>
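<p>Here is a Python sketch of the scheme (my own implementation; floating point limits us to about 20 base-6 digits of $1/\pi$, so the comparison is wrong only with probability about $6^{-20}$, which is the "approximate" part):</p>

```python
# Roll a fair die to generate random base-6 digits and declare Heads iff the
# random expansion is lexicographically below the base-6 expansion of 1/pi.
import math
import random

random.seed(7)

def digits_base6(x, k):
    """First k base-6 digits of x in [0, 1)."""
    out = []
    for _ in range(k):
        x *= 6
        d = int(x)
        out.append(d)
        x -= d
    return out

P = digits_base6(1 / math.pi, 20)   # target probability 1/pi

def biased_flip():
    for target in P:
        d = random.randrange(6)     # a die roll, with a thrown "6" recorded as 0
        if d < target:
            return True             # Heads
        if d > target:
            return False            # Tails
    return False                    # tie through all 20 digits: ~6^-20, ignored

trials = 200_000
heads = sum(biased_flip() for _ in range(trials))
print(heads / trials, 1 / math.pi)  # the two should be close
```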
365,631
<p>Suppose we want to prove that among some collection of things, at least one of them has some desirable property. Sometimes the easiest strategy is to equip the collection of all things with a measure, then show that the set of things with the desired property has positive measure. Examples of this strategy appear in many parts of mathematics.</p> <blockquote> <p><strong>What is your favourite example of a proof of this type?</strong></p> </blockquote> <p>Here are some examples:</p> <ul> <li><p><strong>The probabilistic method in combinatorics</strong> As I understand it, a typical pattern of argument is as follows. We have a set <span class="math-container">$X$</span> and want to show that at least one element of <span class="math-container">$X$</span> has property <span class="math-container">$P$</span>. We choose some function <span class="math-container">$f: X \to \{0, 1, \ldots\}$</span> such that <span class="math-container">$f(x) = 0$</span> iff <span class="math-container">$x$</span> satisfies <span class="math-container">$P$</span>, and we choose a probability measure on <span class="math-container">$X$</span>. Then we show that with respect to that measure, <span class="math-container">$\mathbb{E}(f) &lt; 1$</span>. It follows that <span class="math-container">$f^{-1}\{0\}$</span> has positive measure, and is therefore nonempty.</p> </li> <li><p><strong>Real analysis</strong> One example is <a href="http://www.artsci.kyushu-u.ac.jp/%7Essaito/eng/maths/Cauchy.pdf" rel="noreferrer">Banach's proof</a> that any measurable function <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span> satisfying Cauchy's functional equation <span class="math-container">$f(x + y) = f(x) + f(y)$</span> is linear. 
Sketch: it's enough to show that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>, since then it follows from additivity that <span class="math-container">$f$</span> is continuous everywhere, which makes it easy. To show continuity at <span class="math-container">$0$</span>, let <span class="math-container">$\varepsilon &gt; 0$</span>. An argument using Lusin's theorem shows that for all sufficiently small <span class="math-container">$x$</span>, the set <span class="math-container">$\{y: |f(x + y) - f(y)| &lt; \varepsilon\}$</span> has positive Lebesgue measure. In particular, it's nonempty, and additivity then gives <span class="math-container">$|f(x)| &lt; \varepsilon$</span>.</p> <p>Another example is the existence of real numbers that are <a href="https://en.wikipedia.org/wiki/Normal_number" rel="noreferrer">normal</a> (i.e. normal to every base). It was shown that almost all real numbers have this property well before any specific number was shown to be normal.</p> </li> <li><p><strong>Set theory</strong> Here I take ultrafilters to be the notion of measure, an ultrafilter on a set <span class="math-container">$X$</span> being a finitely additive <span class="math-container">$\{0, 1\}$</span>-valued probability measure defined on the full <span class="math-container">$\sigma$</span>-algebra <span class="math-container">$P(X)$</span>. Some existence proofs work by proving that the subset of elements with the desired property has measure <span class="math-container">$1$</span> in the ultrafilter, and is therefore nonempty.</p> <p>One example is a proof that for every measurable cardinal <span class="math-container">$\kappa$</span>, there exists some inaccessible cardinal strictly smaller than it. Sketch: take a <span class="math-container">$\kappa$</span>-complete ultrafilter on <span class="math-container">$\kappa$</span>. 
Make an inspired choice of function <span class="math-container">$\kappa \to \{\text{cardinals } &lt; \kappa \}$</span>. Push the ultrafilter forwards along this function to give an ultrafilter on <span class="math-container">$\{\text{cardinals } &lt; \kappa\}$</span>. Then prove that the set of inaccessible cardinals <span class="math-container">$&lt; \kappa$</span> belongs to that ultrafilter (&quot;has measure <span class="math-container">$1$</span>&quot;) and conclude that, in particular, it's nonempty.</p> <p>(Although it has a similar flavour, I would <em>not</em> include in this list the cardinal arithmetic proof of the existence of transcendental real numbers, for two reasons. First, there's no measure in sight. Second -- contrary to popular belief -- this argument leads to an <em>explicit construction</em> of a transcendental number, whereas the other arguments on this list do not explicitly construct a thing with the desired properties.)</p> </li> </ul> <p>(Mathematicians being mathematicians, someone will probably observe that <em>any</em> existence proof can be presented as a proof in which the set of things with the required property has positive measure. Once you've got a thing with the property, just take the Dirac delta on it. But obviously I'm after less trivial examples.)</p> <p><strong>PS</strong> I'm aware of the earlier question <a href="https://mathoverflow.net/questions/34390">On proving that a certain set is not empty by proving that it is actually large</a>. That has some good answers, a couple of which could also be answers to my question. But my question is specifically focused on <em>positive measure</em>, and excludes things like the transcendental number argument or the Baire category theorem discussed there.</p>
Terry Tao
766
<p>The <a href="https://en.wikipedia.org/wiki/Chevalley%E2%80%93Warning_theorem" rel="noreferrer">Chevalley-Warning theorem</a> asserts that if a system of polynomial equations in <span class="math-container">$r$</span> variables over a finite field of characteristic <span class="math-container">$p$</span> has total degree less than <span class="math-container">$r$</span>, then the number of solutions to this system is a multiple of <span class="math-container">$p$</span>.</p> <p>An immediate corollary of this is <em>Chevalley's theorem</em>: if such a system of polynomials has a &quot;trivial&quot; solution (often this is the origin <span class="math-container">$(0,\dots,0)$</span>), then it must necessarily have a non-trivial solution as well. This is often applied for instance as part of the &quot;polynomial method&quot; in combinatorics.</p>
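<p>The divisibility is easy to check by brute force for a small instance (my own sketch; the field $\mathbb F_5$ and the form $x^2+y^2+z^2$, of degree $2$ in $3$ variables, are illustrative choices):</p>

```python
# Chevalley-Warning: a polynomial system over F_p of total degree less than
# the number of variables has a number of solutions divisible by p.
# Check x^2 + y^2 + z^2 = 0 over F_5.
from itertools import product

p, r = 5, 3
f = lambda x, y, z: (x * x + y * y + z * z) % p
solutions = sum(1 for v in product(range(p), repeat=r) if f(*v) == 0)
print(solutions, solutions % p)  # the count is a multiple of 5
```

Since $(0,0,0)$ is a solution, Chevalley's theorem then guarantees non-trivial solutions, which the count confirms.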
3,068,031
<blockquote> <p>Let <span class="math-container">$G$</span> be a group and <span class="math-container">$H$</span> be a subgroup of <span class="math-container">$G$</span>. Let also <span class="math-container">$a,~b\in G$</span> such that <span class="math-container">$ab\in H$</span>.</p> <p>True or false? <span class="math-container">$a^2b^2\in H.$</span></p> </blockquote> <p><em>Attempt.</em> I believe the answer is no (I have proved that the statement is true for normal subgroups, but it seems that it need not hold for arbitrary subgroups). I was looking for a counterexample in a non-abelian group of small order, such as <span class="math-container">$S_3$</span> or <span class="math-container">$S_4$</span>, but I couldn't find a suitable combination of <span class="math-container">$H\leq S_n$</span>, <span class="math-container">$\sigma$</span> and <span class="math-container">$\tau\in S_n$</span> such that <span class="math-container">$\sigma \tau \in H$</span> and <span class="math-container">$\sigma^2 \tau^2 \notin H.$</span></p> <p>Thanks in advance for the help.</p>
Aphelli
556,825
<p>Let <span class="math-container">$u \in G$</span>, <span class="math-container">$v \in H$</span>.</p> <p>Take <span class="math-container">$a=u$</span>, <span class="math-container">$b=u^{-1}v$</span>. Then <span class="math-container">$ab \in H$</span>. </p> <p>Moreover, <span class="math-container">$a^2b^2=uvu^{-1}v$</span>, thus <span class="math-container">$a^2b^2 \in H \Leftrightarrow uvu^{-1}v \in H \Leftrightarrow uvu^{-1} \in H$</span>. </p> <p>Thus if <span class="math-container">$H$</span> is not normal, the property does not hold.</p>
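<p>To see the counterexample concretely, take $G=S_3$, $H=\{e,(0\,1)\}$, $u$ a 3-cycle and $v=(0\,1)$; a few lines of Python (my own sketch, with permutations stored as tuples of images) confirm $ab\in H$ while $a^2b^2\notin H$:</p>

```python
# Permutations of {0,1,2} as tuples: p[i] is the image of i.
def compose(p, q):
    """(p*q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

e = (0, 1, 2)
u = (1, 2, 0)                        # the 3-cycle 0 -> 1 -> 2 -> 0
v = (1, 0, 2)                        # the transposition (0 1)
H = {e, v}                           # a non-normal subgroup of S_3

a, b = u, compose(inverse(u), v)     # the construction from the answer
ab = compose(a, b)
a2b2 = compose(compose(a, a), compose(b, b))
print(ab in H, a2b2 in H)            # True False
```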
3,110,508
<p>I read that an implication like a=>b can be proved using the following steps: 1) Suppose a is true. 2) Deduce b from a. 3) Conclude that a=>b is true.</p> <p>My real problem is to understand why steps 1 and 2 are sufficient to prove that a=>b is true. I mean, how can you prove the truth table of a=>b just using 1 and 2? I know that the implication a=>b is actually defined as "not(a) or b". How can steps 1 and 2 prove that a and b are related like "not(a) or b"?</p>
cansomeonehelpmeout
413,677
<p>Maybe not easier, but given <span class="math-container">$$\frac{1-z^n}{1-z}=\sum_{i=0}^{n-1}z^i\\1-z^n=(1-z)\sum_{i=0}^{n-1}z^i\\$$</span> let <span class="math-container">$z=\frac{y}{x}$</span>, then <span class="math-container">$$1-\left(\frac{y}{x}\right)^n=\left(1-\frac{y}{x}\right)\sum_{i=0}^{n-1}y^ix^{-i}$$</span> multiplying by <span class="math-container">$x^n$</span> yields <span class="math-container">$$x^n-y^n=(x-y)\sum_{i=0}^{n-1}y^ix^{n-1-i}$$</span></p>
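<p>The final identity is exact over the integers, so a brute-force grid check (my addition) is a convincing sanity test:</p>

```python
# Check x^n - y^n == (x - y) * sum_{i=0}^{n-1} y^i x^(n-1-i) exactly
# for small integers; everything here is integer arithmetic.
for n in range(1, 10):
    for x in range(-5, 6):
        for y in range(-5, 6):
            rhs = (x - y) * sum(y ** i * x ** (n - 1 - i) for i in range(n))
            assert x ** n - y ** n == rhs
print("identity verified for n = 1..9 on a grid")
```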
2,933,375
<p>I have a set of vectors, <span class="math-container">$M_1$</span> which is defined as the following: <span class="math-container">$$M_1:=[\begin{pmatrix}1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix}0 \\ 1 \\ 1 \end{pmatrix}]$$</span> I have to show that <span class="math-container">$M_1$</span> isn't a generating set of <span class="math-container">$\mathbb R^3$</span>, even though it's linearly independent. My initial idea was that, because <span class="math-container">$$\begin{pmatrix}1 \\ 0 \\ 0 \end{pmatrix}≠a\cdot\begin{pmatrix}1 \\ 0 \\ 1 \end{pmatrix}+ b \cdot\begin{pmatrix}0 \\ 1 \\ 1 \end{pmatrix}$$</span> therefore <span class="math-container">$M_1$</span> isn't a generating set of <span class="math-container">$\mathbb R^3$</span>. However I would like to know if there is any other way to show that <span class="math-container">$M_1$</span> is not a generating set of <span class="math-container">$\mathbb R^3$</span>.</p>
Dan Christensen
3,515
<p>There are no universally accepted standards in this case. You really must get used to slight variations in notation from one author to the next.</p> <p>To add to the confusion, I often write <span class="math-container">$\forall x: [x \in N \implies P(x)]$</span> and <span class="math-container">$\exists x: [x \in N \land P(x)]$</span>. It uses more symbols but it is often easier to work with than other notations. Tricky proofs in logic for the beginner, can often be made much simpler using this notation. </p> <hr> <p><strong>Warning:</strong> Avoid using <span class="math-container">$\exists x: [ x\in N \implies P(x)]$</span>. Very weird things can happen. Using ordinary set theory here, this statement will be true for any set <span class="math-container">$N$</span> and for even the most nonsensical proposition <span class="math-container">$P(x).$</span></p>
3,572,842
<p><strong>Context:</strong> 1st year BSc Mathematics, Vectors and Mechanics module, constant circular motion.</p> <p>This may be trivial, but can someone tell me what's wrong with the following reasoning?</p> <p><span class="math-container">$$\underline{e_r}=\underline{i}\cos\theta+\underline{j}\sin\theta=(1,\theta) \;\;(1);$$</span> <span class="math-container">$$\underline{e_\theta}=\frac{d(\underline{e_r})}{d\theta}=-\underline{i}\sin\theta+\underline{j}\cos\theta \;\;(2);$$</span> so <span class="math-container">$$(1),(2):\;\; \underline{e_\theta}=\frac{d}{d\theta}((1,\theta))=(1,\frac{d\theta}{d\theta})=(1,1) \;\;(3),$$</span> so <span class="math-container">$$(2),(3): \;\; (1,1)=-\underline{i}\sin\theta+\underline{j}\cos\theta \;\; (4),$$</span> an undesirable conclusion.</p>
Will Jagy
10,400
<p>I have been running some programs. It seems that the break-even point, where the possible values of your <span class="math-container">$a+b$</span> are half prime and half composite, is at <span class="math-container">$$ a+b &lt; 1736495 \; , \; $$</span> a number between one million and two million. I'm impressed. There seems to be a little wobble: up to 1,740,000, I think sometimes there are more primes, sometimes more composites. I guess I know some good ways to investigate that a bit more. </p> <p>The following may or may not make any sense, but shows that we can take a + b &lt; 1736495 as our break-even point.</p> <pre><code>jagy@phobeusjunior:~$ head -130400 mse.txt | grep P | wc
65208 260832 1976749
jagy@phobeusjunior:~$ head -130500 mse.txt | grep P | wc
65252 261008 1978113
jagy@phobeusjunior:~$ head -130600 mse.txt | grep P | wc
65298 261192 1979539
jagy@phobeusjunior:~$ head -130510 mse.txt | grep P | wc
65255 261020 1978206
jagy@phobeusjunior:~$
jagy@phobeusjunior:~$ head -130510 mse.txt | tail
1736329 = 7 * 17 * 14591
1736369 = 1736369 P
1736393 = 1736393 P
1736399 = 7 * 248057
1736407 = 353 * 4919
1736417 = 1736417 P
1736431 = 17 * 23 * 4441
1736441 = 7 * 248063
1736473 = 41^2 * 1033
1736489 = 1009 * 1721
jagy@phobeusjunior:~$
jagy@phobeusjunior:~$
jagy@phobeusjunior:~$ head -130515 mse.txt | tail
1736417 = 1736417 P
1736431 = 17 * 23 * 4441
1736441 = 7 * 248063
1736473 = 41^2 * 1033
1736489 = 1009 * 1721
1736497 = 7 * 248071
1736519 = 1736519 P
1736551 = 1097 * 1583
1736561 = 337 * 5153
1736567 = 7 * 17 * 14593
jagy@phobeusjunior:~$
</code></pre> <p>ORIGINAL:</p> <p>The number you are asking about, for a primitive Pythagorean triple, is <span class="math-container">$$ n^2 + 2nm - m^2 $$</span> when <span class="math-container">$\gcd(m,n) = 1$</span> and they are not both odd. 
The usual way to talk about this is to take integers <span class="math-container">$x,y$</span> with <span class="math-container">$x = n + m$</span> and <span class="math-container">$y = m,$</span> so we still have <span class="math-container">$\gcd(x,y) = 1$</span> and now <span class="math-container">$x$</span> is odd. Finally <span class="math-container">$$ a+b = x^2 - 2 y^2 . $$</span> Since <span class="math-container">$x,y$</span> are coprime, and <span class="math-container">$x$</span> is odd, this number can be divisible only by primes <span class="math-container">$$ p \equiv \pm 1 \pmod 8. $$</span> The first few such primes are <span class="math-container">$$7, 17 , 23 , 31, 41, 47, 71 , 73, 79, 89, 97, 103, 113, 127, 137, 151, 167, 191, 193, 199, 223, ... $$</span> The two smallest products of these primes are <span class="math-container">$49$</span> and <span class="math-container">$119.$</span> You have seen both. Those are <strong>as small as possible</strong>. </p> <pre><code> Primitively represented odd positive integers up to 600 and greater than 1 7 = 7 17 = 17 23 = 23 31 = 31 41 = 41 47 = 47 49 = 7^2 71 = 71 73 = 73 79 = 79 89 = 89 97 = 97 103 = 103 113 = 113 119 = 7 * 17 127 = 127 137 = 137 151 = 151 161 = 7 * 23 167 = 167 191 = 191 193 = 193 199 = 199 217 = 7 * 31 223 = 223 233 = 233 239 = 239 241 = 241 257 = 257 263 = 263 271 = 271 281 = 281 287 = 7 * 41 289 = 17^2 311 = 311 313 = 313 329 = 7 * 47 337 = 337 343 = 7^3 353 = 353 359 = 359 367 = 367 383 = 383 391 = 17 * 23 401 = 401 409 = 409 431 = 431 433 = 433 439 = 439 449 = 449 457 = 457 463 = 463 479 = 479 487 = 487 497 = 7 * 71 503 = 503 511 = 7 * 73 521 = 521 527 = 17 * 31 529 = 23^2 553 = 7 * 79 569 = 569 577 = 577 593 = 593 599 = 599 Primitively represented odd positive integers up to 600 and greater than 1 1 0 -2 original form </code></pre>
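A quick brute-force check of these claims (a Python sketch; the standard parametrization $a=n^2-m^2$, $b=2nm$ of primitive triples is assumed, and the helper names are my own):

```python
from math import gcd

def prime_factors(n):
    """Trial-division factorization; returns the set of prime divisors."""
    factors = set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def check_sums(limit):
    """For primitive triples a = n^2 - m^2, b = 2nm, verify that
    a + b = x^2 - 2y^2 with x = n + m, y = m, and that every prime
    factor of a + b is congruent to +-1 mod 8."""
    for n in range(2, limit):
        for m in range(1, n):
            if gcd(n, m) != 1 or (n - m) % 2 == 0:
                continue  # not a generator of a primitive triple
            a, b = n * n - m * m, 2 * n * m
            x, y = n + m, m
            s = a + b
            if s != x * x - 2 * y * y:
                return False
            if any(p % 8 not in (1, 7) for p in prime_factors(s)):
                return False
    return True
```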
393,712
<p>I studied elementary probability theory, for which density functions were enough. What is the practical necessity of developing measure theory? What is a problem that cannot be solved using elementary density functions?</p>
John Douma
69,810
<p>Measure theory goes beyond probability theory. It generalizes our notion of length, area and volume.</p>
255,374
<p>Does there exist any noncomputable set $A$ and probabilistic Turing machine $M$ such that $\forall n\in A$ $M(n)$ halts and outputs $1$ with probability at least $2/3$, and $\forall n\in\mathbb{N}\setminus A$ $M(n)$ halts and outputs $0$ with probability at least $2/3$? What if you only require that $M(n)$ is correct with probability greater than $1/2$?</p>
none
101,583
<p>Construct $A$ as follows. Roll a 6-sided die infinitely many times, giving output $r_1,r_2\dots$.</p> <p>Now for odd $k$, say $k\in A$ iff $r_k=6$. For even $k$, say $k\in A$ iff $r_k&lt;6$. So $k\in A$ with probability 1/6 if $k$ is odd, and $5/6$ if $k$ is even. $A$ is incomputable with probability $1$.</p> <p>Turing machine $M$ on input $k$ simply answers 1 if $k$ is even, $0$ otherwise. So it is correct 5/6 of the time.</p>
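This construction is easy to simulate (a sketch; the seed and sample size are arbitrary choices, and a pseudorandom generator stands in for the idealized die):

```python
import random

def simulate(num_inputs, seed=0):
    """Build a random set A from die rolls as in the construction above,
    then measure how often the machine M(k) = [k is even] answers
    membership in A correctly.  Expected accuracy: 5/6."""
    rng = random.Random(seed)
    correct = 0
    for k in range(1, num_inputs + 1):
        roll = rng.randint(1, 6)
        in_A = (roll == 6) if k % 2 == 1 else (roll < 6)
        answer = (k % 2 == 0)  # M ignores the rolls entirely
        correct += (answer == in_A)
    return correct / num_inputs
```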
1,253,687
<p>I don't know how to solve this one. The question is:</p> <p>Find the values of $a$ for which $y = x^3 + ax^2 + 3x + 1$ is always increasing.</p> <p>My solution so far:</p> <p>$y'= 3x^2 + 2ax + 3$</p> <p>I know that if $y' \ge 0$ for all $x$, then $y$ is always increasing, but I don't know how to make that true. Please help and explain; thank you in advance!</p> <p>Edit: I saw another solution but cannot understand it:</p> <p>$D/4 = a^2 - 9$</p> <p>What does the $D$ stand for, and why divide it by $4$? Also, where does the $a^2 - 9$ come from?</p> <p>Also, how do you write exponents?</p>
alkabary
96,332
<p>Well, you made the right first step:</p> <p>$$y' = 3x^2 + 2ax + 3$$</p> <p>Now you need to know for which values of $a$ we have $3x^2 + 2ax + 3 \geq 0$ for <em>all</em> $x$. (Careful: dividing the inequality by $x$ is not safe, because the inequality flips when $x$ is negative.)</p> <p>Since the leading coefficient $3$ is positive, this parabola opens upward, so it is nonnegative everywhere exactly when its discriminant is nonpositive: $$D = (2a)^2 - 4\cdot 3\cdot 3 = 4a^2 - 36 \le 0.$$</p> <p>Dividing by $4$ gives the form in the solution you saw, $D/4 = a^2 - 9 \le 0$, so the answer is $-3 \le a \le 3$.</p>
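A small numeric sanity check of the resulting condition on $a$ (a sketch; the grid of sample points is an arbitrary choice):

```python
def derivative_nonnegative(a, xs=None):
    """Check y' = 3x^2 + 2ax + 3 >= 0 on a sample grid of x values."""
    if xs is None:
        xs = [i / 100.0 for i in range(-1000, 1001)]
    return all(3 * x * x + 2 * a * x + 3 >= 0 for x in xs)

def discriminant_quarter(a):
    """D/4 for 3x^2 + 2ax + 3, i.e. (b/2)^2 - ac with b = 2a: a^2 - 9."""
    return a * a - 9
```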
2,037,030
<p>I am studying distribution theory, but I am curious about why the notion of compact support is introduced. In what situations is it useful? Can anyone give an intuitive way to explain this concept?</p>
reuns
276,986
<p>Why do we require that $\varphi(x) = 0$ for $x$ large enough ? Because it allows us to <strong>integrate by parts without fear</strong> : $$\int_{-\infty}^\infty T(x) \varphi'(x)dx = \lim_{x \to \infty} T(x)\varphi(x)-T(-x)\varphi(-x)-\int_{-\infty}^\infty T'(x) \varphi(x)dx$$ Here $\varphi \in C^\infty_c$ and $T$ is a distribution, so $T(x)\varphi(x)$ doesn't make sense, but if you assume that $\varphi(x) = 0$ for $|x| &gt; M$ then clearly $\lim_{x \to \infty} T(x)\varphi(x)-T(-x)\varphi(-x) = 0$ and it makes sense to write $$\langle T,\varphi' \rangle =\int_{-\infty}^\infty T(x) \varphi'(x)dx = -\int_{-\infty}^\infty T'(x) \varphi(x)dx=-\langle T',\varphi \rangle \tag{1}$$ which is exactly what we need for defining $\delta'$ the derivative of the Dirac delta and the derivatives of distributions in general.</p> <p>Now the big question : how do you prove that $(1)$ makes sense (that it doesn't lead to some contradictions) ? Well you can take it as a definition, so nothing to prove, </p> <p>or you can show that when defining the distributions as linear operators $C^\infty_c \to \mathbb{R}$ continuous for the test function space topology, then the differentiation operator $\langle T,.\rangle \mapsto \langle T',.\rangle$ is continuous in the sense of distributions.</p> <hr> <p>See also the <a href="https://en.wikipedia.org/wiki/Schwartz_space" rel="nofollow noreferrer">Schwartz space</a>, where we replace the compact support property by a decay $o(x^{-k})$ at $\infty$, discarding the distributions with a too large grow rate and keeping the so-called tempered distributions.</p>
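The pairing $\langle\delta',\varphi\rangle=-\varphi'(0)$ that $(1)$ yields can be observed numerically with a nascent delta (a sketch; the Gaussian width, grid, and test function are arbitrary choices):

```python
from math import exp, pi, sqrt

def pair_delta_prime(phi, eps=1e-2, h=5e-4, span=1.0):
    """Riemann-sum approximation of  int delta_eps'(x) phi(x) dx,
    where delta_eps is a normalized Gaussian of width eps; as eps -> 0
    this should tend to -phi'(0)."""
    total = 0.0
    n = int(2 * span / h)
    for i in range(n + 1):
        x = -span + i * h
        gauss = exp(-x * x / (2 * eps * eps)) / (eps * sqrt(2 * pi))
        # derivative of the Gaussian bump: (-x/eps^2) * gauss
        total += (-x / (eps * eps)) * gauss * phi(x) * h
    return total

phi = lambda x: (x + 1.0) * exp(-x * x)  # phi'(0) = 1
```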
898,755
<p>The function $G_m(x)$ is what I encountered during my search for approximates of Riemann $\zeta$ function:</p> <p>$$f_n(x)=n^2 x\left(2\pi n^2 x-3 \right)\exp\left(-\pi n^2 x\right)\text{, }x\ge1;n=1,2,3,\cdots,\tag{1}$$ $$F_m(x)=\sum_{n=1}^{m}f_n(x)\text{, }\tag{2}$$</p> <p>$$G_m(x)=F_m(x)+F_m(1/x)\text{, }\tag{3}$$</p> <p>Numerical results showed that $G_m(x)$ is zero near $m+1.2$ for $m=1,2,...,8$.</p> <p>Please refer to fig 1 below for the plot of $\log|G_m(x)|$ v.s. $x$ for $m=1,2,...,8$</p> <p><img src="https://i.stack.imgur.com/KWh09.png" alt="enter image description here"></p> <p>Let us denote these zeros by $x_0(m)$. I am interested if it can be proved that</p> <p>(A) $x_0(m)$ is the smallest zero of $G_m(x)$ for $x\ge1$ </p> <p>(B) there exist bounds $\mu(m),\nu(m)$ such that $0&lt;\mu(m)\le x_0(m)\le \nu(m)$;$\mu(m),\nu(m)\to\infty$, when $m\to\infty$.</p> <p>Here are the things I tried.</p> <p>Because $G_m(1)&gt;0$ and $G_m(x)\to F_m(1/x)&lt;0$ when $x\to\infty$, so there exist a zero for $G_m(x)$ between $x=1$ and $x=\infty$.</p> <p>But I was not able to find the bounds for this zero.</p> <p>It is tempting to speculate that $x_0(m)$ is the only zero for $G_m(x)$ and $m+1&lt;x_0(m)&lt;m+2$.</p> <p>The values for $x_0(m), (m=1,2,...,10)$ are given by:</p> <p>$x_0(1)$=2.24203, $x_0(2)$=3.21971, $x_0(3)$=4.21913, $x_0(4)$=5.22283, $x_0(5)$=6.22764, $x_0(6)$=7.23268, $x_0(7)$=8.23764, $x_0(8)$=9.24241, $x_0(9)$=10.2469, $x_0(10)$=11.2512.</p>
Daccache
79,416
<p>While this is just a partial answer, I hope this serves at least as a step in the right direction for proving what you need to. </p> <p>First, to work with something more concrete, I substituted the expressions for $f_n(x)$ into $F_m(x)$ and then that into $G_m(x)$ in order to get an explicit set of functions:<br> $$F_m(x) = \sum\limits_{n=1}^mn^2x(2\pi n^2x - 3)\exp(-n^2\pi x)$$ $$G_m(x) = \sum\limits_{n=1}^mn^2x(2\pi n^2x - 3)\exp(-n^2\pi x) + \sum\limits_{n=1}^m\frac{n^2}{x}\left(\frac{2\pi n^2}{x} - 3\right)\exp\left(-\frac{n^2\pi}{x}\right)$$ While this class of functions is particularly nasty, we can make some headway with proving that the proposed bounds $m + 1 &lt; x_0(m) &lt; m + 2$ hold. (Since $G_m(x)$ is positive just before $x_0(m)$ and decreasing through it, these bounds amount to the sign conditions $G_m(m + 1) &gt; 0 &gt; G_m(m + 2)$.) </p> <p>So basically what we need to show is that $G_m(m + 1)$ is always positive for all $m$, and $G_m(m + 2)$ is always negative; then by the intermediate value theorem (since the function in question is continuous), $G_m(x)$ has a root in between $m + 1$ and $m + 2$. </p> <p>Substituting in $x = m + 1$ for the function, we get:<br> $$G_m(m + 1) = \sum\limits_{n=1}^mn^2(m + 1)(2\pi n^2m + 2\pi n^2 - 3)\exp(-n^2\pi (m + 1)) + \sum\limits_{n=1}^m\frac{n^2}{m + 1}\left(\frac{2\pi n^2}{m + 1} - 3\right)\exp\left(-\frac{n^2\pi}{m + 1}\right)$$ If we can show that each factor of each term in each summation is always positive, then the whole summation is positive and so the function is always positive at that point. (Actually, that is only a sufficient, not a necessary, condition; the sum can contain some negative terms as long as the total value of the positive terms is larger than that of the negative ones.)
</p> <p>From the first summation, factor by factor: </p> <p><strong>1st summation, 1st factor: $n^2$</strong><br> Since the square of any nonzero number is positive, $n^2$ is positive. </p> <p><strong>1st summation, 2nd factor: $m + 1$</strong><br> Obviously $m$ is a positive number by definition, and so $m + 1$ is also always positive. </p> <p><strong>1st summation, 3rd factor: $(2\pi n^2m + 2\pi n^2 - 3)$</strong><br> $2\pi n^2m + 2\pi n^2$ must be greater than 3 for this factor to be positive. Taking the 'lowest' case of $n = m = 1$, we get $2\pi + 2\pi = 4\pi,$ and $4\pi &gt; 3$, so this factor will always be positive. </p> <p><strong>1st summation, 4th factor: $\exp(-n^2\pi (m + 1))$</strong><br> An exponential term is never negative or zero. </p> <p>Now, we go on to the second summation:<br> <strong>2nd summation, 1st factor: $\frac{n^2}{m + 1}$</strong><br> $n^2$ and $m + 1$ are always positive, so their quotient is too. </p> <p><strong>2nd summation, 2nd factor: $\frac{2\pi n^2}{m + 1} - 3$</strong><br> Alas, here we run into trouble. Taking the case $n = 1, m = 2$, we see that the resulting term is negative. When will it be positive? Like the corresponding factor in the first summation, the fraction must be greater than $3$: </p> <p>$\frac{2\pi n^2}{m + 1} &gt; 3 \implies 2\pi n^2 &gt; 3(m + 1) \implies n &gt; \sqrt{\frac{3}{2\pi}}\sqrt{m + 1}$<br> So, for the first few terms in the sum, this factor will be negative, but once $n$ is sufficiently large to satisfy the inequality, it will become positive.<br> <strong>2nd summation, 3rd factor: $\exp\left(-\frac{n^2\pi}{m + 1}\right)$</strong><br> Again, an exponential term is never negative. </p> <p>So, what does this all mean since not all terms are positive?
All it means is that we need to prove that the first summation is larger than the second one (which I couldn't do), so that their difference is still positive:<br> $$\sum\limits_{n=1}^mn^2(m + 1)(2\pi n^2m + 2\pi n^2 - 3)\exp(-n^2\pi (m + 1)) &gt; \sum\limits_{n=1}^m\frac{n^2}{m + 1}\left(\frac{2\pi n^2}{m + 1} - 3\right)\exp\left(-\frac{n^2\pi}{m + 1}\right)$$<br> Or, if we strip out the positive terms from the second summation, (I'm calling the terms $a_n$ and $b_n$ for the 1st and 2nd summations respectively)<br> $$\left[\sum\limits_{n=1}^ma_n + \sum\limits_{n &gt; \sqrt{\frac{3}{2\pi}}\sqrt{m + 1}}^mb_n\right] &gt; \sum\limits_{n &lt; \sqrt{\frac{3}{2\pi}}\sqrt{m + 1}}^mb_n$$<br> If we prove either inequality (the first one is stronger than the second), we deduce that the function is always positive at the point $m + 1$. We can use an extremely similar argument for the point $m + 2$ to prove it is negative, and thus we will have proved the bounds. About the first question (whether $x_0(m)$ is the smallest zero of $G_m(x)$), if we take that the function is decreasing from $G_m(1)$ to $x_0(m)$, we can prove it to be the smallest zero by contradiction (and if we also accept that $\lim\limits_{x \to \infty} G_m(x) = 0$, then we can prove that it is the only zero.) For suppose that there exists other zeros smaller than $x_0(m)$; that is, in the interval $[1, x_0(m)]$. Since the function is continuous, the only way for it to have a smaller zero is if the function dips below zero and back up again (since it needs to pass through zero at $x_0(m)$). But since the function is always decreasing on that interval, then we reach a contradiction, since after the first 'smaller' zero the function would be negative and would need to be increasing to cross the x-axis again. I know this is far from rigorous, but either way proving the bounds would also prove this. </p> <p>I apologize for the (extremely!) 
long answer, but I found out a lot about this function and didn't want anything to go to waste. Cheers!</p>
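The proposed sign pattern, $G_m(m+1)>0>G_m(m+2)$, can at least be confirmed numerically for the range of $m$ reported in the question (a Python sketch; the bisection tolerance is an arbitrary choice):

```python
from math import exp, pi

def G(m, x):
    """G_m(x) = F_m(x) + F_m(1/x), with
    f_n(x) = n^2 x (2 pi n^2 x - 3) exp(-pi n^2 x)."""
    def F(t):
        return sum(n * n * t * (2 * pi * n * n * t - 3) * exp(-pi * n * n * t)
                   for n in range(1, m + 1))
    return F(x) + F(1.0 / x)

def bracketed_zero(m, tol=1e-9):
    """Bisect G_m on [m+1, m+2]; requires the claimed sign change there."""
    lo, hi = m + 1.0, m + 2.0
    assert G(m, lo) > 0 > G(m, hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if G(m, mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The roots found this way match the $x_0(m)$ values listed in the question.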
898,755
<p>The function $G_m(x)$ is what I encountered during my search for approximates of Riemann $\zeta$ function:</p> <p>$$f_n(x)=n^2 x\left(2\pi n^2 x-3 \right)\exp\left(-\pi n^2 x\right)\text{, }x\ge1;n=1,2,3,\cdots,\tag{1}$$ $$F_m(x)=\sum_{n=1}^{m}f_n(x)\text{, }\tag{2}$$</p> <p>$$G_m(x)=F_m(x)+F_m(1/x)\text{, }\tag{3}$$</p> <p>Numerical results showed that $G_m(x)$ is zero near $m+1.2$ for $m=1,2,...,8$.</p> <p>Please refer to fig 1 below for the plot of $\log|G_m(x)|$ v.s. $x$ for $m=1,2,...,8$</p> <p><img src="https://i.stack.imgur.com/KWh09.png" alt="enter image description here"></p> <p>Let us denote these zeros by $x_0(m)$. I am interested if it can be proved that</p> <p>(A) $x_0(m)$ is the smallest zero of $G_m(x)$ for $x\ge1$ </p> <p>(B) there exist bounds $\mu(m),\nu(m)$ such that $0&lt;\mu(m)\le x_0(m)\le \nu(m)$;$\mu(m),\nu(m)\to\infty$, when $m\to\infty$.</p> <p>Here are the things I tried.</p> <p>Because $G_m(1)&gt;0$ and $G_m(x)\to F_m(1/x)&lt;0$ when $x\to\infty$, so there exist a zero for $G_m(x)$ between $x=1$ and $x=\infty$.</p> <p>But I was not able to find the bounds for this zero.</p> <p>It is tempting to speculate that $x_0(m)$ is the only zero for $G_m(x)$ and $m+1&lt;x_0(m)&lt;m+2$.</p> <p>The values for $x_0(m), (m=1,2,...,10)$ are given by:</p> <p>$x_0(1)$=2.24203, $x_0(2)$=3.21971, $x_0(3)$=4.21913, $x_0(4)$=5.22283, $x_0(5)$=6.22764, $x_0(6)$=7.23268, $x_0(7)$=8.23764, $x_0(8)$=9.24241, $x_0(9)$=10.2469, $x_0(10)$=11.2512.</p>
mike
75,218
<p>@GeorgeDaccache Nice results!</p> <p>I just need the space and ease of use in this answer area to convey some thoughts that I have.</p> <p>First let me define two integers $n_0(m)$ and $n_1(m)$ as</p> <p>$$ n_0(m) =\lfloor{\sqrt{3/(2\pi)}\sqrt{m + 1}}\rfloor$$ $$ n_1(m) =\lceil{\sqrt{3/(2\pi)}\sqrt{m + 1}}\text{ }\rceil$$</p> <p>For convenience we also define $a_n(m)$ and $b_n(m)$ as: $$a_n(m)=n^2(m + 1)(2\pi n^2(m + 1) - 3)&gt;0$$ $$b_n(m)=\frac{n^2}{m + 1}\left(\frac{2\pi n^2}{m + 1} - 3\right)$$</p> <p>So that $$b_n(m)\gt 0, m\ge n\gt n_1(m)$$ $$b_n(m)\lt 0, 1\le n\lt n_0(m)$$</p> <p>Then what we want to prove is the following:</p> <p>$$\sum\limits_{n=1}^m a_n(m)\exp(-n^2\pi (m + 1)) + \sum\limits_{n=n_1(m)}^m b_n(m)\exp\left(-\frac{n^2\pi}{m + 1}\right) &gt; \sum\limits_{n=1}^{n_0(m)} (-b_n(m))\exp\left(-\frac{n^2\pi}{m + 1}\right)\tag{1}$$</p> <p>Since for $A&gt;0$,</p> <p>$$\exp(-A n^2)&gt;\exp(-A m^2)$$ $$\exp(-A n^2)&lt;\exp(-A * 1^2)=\exp(-A)$$</p> <p>We can replace the exponential terms by their corresponding limit terms, factor them out of the summation and thus only need to prove the following:</p> <p>$$\exp(-(n_0(m))^2\pi (m + 1))\sum\limits_{n=1}^{n_0(m)} a_n(m)+\exp(-m^2\pi (m + 1))\sum\limits_{n=n_1(m)}^m a_n(m) + \exp\left(-\frac{m^2\pi}{m + 1}\right)\sum\limits_{n=n_1(m)}^m b_n(m) &gt; \exp\left(-\frac{1^2\pi}{m + 1}\right)\sum\limits_{n=1}^{n_0(m)} (-b_n(m))\tag{2}$$</p> <p>The nice thing about (2) is that we can now complete the summation of $n$ in (2).</p> <p>@GeorgeDaccache, can you continue to work along this direction, since you have already spent so much effort on this problem?
I feel that you might just be one step away from providing a complete answer.</p> <p>Best regards- mike</p> <p>EDIT: Numerical results for $m=10$ showed that the terms associated with $a_n$ are quite tiny, so we can now focus on proving that:</p> <p>$$\sum\limits_{n=n_1(m)}^m b_n(m)\exp\left(-\frac{n^2\pi}{m + 1}\right) &gt; \sum\limits_{n=1}^{n_0(m)} (-b_n(m))\exp\left(-\frac{n^2\pi}{m + 1}\right)\tag{3}$$</p>
463,139
<p>I have this:</p> <p>Case 1)</p> <p><img src="https://i.stack.imgur.com/xEEFQ.png" alt="enter image description here"></p> <p>If <em>f</em> is an even function $f(-x)=f(x)$ then $\int_{-a}^a f(x)dx=2\int_0^af(x)dx$</p> <p>Case 2)</p> <p><img src="https://i.stack.imgur.com/mnHbB.png" alt="enter image description here"></p> <p>If $f$ is an odd function $f(-x)=-f(x)$ then $\int_{-a}^a f(x)dx=0$</p> <p>I understand the reasons for case 1 being double the area and case 2 being zero, but I'll be grateful if someone can tell me a little more about this <strong>symmetry aspect</strong>. </p> <p><strong>How can I realize when and where this symmetry exists in a function?</strong> In particular, it would be helpful if someone explained how to interpret $f(-x)=-f(x)$ and $f(-x)=f(x)$. <strong>I'd like to understand better what they imply.</strong></p>
Sujaan Kunalan
77,862
<p>Algebraically, </p> <p>A function is odd if $f(-x)=-f(x)$ and a function is even when $f(-x)=f(x)$.</p> <p>For example, if $f(x)=x^3$, then $f(-x)=(-x)^3=-x^3$, which is $-f(x)$. This is our original function multiplied by $-1$. That means we can say this is an <em>odd</em> function.</p> <p>An example of an even function is $f(x)=3x^2$, because $f(-x)=3(-x)^2=3x^2$, which is our original function, $f(x).$ This means we can say this is an <em>even</em> function.</p> <hr> <p>Geometrically,</p> <p>A function $f$ is even if the graph of $f$ is symmetric with respect to the $y$-axis.</p> <p>A function $f$ is odd if the graph of $f$ is symmetric with respect to the origin.</p>
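Both the algebraic tests and the integral identities from the question can be checked numerically (a sketch; the sample grid and midpoint rule are arbitrary choices):

```python
def is_even(f, tol=1e-9):
    """Test f(-x) == f(x) on a sample grid."""
    return all(abs(f(-x) - f(x)) < tol for x in [0.1 * i for i in range(1, 50)])

def is_odd(f, tol=1e-9):
    """Test f(-x) == -f(x) on a sample grid."""
    return all(abs(f(-x) + f(x)) < tol for x in [0.1 * i for i in range(1, 50)])

def integral(f, a, n=200000):
    """Midpoint-rule approximation of the integral of f over [-a, a]."""
    h = 2.0 * a / n
    return sum(f(-a + (i + 0.5) * h) for i in range(n)) * h
```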
463,139
<p>I have this:</p> <p>Case 1)</p> <p><img src="https://i.stack.imgur.com/xEEFQ.png" alt="enter image description here"></p> <p>If <em>f</em> is an even function $f(-x)=f(x)$ then $\int_{-a}^a f(x)dx=2\int_0^af(x)dx$</p> <p>Case 2)</p> <p><img src="https://i.stack.imgur.com/mnHbB.png" alt="enter image description here"></p> <p>If $f$ is an odd function $f(-x)=-f(x)$ then $\int_{-a}^a f(x)dx=0$</p> <p>I understand the reasons for case 1 being double the area and case 2 being zero, but I'll be grateful if someone can tell me a little more about this <strong>symmetry aspect</strong>. </p> <p><strong>How can I realize when and where this symmetry exists in a function?</strong> In particular, it would be helpful if someone explained how to interpret $f(-x)=-f(x)$ and $f(-x)=f(x)$. <strong>I'd like to understand better what they imply.</strong></p>
dls
1,761
<blockquote> <p>How can I realize when and where this symmetry exists in a function?</p> </blockquote> <p>You can recognize symmetric functions by knowing basic examples and understanding how these behave under common combinations.</p> <ol> <li><p>The most basic examples of even functions $f(x)=f(-x)$ are the monomials with even exponent. For instance: $1=x^0, x^2, x^4$ and so on. The function $f(x)=x^2$ is even since $$f(-x)=(-x)^2=(-1)^2x^2=x^2=f(x).$$ Examples of odd functions $f(x)=-f(-x)$ are given by the monomials with odd exponent: $x,x^3,x^5,\cdots$. Two other basic examples of functions with symmetry are sine and cosine. By the Taylor expansion $$\sin(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}x^{2n+1} = x-\frac{x^3}{3!}+\frac{x^5}{5!}+\cdots$$ we suspect that sine is odd since it consists of only odd powers of $x$. You should verify this. Alternatively, the <a href="https://en.wikipedia.org/wiki/Taylor_series#List_of_Maclaurin_series_of_some_common_functions" rel="noreferrer">Taylor expansion for cosine</a> indicates that it is an even function.</p></li> <li><p>Now that you have a library of functions exhibiting symmetry, you can ask:</p> <ul> <li>What happens if I add two such functions?</li> <li>What if I multiply a symmetric function by a constant?</li> <li>What if I multiply two such functions?</li> <li>What happens if I divide two such functions?</li> <li>What if I compose two such functions?</li> </ul></li> </ol> <p>I won't answer all of these questions, but here's an example. Let's prove that the product of an even function with an odd function is odd. Let $f$ be an even function, that is, $f(-x)=f(x)$, and $g$ satisfy $g(-x)=-g(x)$. Then we want to show that $x \mapsto f(x)g(x)$ is an odd function. Simply compute: $$f(-x)g(-x) = f(x)g(-x) = f(x)[-g(x)] = -f(x)g(x).$$ So the function $fg$ is odd!</p> <p>As a concrete example, is the function $x^2\sin(x)$ even or odd? What about $\tan(x)$?
What about $\cos(\tan(x))$ (for $x$ in the domain of the function)? Look at the <a href="http://www.wolframalpha.com/input/?i=cos%28tan%28x%29%29" rel="noreferrer">graph</a>, then try to prove it algebraically.</p>
227,797
<p>I have this function and I want to see where it is zero. <span class="math-container">$$\frac{1}{16} \left(\sinh (\pi x) \left(64 \left(x^2-4\right) \cosh \left(\frac{2 \pi x}{3}\right) \cos (y)+\left(x^2+4\right)^2+256 x \sinh \left(\frac{2 \pi x}{3}\right) \sin (y)\right)+\left(x^2-12\right)^2 \sinh \left(\frac{7 \pi x}{3}\right)-2 \left(x^2+4\right)^2 \sinh \left(\frac{5 \pi x}{3}\right)\right)+2 \left(x^2-4\right) \sinh \left(\frac{\pi x}{3}\right)$$</span> I use ContourPlot</p> <pre><code>f[x_, y_] := 2 (-4 + x^2) Sinh[(π x)/3] + 1/16 (((4 + x^2)^2 + 64 (-4 + x^2) Cos[y] Cosh[(2 π x)/3] + 256 x Sin[y] Sinh[(2 π x)/3]) Sinh[π x] - 2 (4 + x^2)^2 Sinh[(5 π x)/3] + (-12 + x^2)^2 Sinh[( 7 π x)/3]); ContourPlot[ f[x, y] == 0, {x, 3.465728, 3.465729}, {y, 1.046786, 1.046795}, PlotPoints -&gt; 500] </code></pre> <p>and I obtain this plot.</p> <p>Now, my question is: can I trust this plot and conclude that the curves do not cross?</p> <p>Or should I increase the precision of the plot? If so, how can I ask Mathematica to give higher precision for the axes in ContourPlot?</p>
Michael E2
4,999
<p>The following shows that <code>f[x, y]</code> is negative (not zero) along the vertical line <code>x == 3.4657284...</code> through the saddle point between the two branches, and therefore the line separates the two branches where <code>f[x, y] == 0</code> in the OP's graph:</p> <pre><code>yAssum = 1.046786`32 &lt; y &lt; 1.046795`32; FindRoot[D[f[x, y], {{x, y}}], {{x, 3.465728}, {y, 1.04679}}, WorkingPrecision -&gt; 32] Simplify[ Reduce[f[x, y] &lt; 0 &amp;&amp; yAssum /. First@%, y], yAssum] </code></pre> <blockquote> <pre><code>(* Saddle point: {x -&gt; 3.4657284034593587205275273903929, y -&gt; 1.0467905677870295818695381998660} *) </code></pre> </blockquote> <blockquote> <p>Reduce::ratnz: Reduce was unable to solve the system with inexact coefficients. The answer was obtained by solving a corresponding exact system and numericizing the result.</p> </blockquote> <blockquote> <pre><code>(* True *) </code></pre> </blockquote> <p>Another way to show <code>x == 3.465...</code> separates the branches of the curve (different, exact calculation; but same idea, negative maximum along the line):</p> <pre><code>xAssum = Rationalize[3.465 &lt; x &lt; 3.466, 0]; yAssum = Rationalize[1.046786 &lt; y &lt; 1.046795, 0]; xCP = x /. First@Solve[D[f[x, y], {{x, y}}] == 0 &amp;&amp; yAssum &amp;&amp; xAssum, {x, y}]; Maximize[{f[xCP, y], yAssum}, y] // N[#, 32] &amp; (* {-0.000037312986399805657836881071055665, {y -&gt; 1.0467905677870295818695381998661}} *) </code></pre>
779,095
<p>Let $$f(x,y)=\left\{ \begin{matrix} \frac{x^2y}{x^4+y^2} &amp; (x,y)\neq(0,0) \\0 &amp; (x,y)=(0,0)\end{matrix}\right.$$</p> <p>It is easy to prove that $f$ is not continuous at $(0,0)$ (taking the limit along the curve $y=x^2$).</p> <p>I want to know whether it is possible to define the partial derivatives of $f$ at $(0,0)$ and find the directions $\vec v$ such that $D_{\vec v}f(0,0)$ is defined.</p> <p>I've calculated the partial derivatives of $f$ for $(x,y)\neq(0,0)$: $$\frac{\partial f}{\partial x}=\frac{2xy(x^4+y^2)-4yx^5}{(x^4+y^2)^2}$$ $$\frac{\partial f}{\partial y}=\frac{x^2(x^4+y^2)-2x^2y^2}{(x^4+y^2)^2}$$</p> <p>Neither of them is continuous at $(0,0)$. However, if we take for example the line $y=x$, then $$\lim_{\substack{(x,y)\to (0,0)\\ y=x}}\frac{\partial f}{\partial x}=-2$$ $$\lim_{\substack{(x,y)\to (0,0)\\ y=x}}\frac{\partial f}{\partial y}=-1$$</p> <p>So I'm tempted to say that even though the partial derivatives are not continuous at the origin, $D_{(1/\sqrt2,1/\sqrt2)}f(0,0)=\frac{1}{\sqrt2}(-2,-1)$.</p> <p>Is this correct? If so, how could I find all the vectors $\vec v$ such that $D_{\vec v} f(0,0)$ is defined?</p>
bof
111,012
<p>In the usual terminology, a <a href="http://en.wikipedia.org/wiki/Partially_ordered_set" rel="nofollow"><em>partial order</em></a> is required to be <em>antisymmetric</em>: if $x\le y$ and $y\le x$, then $x=y$. Thus there are only three partial orders on a two-element set; your fourth example is not antisymmetric.</p> <p>Apparently, you want to count the more general <em>quasi-orders</em> aka <a href="http://en.wikipedia.org/wiki/Preorder" rel="nofollow"><em>preorders</em></a>, i.e., reflexive transitive relations. (On a <em>finite</em> set, there is a one-to-one correspondence between quasi-orders and topologies.)</p> <p>The number of quasi-orders (or equivalently, the number of topologies) on an $n$-element set is sequence <a href="http://oeis.org/A000798" rel="nofollow">A000798</a> at the <a href="https://oeis.org/" rel="nofollow">OEIS</a>. There are $29$ quasi-orders on a $3$-element set.</p>
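The counts are small enough to verify by brute force (a sketch; relations are encoded as sets of ordered pairs):

```python
from itertools import product

def count_relations(n, antisymmetric=False):
    """Count reflexive transitive relations (quasi-orders) on an
    n-element set; with antisymmetric=True, count partial orders."""
    elems = range(n)
    off_diag = [(i, j) for i in elems for j in elems if i != j]
    count = 0
    for choice in product([False, True], repeat=len(off_diag)):
        rel = {(i, i) for i in elems}  # reflexivity is forced
        rel.update(p for p, keep in zip(off_diag, choice) if keep)
        if antisymmetric and any(i != j and (j, i) in rel for (i, j) in rel):
            continue
        transitive = all((i, k) in rel
                         for (i, j) in rel for (j2, k) in rel if j == j2)
        count += transitive
    return count
```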
2,642,144
<p>How would I prove or disprove the following statement? $ \forall a \in \mathbb{Z} \forall b \in \mathbb{N}$ , if $a &lt; b$ then $a^2 &lt; b^2$</p>
Rgkpdx
112,537
<p>Not true. Take $a=-2$ and $b=1$. Then $a&lt;b$ but $a^2=4&gt; 1=b^2$.</p> <p>Generally, it is good practice to start with easy examples to get an idea of why something might or not be true. </p>
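Counterexamples like this can also be found by a mechanical search (a sketch, taking $\mathbb{N}$ to start at $1$):

```python
def find_counterexample(bound=10):
    """Search for an integer a and a natural b with a < b but a^2 >= b^2."""
    for a in range(-bound, bound + 1):
        for b in range(1, bound + 1):
            if a < b and a * a >= b * b:
                return a, b
    return None
```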
559,194
<p>$\mathscr{F}\{\delta(t)\}=1$, so this means the inverse Fourier transform of $1$ is the Dirac delta function. I tried to prove it by solving the integral, but I got something which doesn't converge.</p>
L. Xu
77,573
<p>In the following $\langle f, \cdot \rangle$ denotes the linear functional on Schwartz space induced by $f$ and $f^\lor$ stands for the inverse Fourier transform of $f$. By definition, for any Schwartz function $\varphi$ \begin{align*} \langle 1^\lor, \varphi \rangle=\langle 1, \varphi^\lor \rangle=&amp;\int_\mathbb{R} \left(\int_\mathbb{R} e^{2\pi ixy}\varphi(y) dy\right)dx =\lim_{M\to\infty}\int_{-M}^M \left(\int_\mathbb{R} e^{2\pi ixy}\varphi(y) dy\right)dx. \end{align*} By Fubini's theorem we have \begin{align*} \int_{-M}^M \left(\int_\mathbb{R} e^{2\pi ixy}\varphi(y) dy\right)dx=&amp;\int_\mathbb{R} \varphi(y)\left(\int_{-M}^M e^{2\pi ixy}dx\right) dy =\pi^{-1}\int_\mathbb{R} \varphi\left(y\right)\frac{\sin (2\pi My)}{y} dy. \end{align*} Since $\varphi$ is differentiable at $y=0$, we have $|\varphi(y)-\varphi(0)|\le C|y|$ for some constant $C$. Thus $\left(\varphi(y)-\varphi(0)\right)/y\in L^1_{loc}(\mathbb{R}).$ Then by Riemann-Lebesgue Lemma we have \begin{align*} \lim_{M\to\infty}\int_{-1}^1 \left(\varphi(y)-\varphi(0)\right)\frac{\sin (2\pi My)}{y} dy=0, \end{align*} which means \begin{align*} \lim_{M\to\infty}\int_{-1}^1 \varphi\left(y\right)\frac{\sin (2\pi My)}{y} dy=&amp;\varphi(0)\lim_{M\to\infty}\int_{-1}^1\frac{\sin (2\pi My)}{y} dy \\ =&amp;\varphi(0) \int_{-\infty}^\infty \frac{\sin y}{y}dy=\pi\varphi(0). \end{align*} Note that $\varphi\left(y\right)/y$ is integrable on $\mathbb{R}\setminus [-1,1]$. Thus by Riemann-Lebesgue Lemma we have \begin{align*} \lim_{M\to\infty}\int_{1}^\infty \varphi\left(y\right)\frac{\sin (2\pi My)}{y} dy=\lim_{M\to\infty}\int_{-\infty}^{-1} \varphi\left(y\right)\frac{\sin (2\pi My)}{y} dy=0. \end{align*} To sum up the above argument we have for all Schwartz functions $\varphi$, \begin{align*} \langle 1^\lor, \varphi \rangle=\pi^{-1}\lim_{M\to\infty}\int_\mathbb{R} \varphi\left(y\right)\frac{\sin (2\pi My)}{y} dy=\varphi(0)=\langle \delta, \varphi \rangle. \end{align*} Therefore $1^\lor=\delta$.</p>
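The key limit $\pi^{-1}\int_\mathbb{R}\varphi(y)\frac{\sin(2\pi My)}{y}\,dy \to \varphi(0)$ can be watched numerically (a sketch; the Gaussian test function, truncation window, and step size are arbitrary choices):

```python
from math import sin, exp, pi

def dirichlet_pairing(phi, M, span=8.0, h=1e-3):
    """Riemann-sum approximation of (1/pi) int phi(y) sin(2 pi M y)/y dy."""
    total = 0.0
    n = int(2 * span / h)
    for i in range(n + 1):
        y = -span + i * h
        if abs(y) < 1e-12:
            kernel = 2.0 * M  # limiting value of sin(2 pi M y)/(pi y) at y = 0
        else:
            kernel = sin(2 * pi * M * y) / (pi * y)
        total += kernel * phi(y) * h
    return total
```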
3,430,812
<p>Consider the set of integers, <span class="math-container">$\Bbb{Z}$</span>. Now consider the sequence of sets which we get as we divide each of the integers by <span class="math-container">$2, 3, 4, \ldots$</span>.</p> <p>Obviously, as we increase the divisor, the elements of the resulting sets will get closer and closer.</p> <p><strong>Question:</strong> In the limit as <span class="math-container">$\text{divisor}\to\infty$</span>, what will the "limiting" set be? (I don't think it could be <span class="math-container">$\Bbb{R}$</span>.)</p>
Brian Moehring
694,754
<p>The typical way to define limits of sets is via <span class="math-container">$$\liminf_{n\to\infty} A_n = \bigcup_{n\geq 1} \bigcap_{k \geq n} A_k \\ \limsup_{n\to\infty} A_n = \bigcap_{n\geq 1} \bigcup_{k\geq n} A_k$$</span></p> <p>Using these and <span class="math-container">$A_n = f_n(\mathbb{Z})$</span> where <span class="math-container">$f_n(x) = x/n,$</span> we have <span class="math-container">$$\liminf_{n\to\infty} A_n = \mathbb{Z} \\ \limsup_{n\to\infty} A_n = \mathbb{Q} $$</span> In particular, the limit doesn't exist.</p>
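The finite stages $A_n = \frac{1}{n}\mathbb{Z}$ can be explored exactly with rational arithmetic (a sketch restricted to the window $[-2,2]$; intersecting just two consecutive stages already collapses to $\mathbb{Z}$, since a reduced denominator must divide two consecutive integers):

```python
from fractions import Fraction

def A(n, window=2):
    """The set (1/n) * Z restricted to [-window, window], as exact rationals."""
    return {Fraction(j, n) for j in range(-window * n, window * n + 1)}

intersection = A(5) & A(6)  # only the integers survive
union = A(5) | A(6) | A(7)  # grows toward the (dense) rationals
```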
3,927,488
<p>Given a random variable <span class="math-container">$X$</span> with finite expectation, I know that <span class="math-container">$$X_n\to X, a.s.\text{and} |X_n| \leq X\implies \mathbb{E}|X-X_n|\to 0 \text{ by DCT.}$$</span></p> <p>I am wondering if it is possible to approximate <span class="math-container">$X$</span> (with finite expectation) by a sequence of <strong>simple</strong> random variables: <span class="math-container">$$\forall \varepsilon, \exists \text{ a simple r.v. } X_{{\varepsilon}} \text{ such that } |X_{{\varepsilon}}|\leq |X| \text{ and } \mathbb{E}|X-X_{{\varepsilon}}|&lt; \varepsilon.$$</span></p> <p>Any help will be greatly appreciated!</p>
Claude Leibovici
82,404
<p>Starting from @J. W. Tanner's answer, the easy way to solve <span class="math-container">$$\dfrac{dC}{dt}=k(A_0-C)(B_0-C)$$</span> is to write it as <span class="math-container">$$\dfrac{dt}{dC}=\frac 1{k(A_0-C)(B_0-C)}=\frac{1}{k (A_0-B_0)}\left( \frac 1{C-A_0}- \frac 1{C-B_0}\right)$$</span> which, after integrating and using the initial condition $C(0)=0$, gives <span class="math-container">$$t =\frac{1}{k (A_0-B_0)}\,\log \left(\frac{B_0\,(A_0-C)}{A_0\,(B_0-C)}\right)$$</span> from which it is easy to express <span class="math-container">$C(t)$</span>.</p>
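Solving the logarithmic relation for $C$ with $C(0)=0$ gives $C(t)=A_0B_0\,\dfrac{E-1}{A_0E-B_0}$ with $E=e^{k(A_0-B_0)t}$, and this closed form can be cross-checked by direct numerical integration (a sketch; the rate constants are arbitrary choices, and a fixed-step RK4 integrator stands in for any ODE solver):

```python
from math import exp

def C_exact(t, k=1.0, A0=2.0, B0=1.0):
    """Closed form with C(0) = 0: C = A0 B0 (E - 1)/(A0 E - B0), E = e^{k(A0-B0)t}."""
    E = exp(k * (A0 - B0) * t)
    return A0 * B0 * (E - 1.0) / (A0 * E - B0)

def C_rk4(t, k=1.0, A0=2.0, B0=1.0, steps=2000):
    """Integrate dC/dt = k (A0 - C)(B0 - C) from C(0) = 0 by classical RK4."""
    f = lambda c: k * (A0 - c) * (B0 - c)
    h = t / steps
    c = 0.0
    for _ in range(steps):
        k1 = f(c)
        k2 = f(c + 0.5 * h * k1)
        k3 = f(c + 0.5 * h * k2)
        k4 = f(c + h * k3)
        c += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return c
```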
954,419
<p>I am teaching myself mathematics, my objective being a thorough understanding of game theory and probability. In particular, I want to be able to go through A Course in Game Theory by Osborne and Probability Theory by Jaynes.</p> <p>I understand I want to cover a lot of ground so I'm not expecting to learn it in less than a year or maybe even two. Still I'm fairly certain it's not impossible.</p> <p>However I would like to have a study plan more or less fleshed out just to know I'm on the right track. There were some other questions related to self learning math here but I couldn't find one like mine.</p> <p>I'd appreciate some feedback.</p> <p>Calc I + II: no book, I already know basic calculus</p> <ul> <li>Differential equations: MIT's OCW lectures </li> <li>Calc III: Stewart's Multivariable calculus</li> <li>Linear Algebra: Strang, Gilbert, Linear Algebra and Its Applications complemented with MIT's OCW lectures <strong>OR</strong> Linear Algebra Done Right</li> </ul> <p>Until here I am more or less certain on what I want to study, but I'm totally confused on what to learn next. Jayne's book states that you need to be familiar with applied mathematics.</p> <p>After reading about applied mathematics, I came up with this plan to be done after finishing what I mentioned earlier (in order of course, not all at the same time):</p> <ol> <li>Topology A: Munkres, part I.</li> <li>Real analysis: Still not sure about the material, probably Abbott or Rudin.</li> <li>Complex Analysis: No idea about the material</li> <li>Group Theory: Rotman, An Introduction to the Theory of Groups</li> <li>Topology B: Munkres, part II.</li> </ol> <p>And then finally, Jayne's Probability Theory and game theory.</p> <p>Am I missing something here? Some of these books such as Rotman's are aimed at a graduate level, is it foolish to think I will understand them?</p>
Rgkpdx
112,537
<p>If you are more or less new to mathematics, your priority might be to train proof skills (by doing lots of easy proofs) and gain comfort with the basic properties of sets and functions, for example (Introduction to Metric and Topological Spaces by Sutherland gives a good quick overview within an advanced framework, I think). </p> <p>Such grounding work sounds necessary to me if you want to proactively read mathematical statements in the books you mention in the short run.</p>
1,395,619
<p>One of my friends asked me this question. Even in lower classes we use both as synonyms, but he says the two concepts are different: the empty set $\{ \}$ is a set which does not contain any elements, while the null set, $\emptyset$, also refers to a set which does not contain any elements.</p> <p>I could not make sense of that... Is his argument correct? If so, how?</p>
A Bc
647,801
<p>I would call "null sets" in the Measure Theory sense "sets of measure 0" just to avoid any confusion. In many parts of mathematics, "null set" and "empty set" are synonyms.</p>
792,924
<p>If a quantity can be either a scalar or a vector, what would one call that property? I could think of "scalarity", but I don't think such a term exists.</p>
Fermat
83,272
<p>No. Since $f$ is continuous, there exists a sequence $P_n$ of polynomials that converges to $f$ uniformly on $[0,1]$. Therefore $$\int_{0}^{1}x^{n}f\left(x\right)dx=\lim_{n\to \infty}\int_{0}^{1}x^{n}P_{n}\left(x\right)dx$$ Now let $$P_{n}(x)=a_nx^n+a_{n-1}x^{n-1}+....+a_0,$$ convert the integral of $x^nP_n$ to a sum, and show that it does not converge to $1$.</p>
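The original question is not shown here, but a numerical illustration supports the claim that the integrals cannot converge to $1$ for continuous $f$: in fact $\left|\int_0^1 x^n f(x)\,dx\right|\le \max|f|/(n+1)\to 0$. The snippet below (my addition, with the sample choice $f(x)=e^x$) approximates the integrals by the midpoint rule and watches them shrink:

```python
import math

def integral(n, m=20000):
    # midpoint rule for the integral of x^n * e^x over [0,1]
    # (f(x) = e^x is just a sample continuous f)
    h = 1.0 / m
    return h * sum(((i + 0.5) * h) ** n * math.exp((i + 0.5) * h) for i in range(m))

vals = [integral(n) for n in (1, 5, 25, 125)]
assert all(u > v for u, v in zip(vals, vals[1:]))   # strictly shrinking
assert vals[-1] < math.e / 126                      # the bound max|f|/(n+1)
print(vals)
```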
140,754
<p>Please tell me what a "kink" is and what this sentence means: </p> <blockquote> <p>Distance functions have a kink at the interface where $d = 0$ is a local minimum.</p> </blockquote>
robjohn
13,854
<p>A "kink" in a curve would be a point where the curve is continuous, yet the first derivative (gradient) is not continuous. The curvature would be infinite at a kink because the direction changes a finite amount in an infinitesimal distance.</p>
140,754
<p>Please tell me what a "kink" is and what this sentence means: </p> <blockquote> <p>Distance functions have a kink at the interface where $d = 0$ is a local minimum.</p> </blockquote>
Mikasa
8,581
<p>As above you can find it via the web as <a href="http://en.wikipedia.org/wiki/Cusp_%28singularity%29" rel="noreferrer">Cusp (singularity)</a>. See the following graphs:</p> <p><img src="https://i.stack.imgur.com/R08Ey.jpg" alt="enter image description here"></p> <p><img src="https://i.stack.imgur.com/Oy4dR.jpg" alt="enter image description here"></p>
1,335,483
<p>Given a relation $R \subseteq A \times A$ with $n$ tuples, I am trying to prove that its transitive closure $R^+$ has at most $n^2$ elements.</p> <p>My initial idea was to use the following definition of the transitive closure to identify an argument for why the statement to be proven must be true:</p> <p>$$R^+ = R \cup R^2 \cup R^3 \cup \ldots$$</p> <p>where $R^k, k \in \mathbb{N}$ stands for the $k$-fold composition of $R$, but that didn't give me any useful hint to continue the proof. I appreciate any hint that may help me.</p>
D Left Adjoint to U
26,327
<p>Suppose that it's true for $n = 1,\dots,K-1$, and then add a tuple to a relation $R$ with $K-1$ tuples. $(R \cup \{(a,b)\})^+ = R^+ \cup \{(a,x) : (b,x) \in R^+\} \cup \{(x,b): (x,a) \in R^+\} \cup \{(x,y): (x,a) \in R^+ \text{ and } (b,y) \in R^+\} \cup \{(a,b)\} = R'^+$ (note the fourth set, collecting paths that run through both $a$ and $b$). So since by the inductive assumption $|R^+| \leq (K-1)^2$, we have that $|R'^+|$ is no greater than $4(K-1)^2 + 1$. That's larger than the required bound $K^2$, but perhaps starting from this inductive setup you can sharpen it into a full proof.</p>
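To get a feel for the claimed bound, here is a brute-force check (an illustrative sketch I have added, not part of the original answer): the key observation it exercises is that every pair in $R^+$ starts at a first coordinate of $R$ and ends at a second coordinate of $R$, and each of those sets has at most $n$ elements, so $|R^+|\le n^2$.

```python
import random

def transitive_closure(R):
    # repeatedly add compositions until nothing new appears
    closure = set(R)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

random.seed(0)
for _ in range(300):
    n = random.randint(1, 8)
    R = set()
    while len(R) < n:
        R.add((random.randrange(6), random.randrange(6)))
    # every pair in R+ lies in dom(R) x ran(R), each of size <= n
    assert len(transitive_closure(R)) <= n * n
print("the bound |R+| <= n^2 held in every trial")
```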
1,812,956
<blockquote> <p>Find the equation of the normal to the curve with equation $4x^2+xy^2-3y^3=56$ at the point $(-5,2)$.</p> </blockquote> <p>I know that the slope of the normal to a curve is $$-\frac{1}{f'(x)}$$ And when I differentiate the curve implicitly I get $$-\frac{8x-y^2}{6y^2}$$</p> <p>Substituting that into the expression for the normal, you get the positive reciprocal $6y^2/(8x-y^2)$. But apparently this is wrong. I'm given the point $(-5,2)$; how is it useful?</p>
DooplissForce
281,590
<p>I believe your implicit differentiation is wrong. Given $4x^2+xy^2-3y^3=56$, we can implicitly differentiate to find $\frac{dy}{dx}$:</p> <p>$$ 4x^2+xy^2-3y^3=56 \\ 8x+\color{blue}{\left(y^2+2xy\frac{dy}{dx}\right)} - 9y^2\frac{dy}{dx}=0 \\ $$</p> <p>(The blue part is arrived at through the product rule.) After simplifying this, we can plug in the point $(-5,2)$ to find the slope of the tangent line at that point. Like you said, the negative reciprocal of that will be the slope of the normal, and we can then use point-slope form, $y-y_1=m(x-x_1),$ to find the equation.</p> <p><strong>Edit:</strong> If you want to check your work, hover over the box below for the solution:</p> <blockquote class="spoiler"> <p> $$\begin{align*}4x^2+xy^2-3y^3&amp;=56 \\8x+\left(y^2+2xy\frac{dy}{dx}\right) - 9y^2\frac{dy}{dx}&amp;=0 \\ 2xy\frac{dy}{dx} - 9y^2 \frac{dy}{dx} &amp;= -8x-y^2 \\ \frac{dy}{dx}(2xy-9y^2) &amp;= -8x-y^2 \\ \frac{dy}{dx} &amp;= \frac{-8x-y^2}{2xy-9y^2} \end{align*} $$ <br> Plug in $(-5,2)$: <br> $$ \begin{align*} \left.\frac{dy}{dx}\right]_{(-5,2)} &amp;= \frac{-8(-5)-(2)^2}{2(-5)(2)-9(2)^2} \\ &amp;= -\frac{9}{14} \end{align*} $$ <br> Thus the slope of the tangent is $-\frac{9}{14}$, so the slope of the normal is $\frac{14}{9}$, and the equation of the normal is $y-2=\frac{14}{9}(x+5)$.</p> </blockquote>
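The numbers are easy to sanity-check with exact rational arithmetic (an added illustrative snippet, not part of the original answer):

```python
from fractions import Fraction

x, y = Fraction(-5), Fraction(2)

# The point lies on the curve 4x^2 + x y^2 - 3y^3 = 56
assert 4*x**2 + x*y**2 - 3*y**3 == 56

# Slope of the tangent from dy/dx = (-8x - y^2) / (2xy - 9y^2)
tangent = (-8*x - y**2) / (2*x*y - 9*y**2)
assert tangent == Fraction(-9, 14)

# Slope of the normal is the negative reciprocal
normal = -1 / tangent
assert normal == Fraction(14, 9)
print("tangent:", tangent, "normal:", normal)
```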
1,182,953
<p>Does anyone know the provenance of, or the answer to, the following integral?</p> <p>$$\int_0^\infty\ \frac{\ln|\cos(x)|}{x^2} dx $$</p> <p>Thanks.</p>
Mark Fischler
150,362
<p>This integral is equal to $$ \frac{1}{2} \int_0^\infty \frac{\ln (\cos^2 x)}{x^2} dx = \frac{1}{2}(-\pi) = -\frac{\pi}2$$</p> <p>The easiest place to remember seeing this is Gradshteyn and Ryzhik, where it appears as definite integral 4.322.6. The source quoted there is Fichtenholz, G. M. (<a href="http://en.wikipedia.org/wiki/Grigorii_Fichtenholz" rel="nofollow">http://en.wikipedia.org/wiki/Grigorii_Fichtenholz</a> on Wikipedia), in the book Kurs differentsial'nogo i integral'nogo ischisleniya, Volume 2, page 686. </p> <p>The book is pictured on the WP page. Quoting the Wiki description, "Fichtenholz's books about analysis are widely used in Eastern European and Chinese universities due to its exceptionality of detailed and well-ordered presentation of material about mathematical analysis." </p> <p>If this is an example of content in an introductory class on calculus, I think I am glad it has not been translated into English for me to have read as an undergraduate!</p>
3,387,138
<p>First Definition. A modular form of level <span class="math-container">$n$</span> and dimension <span class="math-container">$-k$</span> is an analytic function <span class="math-container">$F$</span> of <span class="math-container">$\omega_1$</span> and <span class="math-container">$\omega_2$</span> satisfying the following properties:</p> <ol> <li><span class="math-container">$F(\omega_1,\omega_2)$</span> is holomorphic and unique for all <span class="math-container">$\omega_1,\omega_2$</span> with <span class="math-container">$Im(\omega_1/\omega_2)&gt;0$</span>.</li> </ol> <p>2. <span class="math-container">$F(\lambda\omega_1,\lambda\omega_2)=\lambda^{-k}F(\omega_1,\omega_2)$</span> for all <span class="math-container">$\lambda\neq0$</span>.</p> <p>3. <span class="math-container">$F(a\omega_1+b\omega_2,c\omega_1+d\omega_2)=F(\omega_1,\omega_2)$</span>, if <span class="math-container">$a,b,c,d$</span> are rational integers with <span class="math-container">$$ad-bc=1$$</span> <span class="math-container">$$a\equiv d \equiv 1,\ b\equiv c \equiv 0 \pmod n$$</span></p> <ol start="4"> <li>The function <span class="math-container">$$F(\tau)=\omega_2^kF(\omega_1,\omega_2)$$</span> has a power series expansion at <span class="math-container">$\tau=i\infty$</span>: <span class="math-container">$$F(\tau)=\sum_{m=0}^{\infty} c_m e^{2\pi im\tau/n},\quad Im(\tau)&gt;0$$</span> </li> </ol> <p>Second Definition.</p> <p>Let <span class="math-container">$k \in \mathbb{Z}$</span>.
A function <span class="math-container">$f:\mathbb{H} \rightarrow\mathbb{C}$</span> is a modular form of weight k for <span class="math-container">$SL_2(\mathbb{Z})$</span> if it satisfies the following properties:</p> <p>1) <span class="math-container">$f$</span> is holomorphic on <span class="math-container">$\mathbb{H} .$</span></p> <p>2) <span class="math-container">$f|_k M=f$</span> for all <span class="math-container">$M \in SL_2(\mathbb{Z})$</span>.</p> <p>3) <span class="math-container">$f$</span> has a Fourier expansion <span class="math-container">$$f(\tau)=\sum_{n=0}^{\infty} a_n e^{2\pi i n\tau}$$</span></p> <p>I do not understand why these two definitions are equivalent.</p> <p>Thanks for the help.</p>
reuns
276,986
<p>For <span class="math-container">$n=1$</span>, <span class="math-container">$F$</span> is a function of lattices.</p> <p>For <span class="math-container">$a,b,c,d\in\Bbb{Z},ad-bc=1$</span> <span class="math-container">$$f(z)=F(\Bbb{Z}+z \Bbb{Z}) = F((az+b)\Bbb{Z}+(cz+d) \Bbb{Z})$$</span> <span class="math-container">$$=(cz+d)^{-k} F(\frac{az+b}{cz+d}\Bbb{Z}+ \Bbb{Z})=(cz+d)^{-k}f(\frac{az+b}{cz+d})$$</span></p>
152,880
<p>I know that for every $n\in\mathbb{N}$, $n\ge 1$, there exists $p(x)\in\mathbb{F}_p[x]$ s.t. $\deg p(x)=n$ and $p(x)$ is irreducible over $\mathbb{F}_p$.</p> <blockquote> <p>I am interested in counting how many such $p(x)$ there exist (that is, given $n\in\mathbb{N}$, $n\ge 1$, how many irreducible polynomials of degree $n$ exist over $\mathbb{F}_p$).</p> </blockquote> <p>I don't have a counting strategy and I don't expect a closed formula, but maybe we can find something like "there exist $X$ irreducible polynomials of degree $n$ where $X$ is the number of...".</p> <p>What are your thoughts ?</p>
Eugene
31,288
<p>With regards to your question, <a href="http://arxiv.org/pdf/1001.0409v6.pdf">this paper</a> has a formula for counting the number of monic irreducibles over a finite field.</p>
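For small cases one can brute-force the count and compare it with the standard Möbius formula $N_p(n)=\frac1n\sum_{d\mid n}\mu(d)\,p^{n/d}$ (this check is my addition; the formula itself is the classical one that counting papers like the linked one derive):

```python
from itertools import product

def mobius(n):
    # Moebius function by trial division
    result, d, m = 1, 2, n
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:
                return 0
            result = -result
        d += 1
    if m > 1:
        result = -result
    return result

def formula(p, n):
    # (1/n) * sum over d | n of mu(d) * p^(n/d)
    return sum(mobius(d) * p ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def polymul(a, b, p):
    # multiply coefficient tuples (low degree first) over F_p
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return tuple(out)

def brute_force(p, n):
    # count monic irreducibles of degree n by listing all monic reducibles
    monic = lambda d: [tuple(c) + (1,) for c in product(range(p), repeat=d)]
    reducible = set()
    for a in range(1, n):
        for f in monic(a):
            for g in monic(n - a):
                reducible.add(polymul(f, g, p))
    return p ** n - len(reducible)

for p in (2, 3):
    for n in (1, 2, 3):
        assert brute_force(p, n) == formula(p, n)
print("brute force matches the Moebius-formula count")
```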
928,772
<p>Let <span class="math-container">$G$</span> be a group. If <span class="math-container">$(ab)^n=a^nb^n$</span> <span class="math-container">$\forall a,b \in G$</span> and <span class="math-container">$(|G|, n(n-1))=1$</span> then prove that <span class="math-container">$G$</span> is abelian.</p> <hr /> <p>What I have proven is that:</p> <blockquote> <p>If <span class="math-container">$G$</span> is a group such that <span class="math-container">$(ab)^i = a^ib^i$</span> for three consecutive integers <span class="math-container">$i$</span> for all <span class="math-container">$a, b\in G$</span>, then <span class="math-container">$G$</span> is abelian.</p> </blockquote> <p>A proof of this can be found in the answers to <a href="https://math.stackexchange.com/q/40996">this</a> old question.</p>
James
751
<p>We can assume that $n&gt;2$. Since $(ab)^n = a^nb^n$, for <em>all</em> $a,b\in G$, we can write $(ab)^{n+1}$ in two different ways: $$(ab)^{n+1} = a(ba)^nb = ab^na^nb,$$ and $$(ab)^{n+1} = ab(ab)^n = aba^nb^n.$$ Hence, $$ab^na^nb = aba^nb^n.$$ Cancel $ab$ on the left and $b$ on the right to obtain $$b^{n-1}a^n = a^nb^{n-1}.$$ Note that this is true for <em>all</em> $a,b\in G$. (This says that the $n$th power of any element of $G$ commutes with the $(n-1)$st power of any element of $G$.)</p> <p>Now let $x,y\in G$ be arbitrary; we want to show that $x$ and $y$ commute. Since the order of $G$ is prime to $n$, the $n$th power map $t\mapsto t^n$ on $G$ is bijective, so there exists $a\in G$ such that $x = a^n$. Since the order of $G$ is prime to $n-1$, there exists $b\in G$ for which $y=b^{n-1}$. Therefore, $xy = a^nb^{n-1} = b^{n-1}a^n = yx$. Because $x$ and $y$ were arbitrary, it follows that $G$ is commutative.</p> <p>ADDED:</p> <p><strong>Lemma.</strong> <em>Let $G$ be a group of finite order $m$, and let $k$ be a positive integer such that $(k,m)=1$. Then the $k$th power map $x\mapsto x^k$ on $G$ is bijective.</em></p> <p><em>Proof.</em> Since $G$ is finite, it suffices to show that the map $x\mapsto x^k$ is surjective. To this end, let $g\in G$; we show that $g$ is the $k$th power of some element in $G$. Since the order of $g$ divides the order of the group $G$, it follows that $(\left| g\right|, k) = 1$. Therefore, $\langle g\rangle = \langle g^k\rangle$. Hence, there is an integer $r$ for which $g = (g^k)^r = g^{kr} = (g^r)^k$. This completes the proof.</p>
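The lemma at the end can be checked mechanically on cyclic groups (an illustrative snippet I have added; it verifies that the $k$th power map on $\mathbb{Z}_m$, written additively as $x\mapsto kx$, is a bijection exactly when $\gcd(k,m)=1$):

```python
from math import gcd

def power_map_is_bijective(m, k):
    # additive model: x -> x^k in a cyclic group of order m is x -> k*x in Z_m
    return len({(k * x) % m for x in range(m)}) == m

for m in range(1, 30):
    for k in range(1, 30):
        assert power_map_is_bijective(m, k) == (gcd(k, m) == 1)
print("the kth power map on a cyclic group of order m is bijective iff gcd(k, m) = 1")
```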
928,772
<p>Let <span class="math-container">$G$</span> be a group. If <span class="math-container">$(ab)^n=a^nb^n$</span> <span class="math-container">$\forall a,b \in G$</span> and <span class="math-container">$(|G|, n(n-1))=1$</span> then prove that <span class="math-container">$G$</span> is abelian.</p> <hr /> <p>What I have proven is that:</p> <blockquote> <p>If <span class="math-container">$G$</span> is a group such that <span class="math-container">$(ab)^i = a^ib^i$</span> for three consecutive integers <span class="math-container">$i$</span> for all <span class="math-container">$a, b\in G$</span>, then <span class="math-container">$G$</span> is abelian.</p> </blockquote> <p>A proof of this can be found in the answers to <a href="https://math.stackexchange.com/q/40996">this</a> old question.</p>
Ri-Li
152,715
<p>I have another answer, though it can also easily be seen from James's answer.</p> <p>We can assume that $n&gt;2$. Since $(ab)^n = a^nb^n$ for <em>all</em> $a,b\in G$, we get $$b^{n-1}a^n = a^nb^{n-1},$$ and this is true for <em>all</em> $a,b\in G$. </p> <p>Now, since the order of $G$ is prime to $n$, the $n$th power map $t\mapsto t^n$ on $G$ is a bijection, and since the order of $G$ is prime to $n-1$, so is the $(n-1)$st power map $t\mapsto t^{n-1}$.</p> <p>So there exist $a\in G$ and $b\in G$ such that $x = a^n$ and $y=b^{n-1}$. Therefore, $xy = a^nb^{n-1} = b^{n-1}a^n = yx$. Because $x$ and $y$ were arbitrary, it follows that $G$ is commutative.</p>
4,344,571
<p>In a previous exam assignment, there is a problem that asks for a proof that <span class="math-container">$\mathbb{Z}_{24}$</span> and <span class="math-container">$\mathbb{Z}_{4}\times\mathbb{Z}_6$</span> are <strong>not</strong> isomorphic.</p> <p>We have <span class="math-container">$\mathbb{Z}_{24}$</span> is isomorphic to <span class="math-container">$\mathbb{Z}_4\times\mathbb{Z}_6$</span> if there exists a bijective function <span class="math-container">$f∶ \mathbb{Z}_{24}\rightarrow\mathbb{Z}_{4}\times\mathbb{Z}_6$</span> such that <span class="math-container">$f(a+b)=f(a)+f(b)$</span> and <span class="math-container">$f(ab)=f(a)f(b) \forall a,b\in R$</span>. Since there are exactly <span class="math-container">$24$</span> unique elements in both <span class="math-container">$\mathbb{Z}_{24}$</span> and <span class="math-container">$\mathbb{Z}_4\times\mathbb{Z}_6$</span>, we can construct a bijective function <span class="math-container">$f∶ \mathbb{Z}_{24}\rightarrow\mathbb{Z}_{4}\times\mathbb{Z}_6$</span>. Consider then <span class="math-container">$$\begin{aligned} &amp;f\left([a]_{24}+[b]_{24}\right) \\ &amp;=f\left([a+b]_{24}\right) \\ &amp;=\left([a+b]_{4},[a+b]_{6}\right) \\ &amp;=\left([a]_{4}+[b]_{4},[a]_{6}+[b]_{6}\right) \\ &amp;=\left([a]_{4},[a]_{6}\right)+\left([b]_{4},[b]_{6}\right) \\ &amp;=f\left([a]_{24}\right)+f\left([b]_{24}\right) \end{aligned}$$</span> and <span class="math-container">$$\begin{aligned} &amp;f\left([a]_{24}[b]_{24}\right) \\ &amp;=f\left([a b]_{24}\right) \\ &amp;=\left([a b]_{4},[a b]_{6}\right) \\ &amp;=\left([a]_{4}[b]_{4},[a]_{6}[b]_{6}\right) \\ &amp;=\left([a]_{4},[a]_{6}\right)\left([b]_{4},[b]_{6}\right) \\ &amp;=f\left([a]_{24}\right) f\left([b]_{24}\right). 
\end{aligned}$$</span> It therefore seems to me that this function shows that <span class="math-container">$\mathbb{Z}_{24}$</span> <em>is</em> isomorphic to <span class="math-container">$\mathbb{Z}_4\times\mathbb{Z}_6$</span>.</p> <p>Can someone tell me where I go wrong with this &quot;proof&quot;, and tell me how I can show that the rings are <em>not</em> isomorphic?</p>
Kandinskij
657,309
<p>These rings are not isomorphic because their additive groups are not isomorphic. In <span class="math-container">$(\mathbb{Z}_{24},+)$</span> there is an element whose order is <span class="math-container">$24$</span> (i.e. <span class="math-container">$[1]_{24}$</span>). In <span class="math-container">$(\mathbb{Z}_4\times\mathbb{Z}_6,+)$</span> the orders of the elements are bounded above by <span class="math-container">$12$</span>; in fact:</p> <p><span class="math-container">$$\underbrace{([n]_4,[m]_6)+...+([n]_4,[m]_6)}_{12 \text{ times}}=([12n]_4,[12m]_6)=([4\cdot3n]_4,[6\cdot 2m]_6)=([0]_4,[0]_6)$$</span></p> <p>Since a group isomorphism preserves the orders, there cannot be a group isomorphism between <span class="math-container">$(\mathbb{Z}_{24},+)$</span> and <span class="math-container">$(\mathbb{Z}_4\times\mathbb{Z}_6,+)$</span> (let alone a ring isomorphism!).</p> <p>In the same way, you can prove this more general result:</p> <p><span class="math-container">$$(\mathbb{Z}_{n}\times\mathbb{Z}_{m},+)\cong(\mathbb{Z}_{nm},+)\iff \text{gcd}(m,n)=1$$</span></p>
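The order computation is easy to verify directly (an added sketch, not part of the original answer):

```python
def additive_order(element, add, identity):
    # smallest n >= 1 with n * element = identity
    x, n = element, 1
    while x != identity:
        x = add(x, element)
        n += 1
    return n

# Orders in (Z_24, +)
orders_z24 = {additive_order(a, lambda u, v: (u + v) % 24, 0) for a in range(24)}
assert max(orders_z24) == 24      # [1] generates all of Z_24

# Orders in (Z_4 x Z_6, +)
add46 = lambda u, v: ((u[0] + v[0]) % 4, (u[1] + v[1]) % 6)
orders_prod = {additive_order((a, b), add46, (0, 0)) for a in range(4) for b in range(6)}
assert max(orders_prod) == 12     # every order divides lcm(4, 6) = 12

print(sorted(orders_z24), sorted(orders_prod))
```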
2,155,589
<p>I'm reading a computer science book that gives several functions, in the mathematical sense. There are two that are the basis of this question.</p> <p>These are equations used to convert a number represented in base ten to a bit representation using two's complement and back.</p> <p>One function makes the conversion from binary two's complement to decimal, $B2T_w$, defined as $$B2T_w(x) = -x_{w-1}2^{w-1} + \sum_{i=0}^{w-2} x_i2^i$$ where $x$ is a vector of length $w$.</p> <p>It then goes on to define another function, $T2B_w$, as the inverse of $B2T_w$, which it does not define using mathematical notation.</p> <p>I understand how to convert between a two's complement bit representation of a number and its decimal representation, but for the sake of understanding I'd like to know how to derive the inverse of $B2T_w$.</p> <p>How do I find $B2T_w^{-1}$?</p>
Mark Viola
218,419
<p>Let $f(z)=z^3-z_0^3$. Then, we can write $f(z)$ as </p> <p>$$f(z)=(z-z_0)^3+3z_0(z-z_0)^2+3z_0^2(z-z_0)$$ </p> <p>Hence, if $|z-z_0|&lt;1$, then given $\epsilon&gt;0$</p> <p>$$\begin{align} |f(z)|&amp;=|z^3-z_0^3|\\\\ &amp;=|(z-z_0)^3+3z_0(z-z_0)^2+3z_0^2(z-z_0)|\\\\ &amp;\le |z-z_0|\left(1+3|z_0|+3|z_0|^2\right)\\\\ &amp;&lt;\epsilon \end{align}$$</p> <p>whenever $|z-z_0|&lt;\delta=\min\left(1,\frac{\epsilon}{1+3|z_0|+3|z_0|^2}\right)$.</p> <p>And we are done!</p>
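The choice of $\delta$ can be stress-tested numerically over many random points (an illustrative snippet I have added, not part of the original answer):

```python
import math, random

random.seed(1)

for _ in range(2000):
    z0 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    eps = random.uniform(1e-3, 10)
    delta = min(1.0, eps / (1 + 3 * abs(z0) + 3 * abs(z0) ** 2))
    # sample a point with |z - z0| < delta (0.999 factor keeps it strictly inside)
    theta = random.uniform(0, 2 * math.pi)
    r = random.uniform(0, delta) * 0.999
    z = z0 + complex(math.cos(theta), math.sin(theta)) * r
    assert abs(z ** 3 - z0 ** 3) < eps
print("the delta from the proof bounded |z^3 - z0^3| below epsilon every time")
```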
904,041
<p>$$tx'(x'+2)=x$$ First I expanded it: $$t(x')^2+2tx'=x$$ Then differentiated both sides: $$(x')^2+2tx'x''+2tx''+x'=0$$ substituted $p=x'$ and rewrote it as a product: $$(2p't+p)(p+1)=0$$ So either $2p't+p=0$ or $p+1=0$.</p> <p>The first one gives $p=\frac{C}{\sqrt{t}}$, the second one gives $p=-1$. My question is: how do I take the antiderivative of this in order to get the answer to the actual equation?</p>
Kelenner
159,886
<p>Your differential equation is a complicated one. I give some hints on $I=]0,+\infty[$ (the study has to be done also on $]-\infty,0[$).</p> <p>A) First note that on $I$, your equation is $\displaystyle (x^{\prime}(t)+1)^2=\frac{x(t)+t}{t}$. We see that $x_0(t)=-t$ is a solution. For any solution, we must have $x(t)+t\geq 0$. </p> <p>Now suppose that on $I$, we have $x(t)+t &gt;0$ for all $t$. Then $x^{\prime}+1\not =0$ for all $t$; by Darboux's theorem, $x^{\prime}+1$ has a constant sign on $I$. With $\varepsilon\in \{\pm 1\}$, we can write $\displaystyle \frac{x^{\prime}+1}{\sqrt{x(t)+t}}=\frac{\varepsilon}{\sqrt{t}}$, and hence $2\sqrt{x(t)+t}=2\varepsilon\sqrt{t}+C$ where $C$ is a constant. Now $x(t)=C^2/4+C\varepsilon \sqrt{t}$. Of course, we have to verify that all these solutions are really solutions(and verify what conditions on $C$ and $\varepsilon$ gives $x(t)+t&gt;0$ for all $t$).</p> <p>B) To see how the hypothesis $x(t)+t&gt;0$ is important, let $t_0&gt;0$. Put $x(t)=-t$ for $0&lt;t\leq t_0$, and $x(t)=t_0-2\sqrt{t_0t}$ if $t&gt;t_0$. I leave to you the verification that $x(t)$ is a solution, (first show that the derivative at $t_0$ exists and is equal to $-1$). But this solution is not one of the form $C^2/4+C\varepsilon \sqrt{t}$ we have found, as they cannot agree on $]0,t_0[$. </p>
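One can verify numerically that the family $x(t)=C^2/4+C\varepsilon\sqrt{t}$ found above, as well as the singular solution $x(t)=-t$, satisfies $t\,x'(x'+2)=x$ (an added check, not part of the original answer):

```python
import math, random

random.seed(2)

def check(x, xp, t):
    # residual of t*x'(x'+2) - x at the point t
    return abs(t * xp(t) * (xp(t) + 2) - x(t)) < 1e-9

for _ in range(100):
    C = random.uniform(-3, 3)
    s = random.choice([-1, 1])          # the epsilon = +/- 1 in the answer
    x = lambda t, C=C, s=s: C ** 2 / 4 + C * s * math.sqrt(t)
    xp = lambda t, C=C, s=s: C * s / (2 * math.sqrt(t))
    t = random.uniform(0.1, 10)
    assert check(x, xp, t)

# the singular solution x(t) = -t also works
assert check(lambda t: -t, lambda t: -1.0, 2.0)
print("all members of the family satisfy t x'(x'+2) = x")
```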
1,122,926
<p>Question: The product of monotone sequences is monotone, true or false?</p> <p>Uncompleted solution: There are four cases, from each of the two monotone sequences being increasing or decreasing.</p> <p>CASE I: Suppose we have two monotonically decreasing sequences, say ${\{a_n}\}$ and ${\{b_n}\}$. Then $a_{n+1}\leq a_n$ and $b_{n+1}\leq b_n$; if $b_n\geq 0$ and $b_{n+1}\geq 0$ then $a_{n+1}b_{n+1}\leq a_{n}b_{n+1}\leq a_{n}b_{n}$, but the right-hand inequality, i.e., $a_{n}b_{n+1}\leq a_{n}b_{n}$, requires $a_{n}\geq 0$ (since $b_{n+1}\leq b_n$ has already been supposed), and $a_{n}\geq 0$ has not been supposed. So does this mean that two monotonically decreasing sequences with $a_{n}\leq 0$ give a counterexample to "the product of monotone sequences is monotone"?</p> <p>Under which circumstances is the product of monotone sequences monotone, even if it may not be true in all cases? And is there a short (general) proof without the need to evaluate each of the sub-cases of the 4 cases?</p> <p>Thank you. </p>
paoloff
211,137
<p>A simple counterexample to "the product of two monotone sequences is a monotone sequence" is the product of the monotone sequence $\{...,-3,-2,-1,0,1,2,3,...\}$ (which you can picture as the sequence of points on the $y$-axis of the graph of $f(x) =x$) with itself. The product of these sequences is again a sequence, viewed as a series of points on the $y$-axis of $f(x)=x^2$, and is increasing for $n&gt;0$ but decreasing for $n&lt;0$, as can be easily checked. So this shows that even when both sequences are increasing, their product need not be monotone. However, one can easily check that if the sequences are both increasing or both decreasing, and both are nonnegative, their product is monotone.</p>
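Both the counterexample and the safe special case of nonnegative sequences are easy to check with a few lines (an added illustration, not part of the original answer):

```python
def is_monotone(seq):
    return all(a <= b for a, b in zip(seq, seq[1:])) or \
           all(a >= b for a, b in zip(seq, seq[1:]))

a = list(range(-3, 4))            # increasing: -3, -2, -1, 0, 1, 2, 3
prod = [x * y for x, y in zip(a, a)]
assert is_monotone(a)
assert not is_monotone(prod)      # 9, 4, 1, 0, 1, 4, 9 is not monotone

# both increasing and nonnegative => product increasing
b = [0, 1, 1, 2, 5]
c = [2, 2, 3, 7, 8]
assert is_monotone([x * y for x, y in zip(b, c)])
print("counterexample confirmed")
```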
197,603
<p>I'm a newcomer to topology, so many things are chaotic in my mind, and I hope you can help me. In the order topology, a basis element has the structure $(a,b)$, right? This is no problem when considering a topology like $\mathbb{R}$, but what if the number of elements between $a$ and $b$ is finite, so that we can write $$(a,b) = [a_1, b_1],$$ which is not open, right? Can anyone explain this to me? Thanks</p>
Neal
20,569
<p>In the case of a finite ordered set $X$, the order topology is discrete. In particular, this implies that for any $a,b\in X$, $[a,b]$ is open (as a union of open sets). It is closed, yes, but it is also open. (Perhaps your point of difficulty is thinking that closed sets cannot also be open - this is not true, since in particular you have observed a counterexample!)</p> <p>Hint for proving that the order topology on a finite set is discrete: How would you show that singletons are open?</p>
2,038,189
<p>(Note: I didn't learn how to solve equations the conventional way; instead I was just taught to "move numbers from side to side", inverting the sign or the operation accordingly. I am learning the conventional way though because I think it makes the process of solving equations clearer. That being said, I apologize if this question is too "basic".)</p> <p>I know that when I have an equality such as $5 = \frac{x}{2}$ I have to multiply both sides by 2 to get the answer.</p> <p>However, what is the process behind $5 = \frac{2}{x} \Leftrightarrow \frac{2}{5} = x$ ?</p> <p>I know that when I have an equation which the variable is in the denominator I have to move the numerator to the other side and make it the numerator and the number that's already there the denominator, but I don't really know why that is or how that's done "mathematically".</p> <p>I have a theory:</p> <ul> <li>Invert both sides and then multiply both sides by 2;</li> </ul> <p>Is this correct?</p>
Eff
112,061
<p>We have that $$5 = \frac2x.$$ Now multiply by $x$ on each side, and get $$5x=2. $$ Next, divide by $5$ on each side, and get $$x=\frac25. $$</p>
2,038,189
<p>(Note: I didn't learn how to solve equations the conventional way; instead I was just taught to "move numbers from side to side", inverting the sign or the operation accordingly. I am learning the conventional way though because I think it makes the process of solving equations clearer. That being said, I apologize if this question is too "basic".)</p> <p>I know that when I have an equality such as $5 = \frac{x}{2}$ I have to multiply both sides by 2 to get the answer.</p> <p>However, what is the process behind $5 = \frac{2}{x} \Leftrightarrow \frac{2}{5} = x$ ?</p> <p>I know that when I have an equation which the variable is in the denominator I have to move the numerator to the other side and make it the numerator and the number that's already there the denominator, but I don't really know why that is or how that's done "mathematically".</p> <p>I have a theory:</p> <ul> <li>Invert both sides and then multiply both sides by 2;</li> </ul> <p>Is this correct?</p>
kub0x
309,863
<p>Other way for solving it is by the reciprocal or inverse process:</p> <p>$5 = \frac{2}{x} \Rightarrow 5^{-1} = (\frac{2}{x})^{-1}$</p> <p>$\frac{1}{5} = \frac{x}{2} \Rightarrow \frac{2}{5} = x$</p>
229,966
<p>I want to put a title on the plot legends I am using. I found a solution <a href="https://mathematica.stackexchange.com/questions/201353/title-for-plotlegends">here</a> which says to use <code>PlotLegends -&gt; SwatchLegend[{0, 3.3, 6.7, 10, 13, 17, 20}, LegendLabel -&gt; &quot;mu&quot;]</code>. But I also want to place the legend where I want, e.g. using <code>PlotLegends -&gt; Placed[Range[1, 6, 1], {0.2, 0.3}]</code>.</p> <p>How can I do both?</p> <p>Edit: As the answer was given, the wrapping works, but there is still a problem, i.e. my plot looks like this: <a href="https://i.stack.imgur.com/Mad6w.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Mad6w.png" alt="enter image description here" /></a></p> <p>Here I want to title the legend &quot;H&quot;. But when I do the SwatchLegend thing it becomes like this: <a href="https://i.stack.imgur.com/In3YR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/In3YR.png" alt="enter image description here" /></a></p> <p>I want to keep the markers and colors the same. What should I do?</p>
tad
70,428
<p>You can wrap Placed around the legend. Here's an example modified from the SwatchLegend documentation:</p> <pre><code>Plot[{Sin[x], Cos[x]}, {x, 0, 5}, PlotLegends -&gt; Placed[SwatchLegend[{&quot;first&quot;, &quot;second&quot;}, LegendLabel -&gt; &quot;legend title&quot;], {0.2, 0.3}]] </code></pre> <p><a href="https://i.stack.imgur.com/4uGEQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4uGEQ.png" alt="enter image description here" /></a></p>
4,157,841
<p>Q is to prove that the integer just above <span class="math-container">$(\sqrt{3} + 1)^{2n}$</span> is divisible by <span class="math-container">$2^{n+1}$</span> for all natural numbers <span class="math-container">$n$</span>.</p> <p>In the question, "integer just above" means the following: for example, the integer just above 7.3 is 8. The question then wants you to prove that this integer is divisible by <span class="math-container">$2^{n+1}$</span>.</p> <p><span class="math-container">$$(\sqrt{3}+1)^{2 n}=(4+2 \sqrt{3})^{n}=2^{n}(2+\sqrt{3})^{n}=2^{n}\left[\binom{n}{0}2^{n}+\binom{n}{1} 2^{n-1} \sqrt{3}+\binom{n}{2} 2^{n-2} \sqrt{3}^{2}+\cdots\right]$$</span> <span class="math-container">$$(\sqrt{3}-1)^{2 n}=(4-2 \sqrt{3})^{n}=2^{n}(2-\sqrt{3})^{n}=2^{n}\left[\binom{n}{0} 2^{n}-\binom{n}{1} 2^{n-1} \sqrt{3}+\binom{n}{2} 2^{n-2} \sqrt{3}^{2}-\cdots\right]$$</span> Writing <span class="math-container">$(\sqrt{3}+1)^{2n}=I+f$</span> with <span class="math-container">$I$</span> an integer and <span class="math-container">$0&lt;f&lt;1$</span>, and <span class="math-container">$f'=(\sqrt{3}-1)^{2n}\in(0,1)$</span>, adding the two expansions cancels the <span class="math-container">$\sqrt{3}$</span> terms, so <span class="math-container">$$I+f+f'=2^{n}\left[2\,(\text{Integer})\right]=2^{n+1}\cdot \text{Integer},$$</span> and since the left side is an integer with <span class="math-container">$0&lt;f+f'&lt;2$</span>, we get <span class="math-container">$f+f'=1$</span> and <span class="math-container">$$I+1=2^{n+1} \cdot \text{Integer}.$$</span></p> <p>This is the way the question is solved in the textbook image.</p> <p>My question about this method is that at the end we somehow got exactly <span class="math-container">$2^{n+1}$</span>. If the question had asked about some other value, like <span class="math-container">$3^{n+3}$</span> or something else, this approach would not have worked.</p> <p>What is another method to prove this, or can you help me justify that the above method can be used for all questions of this kind?</p>
Mark Bennet
2,906
<p>Note that <span class="math-container">$0\lt \sqrt 3 -1 \lt 1$</span></p> <p>Now look at the numbers <span class="math-container">$a=1+\sqrt 3, b=1-\sqrt 3$</span> with <span class="math-container">$a+b=2, ab=-2$</span> which are roots of the quadratic <span class="math-container">$x^2-2x-2=0$</span></p> <p>Then with <span class="math-container">$v_n=a^n+b^n$</span> we have <span class="math-container">$v_{n+2}-2v_{n+1}-2v_n=0$</span> (spot the coefficients) or <span class="math-container">$v_{n+2}=2(v_{n+1}+v_n)$</span> now <span class="math-container">$|b^n|\lt 1$</span> and indeed <span class="math-container">$0\lt b^{2n}\lt 1$</span> so the difference <span class="math-container">$v_{2m}-a^{2m}=b^{2m}$</span> is small and positive.</p> <p>It will be found that <span class="math-container">$v_n$</span> is an integer close to <span class="math-container">$a^n$</span> for all <span class="math-container">$n$</span>, and the sign of the difference for even indices is right, and you can run an induction to complete the proof.</p> <p>If you understand this you will be able to do the same trick with <span class="math-container">$A=a^2$</span> and <span class="math-container">$B=b^2$</span> to get a rather simpler induction.</p> <p>For these kinds of questions with powers of an irrational number &quot;miraculously close&quot; to integers, there is very often a recurrence to be found lurking in the background.</p> <hr /> <p>If <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are the roots of <span class="math-container">$x^2-px+q$</span>=0 and we put <span class="math-container">$u_n=Ca^n+Db^n$</span> we can reason as follows:</p> <p><span class="math-container">$$a^2-pa+q=0$$</span> because <span class="math-container">$a$</span> is a root. 
Multiply through by <span class="math-container">$a^n$</span> to give <span class="math-container">$$a^{n+2}-pa^{n+1}+qa^n=0$$</span></p> <p>Now multiply through by the constant <span class="math-container">$C$</span> <span class="math-container">$$Ca^{n+2}-Cpa^{n+1}+Cqa^n=0$$</span></p> <p>Similarly with the other root <span class="math-container">$b$</span> and the constant <span class="math-container">$D$</span> <span class="math-container">$$Db^{n+2}-Dpb^{n+1}+Dqb^n=0 $$</span></p> <p>Now add these last equations to obtain <span class="math-container">$$Ca^{n+2}+Db^{n+2}-p\left(Ca^{n+1}+Db^{n+1}\right)+q\left(Ca^n+Db^n\right) = 0 = u_{n+2}-pu_{n+1}+qu_n$$</span></p> <p>And this works for any constants <span class="math-container">$C, D$</span> and hence for <span class="math-container">$C=D=1$</span>.</p> <hr /> <p>We can also argue that <span class="math-container">$p=a+b$</span> and <span class="math-container">$q=ab$</span> so that</p> <p><span class="math-container">$$u_{n+2}-pu_{n+1}+qu_n=$$</span><span class="math-container">$$=Ca^{n+2}+Db^{n+2}-(a+b)Ca^{n+1}-(a+b)Db^{n+1}+abCa^n+abDb^n=0$$</span> because the terms all cancel.</p>
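Using the recurrence $v_{n+2}=2(v_{n+1}+v_n)$ with $v_0=v_1=2$, integer arithmetic confirms both that $v_{2n}$ is the integer just above $(\sqrt3+1)^{2n}$ and that it is divisible by $2^{n+1}$ (an added check, not part of the original answer):

```python
from decimal import Decimal, getcontext

getcontext().prec = 80
sqrt3 = Decimal(3).sqrt()

# v_n = (1+sqrt(3))^n + (1-sqrt(3))^n via the integer recurrence
v = [2, 2]
for _ in range(60):
    v.append(2 * (v[-1] + v[-2]))

for n in range(1, 25):
    # v_{2n} is the integer just above (sqrt(3)+1)^{2n} ...
    power = (sqrt3 + 1) ** (2 * n)
    assert Decimal(v[2 * n] - 1) < power < Decimal(v[2 * n])
    # ... and it is divisible by 2^(n+1)
    assert v[2 * n] % 2 ** (n + 1) == 0
print("v_{2n} = ceil((sqrt(3)+1)^{2n}) is divisible by 2^(n+1) for n = 1..24")
```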
126,120
<p>Python has generators, which save memory; is there a technique for generating training-set examples in memory "on the fly"?</p> <p>For example purposes, I constructed here a regressor for blur:</p> <pre><code>randomMask[img_] := Module[{t, h, g, d = ImageDimensions[img]},
  t = Table[{PointSize@RandomReal[{0, .1}],
     RandomChoice[{Point, Rectangle[#, # + RandomReal[{-200, 200}, {2}]] &amp;}]@
      RandomPoint[Rectangle[{0, 0}, d]]},
    {RandomChoice[{0, 1, 2, 3, 4, 8, 14, 20, 50, 200}]}];
  g = Graphics[t, PlotRange -&gt; Transpose[{{0, 0}, d}], ImageSize -&gt; d];
  {g, Area@DiscretizeGraphics@g/Times @@ d}]

makeExample[img_] := Module[{g, v},
  {g, v} = randomMask[img];
  ImageCompose[img, SetAlphaChannel[Blur[img, 15], ColorNegate@g]] -&gt; v
  ];

imgs = ConformImages[ExampleData /@ ExampleData["TestImage"], {100, 100}];

(* this is a large set that I don't want to precompute !!! *)
train = Table[makeExample@RandomChoice[imgs], {3000}]
test = Table[makeExample@RandomChoice[imgs], {500}];

convnet = NetChain[{
   ConvolutionLayer[20, {5, 5}],
   ElementwiseLayer[Ramp],
   PoolingLayer[{2, 2}, {2, 2}],
   ConvolutionLayer[50, {5, 5}],
   ElementwiseLayer[Ramp],
   PoolingLayer[{2, 2}, {2, 2}],
   FlattenLayer[],
   DotPlusLayer[500],
   ElementwiseLayer[Ramp],
   DotPlusLayer[50],
   ElementwiseLayer[Ramp],
   DotPlusLayer[1]
   },
  "Input" -&gt; NetEncoder[{"Image", {100, 100}}],
  "Output" -&gt; NetDecoder["Scalar"]
  ]

trainedConvnet = NetTrain[convnet, train, TargetDevice -&gt; "GPU"]

output = trainedConvnet /@ Keys[test];
target = test // Values;
meanSquareLoss = Mean@Flatten[(#Output - #Target)^2, Infinity] &amp;;
data = &lt;|"Output" -&gt; {{output}}, "Target" -&gt; {{target}}|&gt;;
N@meanSquareLoss@data
</code></pre>
Alexey Golyshev
23,402
<p>You can do out-of-core classification with the new function <code>File</code> (<a href="http://reference.wolfram.com/language/ref/File.html" rel="noreferrer">link1</a>, <a href="https://www.wolfram.com/language/11/neural-networks/out-of-core-image-classification.html" rel="noreferrer">link2</a>).</p> <p>I will simplify your code. For example, we have directory 'train' with 100 images.</p> <pre><code>CreateDirectory["train"]; Do[ Export[ "train\\" &lt;&gt; ToString[i] &lt;&gt; ".jpg", RandomImage[1, {100, 100}, ColorSpace -&gt; "RGB"] ], {i, 100} ] </code></pre> <p>Let's compare the calculation speed of out-of-core <code>File</code> and classic <code>Import</code>.</p> <pre> SetDirectory["train"]; X1 = File /@ FileNames[]; X2 = Import /@ FileNames[]; Y = RandomInteger[1, 100]; </pre> <p>Convolutional neural network:</p> <pre> convnet = NetChain[ { ConvolutionLayer[20, {5, 5}], ElementwiseLayer[Ramp], PoolingLayer[{2, 2}, {2, 2}], ConvolutionLayer[50, {5, 5}], ElementwiseLayer[Ramp], PoolingLayer[{2, 2}, {2, 2}], FlattenLayer[], DotPlusLayer[500], ElementwiseLayer[Ramp], DotPlusLayer[50], ElementwiseLayer[Ramp], DotPlusLayer[1] }, "Input" -> NetEncoder[{"Image", {100, 100}}], "Output" -> NetDecoder["Scalar"] ]; </pre> <pre> SeedRandom[123]; AbsoluteTiming[ net1 = NetTrain[convnet, X1 -> Y, BatchSize -> 16, MaxTrainingRounds -> 1]; ] </pre> <blockquote> <p>{5.79041, Null}</p> </blockquote> <pre> SeedRandom[123]; AbsoluteTiming[ net2 = NetTrain[convnet, X2 -> Y, BatchSize -> 16, MaxTrainingRounds -> 1]; ] </pre> <blockquote> <p>{5.54343, Null}</p> </blockquote> <p>As we can see, the difference in the speed of calculations is very small.</p> <p>But of course this is not an online augmentation of the dataset.</p> <p>'In-the-storage' augmentation with function <code>ImageFileApply</code>:</p> <pre> augmentingFunctions = {# &, 1 - # &}; numberOfRounds = 3; SeedRandom[123]; Do[ Xaugm = File /@ (ImageFileApply[RandomChoice[augmentingFunctions], #] & /@ X1); net1 = 
NetTrain[convnet, Xaugm -> Y, BatchSize -> 16, MaxTrainingRounds -> 1]; DeleteFile@FileNames["* at *.jpeg"], {numberOfRounds} ] </pre>
22,753
<p>I've learned the process of orthogonal diagonalisation in an algebra course I'm taking...but I just realised I have no idea what the point of it is.</p> <p>The definition is basically this: "A matrix <span class="math-container">$A$</span> is orthogonally diagonalisable if there exists a matrix <span class="math-container">$P$</span> which is orthogonal and <span class="math-container">$D = P^tAP$</span> where <span class="math-container">$D$</span> is diagonal". I don't understand the significance of this though...what is special/important about this relationship?</p>
Arturo Magidin
742
<p>When you have a matrix $A$ that is diagonalisable, the matrix $U$ such that $U^{-1}AU=D$ is diagonal is the matrix whose columns are a basis of eigenvectors of $A$. Having a basis of eigenvectors makes understanding $A$ much easier, and makes computations with $A$ easy as well.</p> <p>What the basis of eigenvectors does not do, however, is preserve the usual notions of size of vectors, distance between vectors, or angles between vectors, that come from the inner/dot product. The standard basis for $\mathbb{R}^n$ is particularly nice because it is not only a basis, it is an <em>orthonormal</em> basis. Orthonormal bases make all sorts of things easy to compute, such as how to express vectors as linear combinations of them, least squares, the size of vectors, angles between vectors, etc. For example, if you think about it, it's very easy to express a vector as a linear combination of the vectors in the standard basis; it's usually not so easy to do it in terms of some other bases. With orthonormal bases, all you need to do is take the dot product with the basis elements to get the coefficients of the linear combination. <em>Very</em> easy.</p> <p>So now you have two notions that make life simpler: a basis of eigenvectors makes your life easy relative to the linear transformation/basis; an orthonormal basis makes your life easy relative to the vector space itself. Unfortunately, usually you have to choose one or the other, and you cannot have both.</p> <p>When a matrix $P$ is orthogonal, then its columns are not just a basis for $\mathbb{R}^n$, they are an <strong>orthonormal</strong> basis. The fact that $A$ is orthogonally diagonalisable means that, for $A$, <em>you don't have to choose!</em> The columns of $P$ are <em>both</em> an orthonormal basis, <em>and</em> a basis of eigenvectors for $A$. So you get the best of both worlds. And being an orthonormal basis, you still keep a good handle on the notions of distance, angles between vectors, and so on.</p>
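A small worked instance may help. The following Python sketch is a hand-built $2\times 2$ example (not from the answer): a symmetric matrix whose eigenvector matrix $P$ is orthogonal, so $P^TAP$ is diagonal while $P^TP$ is the identity.

```python
import math

# Hand-worked 2x2 illustration: the symmetric matrix A has eigenvalues
# 1 and 3 with orthonormal eigenvectors (1,-1)/sqrt(2) and (1,1)/sqrt(2);
# putting them as the columns of P gives an orthogonal P with P^T A P diagonal.
s = 1 / math.sqrt(2)
A = [[2.0, 1.0], [1.0, 2.0]]
P = [[s, s], [-s, s]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

D = matmul(transpose(P), matmul(A, P))   # expected: [[1, 0], [0, 3]]
PtP = matmul(transpose(P), P)            # expected: the 2x2 identity
```

So the same matrix $P$ simultaneously diagonalises $A$ and preserves the dot-product geometry.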
1,452,943
<p>I'm working on a problem where I want to use the continuity of $f'$ to assert that $f'(x)$ cannot be zero (is "bounded away from zero"?) near $x = 0$. We know that $(f'(0))^2 &gt;3$.</p> <p>So, I think that what I really want to ask is this: if $f'$ is continuous, must $f'$ squared also be continuous? </p> <p>Can I use the epsilon-delta definition?</p> <p>Since $f'$ is continuous, for every $\epsilon&gt;0$, there exists $\delta&gt;0$ such that:</p> <p>$$|x-0|&lt;\delta \implies|f'(x)-f'(0)|&lt;\epsilon$$ $$ \implies -\epsilon &lt; f'(x) - f'(0) &lt; \epsilon $$ $$ \implies f'(0)-\epsilon &lt; f'(x) &lt; f'(0)+\epsilon $$</p> <p>From here I'm not sure how to use the fact that $(f'(0))^2&gt;3$.</p> <p>Thanks,</p>
davidlowryduda
9,754
<p>Very generally, if $f$ and $g$ are both continuous functions, then $f \circ g$ is a continuous function. (If you haven't proved this, then you should). Here, you are composing the square-function with the derivative of $f$.</p>
275,775
<p>For the FrameLabel, I have:</p> <pre><code>Style[&quot;\[NumberSign] Humans per city&quot;, FontFamily -&gt; &quot;Latin Modern Math&quot;] </code></pre> <p>How can I make only &quot;Humans&quot; in the label to be italic?</p>
Szabolcs
12
<p>Select &quot;Humans&quot; and press Ctrl-I (or Command-I on macOS) to format it in italics. This is by far the simplest way.</p> <p>Note that Latin Modern Math is intended only for math, not text, and does not have an italics version. Your OS will likely fake the italics style by simply slanting letters. Install Latin Modern Roman instead to get proper italics.</p> <p>Illustration:</p> <p><a href="https://i.stack.imgur.com/WM4Vk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WM4Vk.png" alt="enter image description here" /></a></p>
19,495
<p>I was told that one of the most efficient tools (e.g. in terms of computations relevant to physics, but also in terms of guessing heuristically mathematical facts) that physicists use is the so called &quot;Feynman path integral&quot;, which, as far as I understand, means &quot;integrating&quot; a functional (action) on some infinite-dimentional space of configurations (fields) of a system.</p> <p>Unfortunately, it seems that, except for some few instances like Gaussian-type integrals, the quotation marks cannot be eliminated in the term &quot;integration&quot;, cause a mathematically sound integration theory on infinite-dimensional spaces — I was told — has not been invented yet.</p> <p>I would like to know the state of the art of the attempts to make this &quot;path integral&quot; into a well-defined mathematical entity.</p> <p>Difficulties of analytical nature are certainly present, but I read somewhere that perhaps the true nature of path integral would be hidden in some combinatorial or higher-categorical structures which are not yet understood...</p> <p>Edit: I should be more precise about the kind of answer that I expected to this question. I was not asking about reference for books/articles in which the path integral is treated at length and in detail. I'd have just liked to have some &quot;fresh&quot;, (relatively) concise and not too-specialistic account of the situation; something like: &quot;Essentially the problems are due to this and this, and there have been approaches X, Y, Z that focus on A, B, C; some progress have been made in ... but problems remain in ...&quot;.</p>
Kevin H. Lin
83
<p>Recently I have been reading Kevin Costello's book (draft) <a href="http://www.math.northwestern.edu/%7Ecostello/renormalization" rel="nofollow noreferrer">Renormalization of Quantum Field Theories</a>, which claims to work out some foundations of perturbative quantum field theory following the &quot;Wilsonian philosophy&quot;. I don't understand this stuff well enough to really say much, and hopefully someone else can say more; but I think the basic idea is to, instead of doing integrals over infinite dimensional spaces such as <span class="math-container">$C^\infty(M)$</span>, do integrals over finite dimensional &quot;approximations&quot; of these infinite dimensional spaces, for example, the space of functions <span class="math-container">$C^\infty(M)_{\leq \Lambda}$</span> of energy <span class="math-container">$\leq \Lambda$</span>, where <span class="math-container">$\Lambda$</span> is some constant. I think energy <span class="math-container">$\leq \Lambda$</span> means you take the Laplacian of <span class="math-container">$M$</span> (corresponding to a Riemannian metric, which is probably fixed from the beginning), and you take eigenfunctions of the Laplacian corresponding to eigenvalues <span class="math-container">$\leq \Lambda$</span>. (Someone should correct me if I'm wrong.) Then, a low energy theory should be related in an appropriate way to (indeed it should be determined by) the higher energy theories.</p> <p>I might be wrong, but my impression is that it is &quot;impossible&quot; to make a rigorous definition of the path integral: There are various problems with defining the appropriate measures on infinite dimensional spaces. Therefore, if we wish to make path integrals &quot;rigorous&quot;, we must find some other means to define it, or find some alternative &quot;roundabout&quot; solution, such as the Wilsonian idea. 
But again I am not an expert on this; these are just my (very) naive impressions.</p> <p>There is also the Atiyah-Segal axiomatization of (topological) quantum field theory. Perhaps this can also be viewed as a &quot;roundabout&quot; solution to &quot;defining&quot; the path integral: It avoids having to define path integrals, and instead axiomatizes the properties that should hold <em>if</em> the path integral could be rigorously defined. Check out Atiyah's <a href="https://doi.org/10.1007/BF02698547" rel="nofollow noreferrer">original paper</a> and Segal's <a href="https://web.archive.org/web/20150430034051/http://www.cgtp.duke.edu:80/ITP99/segal/" rel="nofollow noreferrer">notes</a>. One way that higher categorical stuff arises is via the &quot;locality&quot; property/assumption of (T)QFTs. For more on this, see for example Jacob Lurie's paper on TFTs (available on <a href="http://www.math.harvard.edu/%7Elurie/" rel="nofollow noreferrer">his webpage</a>), and the references therein.</p>
3,570,688
<p>For example, if a ball can be any of 3 colors, then the number of configurations (with repetition of colors) of 2 balls is <span class="math-container">$(3+2-1)C_{2} = 4C_{2} = 6$</span> Why?</p>
Hanno
316,749
<p>An alternative solution relies on the Cauchy–Bunyakovsky–Schwarz inequality (CBS).</p> <p>The given constraint <span class="math-container">$\,ab+bc+ca+abc=4\,$</span> can be written as <span class="math-container">$(a+1)(b+1)(c+1) = 2+(a+1)+(b+1)+(c+1)$</span>, which in turn is equivalent to <span class="math-container">$$\sum_{\text{cyc}}{1\over a+2} \:=\:1\tag{1}\,.$$</span></p> <p>Thanks to (CBS) and benefitting from <span class="math-container">$(1)$</span> we have <span class="math-container">$$\begin{align*} (3+5+7)^2 &amp; \:=\: \left(3\sqrt{a+2}\cdot\frac1{\sqrt{a+2}}+5\sqrt{b+2}\cdot\frac1{\sqrt{b+2}}+7\sqrt{c+2}\cdot\frac1{\sqrt{c+2}}\right)^2 \\[1ex] &amp; \:\leqslant\: 9(a+2)+25(b+2)+49(c+2) \\[1.5ex] \iff\quad 59 &amp; \:\leqslant\:9a+25b+49c \end{align*}$$</span> Recall that (CBS) gives equality only if one argument is a scalar multiple of the other. This leads to the solution given in the OP.</p>
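As a numeric sanity check of the conclusion (a brute-force Python sketch; the sampling scheme is mine, not part of the answer), one can sweep points on the constraint surface by solving the constraint, which is linear in $c$:

```python
import random

# Brute-force check of 9a + 25b + 49c >= 59 on the surface
# ab + bc + ca + abc = 4 with a, b, c > 0: sample a, b with ab < 4 and
# solve the constraint (linear in c) as c = (4 - ab)/(a + b + ab).
random.seed(0)
min_value = float("inf")
for _ in range(20000):
    a = random.uniform(0.01, 3.9)
    b = random.uniform(0.01, 3.9 / a)
    c = (4 - a * b) / (a + b + a * b)
    min_value = min(min_value, 9 * a + 25 * b + 49 * c)

# The CBS equality case gives (a, b, c) = (3, 1, 1/7), where the value is 59.
value_at_equality = 9 * 3 + 25 * 1 + 49 * (1 / 7)
```

Every sampled point stays at or above $59$, with equality approached near $(3,1,\tfrac17)$.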
1,876,732
<p>What is $$\int_{S}(x+y+z)dS,$$ where $S$ is the region $0\leq x,y,z\leq 1$ and $x+y+z\leq 2$?</p> <p>We can change the region to $0\leq x,y,z\leq 1$ and $x+y+z\geq 2$, because the total of the two integrals is just $$\int_0^1\int_0^1\int_0^1(x+y+z)dxdydz=3\int_0^1xdxdydz=\frac{3}{2}.$$</p> <p>Now, can we write the new integral as $$\int_0^1\int_{\min(2-x,1)}^1\int_{\min(2-x-y,1)}^1(x+y+z)dzdydx?$$ This gets more involved since we have to divide into cases whether $2-x-y\leq 1$ or $\geq 1$. Is there a simpler way?</p>
ChrisT
353,893
<p>You could write \begin{equation} \int_S x \, dx dy dz = \int_0^1 \left( \int_{y+z \leq 2-x; \, 0\leq y,z \leq 1} dy dz \right) x dx \end{equation}</p> <p>Now, you can interpret $y+z \leq 2-x$ with $y,z \geq 0$ as a triangle in the plane, whose area is $\frac{(2-x)^2}{2}$. From this triangle, you subtract two smaller triangles to account for the fact that $0 \leq y,z \leq 1$. You can write the area of these smaller triangles as $2\times \frac{(2-x-1)^2}{2} = (1-x)^2$. In total, we have \begin{equation} \int_{y+z \leq 2-x; \, 0\leq y,z \leq 1} dy dz = \frac{(2-x)^2}{2} - (1-x)^2 = 1 - \frac{x^2}{2} \end{equation}</p> <p>So in total, \begin{equation} \int_0^1 x \left(1 - \frac{x^2}{2} \right) dx = \frac{3}{8} \end{equation}</p> <p>You have three of those integrals, so $\int_S (x+y+z) \, dx dy dz = \frac{9}{8}$.</p>
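The polynomial step can be checked exactly with rational arithmetic (a Python sketch that assumes the area computation above):

```python
from fractions import Fraction

# Exact rational-arithmetic check of the polynomial step: the inner double
# integral has value 1 - x^2/2, so the x-part is the integral of x - x^3/2
# over [0, 1], and by symmetry the full answer is three times that.
def integrate_poly(coeffs, lo, hi):
    """Exact integral over [lo, hi] of sum(c_k * x**k)."""
    total = Fraction(0)
    for k, c in enumerate(coeffs):
        total += Fraction(c) * (Fraction(hi)**(k + 1) - Fraction(lo)**(k + 1)) / (k + 1)
    return total

x_part = integrate_poly([0, 1, 0, Fraction(-1, 2)], 0, 1)  # = 3/8
answer = 3 * x_part                                        # = 9/8
```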
1,876,732
<p>What is $$\int_{S}(x+y+z)dS,$$ where $S$ is the region $0\leq x,y,z\leq 1$ and $x+y+z\leq 2$?</p> <p>We can change the region to $0\leq x,y,z\leq 1$ and $x+y+z\geq 2$, because the total of the two integrals is just $$\int_0^1\int_0^1\int_0^1(x+y+z)dxdydz=3\int_0^1xdxdydz=\frac{3}{2}.$$</p> <p>Now, can we write the new integral as $$\int_0^1\int_{\min(2-x,1)}^1\int_{\min(2-x-y,1)}^1(x+y+z)dzdydx?$$ This gets more involved since we have to divide into cases whether $2-x-y\leq 1$ or $\geq 1$. Is there a simpler way?</p>
Christian Blatter
1,303
<p>You already have made two good moves: (i) replacing $S$ by the pyramidal region $S':=[0,1]^3\setminus S$, and (ii) replacing the integrand by $3x$. The integral in question then comes to $$Q:=\int_S (x+y+z)\&gt;{\rm d}(x,y,z)={3\over2}-3\int_{S'}x\&gt;{\rm d}(x,y,z)\ .$$ From a figure we read off that $$\eqalign{\int_{S'}x\&gt;{\rm d}(x,y,z)&amp;=\int_0^1 x\int_{1-x}^1 \int_{2-x-y}^1 \&gt;dz\&gt;dy\&gt;dx\cr &amp;=\int_0^1 x \int_{1-x}^1 (x+y-1)\&gt;dy\&gt;dx\cr &amp;=\int_0^1 x\cdot{x^2\over2}\&gt;dx={1\over8}\ .\cr}$$ It follows that $$Q={3\over2}-{3\over8}={9\over8}\ .$$</p>
363,166
<p>For valuation rings I know examples which are Noetherian. </p> <blockquote> <p>I know there are good standard non Noetherian Valuation Rings. Can anybody please give some examples of rings of this kind? </p> </blockquote> <p>I am very eager to know. Thanks.</p>
Alex Youcis
16,497
<p>This was bumped to the front page for some reason, so I apologize for resurrecting this. But I think that there is an exceedingly natural example. In fact, it comes up all the time in 'nature'. Namely, consider $\mathbb{Q}_p$ with the standard valuation $v_p$. Then, there is a unique extension of this valuation to $\overline{\mathbb{Q}_p}$. The value group is $\mathbb{Q}$, and so if $\mathcal{O}$ is its valuation ring (it's just the integral closure $\overline{\mathbb{Z}_p}$ of $\mathbb{Z}_p$ in $\overline{\mathbb{Q}_p}$), then $\mathcal{O}$ is a non-Noetherian valuation ring: since the value group $\mathbb{Q}$ has no smallest positive element, the maximal ideal of $\mathcal{O}$ is not finitely generated. </p> <p>Other examples which come up are $\mathcal{O}_{\mathbb{C}_p}$, the valuation ring of the $p$-adic complex numbers.</p>
363,166
<p>For valuation rings I know examples which are Noetherian. </p> <blockquote> <p>I know there are good standard non Noetherian Valuation Rings. Can anybody please give some examples of rings of this kind? </p> </blockquote> <p>I am very eager to know. Thanks.</p>
Pramathanath Sastry
444,395
<p>Let <span class="math-container">$(K, \lvert\cdot\rvert)$</span> be a complete algebraically closed field with a non trivial absolute value. Let <span class="math-container">$R$</span> be its valuation ring and <span class="math-container">$\mathfrak{m}$</span> the maximal ideal of <span class="math-container">$R$</span>. Since every element of <span class="math-container">$K$</span> has a square root in <span class="math-container">$K$</span>, therefore <span class="math-container">$\mathfrak{m}=\mathfrak{m}^2$</span>. By Nakayama, <span class="math-container">$R$</span> cannot be noetherian. Such fields <span class="math-container">$K$</span> exist of course. Start with any <span class="math-container">$K$</span> with a non-trivial absolute value. Complete it, take the algebraic closure of the completion, and complete that. So for example the valuation ring of <span class="math-container">$\mathbb{C}_p$</span>, with <span class="math-container">$p$</span> a prime number, would be such an example.</p>
2,771,240
<p>Let $\mathbb F$ be a field and $\mathbb K $ be an extension field of $\mathbb F$ such that $\mathbb K$ is algebraically closed. </p> <p>Let $\mathbb L$ be the field of all elements of $\mathbb K$ which are algebraic over $\mathbb F$. Then $\mathbb L_{|\mathbb F}$ is an algebraic extension. </p> <p>My question is : Is $\mathbb L$ algebraically closed ?</p> <p>I am trying to prove the existence of algebraic closure, so please don't assume that every field has an algebraic closure. </p>
vadim123
73,324
<p>Suppose that $n=kT+n'$, where $0\le n'&lt;T$. Then:</p> <p>$$\frac{1}{n}\int_0^n f(x)dx = \frac{k\int_0^T f(x)dx + \int_{kT}^{kT+n'}f(x)dx}{n}=\frac{k}{kT+n'}\int_0^T f(x)dx+\frac{1}{n}\int_{kT}^{kT+n'}f(x)dx$$</p> <p>As $n\to\infty$, we have $k\to \infty$, while $n'$ and $\int_{kT}^{kT+n'}f(x)dx$ remain bounded. Hence, in the limit, the first summand approaches $\frac{1}{T}\int_0^T f(x)dx$, while the second summand approaches $0$.</p>
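A concrete instance (my choice of $f$, purely for illustration): with $f(x)=2+\sin x$ and $T=2\pi$, the period average is $2$, and the running averages approach it at rate $O(1/n)$, matching the argument above.

```python
import math

# Illustration with f(x) = 2 + sin(x), period T = 2*pi: an antiderivative is
# 2x - cos(x), so (1/n) * integral_0^n f = 2 + (1 - cos(n))/n, which tends
# to the period average (1/T) * integral_0^T f = 2.
def average_up_to(n):
    return (2 * n + 1 - math.cos(n)) / n

errors = [abs(average_up_to(n) - 2) for n in (10, 100, 1000, 10000)]
```

The errors are bounded by $2/n$, exactly the "bounded leftover over $n$" behaviour the proof exploits.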
2,626,506
<p><strong>Proof: There is no other prime triple then $3,5,7$</strong></p> <p>There are already lots of questions about this proof, but I can't find the answer to my question.</p> <p>The complete the proof, we consider mod $3$ so $p=3k; p=3k+1; p=3k+2$ </p> <p>But why do we look at divisibility by $3$?</p> <p>Do we look at mod $4$ for prime quads?</p>
nonuser
463,553
<p>If $p=2$ then we have no solution.</p> <p>If $p, p+2, p+4$ are all primes then, since $p, p+2, p+4$ leave the three distinct residues $p, p+2, p+1$ modulo $3$, exactly one of them is divisible by $3$; so it must be $p=3$.</p>
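A quick brute-force confirmation in Python (finite range only, so a sanity check rather than a proof):

```python
# Finite-range confirmation that (3, 5, 7) is the only triple p, p+2, p+4
# of primes: one of the three numbers is always divisible by 3.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

triples = [(p, p + 2, p + 4) for p in range(2, 10000)
           if is_prime(p) and is_prime(p + 2) and is_prime(p + 4)]
```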
3,624,524
<p>I want to figure out the process for showing why the function <span class="math-container">$\cos(1-\frac{1}{z})$</span> has an essential singularity at <span class="math-container">$z=0$</span> without using knowledge of the Laurent expansion. I know the process should be to rule out the possibility of removable singularities or poles, but I do not know how to do this for this function.</p> <p>Attempt: I was thinking I would show that <span class="math-container">$$\lim_{z\to 0} |\cos(1-\frac{1}{z})| \text{ DNE, } $$</span> since the function oscillates between <span class="math-container">$1$</span> and <span class="math-container">$-1$</span> for <span class="math-container">$z$</span> near zero along positive values. This rules out the possibility of a pole, since the limit is not <span class="math-container">$\infty$</span>, and the singularity is not removable, since the limit is not finite. Is this the correct approach? What are some other ways to show that zero is an essential singularity?</p>
Basel J.
545,344
<p>There always exists an open cover <span class="math-container">$\{U_\alpha\}$</span> of <span class="math-container">$M$</span> such that <span class="math-container">$TU_\alpha$</span> is homeomorphic to <span class="math-container">$U_\alpha \times \mathbb{R}^{\text{dim } M}$</span>. The collection of these maps along with the open cover is called a local trivialization, and it can be done for the tangent bundle of any manifold. </p> <p>I can't say I have a way to visualize the space <span class="math-container">$TS^2$</span>, but to answer your question about what property makes it non trivial consider this: On <span class="math-container">$S^2\times\mathbb{R}^2$</span> there is a non vanishing "section" or vector field, one can pick <span class="math-container">$X(p)=(p,e_1)$</span> for any <span class="math-container">$p\in S^2$</span> and <span class="math-container">$e_1$</span> is the first standard basis vector of <span class="math-container">$\mathbb{R}^2$</span>. However one cannot find a non vanishing vector field on <span class="math-container">$S^2$</span> (this is known as the hairy ball theorem) which tells us that <span class="math-container">$TS^2$</span> cannot be trivial. </p>
2,353,190
<p>Let $f(x)=\dfrac{1+x}{1-x}$. The $n$th derivative of $f$ is equal to:</p> <ol> <li>$\dfrac{2n}{(1-x)^{n+1}} $</li> <li>$\dfrac{2(n!)}{(1-x)^{2n}} $</li> <li>$\dfrac{2(n!)}{(1-x)^{n+1}} $</li> </ol> <p>By the Leibniz formula,</p> <p>$$ {\displaystyle \left( \dfrac{1+x}{1-x}\right)^{(n)}=\sum _{k=0}^{n}{\binom {n}{k}}\ (1+x)^{(k)}\ \left(\dfrac{1}{1-x}\right)^{(n-k)}}$$</p> <p>and using the hints</p> <ul> <li>$\dfrac{1+x}{1-x}=\dfrac{2-(1-x)}{1-x}=\dfrac2{1-x}-1$ and </li> <li>$\left(\dfrac{1}{x}\right)^{(n)}=\dfrac{(-1)^{n}n!}{x^{n+1}}$</li> </ul> <p>I get</p> <p>$${\displaystyle \left( \dfrac{1+x}{1-x} \right)^{(n)} = \left( \dfrac{2}{1-x}-1 \right)^{(n)}=2\dfrac{ (-1)^{n}n! }{ (1-x)^{n+1} } } $$ but this result doesn't appear among the proposed answers.</p> <p>What about the method of <strong>Lord Shark the Unknown</strong>?</p> <p>Please tell me: does this method work for any multiple-choice question asking for the $n$th derivative? If so, it suffices to check each answer. In my case I will start with the first:</p> <ul> <li>let $f_n(x)=\dfrac{2n}{(1-x)^{n+1}}$ then $f_{n+1}(x)=\dfrac{2(n+1)}{(1-x)^{n+2}}$; do I have $f'_{n}=f_{n+1}$? Let us calculate: $$ f'_n=\dfrac{-2n(n+1)}{(1-x)^{n+2}}\neq f_{n+1}$$</li> <li>let $f_n(x)=\dfrac{2(n!)}{(1-x)^{2n}}$ then $f_{n+1}(x)=\dfrac{2((n+1)!)}{(1-x)^{2(n+1)}}$; do I have $f'_{n}=f_{n+1}$? Let us calculate: $$ f'_n=\dfrac{-2(n!)(2n)}{(1-x)^{4n}}\neq f_{n+1}$$</li> <li>let $f_n(x)=\dfrac{2(n!)}{(1-x)^{n+1}}$ then $f_{n+1}(x)=\dfrac{2((n+1)!)}{(1-x)^{n+2}}$; do I have $f'_{n}=f_{n+1}$? Let us calculate: $$ f'_n=\dfrac{2(n!)(n+1)}{((1-x)^{n+1})^{2}}=\dfrac{2((n+1)!)}{(1-x)^{2n+2}}\neq f_{n+1}$$</li> </ul>
lab bhattacharjee
33,337
<p>$$y=\dfrac{1+x}{1-x}=\dfrac{2-(1-x)}{1-x}=\dfrac2{1-x}-1$$</p> <p>$$\implies\dfrac{dy}{dx}=\dfrac{2(-1)}{(1-x)^2}\cdot(-1)=\dfrac{2\cdot1!}{(1-x)^{1+1}}$$</p> <p>(the $(-1)$ from the power rule cancels against the $(-1)$ coming from the derivative of $1-x$)</p> <p>$$\dfrac{d^2y}{dx^2}=\dfrac{2\cdot1!\,(-2)}{(1-x)^3}\cdot(-1)=\dfrac{2\cdot2!}{(1-x)^{2+1}}$$</p> <p>Can you follow the pattern?</p>
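One can cross-check the candidate formula $2\,n!/(1-x)^{n+1}$ with exact rational arithmetic, comparing Taylor coefficients of $f$ about $x_0=\tfrac12$ (the point $x_0$ and the expansion trick are my choices, not part of the answer; the formula only applies for $n\ge 1$, since the constant $-1$ disappears after one derivative):

```python
from fractions import Fraction
from math import factorial

# f(x) = (1+x)/(1-x).  With t = x - x0:  f = (1 + x0 + t)/(1 - x0 - t),
# and 1/(1 - x0 - t) = r * sum_k (r*t)^k  with  r = 1/(1 - x0).
x0 = Fraction(1, 2)
N = 8
r = 1 / (1 - x0)                                   # = 2 here
geom = [r * r**k for k in range(N + 1)]            # series of 1/(1 - x0 - t)
coeffs = [(1 + x0) * geom[k] + (geom[k - 1] if k >= 1 else 0)
          for k in range(N + 1)]                   # multiply by (1 + x0 + t)

taylor_derivs = [factorial(n) * coeffs[n] for n in range(N + 1)]  # f^(n)(x0)
candidate = [2 * factorial(n) / (1 - x0)**(n + 1) for n in range(N + 1)]
```

The two lists agree for every $n\ge 1$, confirming the pattern (and hence option 3).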
2,353,190
<p>Let $f(x)=\dfrac{1+x}{1-x}$. The $n$th derivative of $f$ is equal to:</p> <ol> <li>$\dfrac{2n}{(1-x)^{n+1}} $</li> <li>$\dfrac{2(n!)}{(1-x)^{2n}} $</li> <li>$\dfrac{2(n!)}{(1-x)^{n+1}} $</li> </ol> <p>By the Leibniz formula,</p> <p>$$ {\displaystyle \left( \dfrac{1+x}{1-x}\right)^{(n)}=\sum _{k=0}^{n}{\binom {n}{k}}\ (1+x)^{(k)}\ \left(\dfrac{1}{1-x}\right)^{(n-k)}}$$</p> <p>and using the hints</p> <ul> <li>$\dfrac{1+x}{1-x}=\dfrac{2-(1-x)}{1-x}=\dfrac2{1-x}-1$ and </li> <li>$\left(\dfrac{1}{x}\right)^{(n)}=\dfrac{(-1)^{n}n!}{x^{n+1}}$</li> </ul> <p>I get</p> <p>$${\displaystyle \left( \dfrac{1+x}{1-x} \right)^{(n)} = \left( \dfrac{2}{1-x}-1 \right)^{(n)}=2\dfrac{ (-1)^{n}n! }{ (1-x)^{n+1} } } $$ but this result doesn't appear among the proposed answers.</p> <p>What about the method of <strong>Lord Shark the Unknown</strong>?</p> <p>Please tell me: does this method work for any multiple-choice question asking for the $n$th derivative? If so, it suffices to check each answer. In my case I will start with the first:</p> <ul> <li>let $f_n(x)=\dfrac{2n}{(1-x)^{n+1}}$ then $f_{n+1}(x)=\dfrac{2(n+1)}{(1-x)^{n+2}}$; do I have $f'_{n}=f_{n+1}$? Let us calculate: $$ f'_n=\dfrac{-2n(n+1)}{(1-x)^{n+2}}\neq f_{n+1}$$</li> <li>let $f_n(x)=\dfrac{2(n!)}{(1-x)^{2n}}$ then $f_{n+1}(x)=\dfrac{2((n+1)!)}{(1-x)^{2(n+1)}}$; do I have $f'_{n}=f_{n+1}$? Let us calculate: $$ f'_n=\dfrac{-2(n!)(2n)}{(1-x)^{4n}}\neq f_{n+1}$$</li> <li>let $f_n(x)=\dfrac{2(n!)}{(1-x)^{n+1}}$ then $f_{n+1}(x)=\dfrac{2((n+1)!)}{(1-x)^{n+2}}$; do I have $f'_{n}=f_{n+1}$? Let us calculate: $$ f'_n=\dfrac{2(n!)(n+1)}{((1-x)^{n+1})^{2}}=\dfrac{2((n+1)!)}{(1-x)^{2n+2}}\neq f_{n+1}$$</li> </ul>
Angina Seng
436,618
<p>If we let $f_n$ denote the $n$-th derivative, then $f_{n+1}=f_n'$. That is the case only for <strong>one</strong> of the three possible solutions you have there.</p>
45,662
<p>Does this undirected graph with 6 vertices and 9 undirected edges have a name? <img src="https://i.stack.imgur.com/XwuUB.png" alt="enter image description here"> I know a few names that are not right. It is not a complete graph because all the vertices are not connected. It is close to K<sub>3,3</sub> the utility graph, but not quite (and not quite matters in graph theory :-) </p> <p>This graph came up in my analysis of quaternion triple products.</p>
Fixee
7,162
<p>Take two opposing vertices (the leftmost and rightmost will do). Now swap them and draw the resulting picture.</p> <p>You should get a very clear $K_{3,3}$ as a result.</p>
3,208,412
<p>I have to prove the following:</p> <p><span class="math-container">$$ \sqrt{x_1} + \sqrt{x_2} +...+\sqrt{x_n} \ge \sqrt{x_1 + x_2 + ... + x_n}$$</span></p> <p>For every <span class="math-container">$n \ge 2$</span> and <span class="math-container">$x_1, x_2, ..., x_n \in \Bbb N$</span></p> <p>Here's my attempt:</p> <p>Consider <span class="math-container">$P(n): \sqrt{x_1} + \sqrt{x_2} +...+\sqrt{x_n} \ge \sqrt{x_1 + x_2 + ... + x_n}$</span></p> <p><span class="math-container">$$P(2): \sqrt{x_1} + \sqrt{x_2} \ge \sqrt{x_1 + x_2}$$</span> <span class="math-container">$$ x_1 + x_2 + 2\sqrt{x_1x_2} \ge x_1 + x_2$$</span> Which is true because <span class="math-container">$2\sqrt{x_1x_2} &gt; 0$</span>.</p> <p><span class="math-container">$$P(n + 1): \sqrt{x_1} + \sqrt{x_2} + ... + \sqrt{x_n} + \sqrt{x_{n+1}} \ge \sqrt{x_1 + x_2 +...+ x_n + x_{n+1}}$$</span> </p> <p>From the hypothesis we have:</p> <p><span class="math-container">$$\sqrt{x_1} + \sqrt{x_2} + ... + \sqrt{x_n} + \sqrt{x_{n+1}} \ge \sqrt{x_1 + x_2 + ... + x_n} + \sqrt{x_{n + 1}} \ge \sqrt{x_1 + x_2 +...+ x_n + x_{n+1}}$$</span> </p> <p>Squaring both sides of the right part:</p> <p><span class="math-container">$$ x_1 + x_2 + ... + x_n + x_{n + 1} + 2\sqrt{x_{n+1}(x_1 + x_2 +... + x_n)} \ge x_1 + x_2 +...+ x_n + x_{n+1} $$</span></p> <p>Which is true, hence <span class="math-container">$P(n + 1)$</span> is true as well.</p> <p>I'm not sure if I did it correctly?</p>
DanielWainfleet
254,665
<p>Without induction. For <span class="math-container">$any$</span> non-negative reals <span class="math-container">$x_1,...,x_n$</span> let <span class="math-container">$x_j=(y_j)^2$</span> for each <span class="math-container">$j,$</span> with each <span class="math-container">$y_j\ge 0.$</span> The inequality is then <span class="math-container">$$y_1+...+y_n\ge \sqrt {(y_1)^2+...+(y_n)^2}.$$</span> Since both sides are non-negative reals, this is equivalent to <span class="math-container">$$(y_1+...+y_n)^2\ge (y_1)^2+...+(y_n)^2.$$</span> Expand the LHS of this and you see that each term <span class="math-container">$(y_j)^2$</span> appears on the LHS, and (if <span class="math-container">$n&gt;1$</span>) all the other terms are non-negative, while <span class="math-container">$(y_1)^2,...,(y_n)^2$</span> are the <span class="math-container">$only$</span> terms on the RHS, so the LHS is <span class="math-container">$\ge$</span> the RHS.</p> <p>E.g. For <span class="math-container">$a,b,c\ge 0$</span> we have <span class="math-container">$(a+b)^2=a^2+b^2+2ab\ge a^2+b^2,$</span> and</p> <p><span class="math-container">$(a+b+c)^2=a^2+b^2+c^2+2ab+2bc+2ca\ge a^2+b^2+c^2.$</span></p>
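A randomized sanity check of the inequality itself (a Python sketch; the sampling is mine, and the tiny tolerance only guards floating-point noise):

```python
import math
import random

# Randomized check of sqrt(x1)+...+sqrt(xn) >= sqrt(x1+...+xn)
# for non-negative reals, with n between 2 and 10.
random.seed(1)

def holds(xs):
    return sum(math.sqrt(x) for x in xs) >= math.sqrt(sum(xs)) - 1e-12

ok = all(
    holds([random.uniform(0, 100) for _ in range(random.randint(2, 10))])
    for _ in range(1000)
)
```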
4,136,082
<p><span class="math-container">$$f(x) = \begin{cases} \cos(\frac{1}{x}) &amp; \text{if $x\ne0$} \\ 0 &amp; \text{if $x=0$} \\ \end{cases}$$</span></p> <p>How do I prove this function has Darboux's property? I know it has it because it has antiderivatives, but how do I prove it otherwise, with intervals maybe ?</p>
José Carlos Santos
446,262
<p>Take <span class="math-container">$a,b\in\Bbb R$</span> with <span class="math-container">$a&lt;b$</span>. You want to prove that, if <span class="math-container">$y$</span> lies between <span class="math-container">$f(a)$</span> and <span class="math-container">$f(b)$</span>, then there is some <span class="math-container">$c\in[a,b]$</span> such that <span class="math-container">$f(c)=y$</span>. If <span class="math-container">$b&lt;0$</span> or <span class="math-container">$a&gt;0$</span>, this is clear, by continuity. If <span class="math-container">$a&lt;0&lt;b$</span>, take some <span class="math-container">$n\in\Bbb N$</span> such that <span class="math-container">$\frac1{2\pi n+1}&lt;b$</span>. On the interval <span class="math-container">$\left[\frac1{2\pi n+\pi+1},\frac1{2\pi n+1}\right]$</span>, the argument <span class="math-container">$1-\frac1z$</span> sweeps <span class="math-container">$[-2\pi n-\pi,-2\pi n]$</span>, an interval on which the cosine takes every value in <span class="math-container">$[-1,1]$</span>. Then <span class="math-container">$f\left(\left[\frac1{2\pi n+\pi+1},\frac1{2\pi n+1}\right]\right)=[-1,1]$</span>, and therefore there is some <span class="math-container">$c\in\left[\frac1{2\pi n+\pi+1},\frac1{2\pi n+1}\right]$</span> such that <span class="math-container">$f(c)=y$</span>. The remaining cases are similar.</p>
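To make this concrete, here is a numeric Python sketch (parameters chosen by me): on an interval where the argument $1-\frac1z$ sweeps a half-period ending at a multiple of $2\pi$, $f$ increases from $-1$ to $1$, so any intermediate value can be located by bisection.

```python
import math

# For f(z) = cos(1 - 1/z), on [1/(2*pi*n + pi + 1), 1/(2*pi*n + 1)] the
# argument 1 - 1/z increases from -2*pi*n - pi to -2*pi*n, so f increases
# from cos(-pi) = -1 to cos(0) = 1; bisection then finds any value in between.
def f(z):
    return math.cos(1 - 1 / z)

def find_c(y, n, iters=200):
    lo = 1 / (2 * math.pi * n + math.pi + 1)
    hi = 1 / (2 * math.pi * n + 1)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c = find_c(0.3, n=5)  # a point arbitrarily close to 0 with f(c) = 0.3
```

Taking $n$ larger produces such a point in any neighbourhood of $0$.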
3,413,261
<p>I know this was answered before, but there is one particular step of the proof that I'm not getting.</p> <p>My understanding of the distributive law as used in the absorption law is driving me nuts. By the answers to the proof, it should go like this:</p> <p>A∨(A∧B)=(A∧T)∨(A∧B)=A∧(T∨B)=A∧T=A</p> <p>This should prove the absorption law, but I don't see how they get to the step (A ∧ (T ∨ B)).</p> <p>If (A ∧ T) ∨ (A ∧ B) is distributed, by my understanding the result is (A ∨ A) ∧ (A ∨ B) ∧ (T ∨ A) ∧ (T ∨ B), which we can reduce to A ∧ (A ∨ B) ∧ T. I'm getting lost somewhere here, because it looks to me like we will enter a loop: A ∧ T ∧ (A ∨ B) will be distributed again, and I will go back to A ∧ B if I distribute with A, but if I distribute with T it will be B ∧ T, which is not useful either, is it?</p> <p>Can anyone help with this? Thanks in advance.</p>
Wlod AA
490,755
<p>An example of a Lindelöf non-second countable space, which has some additional nice properties, was constructed/discovered during the Prague 1961 Topological Conference (by wh). The point-set is the unit disc</p> <p><span class="math-container">$$\ B(\mathbf 0\,\ 1)\ := \ \{p\in\mathbb R^2: |p|\le 1\} $$</span></p> <p>The neighborhoods of the points <span class="math-container">$\ p\ $</span> of the disk, with <span class="math-container">$\ |p|&lt;1,\ $</span> are the ordinary Euclidean. In the case of <span class="math-container">$\ |p|=1,\ $</span> a base neighborhood, <span class="math-container">$\ N_{a\,b}(p),\ $</span> is determined by points <span class="math-container">$\ a\ b\ $</span> such that <span class="math-container">$\ |a|=|b|=1\ $</span> and <span class="math-container">$\ a\ne p\ne b\ne a.\ $</span> This neighborhood consists of points which are between the chord which connects <span class="math-container">$\ a\ $</span> to <span class="math-container">$\ p\ $</span> and the unit circle, together with a similar one for <span class="math-container">$\ b\ $</span> and <span class="math-container">$\ p\ $</span> (the arcs <span class="math-container">$\ ap\ $</span> and <span class="math-container">$\ pb\ $</span> are such <span class="math-container">$\ a\ $</span> does not belong to arc <span class="math-container">$\ pb,\ $</span> nor <span class="math-container">$\ b\ $</span> to <span class="math-container">$\ ap.$</span>)</p> <blockquote> <p><em><strong>Note:</strong> Following that Prague conference, my example was published in a paper by A.Archangielski and W.Holsztyński (there is only one paper by these two authors). I've solved a respective question asked by Archangielski.</em></p> </blockquote>
38,586
<p>The $n$-th Mersenne number $M_n$ is defined as $$M_n=2^n-1$$ A great deal of research focuses on Mersenne primes. What is known in the opposite direction about Mersenne numbers with only small factors (i.e. smooth numbers)? In particular, if we let $P_n$ denote the largest prime factor of $M_n$, are any results known of the form $$\liminf_{n\rightarrow \infty}\frac{P_n}{f(n)}= 1$$ for some function $f$?</p> <p>I've only come across two (fairly distant) bounds so far. If we consider even-valued $n$, then $M_n=M_{n/2}(M_{n/2}+2)$, so: $$\liminf_{n\rightarrow \infty}\frac{P_n}{2^{n/2}}\leq 1$$ In the other direction, [1] shows that $P_n\geq 2n+1$ for $n&gt;12$, so $$\liminf_{n\rightarrow \infty}\frac{P_n}{2n}\geq 1$$</p> <p>[1] A. Schinzel, On primitive prime factors of $a^n-b^n$, Proc. Cambridge Philos. Soc. 58 (1962), 555-562.</p>
Qiaochu Yuan
290
<p>I can give you a slightly better upper bound. Recall that $2^n - 1 = \prod_{d | n} \Phi_d(2)$ where $\Phi_d$ is a cyclotomic polynomial. Now,</p> <p>$$\Phi_d(2) = \prod_{(k, d) = 1} (2 - \zeta_d^k) \le 3^{\varphi(d)}$$</p> <p>so that in particular the largest prime factor of $2^n - 1$ is at most $3^{\varphi(n)}$. By taking $n$ to be a product of the first $k$ primes and letting $k$ tend to infinity we have $\liminf_{n \to \infty} \frac{\varphi(n)}{n} = 0$, hence</p> <p>$$\liminf_{n \to \infty} \frac{P_n}{c^n} = 0$$</p> <p>for any $c &gt; 1$. In fact if $n$ is the product of the first $k$ primes then we should expect something like $3^{\varphi(n)} \approx 3^{ \frac{n}{\log k} }$ but this doesn't seem like a big improvement to me.</p>
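The bound $P_n \le 3^{\varphi(n)}$ from this answer is easy to sanity-check numerically for small exponents. The sketch below is my own (helper names are mine, and trial division is only adequate for small $n$):

```python
def largest_prime_factor(m):
    """Largest prime factor of m >= 2, by trial division (small m only)."""
    p = 1
    d = 2
    while d * d <= m:
        while m % d == 0:
            p, m = d, m // d
        d += 1
    return m if m > 1 else p

def phi(n):
    """Euler's totient, via the product formula over prime divisors."""
    result, m, d = n, n, 2
    while d * d <= m:
        if m % d == 0:
            result -= result // d
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        result -= result // m
    return result

# Check the bound P_n <= 3^phi(n) for small n.
for n in range(2, 26):
    assert largest_prime_factor(2**n - 1) <= 3**phi(n)
```

For instance $n = 11$ gives $2^{11}-1 = 2047 = 23 \cdot 89$, so $P_{11} = 89 \le 3^{10}$, comfortably inside the bound.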
2,467,327
<p>How to prove that $441 \mid a^2 + b^2$ if it is known that $21 \mid a^2 + b^2$.<br> I've tried to present $441$ as $21 \cdot 21$, but it is not sufficient.</p>
Tengu
58,951
<blockquote> <p><strong>Lemma.</strong> If a prime $p \equiv 3\pmod{4}$ divides $a^2+b^2$, then $p$ divides both $a$ and $b$.</p> </blockquote> <p><em>Proof.</em> Assume to the contrary that $p \nmid a, p \nmid b$.</p> <p>By Fermat's little theorem, we have $a^{p-1} \equiv 1 \pmod{p}, b^{p-1} \equiv 1 \pmod{p}$, so $a^{p-1}+b^{p-1} \equiv 2 \pmod{p}$.</p> <p>On the other hand, note that $x+y$ divides $x^{2k+1}+y^{2k+1}$ for all integers $x,y,k$ with $k \ge 0$. Hence, since $p \equiv 3 \pmod{4}$, the exponent $(p-1)/2$ is odd, and we obtain that $a^2+b^2$ divides $(a^2)^{(p-1)/2}+(b^2)^{(p-1)/2}$. Since $p \mid a^2+b^2$, $p$ divides $(a^2)^{(p-1)/2}+(b^2)^{(p-1)/2}=a^{p-1}+b^{p-1}$, a contradiction since $a^{p-1}+b^{p-1}\equiv 2 \pmod{p}$ and $p \ge 3$. $\square$</p> <hr> <p>Back to the problem: since $3$ and $7$ are primes congruent to $3 \pmod{4}$, if $7 \mid a^2+b^2$ then $7 \mid a$ and $7 \mid b$, which means $7^2 \mid a^2+b^2$. Similarly, $3^2 \mid a^2+b^2$. Thus, $441 \mid a^2+b^2$.</p>
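Since the statement only depends on $a$ and $b$ modulo $441$, both the lemma's consequence and the conclusion can be verified exhaustively. A small brute-force sketch of my own:

```python
# The statement is periodic mod 441, so checking every residue pair
# (a, b) in [0, 441) x [0, 441) proves it for all integers.
for a in range(441):
    for b in range(441):
        s = a * a + b * b
        if s % 21 == 0:
            # The lemma forces 3 | a, b and 7 | a, b, hence 21 | a, b ...
            assert a % 21 == 0 and b % 21 == 0
            # ... and therefore 441 = 21^2 divides a^2 + b^2.
            assert s % 441 == 0
```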
2,132,936
<p>How do you simplify this problem? $$ \frac {\mathrm d}{\mathrm dx}\left[(3x+1)^3\sqrt{x}\right] $$ $$= \frac {(3x+1)^3}{2\sqrt {x}} + 9\sqrt{x} (3x+1)^2 $$ $$\frac{(3x+1)^2(21x+1)}{2\sqrt x} $$</p>
Deepak
151,732
<p>Just offering another tool here, logarithmic differentiation. A specific use of implicit differentiation.</p> <p>Put $y= (3x+1)^3\sqrt x$</p> <p>$$\log y = 3\log(3x+1) + \frac 12 \log x$$</p> <p>Observe that $\frac{d}{dx} \log y = \frac{d}{dy} (\log y) \cdot \frac{dy}{dx}$ by the chain rule. Use $y'$ to represent $\frac{dy}{dx}$ for clarity.</p> <p>$$\frac{y'}{y} = \frac{(3)(3)}{3x+1} + \frac{1}{2x}$$</p> <p>$$y' = (3x+1)^3\sqrt x (\frac{9}{3x+1} + \frac{1}{2x})$$</p> <p>(I'll leave any further simplification to you).</p>
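As a sanity check, the simplified closed form $(3x+1)^2(21x+1)/(2\sqrt x)$ can be compared against a central finite difference of the original product; the function names below are mine:

```python
import math

def f(x):
    # Original product: (3x + 1)^3 * sqrt(x)
    return (3 * x + 1) ** 3 * math.sqrt(x)

def fprime(x):
    # Simplified closed form: (3x + 1)^2 (21x + 1) / (2 sqrt(x))
    return (3 * x + 1) ** 2 * (21 * x + 1) / (2 * math.sqrt(x))

# Central difference agrees with the closed form at several points x > 0.
h = 1e-6
for x in (0.5, 1.0, 2.0, 3.7):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - fprime(x)) < 1e-3 * max(1.0, abs(fprime(x)))
```

At $x = 1$, for example, the closed form gives $16 \cdot 22 / 2 = 176$.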
1,455,348
<p>Recently, having realized I did not properly internalize it (shame on me!), I went back to the definition of continuity in metric spaces and I found a proposition for which I was looking for a proof.</p> <p>Here is the result and my &quot;proof&quot; (in the hope of getting rid of the quotation marks).</p> <p><em>[In general, I use <span class="math-container">$N_{\varepsilon, X} (x)$</span> to denote an open <span class="math-container">$\varepsilon$</span>-neighbourhood of <span class="math-container">$x \in X$</span>.]</em></p> <blockquote> <p><strong>Proposition:</strong> Let <span class="math-container">$\phi \in \mathbb{R}^X$</span> be a continuous function, with <span class="math-container">$X$</span> an arbitrary metric space. Then, the set <span class="math-container">$\{ x \mid \phi(x) \geq \alpha \}$</span> is closed.</p> <p><em>Attempted proof:</em><br /> Let <span class="math-container">$\alpha$</span> be an arbitrary real number. We establish the result by showing that <span class="math-container">$\{ x \mid \phi(x) &lt; \alpha \}$</span> is an open set in <span class="math-container">$X$</span>. Notice that for every <span class="math-container">$x \in X$</span>, if <span class="math-container">$\phi (x) &lt; \alpha$</span>, then there is an <span class="math-container">$\varepsilon &gt; 0$</span> such that there is an open neighbourhood <span class="math-container">$N_{\varepsilon, \mathbb{R}} (\phi (x)) &lt; \alpha$</span>. Let <span class="math-container">$z \in X$</span> be arbitrary and such that <span class="math-container">$\phi (z) &lt; \alpha$</span>. Hence, by the definition of continuity and the fact that <span class="math-container">$\phi$</span> is continuous, there is a <span class="math-container">$\delta (z, \varepsilon) &gt; 0$</span> such that</p> <p><span class="math-container">$$N_{\delta, X} (z) \subseteq \phi^{-1} ( N_{\varepsilon, \mathbb{R}} ( \phi (z))). 
$$</span></p> <p>Hence, being <span class="math-container">$z \in X$</span> arbitrary, the proposition follows.</p> </blockquote> <p><em>Is this proof correct?</em></p> <p>As always, any feedback is more than welcome.<br /> Thank you for your time.</p> <hr /> <p><strong>Edit:</strong><br /> I know it is possible to proceed, as hinted by air in a comment below, through the fact that the if a function is continuous, then the preimage of a closed set is closed. However, I find this solution a bit too <em>topological</em>, in the sense that I really would like to know about this <span class="math-container">$\varepsilon - \delta$</span> proof, which – to me – has a stronger metric flavour.</p>
air
181,046
<p>Ok now the proof is basically correct (when we are working in metric spaces)! Some remarks:</p> <p>As Umberto P. also noted in his answer in the <a href="https://math.stackexchange.com/questions/1456524/proof-based-on-convergence-arguments-that-if-phi-in-mathbbrx-is-continu">related question</a> you asked, I am not fond of the notation &quot;<span class="math-container">$N_{\varepsilon, \mathbb{R}} (\phi (x)) &lt; \alpha$</span>&quot;. In fact in the other thread you write &quot;<span class="math-container">$\phi(Y) \le \alpha$</span>&quot; for a set <span class="math-container">$Y$</span>, which is still a bit more appropriate than what you write here (though still false).</p> <p>You should write instead: There is a neighborhood <span class="math-container">$N_{\varepsilon, \mathbb{R}} (\phi (x))$</span> such that for all <span class="math-container">$y \in N_{\varepsilon, \mathbb{R}} (\phi (x))$</span> it holds that <span class="math-container">$y &lt; \alpha$</span>.</p> <p>Also towards the end, your argument is correct but I usually like to be a bit more explicit (at least until you get more comfortable with the material). For example I would write: By the continuity of <span class="math-container">$\phi$</span> there exists <span class="math-container">$\delta:=\delta (z, \varepsilon) &gt; 0$</span> such that <span class="math-container">$|\phi(x)-\phi(z)| &lt; \varepsilon$</span> for all <span class="math-container">$x \in N_{\delta, X}(z)$</span>. This implies that for <span class="math-container">$x \in N_{\delta, X}(z)$</span> it holds that <span class="math-container">$\phi(x) \in N_{\varepsilon, \mathbb{R}} (\phi (z))$</span>. Therefore we finally have that:</p> <p><span class="math-container">$$N_{\delta, X} (z) \subseteq \phi^{-1} ( N_{\varepsilon, \mathbb{R}} ( \phi (z))) \subseteq \{ x \mid \phi(x) &lt; \alpha \}$$</span></p> <p>This finishes our proof. 
(Note the final inclusion; you had already shown it but I still feel it is critical to the argument and should be repeated at this point.)</p>
652,660
<p>Show $\lnot(p\land q) \equiv \lnot p \lor \lnot q$</p> <p>this is my solution . Check it please </p> <p><img src="https://i.stack.imgur.com/1y7DB.jpg" alt="enter image description here"></p>
Newb
98,587
<p>Your solution is right apart from the second line in $(\Leftarrow)$: it should be </p> <blockquote> <p>$p$ $\color{red}{\text{ and }} q$ is false. Then $(p\land q)$ is false.$\ldots$</p> </blockquote>
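Since the claim involves only two Boolean variables, it can also be confirmed mechanically by exhausting the truth table, e.g.:

```python
from itertools import product

# Exhaustive truth table over the four (p, q) combinations.
for p, q in product((False, True), repeat=2):
    assert (not (p and q)) == ((not p) or (not q))  # De Morgan's law above
    assert (not (p or q)) == ((not p) and (not q))  # its dual also holds
```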
4,308,316
<p>I am given the question.</p> <p>Suppose X and Y are iid uniform random variables on the interval (-2,2). Let <span class="math-container">$Z=\frac{Y}{X}$</span>. Does the expectation of Z exist? If it exists, calculate <span class="math-container">$\mathbb{E}[Z]$</span>. If it does not exist, explain why.</p> <p>I have 2 different interpretations of this questions and I don't know which one or if any is correct.</p> <p>1 way I see this is once we calculate the pdf of Z. We can use it to calculate the expected value as,</p> <p><span class="math-container">$$\mathbb{E}[Z]=\int zf_{Z}(z)dz$$</span></p> <p>But, if we look at as,</p> <p><span class="math-container">$$\mathbb{E}[Z]=\mathbb{E}\left[\frac{Y}{X}\right]$$</span></p> <p>Since X and Y are independent,</p> <p><span class="math-container">$$\mathbb{E}\left[\frac{Y}{X}\right]=E[Y]\cdot\mathbb{E}\left[\frac{1}{X}\right]$$</span></p> <p>But, the expected value of 1/X is</p> <p><span class="math-container">$$\mathbb{E}\left[\frac{1}{X}\right]=\int_{-2}^2 \frac{1}{x}f_{X}(x)dx$$</span></p> <p>This is a divergent integral and thus the expected value is not possible to calculate.</p> <p>I don't know which interpretation if either is correct.</p>
tommik
791,458
<p>Both methods are valid. The second one is faster (and suggested). If you calculate <span class="math-container">$f_Z(z)$</span> you will realize that <span class="math-container">$Z$</span> does not have an expectation, as the corresponding integral diverges.</p> <hr /> <p>The calculation of the distribution of <span class="math-container">$Z$</span>, though not necessary to answer your question, can be a useful exercise.</p> <p><span class="math-container">$$f_Z(z)=\frac{1}{4}\cdot \mathbb{1}_{|z|&lt;1}+\frac{1}{4z^2}\cdot \mathbb{1}_{|z|\ge 1} $$</span></p>
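The stated density can be probed by simulation: it implies $P(|Z| < 1) = \tfrac12$ and $P(|Z| > t) = \tfrac{1}{2t}$ for $t \ge 1$. A seeded Monte Carlo sketch of my own (tolerances are generous relative to the sampling error):

```python
import random

# Probe f_Z(z) = 1/4 on |z| < 1 and 1/(4 z^2) on |z| >= 1 by simulation.
random.seed(0)
n = 200_000
inside = tail10 = 0
for _ in range(n):
    x = random.uniform(-2.0, 2.0)
    if x == 0.0:
        continue  # probability-zero event, skipped for safety
    z = random.uniform(-2.0, 2.0) / x
    if abs(z) < 1.0:
        inside += 1
    if abs(z) > 10.0:
        tail10 += 1

frac = inside / n
tail = tail10 / n
assert abs(frac - 0.5) < 0.01    # theoretical value P(|Z| < 1) = 1/2
assert abs(tail - 0.05) < 0.005  # theoretical value P(|Z| > 10) = 1/20
```

The slowly decaying $1/(2t)$ tail is exactly why $\int z f_Z(z)\,dz$ fails to converge absolutely.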
4,308,316
<p>I am given the question.</p> <p>Suppose X and Y are iid uniform random variables on the interval (-2,2). Let <span class="math-container">$Z=\frac{Y}{X}$</span>. Does the expectation of Z exist? If it exists, calculate <span class="math-container">$\mathbb{E}[Z]$</span>. If it does not exist, explain why.</p> <p>I have 2 different interpretations of this questions and I don't know which one or if any is correct.</p> <p>1 way I see this is once we calculate the pdf of Z. We can use it to calculate the expected value as,</p> <p><span class="math-container">$$\mathbb{E}[Z]=\int zf_{Z}(z)dz$$</span></p> <p>But, if we look at as,</p> <p><span class="math-container">$$\mathbb{E}[Z]=\mathbb{E}\left[\frac{Y}{X}\right]$$</span></p> <p>Since X and Y are independent,</p> <p><span class="math-container">$$\mathbb{E}\left[\frac{Y}{X}\right]=E[Y]\cdot\mathbb{E}\left[\frac{1}{X}\right]$$</span></p> <p>But, the expected value of 1/X is</p> <p><span class="math-container">$$\mathbb{E}\left[\frac{1}{X}\right]=\int_{-2}^2 \frac{1}{x}f_{X}(x)dx$$</span></p> <p>This is a divergent integral and thus the expected value is not possible to calculate.</p> <p>I don't know which interpretation if either is correct.</p>
StratosFair
857,384
<p>Both of the statements you have written are correct. If you compute <span class="math-container">$f_z$</span>, you will find out that the integral <span class="math-container">$$E[Z]=\int_{-\infty}^\infty zf_{z}(z)dz $$</span> diverges.</p> <p>Your second approach using the fact that <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent is a much quicker way of proving that if that's all you're interested in.</p>
2,113
<p>With respect to the stated reason for closure, I'd like to get some clarification as to what, precisely, "too localized" encompasses (at least a definition, or explanation, that is more specific and objective than the current "definition"). It just strikes me that some questions which might appear to some as being "too localized" are, in fact, of greater interest to the "world-wide" audience than some of the questions that are accepted and answered (and hence deemed not too localized), which in fact may only be of interest to PhD candidates, if not PhD researchers. While I'm not objecting at all to questions of the latter sort, you must admit that questions of the latter sort are likely not of any interest whatsoever to the vast majority of those who participate in the world wide internet? (some questions, perhaps, of interest to only a small community of a significant minority of mathematicians?) I'm not meaning to be argumentative, or to suggest that questions of the latter sort be "closed." I am simply a bit confused regarding some questions that <em>are</em> judged to be "too localized" and I'm attempting to reconcile this action with the given definition, as it stands.</p>
Jeff Atwood
153
<p>Well, I can tell you what it means in the context of programming...</p> <ol> <li><p>Small geographic area</p> <blockquote> <p>Are there any Python user group meetings in Peoria, IL?</p> </blockquote></li> <li><p>Specific moment in time</p> <blockquote> <p>When will Visual Studio 2010 be released? </p> </blockquote></li> <li><p>Extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet</p> <blockquote> <p>We use this in-house tool WELBOG.EXE to generate faxes from XML via regular expressions. What does the -LASERS option do?</p> </blockquote></li> </ol> <p>... perhaps you can map that to mathematics? I would agree that mathematics enjoys a sort of timeless universality that mostly escapes programming, as I'm sure COBOL programmers would attest to. So it could be that the "Too Localized" close reason is not as useful on math as other sites in our network.</p> <p>(Also, the example Joel uses is <em>There is a green car parked outside my house right now. Why?</em> which I think is too whimsical to be useful, personally..)</p>
75,875
<p>I am asking in the sense of isometry groups of a manifold. SU(3) is the group of isometries of CP2, and SO(5) is the group of isometries of the 4-sphere. Now, it happens that both manifolds are related by Arnold-Kuiper-Massey theorem: $\mathbb{CP}^2/conj \approx S^4$; one is a branched covering of the other, the quotient being via complex conjugation.</p> <p>Now, for the case of a manifold and a lower dimensional submanifold, it is not rare to find that the corresponding isometry groups are subgroups one of the other. But here, which is the equivalent result? is SO(5) an "enhanced SU(3)" in some way?</p> <p>The context of the question comes from 11D Kaluza Klein, more particularly from the classification of Einstein metrics in compact 7-manifolds. It is easy to produce from the 7-sphere metric a "squeezed sphere" whose isometry group is, instead of SO(8), just the one of $S^4 \times S^3$. But it is not known if there is some relationship between the 7-sphere and the "Witten manifolds" of the kind $CP^2 \times S^3$. </p> <p>EDIT: to add more context, some dynkin diagrams.</p> <pre><code>o====o SO(5), isometries of the sphere S4 o----o SU(3) are the isometries of CP2 o o SU(2)xSU(2), isometries of S2xS2. Also SO(4), so isometries of S3 </code></pre> <p>So it seems that the quotient under conjugation has implied, or is compensated by, some change in the angles between roots, but not in the number of roots.</p> <p>For isometries of 7-manifolds we have also some similarities.</p> <pre><code> o o o / / o----o SO(8) o----o SU(3)xSO(4) o====o SO(5)xSO(4) \ \ o o o </code></pre> <p>where the first diagram is the [isometry group of] the seven sphere, the last is the squashed sphere, and the intermediate is the one I am intrigued about, as it contains the physicists standard model gauge group.</p> <p>By the way, the last drawing makes one to ask about how triality survives in the representations of these product groups, but that is other question :-)</p>
Joseph Wolf
18,505
<p>$SU(3)$ has center of order 3 and $SO(5)$ has center reduced to the identity. In fact they are not even locally isomorphic: $SU(3)$ is of Cartan type $A_2$ and $SO(5)$ is of type $B_2$.</p>
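One concrete invariant separating the two Cartan types is the determinant of the Cartan matrix, which equals the order of the center of the corresponding simply connected group: $3$ for $A_2$ (so $Z(SU(3)) \cong \mathbb{Z}/3$) versus $2$ for $B_2$ (so $Z(\mathrm{Spin}(5)) \cong \mathbb{Z}/2$, and $SO(5) = \mathrm{Spin}(5)/\{\pm 1\}$ has trivial center, as stated above). A trivial sketch of this check:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Cartan matrices of the two rank-2 root systems in question.
A2 = [[2, -1], [-1, 2]]   # type A2: simply connected group SU(3)
B2 = [[2, -1], [-2, 2]]   # type B2: simply connected group Spin(5)

# det(Cartan matrix) = order of the center of the simply connected group,
# so the two types (and hence the two groups) are genuinely distinct.
assert det2(A2) == 3   # Z(SU(3)) is cyclic of order 3
assert det2(B2) == 2   # Z(Spin(5)) is cyclic of order 2
```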
217,291
<p>I am trying to recreate the following image in latex (pgfplots), but in order to do so I need to figure out the mathematical expressions for the functions</p> <p><img src="https://i.stack.imgur.com/jYGNP.png" alt="wavepacket"></p> <p>So far I am sure that the gray line is $\sin x$, and that the redline is some version of $\sin x / x$. Whereas the green line is some linear combination of sine and cosine functions.</p> <p>Anyone know a good way to find these functions? </p>
Qiaochu Yuan
232
<p><a href="http://en.wikipedia.org/wiki/Persi_Diaconis" rel="nofollow">Persi Diaconis</a> showed that it takes about $7$ riffle shuffles to randomize a $52$-card deck. I'm not going to explain how he proved this result, but I'm going to explain some relevant ideas. </p> <p>Given a <a href="http://en.wikipedia.org/wiki/Regular_graph" rel="nofollow">regular graph</a> $X$, it's interesting to think about <a href="http://en.wikipedia.org/wiki/Random_walk" rel="nofollow">random walks</a> on $X$. (In Diaconis' case, the graph is the graph whose vertices are all possible configurations of the deck and whose edges are given by shuffles.) A natural question to ask is approximately how quickly the random walk <em>mixes</em>, which roughly speaking means asking about how many steps need to be taken before a random walker is about equally likely to be at any particular vertex. The mixing time of a random walk on $X$ is controlled by the second largest <a href="http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors" rel="nofollow">eigenvalue</a> of the <a href="http://en.wikipedia.org/wiki/Adjacency_matrix" rel="nofollow">adjacency matrix</a> $A(X)$ of $X$, so what we'd like to do is to compute this eigenvalue.</p> <p>If $X$ is a <a href="http://en.wikipedia.org/wiki/Cayley_graph" rel="nofollow">Cayley graph</a> of a finite group $G$, then the eigenspaces of $A(X)$ become <a href="http://en.wikipedia.org/wiki/Representation_theory" rel="nofollow">representations</a> of $G$, and this is a huge help in figuring out what the second largest eigenvalue is. If $G$ is in addition abelian, then the <a href="http://en.wikipedia.org/wiki/Group_representation#Reducibility" rel="nofollow">irreducible</a> representations of $G$ are $1$-dimensional, and this tells you explicitly what all of the eigenvectors of $A(X)$ are, from which their eigenvalues are straightforward to compute. 
(In Diaconis' case, $G = S_{52}$ is far from abelian, but nevertheless representation theory is still, as I understand it, quite relevant.)</p> <p><em>Example.</em> If $G = C_n$, then a natural Cayley graph for $G$ is an $n$-gon. The second largest eigenvalue of the adjacency matrix can be computed using the representation theory of $C_n$ (essentially the <a href="http://en.wikipedia.org/wiki/Discrete_Fourier_transform" rel="nofollow">discrete Fourier transform</a>) to be </p> <p>$$2 \cos \frac{2 \pi}{n} \approx 2 - \frac{4 \pi^2}{n^2}$$</p> <p>(the largest eigenvalue is $2$.) I believe this implies that the mixing time is $O(n^2)$. </p> <p><em>Example.</em> If $G = C_2^n$, then a natural Cayley graph for $G$ is a hypercube. The second largest eigenvalue of the adjacency matrix can be computed again using the discrete Fourier transform to be $n-2$ (the largest eigenvalue is $n$). I believe this implies that the mixing time is $O(n)$. </p>
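For the $n$-gon example, the Fourier eigenvectors can be checked directly against the adjacency relation $v_{j-1} + v_{j+1} = \lambda\, v_j$, with no linear-algebra library needed. A small sketch of my own:

```python
import cmath
import math

# The discrete Fourier modes v_k(j) = exp(2*pi*i*j*k/n) diagonalize the
# adjacency matrix of the n-cycle: v[j-1] + v[j+1] = 2*cos(2*pi*k/n) * v[j].
n = 16
for k in range(n):
    v = [cmath.exp(2j * math.pi * j * k / n) for j in range(n)]
    lam = 2 * math.cos(2 * math.pi * k / n)
    for j in range(n):
        assert abs(v[(j - 1) % n] + v[(j + 1) % n] - lam * v[j]) < 1e-9

# Largest eigenvalue is 2 (at k = 0); the second largest, 2*cos(2*pi/n),
# behaves like 2 - 4*pi^2/n^2 + O(1/n^4) for large n.
assert abs(2 * math.cos(2 * math.pi / n) - (2 - 4 * math.pi**2 / n**2)) < 1e-2
```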