| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,541,524 | <blockquote>
<p>Decide whether the following is true or false:
<span class="math-container">$$\lvert\arcsin z \rvert \le \left\lvert \frac {\pi z} {2} \right\rvert $$</span>
whenever <span class="math-container">$z\in\Bbb C$</span> . </p>
</blockquote>
<p><span class="math-container">$\arcsin z =-i \text{Log } (\sqrt{1-z^2}+iz)$</span>, </p>
<p><span class="math-container">$\text{Log }z=\log|z|+i\arg z,\arg z\in(-\pi,\pi] $</span></p>
<p>The problem is related to <a href="https://math.stackexchange.com/questions/2533309/show-that-the-series-sum-n-1-infty-textarcsinn-2z-converges-norma">the series <span class="math-container">$\sum_{n=1}^{\infty}\arcsin(n^{-2}z) $</span> converges normally in the whole complex plane</a>. </p>
| Andreas | 317,854 | <p>Note <span class="math-container">$\arcsin(z) = \sum_{n=0}^\infty \frac{1 }{2^{2n}}\binom{2n}{n} \frac{ z^{2n+1}}{2n+1}$</span>. So we have that
<span class="math-container">$\lvert\arcsin z \rvert \le \arcsin\lvert z \rvert$</span>.</p>
<p>Further, we have that <span class="math-container">$\arcsin\lvert z \rvert$</span> is convex. Since <span class="math-container">$\arcsin\lvert z \rvert$</span> is defined on <span class="math-container">$0 \le \lvert z \rvert \le 1 $</span> and <span class="math-container">$\arcsin 0 = 0$</span>, we have that <span class="math-container">$\arcsin\lvert z \rvert \le \arcsin(1) \cdot \lvert z \rvert = \frac{\pi}{2}\lvert z \rvert$</span>, which proves the claim.</p>
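As a quick numeric sanity check of the final bound (my own addition, using Python's `cmath.asin`, which computes the principal branch), one can sample the closed unit disk — the region the series argument covers — and confirm that $|\arcsin z| \le \frac{\pi}{2}|z|$ holds there:

```python
import cmath
import math

# Sample a grid over the closed unit disk and record the worst violation of
# |arcsin z| <= (pi/2)|z|; it should never be positive (up to rounding).
def check_arcsin_bound(steps=41):
    worst = float("-inf")
    for i in range(steps):
        for j in range(steps):
            x = -1 + 2 * i / (steps - 1)
            y = -1 + 2 * j / (steps - 1)
            z = complex(x, y)
            if abs(z) > 1:
                continue
            lhs = abs(cmath.asin(z))
            rhs = (math.pi / 2) * abs(z)
            worst = max(worst, lhs - rhs)
    return worst

print(check_arcsin_bound())  # stays <= 0 up to rounding; equality at z = +-1
```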
|
122,546 | <p>There is a famous proof of the formula for the sum of the first $n$ integers, supposedly put forward by Gauss.</p>
<p>$$S=\sum\limits_{i=1}^{n}i=1+2+3+\cdots+(n-2)+(n-1)+n$$</p>
<p>$$2S=(1+n)+(2+(n-1))+\cdots+(n+1)$$</p>
<p>$$S=\frac{n(1+n)}{2}$$</p>
<p>I was looking for a similar proof for when $S=\sum\limits_{i=1}^{n}i^2$</p>
<p>I've tried the same approach of adding the summation to itself in reverse, and I've found this:</p>
<p>$$2S=(1^2+n^2)+(2^2+n^2+1^2-2n)+(3^2+n^2+2^2-4n)+\cdots+(n^2+n^2+(n-1)^2-2(n-1)n)$$</p>
<p>From which I noted I could extract the original sum;</p>
<p>$$2S-S=(1^2+n^2)+(2^2+n^2-2n)+(3^2+n^2-4n)+\cdots+(n^2+n^2-2(n-1)n)-n^2$$</p>
<p>Then if I collect all the $n$ terms;</p>
<p>$$2S-S=n\cdot (n-1)^2 +(1^2)+(2^2-2n)+(3^2-4n)+\cdots+(n^2-2(n-1)n)$$</p>
<p>But then I realised I still had the original sum in there, and taking that out mean I no longer had a sum term to extract.</p>
<p>Have I made a mistake here? How can I arrive at the answer of $\dfrac{n (n + 1) (2 n + 1)}{6}$ using a method similar to the one I expound on above? <strong>I.e., following Gauss's line of reasoning</strong>?</p>
| Tyler | 2,465 | <p><strong>HINT:</strong> $(k + 1)^3 - k^3 = 3k^2 + 3k + 1$. Telescope the left hand side, solve for $k^2$.</p>
<p>If you need more of a hint I'll be glad to elaborate later. In case you'd like a reference, this is one of the first exercises in Spivak's Calculus (I don't have the latest edition, but it's in the section "Numbers of Various Sorts.")</p>
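The telescoping hint can be sketched in a few lines (my own sketch, not from Spivak): summing $(k+1)^3-k^3=3k^2+3k+1$ over $k=1,\dots,n$ collapses the left-hand side to $(n+1)^3-1$, and solving for $S$ reuses Gauss's formula for $\sum k$:

```python
# (n+1)^3 - 1 = 3*S + 3*(sum of k) + n, solved for S = sum of k^2.
def sum_of_squares(n):
    triangular = n * (n + 1) // 2          # Gauss's formula for 1 + 2 + ... + n
    return ((n + 1) ** 3 - 1 - 3 * triangular - n) // 3

# Check against the closed form n(n+1)(2n+1)/6 for a range of n.
for n in range(1, 50):
    assert sum_of_squares(n) == n * (n + 1) * (2 * n + 1) // 6
print(sum_of_squares(10))  # 385
```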
<p><strong>EDIT</strong></p>
<p>Since you're only interested in the "Gaussian" method of summing this series, I suggest you take a look at this Wikipedia article on <a href="http://en.wikipedia.org/wiki/Arithmetic_progression" rel="nofollow noreferrer">Arithmetic progression</a>. It shows how you can use this specific trick for finding the sum of arbitrary arithmetic series. Unfortunately, your sum is not of this kind, so cannot be summed by that simple method.</p>
<p>I have no doubt that if you fumble around with the series for long enough, you'll encounter some trick that will allow you to sum it (maybe the fact that the sum of the first $n$ odd numbers is a square?). No doubt a lot of research has been done on the so-called <a href="http://mathworld.wolfram.com/SquarePyramidalNumber.html" rel="nofollow noreferrer">square pyramidal numbers</a> (check out the list of references!) The <a href="http://en.wikipedia.org/wiki/Square_pyramidal_number" rel="nofollow noreferrer">Wikipedia</a> entry on them has a picture of what you're actually summing (finding the number of balls in a square bottomed pyramid), so maybe you can see why they aren't as easy to sum as the triangular numbers, which can easily be arranged into squares. The <a href="https://mathoverflow.net/questions/8846/proofs-without-words/8851#8851">MathOverflow</a> link by <a href="https://math.stackexchange.com/users/22816/aelguindy">aelguindy</a> gives a "visual proof" of how the formula is derived.</p>
<p>Sorry I could not be of any more help.</p>
|
1,109,759 | <p>I.e., prove $\lVert f+g \rVert \le \lVert f \rVert + \lVert g \rVert$ for all $f,g$ in $C^\infty [0,1]$, where
$$\lVert f \rVert =\left(\int_0^1 \lvert f(x) \rvert ^2 \,dx\right)^{1/2}$$</p>
<p>I think we're supposed to use Cauchy-Schwarz: $\lvert \int_0^1 f(x)g(x) dx \rvert \le \left( \int_0^1 \lvert f(x) \rvert ^2 dx \right)^{1/2} \left( \int_0^1 \lvert g(x) \rvert ^2 dx \right) ^{1/2}$</p>
<p>So far I've got $\lVert f+g \rVert\ = \left( \int_0^1 \left( \lvert f(x) + g(x) \rvert \right) ^2 dx \right) ^{1/2} \le \left( \int_0^1 \left( \lvert f(x) \rvert + \lvert g(x) \rvert \right) ^2 dx \right) ^{1/2} = \left( \int_0^1 (\lvert f(x) \rvert)^2 + (\lvert g(x) \rvert)^2 + 2 \lvert f(x) \rvert \lvert g(x) \rvert dx \right)^{1/2} \le \left( \int_0^1 (\lvert f(x) \rvert)^2 dx \right) ^{1/2} + \left( \int_0^1 (\lvert g(x) \rvert)^2 dx \right) ^{1/2} + \left( 2 \int_0^1 \lvert f(x) \rvert \lvert g(x) \rvert dx \right)^{1/2}$</p>
<p>I'm also not sure about the last step...</p>
| user14717 | 24,355 | <p>There's no reason to do all these calculations. The norm induced from <em>any</em> inner product obeys the triangle inequality as a consequence of the Cauchy-Schwarz inequality, so just state that your norm is induced from the inner product
\begin{align}
\left<f, g\right> := \int_{0}^{1} f(x)\overline{g(x)} \, \mathrm{d}x.
\end{align}</p>
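A small discrete illustration of this fact (my own, not part of the exercise): sampling $f$ and $g$ on a grid turns $\lVert\cdot\rVert$ into a Euclidean norm on the sample vectors, so both the triangle inequality and Cauchy–Schwarz hold exactly up to rounding:

```python
import math

# Discrete L2-type norm: sqrt(sum f(x_i)^2 * dx) is a Euclidean norm on the
# vector of samples, so the triangle inequality is automatic.
def l2_norm(samples, dx):
    return math.sqrt(sum(v * v for v in samples) * dx)

n = 1000
dx = 1.0 / n
xs = [(i + 0.5) * dx for i in range(n)]
f = [math.exp(x) for x in xs]          # smooth test functions on [0, 1]
g = [math.sin(3 * x) for x in xs]
fg = [a + b for a, b in zip(f, g)]

assert l2_norm(fg, dx) <= l2_norm(f, dx) + l2_norm(g, dx) + 1e-12
# Discrete Cauchy-Schwarz, the inequality the hint suggests using:
assert abs(sum(a * b for a, b in zip(f, g)) * dx) <= l2_norm(f, dx) * l2_norm(g, dx) + 1e-12
```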
|
280,346 | <p>I am wondering how to tell Mathematica that a function, say <code>F[x]</code>, is a real-valued function so that, e.g., the <code>Conjugate</code> command will pass through it:</p>
<pre><code>Conjugate[E^(-i k x)F[x]] = E^(i k x)F[x]
</code></pre>
<p>I tried to make a huge calculation using the <code>Conjugate</code> command, but without setting the arbitrary function <code>F[x]</code> as a real-valued function, the result is completely messy.</p>
| sebqas | 14,660 | <p>I'm surprised, as Bob's answer works just as you wish:</p>
<p><a href="https://i.stack.imgur.com/LsetX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LsetX.png" alt="enter image description here" /></a></p>
<p>However, if you don't want to use <code>FullSimplify</code>, you may redefine <code>Conjugate[]</code>:</p>
<pre><code>Unprotect[Conjugate]
Conjugate[f[x_]] := f[x]
Conjugate[Derivative[n_][f][x]] := Derivative[n][f][x]
Protect[Conjugate]
</code></pre>
<p>and then</p>
<pre><code>Conjugate[Exp[I x] h[x] f'[x] f[x]]
</code></pre>
<p>gives</p>
<pre><code>E^(-I Conjugate[x]) Conjugate[h[x]] f[x] Derivative[1][f][x]
</code></pre>
|
3,586,346 | <p>Basically, I'd like to model sin x, but make its derivative tend towards 0, so that as x increases it becomes the constant y = 0. The function begins like a typical sin x function, but slowly the fluctuation decreases until it isn't there anymore. If this works as I'm trying to have it work, I think some constant between 0 and 1 multiplying that derivative could also re-establish the normal sin x function.</p>
<p>I've been messing around with the Desmos graphing calculator, somehow trying to make sin x a result of some equation containing its derivative, but I haven't been able to make much progress. I have taken classes in differential and integral calculus and linear algebra.</p>
<p>Edit: Sorry for the lack of rigour, I'm struggling to formulate the question properly</p>
<p>Edit2: User @ElliotG has provided me with the exact equation I'm looking to obtain, or at least one that perfectly describes the idea of what the equation I'm looking for looks like: <span class="math-container">$\frac{\sin x}{1 + x^2}$</span>. The way I'd describe this function is that it is like sin x, but with its derivative constantly decreasing until it reaches 0. What I'd be interested in is: could there have been a way of obtaining a similar equation to <span class="math-container">$\frac{\sin x}{1 + x^2}$</span> having in mind that what we want to do is have sin x as if its derivative were tending to 0? So all we start by knowing is how we want the function to behave, without knowing what it looks like.</p>
| Lt. Commander. Data | 632,103 | <p>Any function of the form
<span class="math-container">$$y(t)=Ae^{-\gamma t}\sin(\omega t+\phi)$$</span>
works. For example, substituting <span class="math-container">$\gamma = 0.2$</span>, <span class="math-container">$A=1$</span>, <span class="math-container">$\omega = 1$</span> and <span class="math-container">$\phi = 0$</span>, we get the following curve:</p>
<p><a href="https://i.stack.imgur.com/hlZLt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hlZLt.png" alt="Function"></a></p>
<p>which satisfies your constraints. </p>
<hr>
<p>Your question also pertained to <em>why</em> such a function works. The reason is that this function was developed in physics as a model for the damped <a href="https://en.wikipedia.org/wiki/Harmonic_oscillator" rel="nofollow noreferrer">Harmonic Oscillator</a>: starting from Newton's second law for a mass on a spring and introducing an additional damping force gives
<span class="math-container">$$m\frac{d^2x}{dt^2}+c\frac{dx}{dt}+kx=0,$$</span>
and the function above is the general solution in the underdamped case. The equation models oscillators like pendulums in realistic scenarios (where effects like air resistance prevent the oscillator from continuing indefinitely).</p>
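A quick check (my addition) that the displayed function really solves a damped-oscillator equation: for $y(t)=e^{-\gamma t}\sin(\omega t)$ the characteristic roots are $-\gamma\pm i\omega$, so $y''+2\gamma y'+(\gamma^2+\omega^2)y=0$, which matches the Newton form above after dividing by $m$ (with $c/m=2\gamma$, $k/m=\gamma^2+\omega^2$):

```python
import math

# Parameters matching the plotted example: gamma = 0.2, omega = 1, A = 1, phi = 0.
gamma, omega = 0.2, 1.0

def y(t):
    return math.exp(-gamma * t) * math.sin(omega * t)

def dy(t):  # closed-form first derivative
    return math.exp(-gamma * t) * (-gamma * math.sin(omega * t) + omega * math.cos(omega * t))

def d2y(t):  # closed-form second derivative
    return math.exp(-gamma * t) * ((gamma ** 2 - omega ** 2) * math.sin(omega * t)
                                   - 2 * gamma * omega * math.cos(omega * t))

for t in (0.0, 0.5, 1.7, 4.0, 10.0):
    residual = d2y(t) + 2 * gamma * dy(t) + (gamma ** 2 + omega ** 2) * y(t)
    assert abs(residual) < 1e-12            # the ODE is satisfied
    assert abs(y(t)) <= math.exp(-gamma * t)  # the decaying envelope
```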
|
1,056,038 | <blockquote>
<p>Each of $n$ balls is independently placed into one of $n$ boxes, with all boxes equally likely. What is the probability that exactly one box is empty? (Introduction to Probability, Blitzstein and Hwang, p.36).</p>
</blockquote>
<ul>
<li>The number of possible permutations with replacement is $n^n$</li>
<li><p>In order to have one empty box, we need a different box having $2$ balls in it. We have $\dbinom{n}{1}$ choices for the empty box, $\dbinom{n-1}{1}$ choices left for the box with $2$ balls, and $(n-2)!$ permutations to assign the remaining balls to the remaining boxes.</p></li>
<li><p>Result: $$\frac{\dbinom{n}{1} \dbinom{n-1}{1} (n-2)!}{n^n}$$</p></li>
</ul>
<p>Is this correct?</p>
| Jimmy R. | 128,037 | <p>Your approach (although nice) has a flaw in the second bullet. The problem is that there you count two different things: on the one hand, ways to choose a box, and on the other hand, ways to choose a ball, and this results in confusion. In detail:</p>
<ol>
<li>Your denominator is correct,</li>
<li>Your numerator is missing one term that should express the number of ways in which you can choose the $2$ balls out of $n$ that you will put in the chosen box with the $2$ balls. This can be done in $\dbinom{n}{2}$ ways.</li>
<li>The other terms in your numerator are correct. Note that your numerator can be written more simply as $$\dbinom{n}{1}\dbinom{n-1}{1} (n-2)!=n\cdot(n-1)\cdot(n-2)!=n!$$</li>
</ol>
<p>Adding the omitted term gives the correct result, which differs from yours only in this term (the highlighted one)
$$\frac{\dbinom{n}{1}\color{blue}{\dbinom{n}{2}} \dbinom{n-1}{1} (n-2)!}{n^n}=\frac{\dbinom{n}{2}n!}{n^n}$$</p>
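The corrected count can be verified by brute force for small $n$ (my addition), enumerating all $n^n$ placements directly:

```python
from itertools import product
from math import comb, factorial

# Count placements (ball i -> box placement[i]) in which exactly one box is empty.
def exact_count_one_empty(n):
    count = 0
    for placement in product(range(n), repeat=n):
        empties = n - len(set(placement))
        if empties == 1:
            count += 1
    return count

# Compare against the corrected numerator C(n,2) * n!.
for n in (3, 4, 5):
    assert exact_count_one_empty(n) == comb(n, 2) * factorial(n)
```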
|
4,615,947 | <p>Let <span class="math-container">$a,b\in\Bbb{N}^*$</span> such that <span class="math-container">$\gcd(a,b)=1$</span>. How to show that <span class="math-container">$\gcd(ab,a^2+b^2)=1$</span>?</p>
| chroma | 1,006,726 | <p><span class="math-container">\begin{align*}I & =\int_{-\infty}^\infty\frac{\sin x}{(x+1)^2+1}dx=\int_{-\infty}^{\infty}\frac{\sin(x-1)}{x^2+1}dx \\ & =\cos(1)\int_{-\infty}^\infty\frac{\sin x}{x^2+1}dx-\sin(1)\int_{-\infty}^\infty\frac{\cos x}{x^2+1}dx \\ &=-2\sin(1)\int_{0}^\infty\frac{\cos x}{x^2+1}dx=-2\sin(1)\frac{\pi}{2e}=\frac{-\pi\sin(1)}{e}. \end{align*}</span>
(See <a href="https://math.stackexchange.com/q/140580">here</a>.)</p>
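A numerical cross-check of the final value (my addition): the midpoint rule on a long symmetric interval suffices here, because integration by parts shows the tails beyond $\pm X$ are only $O(1/X^2)$:

```python
import math

def integrand(x):
    return math.sin(x) / ((x + 1) ** 2 + 1)

# Composite midpoint rule on [-300, 300]; tails and discretization error are
# both far below the tolerance used.
h = 0.005
n = int(600 / h)
total = sum(integrand(-300 + (k + 0.5) * h) for k in range(n)) * h

target = -math.pi * math.sin(1) / math.e   # the claimed closed form
assert abs(total - target) < 0.01
```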
|
1,038,713 | <p>Suppose I am given a circle $C$ in $\Bbb C^*$ and two points $w_1,w_2$. Given another circle $C'$ and points $z_1,z_2$, what is the procedure to find a Möbius transformation that sends $C\to C'$, $w_i\to z_i,i=1,2$? Here $z_1\in C\not\ni z_2$; $w_1\in C'\not\ni w_2$. For example, take $|z|=2$, $w_1=-2,w_2=0$. Then, the transformation $T(z)=-\dfrac{z+2}{2}$ sends $|z|=2$ to $|z+1|=1$, $-2$ to $0$, and $0$ to $-1$. Hence, I need to find a transformation that fixes $|z+1|=1$ and $0$, and sends $-1$ to $i$. I know that if $|\alpha|\neq 1$, the transformation $$T(z)=\frac{z-\alpha}{1-z\bar \alpha}$$ fixes $|z|=1$, sends $\alpha$ to $0$ and has fixed points $\sqrt{\dfrac{\alpha}{\bar\alpha}}$. I obtained $T$ using $4$ successive transformations $T_1=-(z+2)$, $T_2=\dfrac{z}{z+2}$, $T_3=-z$ and $T_4=-\dfrac{z}{z+1}$, which seems a bit ineffective. How can I generally find $T$ given $(C,C',(z_1,z_2),(w_1,w_2))$?</p>
| Joonas Ilmavirta | 166,535 | <p>First, take a Möbius transform $\phi$ that takes $C$ to $\mathbb R$ and $z_1$ to $i$.
This $\phi$ always exists uniquely, provided $z_1\notin C$; it's not too hard to map a circle to the real axis, and the transforms fixing the real axis are easy to classify.
Let $\psi$ be the corresponding map for $C'$ and $w_1$.</p>
<p>Now $T=\psi^{-1}\circ\phi$ maps $C$ to $C'$ and $z_1$ to $w_1$.
If there was another Möbius transform $g$ with this property, then $\psi\circ g\circ\phi^{-1}$ is a Möbius transform fixing the real axis and $i$, so $\psi\circ g\circ\phi^{-1}$ is the identity and thus $g=T$.</p>
<p>That is, the data $(C,C',z_1,w_1)$ uniquely determines the mapping $T$ (assuming $z_1\notin C$ and $w_1\notin C'$).
Therefore your problem is overdetermined, and it has a solution if and only if $\phi(z_2)=\psi(w_2)$.
An easy way to see that some conditions are needed is to see that if $z_1$ is the reflection of $z_2$ across $C$, the same has to hold for $w_1$, $w_2$ and $C'$, or that if the points $z_i$ are on the same side of $C$, the points $w_i$ can't be on different sides of $C'$.</p>
<p>The above method gives a way to construct $T$ in terms of two Möbius transforms.
Is this method sufficient for you?</p>
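One mechanical way to carry out the composition $T=\psi^{-1}\circ\phi$ (my own illustration, with arbitrary made-up coefficient matrices, not the specific maps of the question) is to represent each Möbius transform $z\mapsto\frac{az+b}{cz+d}$ by the matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$, since composition of transforms corresponds to matrix multiplication:

```python
# A Mobius transform is stored as a nested tuple ((a, b), (c, d)).
def apply_mobius(m, z):
    (a, b), (c, d) = m
    return (a * z + b) / (c * z + d)

def mat_mul(m1, m2):
    (a, b), (c, d) = m1
    (e, f), (g, h) = m2
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def mat_inv(m):  # inverse up to a scalar, which a Mobius transform ignores
    (a, b), (c, d) = m
    return ((d, -b), (-c, a))

phi = ((1, 2j), (3, 4))         # arbitrary invertible examples
psi = ((2, -1), (1j, 5))
T = mat_mul(mat_inv(psi), phi)  # represents psi^{-1} o phi

for z in (0.3 + 0.7j, -2 + 1j, 5j):
    lhs = apply_mobius(T, z)
    rhs = apply_mobius(mat_inv(psi), apply_mobius(phi, z))
    assert abs(lhs - rhs) < 1e-12
```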
|
2,820,464 | <p>Find the limit of the sequence {$a_{n}$}, given by$$ a_{1}=0,\quad a_{2}=\dfrac {1}{2},\quad a_{n+1}=\dfrac {1}{3}(1+a_{n}+a^{3}_{n-1}) \quad \text{for } n > 1$$</p>
<p>My try:</p>
<p>$ a_{1}=0,a_{2}=\dfrac {1}{2},a_{3}=\dfrac {1}{2},a_{4}\approx 0.54$, so the sequence appears to be increasing and each term is positive. Let the limit of the sequence be $x$.
Then $ \lim _{n\rightarrow \infty }a_{n+1}=\lim _{n\rightarrow \infty }a_{n}=x$
$$ \lim _{n\rightarrow \infty }a_{n+1}= \lim _{n\rightarrow \infty }\frac{1}{3}\left(1+a_{n}+a^{3}_{n-1}\right)$$</p>
<p>$\Rightarrow x=\dfrac {1}{3}( 1+x+x^3)$</p>
<p>$\Rightarrow x^3-2x+1=0$</p>
<p>and this equation has three roots $x=\dfrac {-1\pm \sqrt {5}}{2},1$ </p>
<p>So the limit of the sequence is $\dfrac {-1 + \sqrt {5}}{2}$.</p>
<p><strong>How can I say that the limit is</strong>
$\dfrac {-1 + \sqrt {5}}{2}$?</p>
| Boyku | 567,523 | <p>We will prove that all $a_n$ are smaller than ${2 \over 3}=0.6666...$. </p>
<p>By induction, suppose that $0, 1/2, ... a_{n-1}, a_n < 2/3$.</p>
<p>then $a_{n+1} < {1 + 2/3 + 8/27 \over 3 }= {53 \over 81} < {54 \over 81} = {2\over 3}$</p>
<p>since $a_1 =0<{2 \over 3}$, for all n , $0 \leqslant a_n < {2 \over 3}$. </p>
<p>To prove the convergence we will show that $a_n$ is increasing. Again, by induction,</p>
<p>$a_{n+1} - a_n = {a_n - a_{n-1} \over 3} + { (a_{n-1} - a_{n-2}) ( a_{n-1}^2 + a_{n-1} a_{n-2} + a_{n-2}^2 ) \over 3} > 0$ where </p>
<p>$a_4-a_3 = {1 \over 24} > 0 $ and $a_5- a_4 = {1 \over 72} > 0 $ are the two first consecutive positive differences.</p>
<p>We have here a strictly increasing bounded sequence i.e. a convergent one.</p>
<p>So the limit calculation in the question is valid. The limit is $\dfrac {-1 + \sqrt {5}}{2} \approx 0.618$.</p>
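A short numerical confirmation (my addition) that the iteration really settles at $\frac{-1+\sqrt 5}{2}\approx 0.618$, rather than at one of the other roots of $x^3-2x+1=0$:

```python
import math

# Iterate a_{n+1} = (1 + a_n + a_{n-1}^3)/3 from a_1 = 0, a_2 = 1/2.
a_prev, a = 0.0, 0.5
for _ in range(200):
    a_prev, a = a, (1 + a + a_prev ** 3) / 3

limit = (math.sqrt(5) - 1) / 2     # the root singled out by the argument above
assert abs(a - limit) < 1e-9
print(round(a, 6))  # 0.618034
```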
|
3,736,580 | <p>Show that for <span class="math-container">$n>3$</span>, there is always a <span class="math-container">$2$</span>-regular graph on <span class="math-container">$n$</span> vertices. For what values of <span class="math-container">$n>4$</span> will there be a 3-regular graph on n vertices?</p>
<p>I think this question is slightly out of my control. Can you please help me out with this question...</p>
<p>For part two, I think that by handshaking I can exclude all odd numbers of vertices, as <span class="math-container">$3(2n+1)$</span> is not an even number. So should the answer be all even numbers of vertices? Does that make sense?
And for part 1 it is obviously true but how can I proceed to the answer?
Thanks.</p>
| Jack D'Aurizio | 44,121 | <p>Using only <span class="math-container">$3$</span>s and <span class="math-container">$4$</span>s, with <span class="math-container">$n$</span> of them you can make any integer number between <span class="math-container">$3n$</span> and <span class="math-container">$4n$</span>.<br>
Let <span class="math-container">$\frac{p}{q}$</span> be a convergent of the continued fraction of <span class="math-container">$\pi$</span>: by choosing <span class="math-container">$n=q$</span> we may realize <span class="math-container">$p$</span> as a sum of <span class="math-container">$q$</span> numbers in <span class="math-container">$\{3,4\}$</span>,
since <span class="math-container">$p>3q$</span> and <span class="math-container">$p<4q$</span>. Moreover <span class="math-container">$\left|\pi-\frac{p}{q}\right|\leq \frac{1}{q^2}$</span>. If we consider the concatenation of these sequences given by convergents we get an infinite sequence whose average value clearly converges to <span class="math-container">$\pi$</span> as wanted.</p>
<p><span class="math-container">$$ \color{red}{\frac{3}{1}},\color{blue}{\frac{22}{7}},\color{purple}{\frac{333}{106}},\ldots\Longrightarrow \color{red}{3}\color{blue}{3333334}\color{purple}{3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333444444444444444}\ldots $$</span></p>
|
3,736,580 | <p>Show that for <span class="math-container">$n>3$</span>, there is always a <span class="math-container">$2$</span>-regular graph on <span class="math-container">$n$</span> vertices. For what values of <span class="math-container">$n>4$</span> will there be a 3-regular graph on n vertices?</p>
<p>I think this question is slightly out of my control. Can you please help me out with this question...</p>
<p>For part two, I think that by handshaking I can exclude all odd numbers of vertices, as <span class="math-container">$3(2n+1)$</span> is not an even number. So should the answer be all even numbers of vertices? Does that make sense?
And for part 1 it is obviously true but how can I proceed to the answer?
Thanks.</p>
| Calum Gilhooley | 213,690 | <p>Each term in either of these sums is equal to either <span class="math-container">$\left\lfloor\pi\right\rfloor = 3$</span> or <span class="math-container">$\left\lceil\pi\right\rceil = 4$</span>:
<span class="math-container">\begin{align*}
\pi & = \lim_{n\to\infty}\frac{\left\lfloor{n\pi}\right\rfloor}{n} =
\lim_{n\to\infty}\frac1{n}\sum_{i=1}^n(\left\lfloor{i\pi}\right\rfloor -
\left\lfloor{(i - 1)\pi}\right\rfloor) \\
& = \lim_{n\to\infty}\frac{\left\lceil{n\pi}\right\rceil}{n} =
\lim_{n\to\infty}\frac1{n}\sum_{i=1}^n(\left\lceil{i\pi}\right\rceil -
\left\lceil{(i - 1)\pi}\right\rceil).
\end{align*}</span></p>
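A quick empirical check of this construction (my addition): the gaps $\lfloor i\pi\rfloor-\lfloor(i-1)\pi\rfloor$ only take the values $3$ and $4$, and their running average $\lfloor n\pi\rfloor/n$ is within $1/n$ of $\pi$:

```python
import math

n = 10_000
# Each gap is floor(i*pi) - floor((i-1)*pi); since pi is between 3 and 4,
# every gap is 3 or 4, and the gaps telescope to floor(n*pi).
gaps = [math.floor(i * math.pi) - math.floor((i - 1) * math.pi) for i in range(1, n + 1)]

assert set(gaps) == {3, 4}
assert abs(sum(gaps) / n - math.pi) <= 1 / n
```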
|
2,714,450 | <p>Suppose $A$ and $B$ are two square matrices so that $e^{At}=e^{Bt}$ for infinite (countable or uncountable) values of $t$ where $t$ is positive.</p>
<p>Do you think that $A$ <strong>has to be equal to</strong> $B$?</p>
<p>Thanks,
Trung Dung.</p>
<hr>
<p>Maybe I do not state clearly or correctly.</p>
<p>I mean that the equality holds for all $t\in (0, T)$ where $T>0$ or $T=+\infty$, i.e. for uncountably many $t$. In this case I think some of the counter-examples above do not work, because they only hold for countably many $t$.</p>
| José Carlos Santos | 446,262 | <p>No. Let $A$ be the null matrix and let $B=2\pi i\operatorname{Id}$. Then$$(\forall t\in\mathbb{Z}):e^{tA}=e^{tB}.$$</p>
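The counterexample can be checked numerically in the $1\times 1$ case (my addition), where matrix exponentials reduce to ordinary complex exponentials: with $A=0$ and $B=2\pi i$, $e^{tA}=e^{tB}=1$ for every integer $t$, yet $A\neq B$:

```python
import cmath
import math

for t in range(-5, 6):
    exp_tA = cmath.exp(0 * t)                 # e^{tA} with A = 0
    exp_tB = cmath.exp(2j * math.pi * t)      # e^{tB} with B = 2*pi*i
    assert abs(exp_tA - exp_tB) < 1e-9        # equal on all of Z

# At non-integer t the two differ, so agreement on an infinite set
# does not force agreement everywhere.
assert abs(cmath.exp(0.0) - cmath.exp(2j * math.pi * 0.5)) > 1
```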
|
133,370 | <p>In differential geometry of surfaces, how can one define a non-zero Torsion tensor? It seems that the connection you provide has always to be symmetric since, by definition,
$$\Gamma^{\gamma}_{\alpha\beta}\equiv\mathbf{a}^{\gamma}\cdot\mathbf{a}_{\alpha,\beta}=\mathbf{a}^{\gamma}\cdot\mathbf{r}_{,\alpha\beta}=\mathbf{a}^{\gamma}\cdot\mathbf{r}_{,\beta\alpha}=\Gamma^{\gamma}_{\beta\alpha},$$
where $\mathbf{r}:U\to\mathbb{R}^3$, $U\subset\mathbb{R}^2$, is an embedded $C^3$ surface with parametrization $(\theta^1,\theta^2)\in U$, $\mathbf{a}_\alpha\equiv\mathbf{r}_{,\alpha}$ are the tangent vectors to the coordinate curves $\theta^\alpha$, $\alpha=\{1,2\}$, and $\mathbf{a}^\gamma$ is the covector of $\mathbf{a}_\alpha$.</p>
<p>This definition also implies that the connection is metric compatible:
$$\Gamma^{\gamma}_{\alpha\beta}=\frac{1}{2}a^{\gamma\lambda}(a_{\beta\lambda,\alpha}+a_{\lambda\alpha,\beta}-a_{\alpha\beta,\lambda}).$$
So there is no non-zero Non-metricity Tensor either. ($a_{\alpha\beta}\equiv\mathbf{a}_\alpha\cdot\mathbf{a}_\beta$,$a^{\alpha\beta}\equiv\mathbf{a}^\alpha\cdot\mathbf{a}^\beta$.)</p>
<p>Existence of non-zero Torsion tensor and Non-metricity tensor is important in studies of defects in two-dimensional crystals because in continuum model, they represent certain defect densities.</p>
| Peter Michor | 26,935 | <p>Levi-Civita means metric compatible and torsion free. Adding a skew symmetric $\binom{1}{2}$ tensor field (= your favorite torsion) to a covariant derivative does not change metric compatibility.</p>
|
3,435,256 | <p>The following statement is given in my book under the topic <em>Tangents to an Ellipse</em>:</p>
<blockquote>
<p>The <a href="http://mathworld.wolfram.com/EccentricAngle.html" rel="nofollow noreferrer">eccentric angles</a> of the points of contact of two parallel tangents differ by <span class="math-container">$\pi$</span></p>
</blockquote>
<p>In case of a circle, it is easy for me to visualise that two parallel tangents meet the circle at two points which are apart by <span class="math-container">$\pi$</span> radians as they are diametrically opposite. But in case of ellipse, as the eccentric angle is defined with respect to the <a href="http://mathworld.wolfram.com/AuxiliaryCircle.html" rel="nofollow noreferrer">auxiliary circle</a> and not the ellipse, I am unable to understand why two parallel tangents meet the ellipse at points which differ by <span class="math-container">$\pi$</span>. </p>
<p>Kindly explain the reason behind this fact.</p>
| mathlove | 78,967 | <blockquote>
<p>Kindly explain the reason behind this fact.</p>
</blockquote>
<p>The reason is that an ellipse can be obtained by stretching/shrinking a circle. The strech/shrink is a <a href="https://en.wikipedia.org/wiki/Linear_map" rel="nofollow noreferrer">linear map (linear transformation)</a>.</p>
<p>Let's consider two tangent lines on the circle <span class="math-container">$x^2+y^2=a^2$</span> at <span class="math-container">$(a\cos\theta,a\sin\theta)$</span>,<span class="math-container">$(a\cos(\theta+\pi),a\sin(\theta+\pi))$</span>. You already know that the two tangent lines are parallel.</p>
<p>Now, let's stretch/shrink the circle and the tangent lines. Stretching/shrinking the circle <span class="math-container">$x^2+y^2=a^2$</span> to obtain the ellipse <span class="math-container">$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$</span> means that you replace <span class="math-container">$y$</span> in <span class="math-container">$x^2+y^2=a^2$</span> with <span class="math-container">$\frac{a}{b}y$</span> to have <span class="math-container">$x^2+\left(\frac aby\right)^2=a^2$</span> which is nothing but <span class="math-container">$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$</span>.</p>
<p>By this stretch/shrink, we have the followings :</p>
<ul>
<li><p>The circle <span class="math-container">$x^2+y^2=a^2$</span> is transformed to the ellipse <span class="math-container">$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$</span>.</p></li>
<li><p>The two parallel lines are transformed to two parallel lines.</p></li>
<li><p>The two lines tangent to the cirlce are transformed to two lines tangent to the ellipse.</p></li>
<li><p>The tangent points <span class="math-container">$(a\cos\theta,a\sin\theta)$</span>,<span class="math-container">$(a\cos(\theta+\pi),a\sin(\theta+\pi))$</span> on the circle are transformed to two tangent points <span class="math-container">$(a\cos\theta,b\sin\theta)$</span>,<span class="math-container">$(a\cos(\theta+\pi),b\sin(\theta+\pi))$</span> on the ellipse respectively.</p></li>
</ul>
<p>From the above facts, it follows that the eccentric angles of the points of contact of two parallel tangents differ by <span class="math-container">$\pi$</span>.</p>
<hr>
<p>The followings are the proof for the above facts.</p>
<p>Let's consider the circle <span class="math-container">$x^2+y^2=a^2$</span> and two points <span class="math-container">$(a\cos\theta,a\sin\theta)$</span>,<span class="math-container">$(a\cos(\theta+\pi),a\sin(\theta+\pi))$</span>.</p>
<p>The equation of the tangent line at <span class="math-container">$(a\cos\theta,a\sin\theta)$</span> is given by
<span class="math-container">$$a\cos\theta\ x+a\sin\theta\ y=a^2\tag1$$</span></p>
<p>Similarly, the equation of the tangent line at <span class="math-container">$(a\cos(\theta+\pi),a\sin(\theta+\pi))$</span> is given by
<span class="math-container">$$a\cos(\theta+\pi)x+a\sin(\theta+\pi)y=a^2\tag2$$</span></p>
<p>Now, let's stretch/shrink the circle and the lines <span class="math-container">$(1)(2)$</span> by replacing <span class="math-container">$y$</span> with <span class="math-container">$\frac aby$</span> to have
<span class="math-container">$$(1)\to a\cos\theta\ x+a\sin\theta\cdot\frac aby=a^2\tag3$$</span>
<span class="math-container">$$(2)\to a\cos(\theta+\pi)x+a\sin(\theta+\pi)\cdot\frac aby=a^2\tag4 $$</span>
Here note that these lines <span class="math-container">$(3)(4)$</span> are parallel since the slope of each line is <span class="math-container">$\frac{-b\cos\theta}{a\sin\theta}$</span>.</p>
<p>Finally, note that <span class="math-container">$(3)$</span> can be written as
<span class="math-container">$$\frac{a\cos\theta}{a^2}x+\frac{b\sin\theta}{b^2}y=1\tag5$$</span>
which is nothing but the tangent line at <span class="math-container">$(a\cos\theta,b\sin\theta)$</span> on the ellipse <span class="math-container">$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$</span>.</p>
<p>Similarly, <span class="math-container">$(4)$</span> can be written as
<span class="math-container">$$\frac{a\cos(\theta+\pi)}{a^2}x+\frac{b\sin(\theta+\pi)}{b^2}y=1\tag6$$</span>
which is nothing but the tangent line at <span class="math-container">$(a\cos(\theta+\pi),b\sin(\theta+\pi))$</span> on the ellipse <span class="math-container">$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$</span>.</p>
<p>Since <span class="math-container">$(5)(6)$</span> are parallel, we see that the eccentric angles of the points of contact of two parallel tangents differ by <span class="math-container">$\pi$</span>. <span class="math-container">$\quad\square$</span></p>
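The conclusion is easy to confirm numerically (my addition): the tangent direction of the ellipse at eccentric angle $t$ is $(-a\sin t,\, b\cos t)$, and at $t+\pi$ it is the exact negative, so the two tangents are parallel:

```python
import math

a, b = 3.0, 2.0   # arbitrary semi-axes
for t in (0.3, 1.1, 2.5, 4.0):
    # Derivative of the parametrization (a*cos t, b*sin t):
    d1 = (-a * math.sin(t), b * math.cos(t))
    d2 = (-a * math.sin(t + math.pi), b * math.cos(t + math.pi))
    cross = d1[0] * d2[1] - d1[1] * d2[0]   # zero iff the directions are parallel
    assert abs(cross) < 1e-12
```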
|
617,275 | <p>Let $E$ be a normed vector space, let $f\in E^*$ be a bounded linear functional from $E$ to $\mathbb{C}$, and fix $x\in E$. We have $$\forall y\in E;\ \ \ \ f(y-x)\leq \frac{1}{2}\|y\|^2-\frac{1}{2}\|x\|^2$$
And I have proven $f(x)=\|x\|^2$ and $\|x\|\leq \|f\|$. Prove that $\|f\|=\|x\|$.</p>
| Prahlad Vaidyanathan | 89,789 | <p>Since $f(x) = \|x\|^2$, for any $y\in E$, one has
$$
f(y) = f(x) + f(y-x) \leq \frac{1}{2}\|x\|^2 + \frac{1}{2}\|y\|^2 \leq \max\{\|x\|^2, \|y\|^2\}
$$
Hence, for $z \in X$ such that $\|z\| \leq 1$, let $y = \|x\|z$, then $\|y\| \leq \|x\|$, hence
$$
\|x\|f(z) = f(y) \leq \|x\|^2
$$
$$
\Rightarrow f(z) \leq \|x\|
$$
Conclude that $\|f\| \leq \|x\|$, and so you're done.</p>
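A finite-dimensional illustration of the conclusion (my addition, not part of the exercise): in $\mathbb R^2$ with $f(y)=\langle y,x\rangle$ we have $f(x)=\|x\|^2$, and the supremum of $f$ over the unit circle is $\|x\|$, matching $\|f\|=\|x\|$:

```python
import math

x = (3.0, 4.0)
norm_x = math.hypot(*x)   # ||x|| = 5

def f(y):
    return y[0] * x[0] + y[1] * x[1]   # f(y) = <y, x>

assert abs(f(x) - norm_x ** 2) < 1e-12   # f(x) = ||x||^2

# Sample the unit circle; the sup of f there approximates ||f||.
sup = max(f((math.cos(2 * math.pi * k / 1000), math.sin(2 * math.pi * k / 1000)))
          for k in range(1000))
assert sup <= norm_x + 1e-9       # ||f|| <= ||x||
assert sup > norm_x - 1e-3        # attained at x/||x||, up to sampling
```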
|
4,036,558 | <p><span class="math-container">$f(x)=e^x(x^2+x)$</span>, derive <span class="math-container">$\dfrac{d^n\,f(x)}{dx^n}$</span></p>
<p>I may use the Leibniz formula, but I'm not sure. :(</p>
| Z Ahmed | 671,540 | <p>By the Leibniz formula for <span class="math-container">$D^n[u(x) v(x)]$</span>, we get
<span class="math-container">$$f(x)=(x^2+x)e^x \implies D^n[e^x(x^2+x)]= (D^n e^x) (x^2+x)+ {n\choose 1} (D^{n-1} e^x) D(x^2+x)+{n \choose 2} (D^{n-2} e^x) D^2(x^2+x)+0$$</span>
<span class="math-container">$$f^{(n)}(x)=e^x(x^2+x)+n e^x (2x+1)+n(n-1)e^x=e^x[x^2+(2n+1)x+n^2].$$</span></p>
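The closed form can be double-checked (my addition) with the recurrence $f_{n+1}=(e^x p_n)'=e^x(p_n+p_n')$, tracking only the polynomial coefficients:

```python
# Represent p_n = c2*x^2 + c1*x + c0; then p -> p + p' maps
# (c2, c1, c0) to (c2, c1 + 2*c2, c0 + c1).
def nth_poly(n):
    c2, c1, c0 = 1, 1, 0          # p_0 = x^2 + x
    for _ in range(n):
        c2, c1, c0 = c2, c1 + 2 * c2, c0 + c1
    return c2, c1, c0

# Should reproduce the coefficients of x^2 + (2n+1)x + n^2.
for n in range(30):
    assert nth_poly(n) == (1, 2 * n + 1, n * n)
```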
|
2,621 | <p>Let $A$ be a commutative Banach algebra with unit.
It is well known that if the Gelfand transform $\hat{x}$ of $x\in A$ is non-zero, then $x$ is invertible in $A$ (the so called Wiener Lemma in the case when $A$ is the Banach algebra of absolutely convergent Fourier series).</p>
<p>As a converse of the above, let $B$ be a Banach space contained in $A$ and suppose $B$ is closed under inversion - i.e.: If $x\in B$ and $x^{-1}\in A$ then $x^{-1}\in B$.</p>
<p>(1) Prove that $B$ is a Banach algebra.</p>
<p>(2) Must $A$ and $B$ have the same norm? If not, are the norms equivalent?</p>
<p>(3) Do $A$ and $B$ have the same maximal ideal space?</p>
| WWright | 249 | <p>EDIT: a previous comment posted the same answer, I just noticed</p>
<p>You could use the Monte Carlo method to approximate $\pi$.
You basically define a square of side length 2 and inscribe a circle of radius 1 in it. Let the center of the circle be at the origin.
Use a random number generator to pick an x-coordinate between 0 and 1 and a y-coordinate between 0 and 1.</p>
<p>number of pts in the circle / number of pts in the square $\approx \frac{\pi}{4}$</p>
<p>This is the simplest example of a monte carlo code I know and I think it is probably the standard example.</p>
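A minimal sketch of the method described above (my addition; sampling only the first quadrant, so the hit ratio estimates $\pi/4$):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
N = 100_000
# Count points of the unit square falling inside the quarter unit circle.
inside = sum(1 for _ in range(N)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_estimate = 4 * inside / N
assert abs(pi_estimate - 3.141592653589793) < 0.05  # crude, as Monte Carlo is
print(pi_estimate)
```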
|
1,357,922 | <p>How can I find the period of real valued function satisfying <span class="math-container">$f(x)+f(x+4)=f(x+2)+f(x+6)$</span>?</p>
<p>Note: Use of recurrence relations not allowed. Use of elementary algebraic manipulations is better!</p>
| Michael Burr | 86,421 | <p>Observe, since
$$
f(x)+f(x+4)=f(x+2)+f(x+6),
$$
we can substitute $x+2$ for $x$ to get
$$
f(x+2)+f(x+6)=f(x+4)+f(x+8).
$$</p>
<p>Equating these, we know that $f(x)=f(x+8)$.</p>
|
1,357,922 | <p>How can I find the period of real valued function satisfying <span class="math-container">$f(x)+f(x+4)=f(x+2)+f(x+6)$</span>?</p>
<p>Note: Use of recurrence relations not allowed. Use of elementary algebraic manipulations is better!</p>
| JimmyK4542 | 155,509 | <p>You are given that $f(x)+f(x+4) = f(x+2)+f(x+6)$ for all $x \in \mathbb{R}$. </p>
<p>Replace $x$ with $x+2$ to get $f(x+2)+f(x+6) = f(x+4)+f(x+8)$ for all $x \in \mathbb{R}$. </p>
<p>Thus, $f(x)+f(x+4) = f(x+2)+f(x+6) = f(x+4)+f(x+8)$ for all $x \in \mathbb{R}$. </p>
<p>Subtract $f(x+4)$ from the left and right side of the last equation to get: $f(x) = f(x+8)$ for all $x \in \mathbb{R}$. </p>
<p>Important note: This tells you that $f$ is $8$-periodic, but $8$ may not necessarily be the minimum period. For instance, $f(x) = \sin(\pi x)$ satisfies $f(x)+f(x+4) = f(x+2)+f(x+6)$ for all $x \in \mathbb{R}$, but has period $2$.</p>
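Both remarks can be spot-checked numerically (my addition): $\sin(\pi x)$ satisfies the given relation, is $8$-periodic as derived, yet has minimal period $2$:

```python
import math

def f(x):
    return math.sin(math.pi * x)

for x in (0.0, 0.3, 1.7, -2.4):
    lhs = f(x) + f(x + 4)
    rhs = f(x + 2) + f(x + 6)
    assert abs(lhs - rhs) < 1e-9        # the given functional equation
    assert abs(f(x) - f(x + 8)) < 1e-9  # the derived 8-periodicity
    assert abs(f(x) - f(x + 2)) < 1e-9  # but this example's period is only 2
```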
|
298,791 | <blockquote>
<p>If a ring $R$ is commutative, I don't understand why if $A, B \in R^{n \times n}$, $AB=1$ means that $BA=1$, i.e., $R^{n \times n}$ is Dedekind finite.</p>
</blockquote>
<p>Arguing with determinant seems to be wrong, although $\det(AB)=\det(BA ) =1$ but it necessarily doesn't mean that $BA =1$.</p>
<blockquote>
<p>And is every left zero divisor also a right divisor ? </p>
</blockquote>
| Andreas Caranti | 58,401 | <p>I believe arguing with the determinant works, as $1 = A B$ implies $1 = \det(A B) = \det(A) \det(B)$, so $\det(A) \in R$ is invertible, and $A$ is.</p>
<p><strong>PS</strong> I believe this argument is implicit in @YACP comment to the original post.</p>
|
4,634,180 | <p><span class="math-container">$$\int \frac{\sin^2(x)dx}{\sin(x)+2\cos(x)}$$</span></p>
<p>I tried to use different substitutions such as <span class="math-container">$t=\cos(x)$</span>, <span class="math-container">$t=\sin(x)$</span>, <span class="math-container">$t=\tan(x)$</span>, and after expressing <span class="math-container">$\sin$</span> and <span class="math-container">$\cos$</span> through <span class="math-container">$\tan(\frac{x}{2})$</span>, I've got <span class="math-container">$ \int -4 \frac{t^2dt}{(1+t^2)^2(t^2-t-1)}.$</span></p>
<p>Rational fractions didn’t work.</p>
| Sofia Ibragimova | 1,146,983 | <p>After applying the method with rational fractions I got three fractions in the integral and computed each separately; the result I obtained in the end is
<span class="math-container">$$-\frac{1}{5}\cos x-\frac{2}{5}\sin x-\frac{4}{5\sqrt{5}}\ln\left|\frac{\frac{2}{\sqrt{5}}\tan\frac{x}{2}-\frac{1}{\sqrt{5}}-1}{\frac{2}{\sqrt{5}}\tan\frac{x}{2}-\frac{1}{\sqrt{5}}+1}\right|+C.$$</span></p>
|
764,632 | <p>The question is this :</p>
<p>$$\lim_{x\to-\infty} {\sqrt{x^2+x}+\cos x\over x+\sin x}$$</p>
<p>The solution is $-1$ and this seems to be only obtained from the change variable strategy, such as $t=-x$.</p>
<p>However, I have no idea why this isn't just solved by simply eliminating $x$ in numerator and denominator, which generates the value $1$.</p>
<p>It seems that this is related with $x\to-\infty$, but I have no specific idea.</p>
<p>Can anyone help me? </p>
| Karolis Juodelė | 30,701 | <p>You are thinking of writing $\sqrt{x^2 + x} = x\sqrt{1+\frac 1 {x}}$; however, $\sqrt{x^2 + x}$ and $\sqrt{1+\frac 1 {x}}$ are (defined to be) positive, so how can this hold for a negative $x$?</p>
|
764,632 | <p>The question is this :</p>
<p>$$\lim_{x\to-\infty} {\sqrt{x^2+x}+\cos x\over x+\sin x}$$</p>
<p>The solution is $-1$ and this seems to be only obtained from the change variable strategy, such as $t=-x$.</p>
<p>However, I have no idea why this isn't just solved by simply eliminating $x$ in numerator and denominator, which generates the value $1$.</p>
<p>It seems that this is related with $x\to-\infty$, but I have no specific idea.</p>
<p>Can anyone help me? </p>
| DeepSea | 101,504 | <p>$t = -x$ gives: $L$ = $-\displaystyle \lim_{t \to \infty} \dfrac{\sqrt{t^2 - t} + \cos t}{t + \sin t} = -\displaystyle \lim_{t \to \infty} \dfrac{\sqrt{1 - \dfrac{1}{t}} + \dfrac{\cos t}{t}}{1 + \dfrac{\sin t}{t}} = -1$ because $\displaystyle \lim_{t \to \infty} \dfrac{\cos t}{t} = \displaystyle \lim_{t \to \infty} \dfrac{\sin t}{t} = \displaystyle \lim_{t \to \infty} \dfrac{1}{t} = 0$</p>
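<p>A numerical illustration of the same computation (a sketch; Python used just for the check):</p>

```python
import math

def g(x):
    # the quotient from the question
    return (math.sqrt(x * x + x) + math.cos(x)) / (x + math.sin(x))

# evaluate along a sequence tending to minus infinity
for k in range(2, 7):
    print(-10.0 ** k, g(-10.0 ** k))

assert abs(g(-10.0 ** 6) + 1.0) < 1e-4
```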
|
20,982 | <p>Let E be an elliptic curve over a finite field k (char(k) is not 2) given by y^2 = (x-a)(x-b)(x-c), where a, b and c are distinct elements of k. Then why is (c,0) in [2]E(k) iff c-a and c-b are both squares in k-{0}? </p>
| Pete L. Clark | 1,149 | <p>For an elliptic curve over any field $K$ of characteristic different from $2$, the Kummer sequence reads</p>
<p>$0 \rightarrow E(K)/2E(K) \stackrel{\iota}{\rightarrow} H^1(K,E[2]) \rightarrow H^1(K,E)[2] \rightarrow 0$.</p>
<p>In particular, $\iota$ is an injection. Therefore $P \in 2E(K) \iff$ the image $[P]$
of $P$ in $E(K)/2E(K)$ is equal to zero $\iff \iota([P]) = 0$. </p>
<p>Moreover, since you have full $2$-torsion, $H^1(K,E[2]) \cong (K^{\times}/K^{\times 2})^2$ and in this case there is a well-known explicit description of the Kummer map: for any point $P = (x,y)$ different from $(a,0)$ and $(b,0)$,</p>
<p>$\iota(P) = (x-a,x - b) \pmod{K^{\times 2} \times K^{\times 2}}$: </p>
<p>see e.g. Proposition X.1.4 of Silverman's book. The result you want follows immediately from this, taking $P = (c,0)$. </p>
<p>Note that, as Bjorn points out in his nice answer to the question, the finiteness of $K$ is not needed or used here. In my original version of this answer, I mentioned the fact that $K$ finite implies $H^1(K,E) = 0$ -- it seemed like it could be helpful! -- but the argument does not in fact use the surjectivity of $\iota$, so is valid over any field of characteristic different from $2$ over which $E$ has full $2$-torsion. </p>
|
20,982 | <p>Let E be an elliptic curve over a finite field k (char(k) is not 2) given by y^2 = (x-a)(x-b)(x-c), where a, b and c are distinct elements of k. Then why is (c,0) in [2]E(k) iff c-a and c-b are both squares in k-{0}? </p>
| Robin Chapman | 4,213 | <p>Pete's is certainly the right way to look at this problem,
but in this example one can argue naively using explicit
calculations. One loses no generality by assuming $c=0$
(by replacing $x$ by $x+c$). Then using the duplication formula,
one finds that the solutions of $[2]P = (0,0)$ are $P=(uv,uv(u+v))$
where $u$ and $v$ run through the square roots of $-a$ and $-b$
respectively. If $-a$ and $-b$ are squares in $k$ then each $P$
has coordinates in $k$. If one of the $P$ has coordinates in $k$
then they all do: so both $(uv,uv(u+v))$ and $(-uv,-uv(u-v))$ lie
in $E(k)$. Thus $uv$, $u+v$ and $u-v$ lie in $k$. Hence $u\in k$
and $v\in k$ so that $-a$ and $-b$ are squares in $k$.</p>
<p>(Like Pete's and Bjorn's solutions, this does not require the
finiteness of $k$.)</p>
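<p>This explicit calculation is easy to verify in exact rational arithmetic. Here is a small sketch (Python; the values $a=-4$, $b=-9$, hence $u=2$, $v=3$, are my own illustrative choice so that the square roots are rational):</p>

```python
from fractions import Fraction as Fr

# curve y^2 = x(x - a)(x - b), i.e. the case c = 0; illustrative a = -4, b = -9
a, b = Fr(-4), Fr(-9)
A, B = -(a + b), a * b              # y^2 = x^3 + A*x^2 + B*x

def on_curve(x, y):
    return y * y == x * (x - a) * (x - b)

def double(x, y):
    # standard duplication formula for y^2 = x^3 + A*x^2 + B*x
    lam = (3 * x * x + 2 * A * x + B) / (2 * y)
    x3 = lam * lam - A - 2 * x
    y3 = lam * (x - x3) - y
    return x3, y3

u, v = Fr(2), Fr(3)                 # u^2 = -a, v^2 = -b
P = (u * v, u * v * (u + v))        # the claimed solution of [2]P = (0, 0)
assert on_curve(*P)
assert double(*P) == (Fr(0), Fr(0))
print("verified: [2]P = (0, 0) for P =", P)
```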
|
2,081,792 | <p>I learnt the derivation of the distance formula of two points in first quadrant I.e., $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$ where it is easy to find the legs of the hypotenuse (distance between two points) since the first has no negative coordinates and only two axes ($x$ coordinate and $y$ coordinate). while finding the distance between two points from two different quadrants of a Cartesian plane where four axes exist ($x$,$x_1$,$y$, $y_1$ coordinates), the same formula applies for this problem also. But, the derivation of the formula is based only on the distance between two points in first quadrant alone. Can you please explain the DERIVATION of the formula for more than two quadrants? Please</p>
| Logan Luther | 347,317 | <p>Consider the following diagram:<img src="https://i.stack.imgur.com/RiihX.jpg" alt="Distance is highlighted in red."></p>
<p>Now, to find the desired distance using the Pythagorean theorem, we need to know the lengths of the edges BC and AC.</p>
<p>Suppose A and B are $$A(x_1,y_1),\qquad B(x_2,y_2).$$
To find AC, we subtract the length of AD from CD. Observe that C has the same $y$-coordinate as the point B. So:
$$AC =\vert y_2 - y_1 \vert.$$ In order to have only the length and not worry about its sign, we take the absolute value of this quantity.
The same argument gives the length of BC: $$BC =\vert x_2 - x_1 \vert.$$ </p>
<p>And now to use our friend Pythagoras. ABC is a right triangle, so $${AB}^2={AC}^2+{BC}^2,$$ and with our last results we have:</p>
<p>$$d=\sqrt{{\vert x_2-x_1\vert}^2 +{\vert y_2-y_1\vert}^2}$$</p>
<p>Now notice that the square of the absolute value of a number is equal to the square of that number</p>
<p>(the absolute value only changes the number's sign, not its size, and when you square the number its sign becomes positive anyway, so they are equal).</p>
<p>So we can drop the absolute values in our formula:</p>
<p>$$d= \sqrt{{(x_2-x_1)}^2 +{(y_2-y_1)}^2}$$</p>
<p><img src="https://i.stack.imgur.com/oHqkV.jpg" alt="Requested diagram by the original poster."></p>
|
3,089,493 | <p>Calculate the volume between <span class="math-container">$x^2+y^2+z^2=8$</span> and <span class="math-container">$x^2+y^2-2z=0$</span>. I don't know how to approach this but I still tried something:</p>
<p>I rewrote the second equation as: <span class="math-container">$x^2+y^2+(z-1)^2=z^2+1$</span> and then combined it with the first one and got <span class="math-container">$2(x^2+y^2)+(z-1)^2=9$</span> and then parametrized this with the regular spheric parametrization which is:</p>
<p><span class="math-container">$$x=\frac {1}{\sqrt{2}}r\sin \theta \cos \phi$$</span>
<span class="math-container">$$y=\frac 1{\sqrt{2}}r\sin\theta\sin\phi$$</span>
<span class="math-container">$$z=r\cos\theta + 1$$</span></p>
<p>And of course the volume formula:</p>
<p><span class="math-container">$$V(\Omega)=\int\int\int_{\Omega} dxdydz$$</span></p>
<p>But that led me to a wrong answer.. what should I do?</p>
<p>Else, I tried parametrizing like this: <span class="math-container">$x=r\cos t$</span>, <span class="math-container">$y=r\sin t$</span>. then <span class="math-container">$r^2+z^2=8$</span> and <span class="math-container">$r^2-2z=0$</span> giving the only 'good' solutions <span class="math-container">$r=2, z=2$</span> then <span class="math-container">$r\in[0,2]$</span> and <span class="math-container">$z=[\frac {r^2}2,\sqrt{8-r^2}]$</span> positive root because it's in the plane <span class="math-container">$z=2$</span>.</p>
<p>giving <span class="math-container">$\int_{0}^{2\pi}\int_0^2\int_{\frac {r^2}2}^{\sqrt{8-r^2}}r\,dz\,dr\,dt.$</span> But still I got the wrong answer.</p>
| G Cab | 317,234 | <p>A geometric view of the problem will be much of help to solve it.</p>
<p>One is a sphere of radius <span class="math-container">$\sqrt{8}$</span> centered at the origin. </p>
<p>The other is a paraboloid of revolution, given by the revolution of <span class="math-container">$z=x^2/2$</span>
around the <span class="math-container">$z$</span> axis, thus with the vertex at the origin.</p>
<p>The volume between the two is given by revolution around the <span class="math-container">$z$</span> axis
of the 2D area delimited by a parabola and a circle. </p>
<p>I suppose you can compute that by "shells" or "washer" method.</p>
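<p>For a numerical cross-check (assuming, as the setup in the question suggests, that the solid is bounded below by the paraboloid and above by the sphere): evaluating <span class="math-container">$2\pi\int_0^2 r\bigl(\sqrt{8-r^2}-\frac{r^2}{2}\bigr)\,dr$</span> by hand gives <span class="math-container">$\frac{4\pi}{3}(8\sqrt{2}-7)$</span>, and a crude midpoint rule agrees (Python, purely illustrative):</p>

```python
import math

def height(r):
    # top: sphere z = sqrt(8 - r^2); bottom: paraboloid z = r^2 / 2
    return math.sqrt(8.0 - r * r) - r * r / 2.0

# midpoint rule for V = 2*pi * integral from 0 to 2 of r * height(r) dr
n = 200000
h = 2.0 / n
V = 2.0 * math.pi * h * sum(
    (i + 0.5) * h * height((i + 0.5) * h) for i in range(n)
)

closed_form = 4.0 * math.pi / 3.0 * (8.0 * math.sqrt(2.0) - 7.0)
assert abs(V - closed_form) < 1e-3
print(V, closed_form)
```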
|
131,051 | <p>So we want to find a $u$ such that $\mathbb{Q}(u)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$. I obtained that $u$ works if it is of the following form: $$u=\sqrt[6]{2^a5^b}$$ where $a\equiv 1\pmod{2}$, and $a\equiv 0\pmod{3}$, and $b\equiv 0\pmod{2}$ and $ b\equiv 1\pmod{3}$. This works since $$u^3=\sqrt{2^a5^b}=2^{\frac{a-1}{2}}5^{\frac{b}{2}}\sqrt{2}$$and also, $$u^2=\sqrt[3]{2^a5^b}=2^{\frac{a}{3}}5^{\frac{b-1}{3}}\sqrt[3]{5}$$Thus we have that $\mathbb{Q}(\sqrt{2},\sqrt[3]{5})\subseteq \mathbb{Q}(u)$. Note that $\sqrt{2}$ has degree $2$ (i.e., $[\mathbb{Q}(\sqrt{2}):\mathbb{Q}]=2$) and also that $\sqrt[3]{5}$ has degree $3$. As $\gcd(2,3)=1$, we have that $[\mathbb{Q}(\sqrt{2},\sqrt[3]{5}):\mathbb{Q}]=6$. Note that this is also the degree of the extension of $u$, since one could check that the set $\{1,u,...,u^5\}$ is $\mathbb{Q}$-independent. Ergo, we must have equality. That is, $\mathbb{Q}(u)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$.</p>
<p>My question is: How can I find all such $w$ such that $\mathbb{Q}(w)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$? This is homework so I would rather hints rather hints than a spoiler answer. I believe that They are all of the form described above, but apriori I do not know how to prove this is true. </p>
<p>My idea was the following, since $\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$ has degree $6$, then if $w$ is such that the desired equality is satisfied, then $w$ is a root of an irreducible polynomial of degree $6$, moreover, we ought to be able to find rational numbers so that $$\sqrt{2}=\sum_{i=0}^5q_iw^i$$ and $$\sqrt[3]{5}=\sum_{i=0}^5p_iw^i$$But from here I do not know how to show that the $u$'s described above are the only ones with this property (It might be false, apriori I dont really know). </p>
| Gerry Myerson | 8,269 | <p>The field has degree 6 over the rationals. Any element $w$ of degree 6 will generate the field. </p>
<p>Now, every element of the field has degree 1, 2, 3, or 6. The only elements of degree 1 are the rationals. The only elements of degree 2 are those of the form $a+b\sqrt2$ (although it takes some work to check this). The only elements of degree 3 are those of the form $a+b\root3\of5+c\root3\of{25}$ (again, this takes some checking). It follows that the generators are all the elements $a+b\sqrt2+c\root3\of5+d\sqrt2\root3\of5+e\root3\of{25}+f\sqrt2\root3\of{25}$ except those with $b=c=d=e=f=0$, those with $c=d=e=f=0$, and those with $b=d=f=0$. </p>
|
139,385 | <p>Can anyone help me prove if $n \in \mathbb{N}$ and is $p$ is prime such that $p|(n!)^2+1$ then $(p-1)/2$ is even?</p>
<p>I'm attempting to use Fermats little theorem, so far I have only shown $p$ is odd.</p>
<p>I want to show that $p \equiv 1 \pmod 4$</p>
| marlu | 26,204 | <p>If $p$ divides $(n!)^2+1$, then $(n!)^2 \equiv -1 \pmod p$, so $n!$ has order $4$ in $\mathbb F_p^\times$. By Lagrange's theorem, 4 divides the order of $\mathbb F_p^\times$ which is $p-1$, hence $p \equiv 1 \pmod 4$.</p>
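<p>A brute-force confirmation for small $n$ (an illustrative sketch; trial division is plenty at this size, and for $n\ge 2$ the number $(n!)^2+1$ is odd, so $p=2$ never occurs):</p>

```python
def prime_factors(m):
    # plain trial division; fine for (n!)^2 + 1 with n up to 6
    out, d = [], 2
    while d * d <= m:
        while m % d == 0:
            out.append(d)
            m //= d
        d += 1
    if m > 1:
        out.append(m)
    return out

fact = 1
for n in range(1, 7):
    fact *= n
    if n >= 2:  # n = 1 gives (1!)^2 + 1 = 2, the excluded even prime
        assert all(p % 4 == 1 for p in prime_factors(fact * fact + 1)), n
print("every prime factor of (n!)^2 + 1 is 1 mod 4 for n = 2..6")
```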
|
1,499,949 | <p>Prove that for all event $A,B$</p>
<p>$P(A\cap B)+P(A\cap \bar B)=P(A)$</p>
<p><strong>My attempt:</strong></p>
<p>Formula: $\color{blue}{P(A\cap B)=P(A)+P(B)-P(A\cup B)}$</p>
<p>$=\overbrace {P(A)+P(B)-P(A\cup B)}^{=P(A\cap B)}+\overbrace {P(A)+P(\bar B)-P(A\cup \bar B)}^{=P(A\cap \bar B)}$</p>
<p>$=2P(A)+\underbrace{P(B)+P(\bar B)}_{=1}-P(A\cup B)-P(A\cup \bar B)$</p>
<p>$=1+2P(A)-P(A\cup B)-P(A\cup \bar B)$</p>
| Elekko | 101,668 | <p>They are the same, it's the exponent representation of "root operator".
Example $4^{\frac{1}{2}}=\sqrt[2]{4}=2$</p>
|
83,512 | <p>Question: (From an Introduction to Convex Polytopes)</p>
<p>Let $(x_{1},...,x_{n})$ be an $n$-family of points from $\mathbb{R}^d$, where $x_{i} = (\alpha_{1i},...,\alpha_{di})$, and $\bar{x_{i}} =(1,\alpha_{1i},...,\alpha_{di})$, where $i=1,...,n$. Show that the $n$-family $(x_{1},...,x_{n})$ is affinely independent if and only if the $n$-family $(\bar{x_{1}},...,\bar{x_{n}})$ of vectors from $\mathbb{R}^{d+1}$ is linearly independent.</p>
<p>-</p>
<p>Here is what I have so far, it is mostly just writing out definitions, if you can give me some hints towards how I can start the problem that would be great.</p>
<p>$(\Rightarrow)$ Assume that for $x_{i} = (\alpha_{1i},...,\alpha_{di})$, the $n$-family $(x_{1},...,x_{n})$ is affinely independent. Then a linear combination $\lambda_{1}x_{1} + ... + \lambda_{n}x_{n} = 0$ with $\lambda_{1} + ... + \lambda_{n} = 0$ forces $\lambda_{1} = ... = \lambda_{n} = 0$. An equivalent characterization of affine independence is that the $(n-1)$-families $(x_{1}-x_{i},...,x_{i-1}-x_{i},x_{i+1}-x_{i},...,x_{n}-x_{i})$ are linearly independent. We want to prove that for $\bar{x_{i}}=(1,\alpha_{1i},...,\alpha_{di})$, the $n$-family $(\bar{x}_{1},...,\bar{x}_{n})$ of vectors from $\mathbb{R}^{d+1}$ is linearly independent.</p>
| Agustí Roig | 664 | <p>So, we want to prove that these two statements are equivalent:</p>
<ul>
<li><p>(a) The <em>points</em> $x_1, \dots , x_n \in \mathbb{R}^d$ are <em>affinely</em> independent.</p></li>
<li><p>(b) The <em>vectors</em> $\overline{x}_1, \dots , \overline{x}_n \in \mathbb{R}^{d+1}$ are <em>linearly</em> independent.</p></li>
</ul>
<p>Where $\overline{x}_i = (1, x_i),\ i = 1, \dots , n$.</p>
<p>Let's go.</p>
<p>$\mathbf{(a)\Longrightarrow (b)}$. Let $\lambda_1, \dots , \lambda_n \in \mathbb{R}$ be such that</p>
<p>$$
\lambda_1 \overline{x}_1 + \dots + \lambda_n \overline{x}_n = 0 \ . \qquad \qquad \qquad [1]
$$</p>
<p>We have to show that $\lambda_1 = \dots = \lambda_n = 0$. But $[1]$ means</p>
<p>$$
\lambda_1 (1, x_1) + \dots + \lambda_n (1, x_n) = (0, 0) \ ,
$$</p>
<p>where $(0,0) \in \mathbb{R} \times \mathbb{R}^d$. And this is equivalent to</p>
<p>$$
\lambda_1 x_1 + \dots + \lambda_n x_n = 0 \qquad \text{and} \qquad \lambda_1 + \dots + \lambda_n = 0 \ .
$$</p>
<p>Now, $x_i = x_i - 0 = \overrightarrow{0x_i} , \ i = 1, \dots , n$. (Here, $0 \in \mathbb{R}^d$.) So, since we are assuming $(a)$, it follows that </p>
<p>$$
\lambda_1 = \dots = \lambda_n = 0 \ .
$$</p>
<p>$\mathbf{(b)\Longrightarrow (a)}$. Let $p \in \mathbb{R}^d$ be any point. We have to show that</p>
<p>$$
\lambda_1 \overrightarrow{ px}_1 + \dots + \lambda_n \overrightarrow{ px}_n = 0 \qquad \text{and} \qquad \lambda_1 + \dots + \lambda_n = 0 \qquad \qquad \qquad [2]
$$</p>
<p>implies $\lambda_1 = \dots = \lambda_n = 0$.</p>
<p>If the point $p$ was $0 \in \mathbb{R}^d$, the conclusion should be clear because, in this case, $\overrightarrow{px_i} = x_i, \ i = 1, \dots , n$, and $[2]$ reads as follows:</p>
<p>$$
\lambda_1 x_1 + \dots + \lambda_n x_n = 0 \qquad \text{and} \qquad \lambda_1 + \dots + \lambda_n = 0 \ . \qquad \qquad \qquad [3]
$$</p>
<p>From here, we do the same reasoning as in the previous proof, but backwars: these two things entail</p>
<p>$$
\lambda_1 (1, x_1) + \dots + \lambda_n (1, x_n) = (0, 0) \ .
$$</p>
<p>Which is the same as</p>
<p>$$
\lambda_1 \overline{x}_1 + \dots + \lambda_n \overline{x}_n = 0 \ .
$$</p>
<p>And this implies </p>
<p>$$
\lambda_1 = \dots = \lambda_n = 0\ ,
$$</p>
<p>since we are assuming $(b)$. </p>
<p>Hence, we have to show that the particular case $[3]$ already implies the general one $[2]$, for every $p\in \mathbb{R}^d$. But this is obvious:</p>
<p>$$
\lambda_1 \overrightarrow{ px}_1 + \dots + \lambda_n \overrightarrow{ px}_n = \lambda_1 (x_1 -p ) + \dots + \lambda_n (x_n - p)
$$</p>
<p>Which is</p>
<p>$$
\lambda_1 x_1 + \dots + \lambda_n x_n - (\lambda_1 + \dots + \lambda_n)p = \lambda_1 x_1 + \dots + \lambda_n x_n = 0 \ .
$$</p>
|
380,452 | <p>A relation R is defined on ordered pairs of integers as follows :</p>
<p>$(x,y) R(u,v)$ if $x<u$ and $y>v.$ </p>
<p>Then R is </p>
<ol>
<li><p>Neither a Partial Order nor an Equivalence relation</p></li>
<li><p>A Partial Order but not a Total Order</p></li>
<li><p>A Total Order </p></li>
<li><p>An Equivalence relation</p></li>
</ol>
| rschwieb | 29,335 | <p>I think the touchstone for understanding direct limits is understanding <a href="http://en.wikipedia.org/wiki/Directed_union#Examples" rel="nofollow">directed unions</a>.</p>
<p>A collection $C$ of sets is directed if for every $X,Y\in C$, there exists $Z\in C$ containing both $X$ and $Y$. This becomes a direct system using inclusion mappings.</p>
<p>Now just by using the directness of this collection, you can compare any two sets (and inductively, any finite number of sets) by finding a set that contains them all. But what if you want to compare more than finitely many? That is what the limit is going to do: the direct limit for the system above turns out to be $\cup C$, and so you get a sense that the limit is "the limit of finite approximations by compositions of the morphisms".</p>
|
596,005 | <p>Show that $f:\mathbb{R}^2\to\mathbb{R}$, $f \in C^{2}$ satisfies the equation
$$\frac{\partial^2 f}{\partial x^2} - \frac{\partial^2 f}{\partial y^2} = 0$$
for all points $(x,y) \in \mathbb{R}^2$ if and only if for all $(x,y)\in \mathbb{R}^2$ and $t \in \mathbb{R}$ we have:
$$f(x, y + 2t) + f(x, y) = f(x + t,y + t) + f(x - t, y +t).$$</p>
<p><strong>Note</strong>. In such case, $f$ is said to satisfy the <em>parallelogram's law</em>.</p>
| Brian Rushton | 51,970 | <p>Try setting $u=x+y,v=x-y$, and notice that the inverse equations are $x=(u+v)/2,y=(u-v)/2$. This changes your equation to $f_{uv}=0$, which makes the problem much easier.</p>
<p><em>Edit</em>: Apparently we have some critics. Do what I said to do; the solution of this equation is any function of the form $f=g(u)+h(v)$. Then $f(x,y+2t)=g(u+2t)+h(v-2t)$, while $f(x+t,y+t)=g(u+2t)+h(v)$ and $f(x-t,y+t)=g(u)+h(v-2t)$.</p>
<p>For the other direction, differentiate the equation twice with respect to $t$ and then set $t=0$. </p>
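<p>A numerical spot-check of the first direction, with a concrete solution of the form $f = g(u) + h(v)$ (the particular $g$ and $h$ are my own choices):</p>

```python
import math

def f(x, y):
    # g(u) = u**3 and h(v) = cos(v); any such sum solves f_xx - f_yy = 0
    u, v = x + y, x - y
    return u ** 3 + math.cos(v)

for x, y, t in [(0.3, -1.2, 0.7), (2.0, 0.5, -1.1), (-0.4, 3.3, 2.2)]:
    lhs = f(x, y + 2 * t) + f(x, y)
    rhs = f(x + t, y + t) + f(x - t, y + t)
    assert abs(lhs - rhs) < 1e-9
print("parallelogram identity holds for the sampled points")
```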
|
2,249,020 | <p><a href="https://i.stack.imgur.com/L7PXf.jpg" rel="nofollow noreferrer">The Math Problem</a></p>
<p>I have issues with finding the Local Max and Min, and Abs Max and Min, after I find the Critical Point. How do I do this problem in its entirety? </p>
| Chappers | 221,811 | <p>The scalar product of vectors $a=(a_1,a_2)$ and $b=(b_1,b_2)$ is given by the two formulae (provable equivalent using the cosine rule, see <a href="https://math.stackexchange.com/a/2227712/221811">here</a>)
$$ a \cdot b = a_1b_1+a_2b_2 = \sqrt{a_1^2+a_2^2}\sqrt{b_1^2+b_2^2} \cos{\theta}, $$
where $\theta$ is the angle between $a$ and $b$.</p>
<p>To use this to deduce the cosine rule, choose the unit vectors
$$ a=(\cos{\alpha},\sin{\alpha}), \qquad b=(\cos{\beta},\sin{\beta}). $$
$a$ makes angle $\alpha$ with $(1,0)$, $b$ makes angle $\beta$ with the same vector, and it is clear, since these angles are both in the same direction (one goes anticlockwise from $(1,0)$ in both cases), that the angle between this $a$ and this $b$ is $\beta-\alpha$ (or $\alpha-\beta$, but cosine is even so it makes no difference). Applying the formulae for the dot product gives
$$ 1\cdot 1 \cdot \cos{(\alpha-\beta)} = a\cdot b = \cos{\alpha}\cos{\beta} + \sin{\alpha}\sin{\beta}, $$
as required.</p>
<p>The last formula can be found by replacing $\beta$ by $-\beta$ in this formula, and using that sine is odd and cosine is even.</p>
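<p>The same computation can be mirrored numerically (a sketch):</p>

```python
import math

def check(alpha, beta):
    a = (math.cos(alpha), math.sin(alpha))
    b = (math.cos(beta), math.sin(beta))
    dot = a[0] * b[0] + a[1] * b[1]     # a1*b1 + a2*b2
    # both are unit vectors, so dot = cos(angle between) = cos(alpha - beta)
    assert abs(dot - math.cos(alpha - beta)) < 1e-12

for alpha, beta in [(0.3, 1.7), (-2.0, 0.4), (3.0, -1.2)]:
    check(alpha, beta)
print("cos(a - b) = cos a cos b + sin a sin b on the samples")
```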
|
3,715,824 | <p>I proved that <span class="math-container">$$\lim_{n\to\infty}\left(1+\frac{x^2}{n^2}\right)^{\frac{n}{2}}=1$$</span>
using L'Hospital's rule. But is there a way to prove it without L'Hospital's rule? I tried splitting it as
<span class="math-container">$$\lim_{n\to\infty}n^{-n}(n^2+x^2)^{\frac{n}{2}},$$</span>
but that didn't work because <span class="math-container">$\lim_{n\to\infty}(n^2+x^2)^{\frac{n}{2}}$</span> diverges.</p>
| Ty. | 760,219 | <p>Consider the following approximation, valid for large n and fixed x:
<span class="math-container">$$e^{\frac{x^2}{n^2}} \approx 1+\frac{x^2}{n^2}$$</span>
Therefore, rewrite the limit as:
<span class="math-container">$$\lim_{n \to \infty} {\left(e^{\frac{x^2}{n^2}}\right)}^{\frac{n}{2}}$$</span>
<span class="math-container">$$=\lim_{n \to \infty} e^{\frac{x^2}{2n}}$$</span>
<span class="math-container">$$=1$$</span></p>
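<p>Numerically the convergence is easy to observe (illustrative):</p>

```python
def term(x, n):
    return (1.0 + x * x / (n * n)) ** (n / 2.0)

x = 3.0
for n in (10, 100, 1000, 10000):
    print(n, term(x, n))
assert abs(term(x, 10 ** 6) - 1.0) < 1e-4
```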
|
44,391 | <p>The general equation of a conic is $A x^2 + B x y + C y^2 + D x + E y + F = 0$. At Wikipedia, there is an equation for the eccentricity, based on ABCDEF. </p>
<p>Is there a similar equation for getting the foci or directrix for a general ellipse, parabola, hyperbola from ABCDEF? Please assume that a non-degenerate form of the equation is given.</p>
| ccorn | 75,794 | <p>The appropriate setting for this is the <em>complex projective plane</em>.
While declaring some symbols, I will talk a tiny bit about that,
but do not mistake this as a proper introduction to the subject.</p>
<p>In the projective plane,
we use triples $(X:Y:Z)$ of homogenous coordinates for points,
not all equal to zero.
Two such triples are considered equal if one is a scalar multiple of the other.
Affine points $(x,y)$ are embedded as $(x:y:1)$.
The equation for the conic in homogenous coordinates is then
$$\begin{pmatrix}X\\Y\\Z\end{pmatrix}^\top
\underbrace{
\begin{pmatrix}2A & B & D\\B & 2C & E\\D & E & 2F\end{pmatrix}
}_M
\begin{pmatrix}X\\Y\\Z\end{pmatrix} = 0\tag{1}$$
with the matrix $M$ being regular for a non-degenerate conic.</p>
<p>We also use triples $[U:V:W]$ to represent lines,
with a point $(X:Y:Z)$ being on a line $[U:V:W]$ if $UX+VY+WZ = 0$.
The equation for the tangents to the conic then takes the form
$$\begin{pmatrix}U\\V\\W\end{pmatrix}^\top
\underbrace{
\begin{pmatrix}2G & H & R\\H & 2K & S\\R & S & 2T\end{pmatrix}
}_{L}
\begin{pmatrix}U\\V\\W\end{pmatrix} = 0\tag{2}$$
where $L$ is the matrix inverse of $M$,
multiplied with an arbitrary nonzero scalar.
Therefore, when given $M$, we can compute a suitable $L$,
and if we set $L$ to the <a href="https://en.wikipedia.org/wiki/Adjugate_matrix" rel="noreferrer">adjugate matrix</a>
of $M$, we do not even need to carry out divisions for that.</p>
<p>Much useful information about the conic is easier to find in $L$ than in $M$.
Examples:</p>
<ul>
<li>The sign of $T$, compared with the sign of $\det L$, discriminates between
ellipse, parabola, and hyperbola.</li>
<li>The projective point $(R:S:2T)$ (third row of $L$)
is the (symmetry) center of the conic;
if $T=0$, which signifies a parabola,
$(R:S)$ indicates the direction of its symmetry axis.</li>
</ul>
<p>By the way, we can interpret $M$ as a linear map
from projective points to projective lines.
The linear map represented by $L$ does the reverse.
Every point and line thus related form a
<a href="https://en.wikipedia.org/wiki/Pole_and_polar" rel="noreferrer"><em>(pole,polar)</em> pair</a>
with respect to the given conic. I will use the linear map feature later.
Let me just mention here that the conic equations $(1)$ and $(2)$
can be interpreted as stating that the conic consists of those points
that lie on their polars, and in that case the polars are tangents to the
conic.</p>
<p>Finally, complex coordinates. We need those for the following:</p>
<p><strong>Proposition 1:</strong>
<em>Let $F = (x_F:y_F:1)$ be a real finite focus of a non-degenerate conic section
with real coefficients.
Let $z_F = x_F + \mathrm{i}y_F$ where $\mathrm{i}$ is the imaginary unit.
Then the conic has the complex tangent</em>
$$q_F = [1:\mathrm{i}:-z_F]\tag{3}$$
In other words, $(2)$ is fulfilled for
$[U:V:W] = [1:\mathrm{i}:-z_F]$.
The proof is an algebraic exercise left to those readers who can express
the entries of the matrix $L$ in terms of focus, directrix, and excentricity
from first principles.
An expression for the elliptic case is given in <a href="https://math.stackexchange.com/a/2337987/75794">another answer</a>.</p>
<p>Note that the line $q_F$ passes through $F$:
Just do the dot product to see that.
You might take that as a hint at the reason why we need complex numbers here:
There are no real tangents to a conic that pass through a focus
or through any other interior point.</p>
<p>To apply proposition 1, we plug $[U:V:W] = [1:\mathrm{i}:-z_F]$ into $(2)$.
This yields the following equation for $z_F$, with complex coefficients:</p>
<p>$$T\,z_F^2 - (R + \mathrm{i} S)\,z_F + (G - K + \mathrm{i} H) = 0\tag{4}$$</p>
<p>If $T=0$ (parabola), then $(4)$ has a unique solution for $z_F$,
giving the finite focus of the parabola.
If $T\neq 0$, then $(4)$ is a quadratic equation for $z_F$ whose solutions
give both foci. In the case of a circle, both foci coincide.</p>
<p>So basically one just has to memorize "<em>tangent</em> $[1:\mathrm{i}:-z_F]$"
in order to deduce equation $(4)$ for the foci.</p>
<p>Now, how to get the directrix associated with the focus $F$?</p>
<p><strong>Proposition 2:</strong>
<em>The directrix is the polar of the focus.</em></p>
<p>Therefore, if the directrix is to be presented as a triple
$d_F = [U_d:V_d:W_d]$
and the focus is given as $F = (x_F:y_F:1)$,
then one just has to compute
$$d_F = M\cdot F\tag{5}$$
The proof of that is again left as an algebraic exercise. Use the directrix-based
formulation of the conic equation to express $M$ suitably.</p>
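<p>To make the recipe concrete, here is a short script (my own illustration, in Python) running equations $(4)$ and $(5)$ for the ellipse $x^2+4y^2-4=0$, i.e. $A=1$, $C=4$, $F=-4$, $B=D=E=0$; for $x^2/4+y^2=1$ the textbook values are foci $(\pm\sqrt3,0)$ and directrices $x=\pm4/\sqrt3$:</p>

```python
import cmath

A, B, C, D, E, F = 1.0, 0.0, 4.0, 0.0, 0.0, -4.0
M = [[2 * A, B, D], [B, 2 * C, E], [D, E, 2 * F]]

def adjugate3(m):
    # transpose of the cofactor matrix of a 3x3 matrix
    def cof(i, j):
        r = [k for k in range(3) if k != i]
        c = [k for k in range(3) if k != j]
        minor = m[r[0]][c[0]] * m[r[1]][c[1]] - m[r[0]][c[1]] * m[r[1]][c[0]]
        return (-1) ** (i + j) * minor
    return [[cof(j, i) for j in range(3)] for i in range(3)]

L = adjugate3(M)
G, H, K = L[0][0] / 2, L[0][1], L[1][1] / 2
R, S, T = L[0][2], L[1][2], L[2][2] / 2

# equation (4): T z^2 - (R + iS) z + (G - K + iH) = 0, solved for z = x + iy
disc = cmath.sqrt((R + 1j * S) ** 2 - 4 * T * (G - K + 1j * H))
foci = [((R + 1j * S) + disc) / (2 * T), ((R + 1j * S) - disc) / (2 * T)]

# equation (5): directrix = M . F for a focus F = (x_F : y_F : 1)
def directrix(z):
    Fv = (z.real, z.imag, 1.0)
    return [sum(M[i][k] * Fv[k] for k in range(3)) for i in range(3)]

for z in foci:
    print("focus:", z, "directrix [U:V:W]:", directrix(z))
```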
|
537,228 | <p>I know some things about measures/probabilities and I know some things about categories. Shortly I realized that uptil now I have never encountered something as a category of measure spaces. It seems quite likely to me that something like that can be constructed. I am an amateur however and my scope is small. I have two questions:</p>
<blockquote>
<p>1 Is there indeed material of this sort and can you tell me about it? The whereabouts for instance.</p>
<p>2 Is there a reason for the fact that uptil now I did not find anything of the sort? Is it indeed rare for some reason?</p>
</blockquote>
| Did | 6,179 | <p>This has been asked before:</p>
<ul>
<li><a href="https://mathoverflow.net/questions/20740/is-there-an-introduction-to-probability-theory-from-a-structuralist-categorical">Is there an introduction to probability theory from a structuralist/categorical perspective?</a></li>
</ul>
<p>And for the notion of product:</p>
<ul>
<li><p><a href="https://mathoverflow.net/questions/120909/can-one-view-the-independent-product-in-probability-categorially">Can one view the Independent Product in Probability categorially?</a></p></li>
<li><p><a href="https://mathoverflow.net/questions/49426/is-there-a-category-structure-one-can-place-on-measure-spaces-so-that-category-t">Is there a category structure one can place on measure spaces so that category-theoretic products exist?</a></p></li>
</ul>
|
2,998,189 | <p>I'm looking at an operator <span class="math-container">$T \in \mathcal{L}(\mathbb{R}^2)$</span> given by <span class="math-container">$T(x,y) = (x, -y)$</span>. So its matrix with respect to the standard basis is <span class="math-container">$ \mathcal{M}(T) = \begin{pmatrix}1 & 0\\0 & -1\end{pmatrix}$</span>. </p>
<p>How do I show that <span class="math-container">$T$</span> is self-adjoint? I understand the definition is <span class="math-container">$T=T^*$</span>. I seem to understand it better when the operator is not given as a matrix... How would I show this? </p>
| Fred | 380,717 | <p>We have <span class="math-container">$\mathcal{M}(T)=\mathcal{M}(T)^t$</span>, hence <span class="math-container">$\mathcal{M}(T)$</span> is symmetric, thus <span class="math-container">$T$</span> is self-adjoint.</p>
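<p>Equivalently, one can verify the defining property <span class="math-container">$\langle Tu,v\rangle=\langle u,Tv\rangle$</span> directly on sample vectors (a minimal sketch):</p>

```python
def T(x, y):
    return (x, -y)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

for u, v in [((1.0, 2.0), (-3.0, 5.0)), ((0.5, -4.0), (2.0, 2.0))]:
    # <Tu, v> == <u, Tv> for the standard inner product on R^2
    assert dot(T(*u), v) == dot(u, T(*v))
print("T is self-adjoint on the sampled vectors")
```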
|
3,856,567 | <p>I’m new to number theory and I’m solving questions in the textbook one by one.
Here is one :
If <span class="math-container">$m\geq 1$</span> and <span class="math-container">$n\geq2$</span>, both of which are natural numbers, prove this statement:</p>
<p><span class="math-container">$$(n-1)^2 | (n^m-1) \iff (n-1)| m.$$</span></p>
<p>This is my approach :
I started from the left part of the statement;
I used
<span class="math-container">$$(a^n-b^n)=(a-b)(a^{n-1}+a^{n-2}b+...+b^{n-1}),$$</span>
to expand <span class="math-container">$n^m-1$</span> in the statement ,
Then used geometric series sum and I came to this :
<span class="math-container">$$1|n^m-1$$</span></p>
<p>And I don’t know how to continue this .</p>
<p>I’m looking for a hint , and not an answer.
Give me an answer and you’ll be cursed :)</p>
<p>Thank you in advance.</p>
| Alessio K | 702,692 | <p>You want to show that <span class="math-container">$$(n-1)\mid(n^{m-1}+n^{m-2}+\ldots+n+1)\qquad\iff\qquad (n-1)|m$$</span></p>
<p>But <span class="math-container">$n-1\equiv 0 \pmod {n-1}$</span>, so <span class="math-container">$n\equiv 1 \pmod {n-1}.$</span> Now use this to evaluate <span class="math-container">$(n^{m-1}+n^{m-2}+\ldots+n+1) \pmod {n-1}.$</span></p>
|
3,856,567 | <p>I’m new to number theory and I’m solving questions in the textbook one by one.
Here is one :
If <span class="math-container">$m\geq 1$</span> and <span class="math-container">$n\geq2$</span>, both of which are natural numbers, prove this statement:</p>
<p><span class="math-container">$$(n-1)^2 | (n^m-1) \iff (n-1)| m.$$</span></p>
<p>This is my approach :
I started from the left part of the statement;
I used
<span class="math-container">$$(a^n-b^n)=(a-b)(a^{n-1}+a^{n-2}b+...+b^{n-1}),$$</span>
to expand <span class="math-container">$n^m-1$</span> in the statement ,
Then used geometric series sum and I came to this :
<span class="math-container">$$1|n^m-1$$</span></p>
<p>And I don’t know how to continue this .</p>
<p>I’m looking for a hint , and not an answer.
Give me an answer and you’ll be cursed :)</p>
<p>Thank you in advance.</p>
| sirous | 346,566 | <p>Answer in more details:</p>
<p>Consider <span class="math-container">$n^{m-1}+n^{m-2}+\ldots+n+1$</span>, which has <span class="math-container">$m$</span> terms and must also be divisible by <span class="math-container">$n-1$</span>. Now add <span class="math-container">$m-1$</span> copies of <span class="math-container">$-1$</span> and <span class="math-container">$m-1$</span> copies of <span class="math-container">$+1$</span>; you get:</p>
<p><span class="math-container">$(n^{m-1}-1)+(n^{m-2}-1)+\ldots+(n-1)+1+(m-1)=(n^{m-1}-1)+(n^{m-2}-1)+\ldots+(n-1)+m$</span></p>
<p>Every term in parentheses has the factor <span class="math-container">$n-1$</span>, so <span class="math-container">$m$</span> must also have the factor <span class="math-container">$n-1$</span>; that is, <span class="math-container">$n-1\mid m$</span>.</p>
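<p>The equivalence is easy to confirm by brute force over a small range (an illustrative check):</p>

```python
for n in range(2, 12):
    for m in range(1, 40):
        lhs = (n ** m - 1) % ((n - 1) ** 2) == 0
        rhs = m % (n - 1) == 0
        assert lhs == rhs, (n, m)
print("(n-1)^2 | n^m - 1  <=>  (n-1) | m  holds for n = 2..11, m = 1..39")
```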
|
41,174 | <p>I am trying to find the precise statement of the correspondence between stable Higgs bundles on a Riemann surface $\Sigma$, (irreducible) solutions to Hitchin's self-duality equations on $\Sigma$, and (irreducible) representations of the fundamental group of $\Sigma$. I am finding it a bit difficult to find a reference containing the precise statement. Mainly I'd like to know the statement for the case of stable $GL(n,\mathbb{C})$ Higgs bundles. But if anyone knows the statement for more general Higgs bundles that would be nice too.</p>
<p>Just at the level of say sets and not moduli spaces, I think the statement is that the following 3 things are the same, if I am reading Hitchin's original paper correctly:</p>
<ul>
<li><p>stable $GL(n,\mathbb{C})$ Higgs bundles modulo equivalence,</p></li>
<li><p>irreducible $U(n)$ (or is it $SU(n)$?) solutions of the Hitchin equations modulo equivalence, </p></li>
<li><p>irreducible $SL(n,\mathbb{C})$ (or is it $GL(n,\mathbb{C})$? $PSL$? $PGL$?) representations of $\pi_1$ modulo equivalence. </p></li>
</ul>
<p>Is this correct? Is there a reference?</p>
<p>Hitchin's original paper (titled "Self duality equations on a Riemann surface") does some confusing maneuvers; for example he considers solutions of the self-duality equations for $SO(3)$ rather than for $U(2)$ or $SU(2)$, which would seem more natural to me. Moreover, for instance, he doesn't look at all stable Higgs bundles, but only a certain subset of them - but I think this is just for the purpose of getting a <em>smooth</em> moduli space. And finally, Hitchin looks at $PSL(2,\mathbb{C})$ representations of $\pi_1$ rather than $SL(2,\mathbb{C})$ representations or $GL(2,\mathbb{C})$ representations, which confuses me as well...</p>
<p>Thanks in advance for any help!!</p>
<p><strong>EDIT:</strong> Please note that I am only interested in the case of a Riemann surface. Here it appears that <em>degree zero</em> stable Higgs bundles correspond to $GL(n,\mathbb{C})$ representations. But the question remains: are stable Higgs bundles of <em>arbitrary degree</em> related to representations? If so, which representations, and how are they related? Moreover, I think that general stable Higgs bundles should correspond to solutions of the self-duality equations -- but what's the correct group to take? ("Gauge group"? Is that the correct terminology?) I think it's $U(n)$ but I am not sure.</p>
<p>For example, in Hitchin's paper, he considers the case of rank 2 stable Higgs bundles of <em>odd degree</em> and fixed determinant line bundle, with trace-zero Higgs field (see Theorem 5.7 and Theorem 5.8). As for the self-duality equations, he uses the group $SU(2)/\pm 1$. We get a smooth moduli space. In the discussion following Theorem 9.19, it is shown that this moduli space is a <em>covering</em> of the space of $PSL(2,\mathbb{C})$ representations. It seems that this should generalize...</p>
| Richard Wentworth | 9,867 | <p>With regard to $PGL(n,{\mathbb C})$ vs $SL(n,{\mathbb C})$, you're right that the $n=2$ case generalizes. On a Riemann surface, the moduli of rank $n$ degree $d$ semistable Higgs bundles with fixed determinant is a $n^{2g}$ cover of a component of the moduli space of $PGL(n,{\mathbb C})$ representations of the fundamental group. The components are labeled by $d$ mod $n$, $d=0$ corresponding to representations that lift to $SL(n,{\mathbb C})$. Hitchin's original space is a $2^{2g}$ covering of the component of $PGL(2,{\mathbb C})$ representations consisting of representations that do <strong>not</strong> lift.</p>
|
3,125,093 | <p>Let us recall the conditions for applying L'Hôpital's Rule:</p>
<p>Suppose that:</p>
<p><span class="math-container">$f(x)$</span> and <span class="math-container">$g(x)$</span> are real and differentiable for all <span class="math-container">$x\in (a,b)$</span>;</p>
<p>1-) <span class="math-container">$ \lim_{x\to c}{f(x)} = \lim_{x\to c}g(x) = 0$</span></p>
<p>2-) <span class="math-container">$g'(x)\neq 0$</span> on some deleted neighborhood of <span class="math-container">$c$</span>;</p>
<p>3-) <span class="math-container">$\,\,\,\,\,$</span> <span class="math-container">$\lim_{x\to c}{\frac{f'(x)}{g'(x)}} = L$</span>, then</p>
<p>4-) <span class="math-container">$\lim_{x\to c}{\frac{f(x)}{g(x)}}=L$</span>, <span class="math-container">$\,\,\,\,\,$</span> thus we can write:</p>
<p><span class="math-container">$$ \lim_{x\to c}{\frac{f(x)}{g(x)}}=\lim_{x\to c}{\frac{f'(x)}{g'(x)}} = L $$</span></p>
<p><strong>Note 1:</strong> I have written all the above information just as a reminder of L'Hôpital's Rule; it is not directly related to my question. If I wrote something incorrect, you may correct it within the question. </p>
<p><span class="math-container">$$-----------$$</span>
ANYWAY, let us suppose that the following example satisfies the conditions for applying L'Hôpital's Rule which we stated above. Suppose:
<p><span class="math-container">$\lim_{u\to \infty}\int_0^u F(t,a)dt=0$</span> <span class="math-container">$\,\,\,$</span> and<span class="math-container">$\,\,\,\,\,$</span> <span class="math-container">$\lim_{u\to \infty}\int_0^u G(t,a)dt=0$</span> </p>
<p><span class="math-container">$$\lim_{u \to \infty} \frac{ \int_0^u F(t,a) \, dt}{\int_0^u G(t,a) \, dt} $$</span> </p>
<p>THE QUESTION. <strong>Before</strong> applying L'Hôpital's Rule with respect to the parameter <span class="math-container">$u$</span> to the above equation, do we also need <strong>to prove</strong> the following? </p>
<p>There exists a number <span class="math-container">$M > 0$</span> such that</p>
<p>the denominator <span class="math-container">$\int_0^u G(t,a)dt$</span> does <strong>not</strong> equal zero for any <span class="math-container">$u > M$</span>.</p>
<p><strong>Note 2:</strong> Please aware I am <strong>not asking</strong> anything about <span class="math-container">$\,\,$</span> <span class="math-container">$ \frac{d}{du} {\int_0^u G(t,a) \, dt}$</span> </p>
| jmerry | 619,637 | <p>Yes, we can do it.</p>
<p>The cancellation laws immediately get us <span class="math-container">$i=jk$</span> and <span class="math-container">$k=ij$</span>. Multiply the first by <span class="math-container">$i$</span> on the right, and <span class="math-container">$i^2=jki$</span>, leading to <span class="math-container">$j=ki$</span> and completing that cycle.</p>
<p>Now, applying these laws <span class="math-container">$j^2=i^2$</span>, <span class="math-container">$ij=k$</span>, <span class="math-container">$jk=i$</span>, we have the following chain of equalities:
<span class="math-container">$$j^4i=j^2i^3=j^2ij^2=j^2kj=jij=jk=i$$</span>
Apply the cancellation law to that and <span class="math-container">$j^4=1$</span>. From there, <span class="math-container">$i^4=k^4=(ijk)^2=1$</span> follow easily.</p>
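<p>The derived relations can be double-checked in the usual <span class="math-container">$2\times 2$</span> complex-matrix model of the quaternion units (chosen here purely for illustration; the argument above does not depend on it):</p>

```python
# Verify i = jk, j = ki, k = ij and i^4 = j^4 = k^4 = 1 in the
# 2x2 complex-matrix model of the quaternion units.
def mul(A, B):  # 2x2 complex matrix product
    return [[sum(A[r][t] * B[t][c] for t in range(2)) for c in range(2)]
            for r in range(2)]

I = [[1, 0], [0, 1]]
i = [[1j, 0], [0, -1j]]
j = [[0, 1], [-1, 0]]
k = [[0, 1j], [1j, 0]]

assert mul(j, k) == i and mul(k, i) == j and mul(i, j) == k
for q in (i, j, k):
    q4 = mul(mul(q, q), mul(q, q))
    assert q4 == I
```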
|
3,975,832 | <p>I think the following claim is clearly correct, but I cannot prove it.</p>
<blockquote>
<p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be sets. If <span class="math-container">$f:A \times B \to \mathbb{R}$</span> satisfies <span class="math-container">$f(a, b) \leq C_a$</span> and <span class="math-container">$f(a, b) \leq C_b$</span> for all <span class="math-container">$(a, b) \in A \times B$</span> , then <span class="math-container">$f$</span> is bounded above. Remark: <span class="math-container">$C_x$</span> is a constant that depends only on <span class="math-container">$x$</span>.</p>
</blockquote>
<p>Background of this question:
I asked the following question, and this can be solved immediately if this original question is correct. (But it was false.)
<a href="https://math.stackexchange.com/questions/3976112/boundedness-of-a-bilinear-operator?noredirect=1#3976154">Boundedness of a bilinear operator</a></p>
<p>This original question is interesting in the following sense. If <span class="math-container">$f:A \times B \to \mathbb{R}$</span> satisfies <span class="math-container">$f(a, b) \leq C_a$</span>, then <span class="math-container">$f$</span> has an upper bound that is independent of <span class="math-container">$b \in B$</span>. Similarly, if <span class="math-container">$f(a, b) \leq C_b$</span>, then <span class="math-container">$f$</span> has an upper bound that is independent of <span class="math-container">$a \in A$</span>. Therefore, it seems that <span class="math-container">$f$</span> should have an upper bound that is independent of <span class="math-container">$(a, b) \in A \times B$</span>, namely, that <span class="math-container">$f$</span> is bounded above. However, this is actually false.</p>
| Hank Igoe | 806,514 | <p>Take <span class="math-container">$A=B=\mathbb{R}$</span> and let <span class="math-container">$f(a,b)=\min(a,b)$</span>, so that <span class="math-container">$C_a=a$</span> and <span class="math-container">$C_b=b$</span> work. Then <span class="math-container">$f(a,b)$</span> is bounded by <span class="math-container">$\min(C_a, C_b)$</span>, but <span class="math-container">$f(a,a)=a$</span> diverges to <span class="math-container">$\infty$</span>, so <span class="math-container">$f$</span> is not bounded above.</p>
|
110,373 | <p>Are there classes of infinite groups that admit Sylow subgroups and where the Sylow theorems are valid?</p>
<p>More precisely, I'm looking for classes of groups <span class="math-container">$\mathcal{C}$</span> with the following properties:</p>
<ul>
<li><span class="math-container">$\mathcal{C}$</span> includes the finite groups</li>
<li>in <span class="math-container">$\mathcal{C}$</span> there is a notion of Sylow subgroups that coincides with the usual one when restricted to finite groups</li>
<li>Sylow's theorems (or part of them) are valid in <span class="math-container">$\mathcal{C}$</span></li>
</ul>
<p>An example of such a class <span class="math-container">$\mathcal{C}$</span> is given by the class of profinite groups.</p>
| Thomas Kalinowski | 12,674 | <p>In <a href="https://www.amazon.com/gp/search?index=books&linkCode=qs&keywords=0198534450" rel="nofollow noreferrer" title="Alexandre Borovik and Ali Nesin: Groups of Finite Morley Rank (Oxford Logic Guides, 26)">groups of finite Morley rank</a> there is a Sylow theory for the prime <span class="math-container">$p=2$</span>.</p>
|
1,613,863 | <p>How to express $\log_3(2^x)$ using $\log_{10}$? And how to evaluate $4^{\log_4y}$? </p>
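<p>For context, a numeric check of the change-of-base identity <span class="math-container">$\log_b a = \frac{\log_{10} a}{\log_{10} b}$</span> and of <span class="math-container">$4^{\log_4 y}=y$</span> (sample values chosen arbitrarily):</p>

```python
import math

def log_base(b, x):
    return math.log10(x) / math.log10(b)    # change of base via log10

x, y = 2.7, 5.3                             # arbitrary sample values
# log_3(2^x) = x * log10(2) / log10(3)
assert abs(log_base(3, 2**x) - x * math.log10(2) / math.log10(3)) < 1e-12
# 4^(log_4 y) = y
assert abs(4 ** log_base(4, y) - y) < 1e-9
```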
| DeepSea | 101,504 | <p><strong>hint</strong>: $\binom{n}{2}=\dfrac{n(n-1)}{2}, \binom{4}{2} = 6, \binom{n+2}{4} = \dfrac{(n+2)!}{4!\cdot (n-2)!}= \dfrac{(n-1)n(n+1)(n+2)}{24}$</p>
|
840,700 | <p>I have to show that the following function $f:[0,1]\rightarrow\mathbb{R}$ is Riemann Integrable:</p>
<p>$$f(x) =
\left\{
\begin{array}{ll}
1 & \mbox{if } x = \frac{1}{n},\ n\in\mathbb{N} \\
0 & \mbox{otherwise}
\end{array}
\right.$$</p>
<p>For the upper and lower Riemann sum I am using the following definitions:</p>
<p>$$S_{l}(f,V)=\sum^{n}_{j=1}\inf_{I(j)}(f)(x_j-x_{j-1})$$</p>
<p>With $I(j)$ denoting the interval $[x_{j-1},x_j]$ and $V$ a partition $V=\{0,x_1,...,1\}$. The upper sum is defined with the supremum. I have shown that for any partition of $[0,1]$ the lower sum is $0$. But now I need to prove that for every $\epsilon>0$ there is a partition $V$ such that $S_{u}(f,V)<\epsilon$; completing the proof from there is easy. I see that any subinterval of $[0,1]$ bounded away from $0$ contains only a limited number of points of the set $\{\frac{1}{n}:n\in\mathbb{N}\}$, but I can't make the proof concrete. Could anybody help me out?</p>
| YTS | 126,222 | <p>Try the following: </p>
<p>For every $\delta>0$ the set $F_\delta=\{x\in [\delta,1]: f(x)>0 \}$ is finite, since only finitely many of the points $\frac{1}{n}$ satisfy $\frac{1}{n}\geq\delta$. So cover $[0,\delta]$ by a single interval of the partition, and form the rest of the partition so that an interval containing some $x\in F_\delta$ contains no other point of $F_\delta$. Finally, you can choose the partition so that the total length of the intervals meeting $F_\delta$ is $<\epsilon$. Separate the intervals which cover $F_\delta$ from those which don't. </p>
<p>Can you continue from this?</p>
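<p>A numerical illustration of this partition idea (one possible choice of cells, not the only one): one cell covering $[0,\delta]$ plus a tiny cell around each of the finitely many points $1/n>\delta$ already pushes the upper Darboux sum below $\epsilon$:</p>

```python
# Upper Darboux sum for f (f = 1 at the points 1/n, else 0) on one concrete
# partition: one cell [0, delta] plus a tiny cell around each point 1/n > delta.
eps = 0.1
delta = eps / 2                       # the cell [0, delta] contributes delta * 1
spikes = [1.0 / n for n in range(1, 1000) if 1.0 / n > delta]
w = (eps / 4) / len(spikes)           # width budget per spike-containing cell
# (the cells can be arranged disjointly; shrinking them only lowers the sum)
upper_sum = delta * 1 + len(spikes) * w * 1   # every other cell contributes 0
assert upper_sum < eps
```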
|
2,450,007 | <p>Show that if $x\in Q^p$, then there exists $-x\in Q^p$ where $$Q^p=\{a_{-l}p^{-l}+a_{-l+1}p^{-l+1}+...|l\in Z,a_i\in\{0,1,...,p-1\}\}$$ and p is a prime number.</p>
<p>Actually I don't quite understand p-adic numbers and how addition and multiplication work in this number system. For this question, I think I need to find a $y\in Q^p$ such that x+y=0 but I don't know how to start with this question.</p>
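<p>As a concrete illustration (working with the "integer part" <span class="math-container">$\mathbb{Z}_p$</span> truncated to finite precision; for a general element of <span class="math-container">$Q^p$</span> the same idea applies after factoring out the finitely many negative powers of <span class="math-container">$p$</span>):</p>

```python
# Digit-wise additive inverse in Z_p, truncated to precision p^k.
p, k = 5, 6
x = 123                          # represents an element of Z_5 mod 5^6
neg = (-x) % p**k                # representative of -x mod 5^6
assert (x + neg) % p**k == 0     # x + (-x) = 0 to this p-adic precision

digits = [(neg // p**i) % p for i in range(k)]
print(digits)                    # → [2, 0, 0, 4, 4, 4], least significant first
```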
| Alex Ravsky | 71,850 | <p>For instance, the discrete topology on the set $\Bbb Z$ or $\Bbb R$ can be generated by a discrete uniformity with a base $\{\Delta\}$ or by a uniformity with a base $\{U_n:n\in\Bbb N\}$, where $U_n=\{(x,y): x=y$ or $x,y\ge n\}$.</p>
|
1,327,644 | <p>Using EM summation formula estimate
$$
\sum_{k=1}^n \sqrt k
$$</p>
<p>up to the term involving $\frac{1}{\sqrt n}$</p>
<p>My attempt is
$$
\sum_{k=1}^n \sqrt k = \frac{2 \sqrt{n^3}}{3} -\frac{2}{3} + \frac 1 2 (\sqrt n -1)+ \frac{1}{24} (\frac{1}{\sqrt n} -1) + \int_1^n P_{2k+1}(x)f^{(2k+1)}(x)dx
$$
I am not sure what can be said about the integral. Please tell me if I have made a mistake, and how I can handle the remainder integral. Have I stopped the summation at the right point?</p>
| Mark Viola | 218,419 | <p>The correct expansion is given by</p>
<p>$$\begin{align}
\sum_{i=1}^n f(i) &= \int_1^n f(x)dx + B_1 [f(n)+f(1)]\\\\
& + \sum_{k=1}^m \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(n)-f^{(2k-1)}(1)\right)\\\\
& +\frac{1}{(2m+1)!}\int_1^n P_{2m+1}(x)f^{(2m+1)}(x)dx
\end{align}$$</p>
<p>For $f(x)=x^{1/2}$ and $m=1$, we have </p>
<p>$$\begin{align}
\sum_{i=1}^n\sqrt{i}&=\frac23 (n^{3/2}-1)+\frac12 (n^{1/2}+1)+\frac{1}{24}(n^{-1/2}-1)+\frac{1}{3!}\int_1^n P_3(x)f^{(3)}(x)dx\\\\
&=\frac23n^{3/2}+\frac12n^{1/2}+\left(-\frac23+\frac12-\frac{1}{24}\right)+\frac{1}{24}n^{-1/2}+\frac{1}{3!}\int_1^n P_3(x)f^{(3)}(x)dx
\end{align}$$</p>
<p>The next task is to determine an estimate for the remainder $R$ where $R$ is the integral </p>
<p>$$R\equiv \frac{1}{3!}\int_1^n P_3(x)f^{(3)}(x)dx$$</p>
<hr>
<p>We can find an estimate for the integral term</p>
<p>$$\int_1^n P_{3}(x)f^{(3)}(x)dx \tag1$$</p>
<p>using the well-known expression for the <a href="https://en.wikipedia.org/wiki/Bernoulli_polynomials" rel="nofollow">Bernoulli Polynomial</a></p>
<p>$$B_3(x)=x^3-\frac32 x^2+\frac12 x$$</p>
<p>We can easily verify that </p>
<p>$$-\frac{1}{12\sqrt{3}}<B_3(x)<\frac{1}{12\sqrt{3}}$$</p>
<p>for $x\in [0,1]$. Since $P_{2k+1}(x)=B_{2k+1}(x-\lfloor x\rfloor)$ then we see immediately that for $1<x<n$</p>
<p>$$-\frac{1}{12\sqrt{3}}< P_{3}(x) < \frac{1}{12\sqrt{3}}\tag 2$$</p>
<p>Now, we note that for $f(x)=x^{1/2}$, $f^{(3)}(x)=\frac38 x^{-5/2}>0$ for $x\in [1,n]$. Now, we use $(2)$ in the estimate of $(1)$ to reveal </p>
<p>$$\begin{align}
-\frac{1}{48\sqrt{3}}\left(1-n^{-3/2}\right)<\int_1^n P_{3}(x)f^{(3)}(x)dx \le \frac{1}{48\sqrt{3}}\left(1-n^{-3/2}\right)
\end{align}$$</p>
<p>Finally, we have the upper and lower bounds</p>
<blockquote>
<p>$$\bbox[5px,border:2px solid #C0A000]{\sum_{k=1}^n\sqrt{k}\le \frac23n^{3/2}+\frac12n^{1/2}+\left(-\frac{5}{24}+\frac{1}{288\sqrt{3}}\right)+\frac{1}{24}n^{-1/2}-\frac{1}{288\sqrt{3}}n^{-3/2}}$$</p>
<p>$$\bbox[5px,border:2px solid #C0A000]{\sum_{k=1}^n\sqrt{k}\ge \frac23n^{3/2}+\frac12n^{1/2}+\left(-\frac{5}{24}-\frac{1}{288\sqrt{3}}\right)+\frac{1}{24}n^{-1/2}+\frac{1}{288\sqrt{3}}n^{-3/2}}$$</p>
</blockquote>
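<p>The two bounds can be checked numerically (a quick sanity check of the constants above, not a proof):</p>

```python
import math

def sqrt_sum_bounds(n):
    """Lower/upper Euler-Maclaurin bounds (derived above) and the actual
    value of sum_{k=1}^n sqrt(k)."""
    main = (2/3) * n**1.5 + 0.5 * n**0.5 - 5/24 + (1/24) / n**0.5
    r = 1 / (288 * math.sqrt(3))          # half-width of the remainder bound
    lower = main - r * (1 - n**-1.5)
    upper = main + r * (1 - n**-1.5)
    actual = sum(math.sqrt(k) for k in range(1, n + 1))
    return lower, actual, upper

for n in (2, 10, 100, 10**5):
    lo, s, hi = sqrt_sum_bounds(n)
    assert lo <= s <= hi
```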
|
1,991,238 | <p>How can I integrate this? $\int_{0}^{1}\frac{\ln(x)}{x+1} dx $</p>
<p>I've seen <a href="https://math.stackexchange.com/questions/108248/prove-int-01-frac-ln-x-x-1-d-x-sum-1-infty-frac1n2">this</a>, but I failed to apply it to my problem.</p>
<p>Could you give some hint?</p>
<p>EDIT : From the hint of @H.H.Rugh, I've got <span class="math-container">$\sum_{n=1}^{\infty} \frac{(-1)^{n}}{n^2}$</span>, since <span class="math-container">$\int_{0}^{1}x^{n}\ln(x)dx = (-1)\frac{1}{(n+1)^2}$</span>. How can I proceed with this calculation from here?</p>
| Antonio Vargas | 5,531 | <p>In terms of the <a href="https://en.wikipedia.org/wiki/Harmonic_number">harmonic numbers</a> $H_n$, your sequence is</p>
<p>$$
s_n = H_{F_{n+1}} - H_{F_n-1}
$$</p>
<p>As $n \to \infty$ it's known that $H_n = \log n + \gamma + o(1)$, so</p>
<p>$$
\begin{align}
s_n &= \log F_{n+1} + \gamma + o(1) - \log(F_n-1) - \gamma - o(1) \\
&= \log F_{n+1} - \log(F_n-1) + o(1).
\end{align}
$$</p>
<p>Now $F_m \sim \varphi^m/\sqrt{5}$, where $\varphi$ is the golden ratio, so using the fact that $a \sim b \implies \log a = \log b + o(1)$ we have</p>
<p>$$
\begin{align}
s_n &= \log(\varphi^{n+1}/\sqrt{5}) - \log(\varphi^{n}/\sqrt{5}) + o(1) \\
&= \log \varphi + o(1).
\end{align}
$$</p>
<p>In other words,</p>
<p>$$
\lim_{n \to \infty} \sum_{k=F_n}^{F_{n+1}} \frac{1}{k} = \log \varphi.
$$</p>
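<p>This limit is easy to see numerically; the sum over a single Fibonacci window already matches <span class="math-container">$\log \varphi$</span> to several digits (a sanity check, not part of the argument):</p>

```python
import math

def fib_window_sum(n):
    """sum_{k=F_n}^{F_{n+1}} 1/k over the n-th Fibonacci window."""
    a, b = 1, 1                      # F_1, F_2
    for _ in range(n - 1):
        a, b = b, a + b              # now a = F_n, b = F_{n+1}
    return sum(1.0 / k for k in range(a, b + 1))

log_phi = math.log((1 + math.sqrt(5)) / 2)
s = fib_window_sum(30)               # window F_30 = 832040 .. F_31 = 1346269
assert abs(s - log_phi) < 1e-5
```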
|
4,292,618 | <p>I have the following function <span class="math-container">$$\frac{1}{1+2x}-\frac{1-x}{1+x} $$</span>
How can I find an equivalent way to compute it when <span class="math-container">$x$</span> is much smaller than 1? I assume the problem here is with the subtraction, since for small <span class="math-container">$x$</span> both fractions are close to 1. I don't know if multiplying by <span class="math-container">$(1-x)$</span> would be helpful, as it would give
<span class="math-container">$$ \frac{1-x}{1+x-2x^2}-\frac{(1-x)^2}{1-x^2} $$</span> so the two fractions are still nearly equal.</p>
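<p>For reference, combining the two fractions over a common denominator gives the exact form <span class="math-container">$\frac{2x^2}{(1+2x)(1+x)}$</span>, which involves no subtraction of nearly equal quantities. The identity can be verified with exact rational arithmetic:</p>

```python
from fractions import Fraction

def naive(x):
    return 1 / (1 + 2*x) - (1 - x) / (1 + x)

def stable(x):
    # algebraically identical form with no catastrophic cancellation
    return 2*x*x / ((1 + 2*x) * (1 + x))

# exact check of the identity at a few rational points
for x in (Fraction(1, 7), Fraction(3, 5), Fraction(-1, 10)):
    assert naive(x) == stable(x)

# for tiny floats the stable form stays accurate: f(x) ~ 2x^2
x = 1e-8
assert abs(stable(x) - 2e-16) < 1e-22
```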
| Vasile | 959,234 | <p>Care is needed here: the terms in <span class="math-container">$x^2$</span> must be kept consistently in <em>both</em> fractions. If the terms in <span class="math-container">$x^2$</span> can be neglected:</p>
<p><span class="math-container">$$\frac{1}{1+2x}\approx\frac{1-4x^2}{1+2x}=1-2x$$</span>
<span class="math-container">$$\frac{1-x}{1+x}=\frac{(1-x)^2}{1-x^2}\approx(1-x)^2\approx1-2x\rightarrow$$</span>
<span class="math-container">$$f(x)\approx(1-2x)-(1-2x)=0,$$</span>
so the difference vanishes to first order and the leading behaviour appears only at second order.
If the terms in <span class="math-container">$x^3$</span> can be neglected:
<span class="math-container">$$\frac{1}{1+2x}\approx\frac{1+8x^3}{1+2x}=1-2x+4x^2$$</span>
<span class="math-container">$$\frac{1-x}{1+x}\approx(1-x)(1-x+x^2)\approx1-2x+2x^2\rightarrow$$</span>
<span class="math-container">$$f(x)\approx(1-2x+4x^2)-(1-2x+2x^2)=2x^2$$</span>
And so on... (combining the fractions exactly gives <span class="math-container">$f(x)=\frac{2x^2}{(1+2x)(1+x)}$</span>, which confirms the result).</p>
|
3,352,834 | <p><span class="math-container">$A^2 + A - 6I = 0$</span></p>
<p>A= <span class="math-container">$\begin{bmatrix}a & b\\c & d\end{bmatrix}$</span></p>
<p>I was asked to find
<span class="math-container">$a + d$</span>, and <span class="math-container">$ad - bc$</span></p>
<p><span class="math-container">$a+d>0$</span></p>
<p>What I get is
<span class="math-container">$A^2$</span> = <span class="math-container">$\begin{bmatrix}a^2+bc & ab + bd\\ac+dc & bc+d^2\end{bmatrix}$</span></p>
<p>I get
<span class="math-container">$a^2+bc+a=6, $</span></p>
<p><span class="math-container">$ab+bd+b = 0,$</span></p>
<p><span class="math-container">$bc+d^2+d=6, $</span></p>
<p><span class="math-container">$ac+dc+c=0$</span></p>
<p>I get <span class="math-container">$a=-1/2$</span>, <span class="math-container">$d=-1/2$</span>.</p>
<p>Why do I get a wrong answer? Please help me.</p>
| Dietrich Burde | 83,966 | <p>"<em>I was asked to find <span class="math-container">$a + d$</span>, and <span class="math-container">$ad - bc$</span></em>". </p>
<p>Note that these are the trace of <span class="math-container">$A$</span> and the determinant of <span class="math-container">$A$</span>. Here we have several possibilities under the assumption that <span class="math-container">$A^2+A-6I=0$</span>, i.e.,
<span class="math-container">$$
(A+3I)(A-2I)=0.
$$</span>
This gives three possibilities: <span class="math-container">$a+d=4$</span> and <span class="math-container">$ad-bc=4$</span>, or
<span class="math-container">$a+d=-6$</span> and <span class="math-container">$ad-bc=9$</span>, or finally <span class="math-container">$a+d=-1$</span> and <span class="math-container">$ad-bc=-6$</span>, because of Cayley-Hamilton
<span class="math-container">$$
A^2-tr(A)A+\det(A)I=0.
$$</span>
Since we are given <span class="math-container">$a+d>0$</span>, only the first case remains: <span class="math-container">$a+d=4$</span> and <span class="math-container">$ad-bc=4$</span>.</p>
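<p>Each of the three cases is realized, e.g., by a diagonal matrix whose entries are roots of <span class="math-container">$\lambda^2+\lambda-6=0$</span>; a quick check (illustration only):</p>

```python
# Each eigenvalue of A must satisfy t^2 + t - 6 = 0, i.e. t in {2, -3}.
roots = [t for t in range(-10, 11) if t*t + t - 6 == 0]
assert sorted(roots) == [-3, 2]

# Diagonal A = diag(s, t) satisfies A^2 + A - 6I = 0 entrywise, and the
# three (trace, det) possibilities come from the eigenvalue pairs:
pairs = {(s + t, s * t) for s in roots for t in roots}
assert pairs == {(4, 4), (-6, 9), (-1, -6)}
print(pairs)
```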
|
3,260,911 | <p>I am currently struggling with the following exercise:</p>
<blockquote>
<p>Let <span class="math-container">$B$</span> be a Banach space and <span class="math-container">$C, D \subset B$</span> closed subspaces of <span class="math-container">$B$</span>.<br>
Suppose there is an <span class="math-container">$M \in ]0, \infty[$</span> such that <span class="math-container">$\forall x \in D : \operatorname{dist}(x, C \cap D) \leq M \cdot \operatorname{dist}(x, C)$</span> holds.</p>
<p>Show that <span class="math-container">$C + D$</span> is closed.</p>
</blockquote>
| Jonathan Hole | 661,524 | <p>Consider the quotient maps <span class="math-container">$p: B\rightarrow B/C$</span> and <span class="math-container">$q: D \rightarrow D/D\cap C$</span>. We claim that <span class="math-container">$p(D)$</span> is closed in <span class="math-container">$B/C$</span>. Indeed, if <span class="math-container">$\{p(d_n)\}_n$</span> is Cauchy in <span class="math-container">$B/C$</span>, then by your assumption on the distances <span class="math-container">$\{q(d_n)\}_n$</span> is Cauchy in <span class="math-container">$D/D\cap C$</span>, and since this space is complete, <span class="math-container">$\{q(d_n)\}_n$</span> converges to some <span class="math-container">$q(d)$</span>. Then it is easy to see that <span class="math-container">$p(d_n)\rightarrow p(d)$</span>. Having verified that <span class="math-container">$p(D)$</span> is closed, it follows that <span class="math-container">$p^{-1}(p(D))=D+C$</span> is closed.</p>
|
1,492,660 | <p>I'm teaching a course on discrete math and came across <a href="http://ac.els-cdn.com/0097316573900204/1-s2.0-0097316573900204-main.pdf?_tid=700c69c2-78e3-11e5-9825-00000aacb35e&acdnat=1445535573_08d35be15f0f7d7d939fc2800d9be60b" rel="nofollow">a paper related to the Hadwiger-Nelson problem</a>. The question asks how many colors are needed to color every point in $\mathbb{Q}^2$ such that no two points at distance one are the same color.</p>
<p>The proof given in the paper works as follows. First, it defines an equivalence relation over $\mathbb{Q}^2$ where two points are related if the differences of their coordinates can be written with odd denominators. Next, it shows how to color the equivalence class containing the origin with two colors so that no two points at distance one are the same color. Finally, it argues that this coloring can be translated to the other equivalence classes, giving a 2-coloring.</p>
<p>My question is whether this last step - arguing that the coloring can be translated to the other equivalence classes - requires the axiom of choice. I suspect that it does because translating the coloring would require some way to identify which equivalence class a particular point is in and then using that to define the color, but I'm not entirely sure.</p>
<p>Does this result rely on choice? If so, if we reject the axiom of choice, is the chromatic number still two? Or does it change?</p>
| Cheerful Parsnip | 2,941 | <p>It does not require choice. It is true that you have to pick an isometry from each equivalence class to the equivalence class containing the origin. This can be done by translating some chosen point in the equivalence class to the origin. You can do this constructively by well-ordering $\mathbb Q^2$ and choosing the least element. (Note that it is easy to explicitly construct a well-ordering of $\mathbb Q^2$.)</p>
|
749,714 | <p>Does anyone know how to show this, preferably <strong>without</strong> using modular arithmetic?</p>
<p>For any prime $p>3$, show that 3 divides $2p^2+1$. </p>
| Bill Dubuque | 242 | <p>Without using mod: note <span class="math-container">$\, 2p^2+1 = 2(p^2-1)+3\,$</span> so it suffices to show that <span class="math-container">$\,3\mid p^2-1.\,$</span> One of the <span class="math-container">$\,3\,$</span> consecutive integers <span class="math-container">$\,p-1,\,p,\,p+1$</span> <a href="https://math.stackexchange.com/a/253390/242">must be</a> a multiple of <span class="math-container">$\,3$</span>. It cannot be the prime <span class="math-container">$\,p> 3,\,$</span> therefore either <span class="math-container">$\,3\mid p-1\,$</span> or <span class="math-container">$\,3\mid p+1,\,$</span> hence <span class="math-container">$\,3\mid(p-1)(p+1) = p^2-1\ \ $</span> <strong>QED</strong> </p>
<p>Using mod, it is easier: <span class="math-container">$\,\ 3\nmid p\,\Rightarrow\ {\rm mod}\ 3\!:\ p\equiv \pm 1\,\Rightarrow\, p^2\equiv 1\,\Rightarrow\ 2p^2+1\equiv 3\equiv 0 \ \ $</span> <strong>QED</strong></p>
<p><strong>Remark</strong> <span class="math-container">$\ $</span> So the result is true for any integer <span class="math-container">$p$</span> not divisible by <span class="math-container">$3$</span>. It's equivalent to the special case <span class="math-container">$\,q=3\,$</span> of Fermat's little Theorem: <span class="math-container">$\,q\nmid p\,\Rightarrow\,p^{q-1}\equiv 1\pmod q$</span></p>
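<p>A brute-force check of the remark's stronger statement — every integer <span class="math-container">$p$</span> not divisible by <span class="math-container">$3$</span> works (illustration only):</p>

```python
# Check 3 | 2p^2 + 1 for every integer p with 3 not dividing p.
assert all((2 * p * p + 1) % 3 == 0
           for p in range(-1000, 1001) if p % 3 != 0)

# ...and that it fails whenever 3 | p:
assert all((2 * p * p + 1) % 3 == 1
           for p in range(-1000, 1001) if p % 3 == 0)
print("verified for |p| <= 1000")
```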
|
1,980,606 | <p>Let $f:[0, 1] \to \mathbb{R}$ be differentiable on $[0, 1]$ with $|f'(x)| \leq\frac{1}{2}$ for all $x \in [0, 1]$. If $a_n = f(\frac{1}{n})$, show that $\lim_{n \to \infty} a_n$ exists (Hint: Cauchy).</p>
<p>Can you help me? Thanks.</p>
| Jacky Chong | 369,395 | <p>Hint: Observe by the mean value theorem we have
\begin{align}
\left|f\left(\frac{1}{n}\right)-f\left(\frac{1}{m}\right)\right| = |f'(\xi)|\left|\frac{1}{n}-\frac{1}{m}\right| \leq \frac{1}{2}\left|\frac{1}{n}-\frac{1}{m}\right|
\end{align}
where $\xi \in [1/n, 1/m] \subset [0, 1]$.</p>
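<p>The Cauchy bound is easy to see numerically for a sample function with $|f'|\leq\frac12$, say $f(x)=\frac12\sin x$ (chosen here just for illustration):</p>

```python
import math

f = lambda x: 0.5 * math.sin(x)          # |f'(x)| = |cos(x)|/2 <= 1/2
a = lambda n: f(1.0 / n)

# |a_n - a_m| <= (1/2)|1/n - 1/m| for sampled pairs (n, m)
for n in range(1, 50):
    for m in range(1, 50):
        assert abs(a(n) - a(m)) <= 0.5 * abs(1.0/n - 1.0/m) + 1e-15
```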
|
3,601,865 | <p>What does the correspondence theorem (or 4th isomorphism theorem for rings) for rings mean and how is it used? That is, why do we care about it?</p>
<hr>
<p>Edit:</p>
<p>My version of the correspondence theorem:</p>
<blockquote>
<p>Let <span class="math-container">$R$</span> be a ring and <span class="math-container">$I$</span> be an ideal in <span class="math-container">$R$</span>. Let <span class="math-container">$K$</span> be the set of ideals in <span class="math-container">$R$</span> containing <span class="math-container">$I$</span>. Let <span class="math-container">$L$</span> be the set of ideals in <span class="math-container">$R/I$</span>. Then there is a bijection from <span class="math-container">$K$</span> to <span class="math-container">$L$</span> given by <span class="math-container">$\phi(I’)= \{ x+ I : x \in I’ \}$</span>.</p>
</blockquote>
| QuantumSpace | 661,543 | <p>The correspondence says that if <span class="math-container">$R$</span> is a ring and <span class="math-container">$I$</span> is a two-sided ideal of <span class="math-container">$R$</span>, then the following map is a bijection:</p>
<p><span class="math-container">$$\{\mathrm{\ ideals \ of \ R \ containing \ I}\}\to \{\mathrm{ideals \ of \ the \ quotient \ ring \ R/I}\}$$</span>
<span class="math-container">$$J \mapsto J/I = \pi(J)$$</span></p>
<p>where <span class="math-container">$\pi: R \to R/I$</span> is the natural quotient map.</p>
<p>We have a similar correspondence for subrings of the quotient ring. </p>
<hr>
<p>Why do we care about it? Because it tells us exactly what the ideals of the quotient ring look like: complete knowledge of the ideals of <span class="math-container">$R$</span> containing <span class="math-container">$I$</span> gives us complete knowledge of the ideals of the quotient ring.</p>
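<p>A concrete instance of the correspondence (illustration only): for <span class="math-container">$R=\mathbb{Z}$</span> and <span class="math-container">$I=12\mathbb{Z}$</span>, the ideals of <span class="math-container">$\mathbb{Z}$</span> containing <span class="math-container">$12\mathbb{Z}$</span> are <span class="math-container">$d\mathbb{Z}$</span> for the divisors <span class="math-container">$d$</span> of <span class="math-container">$12$</span>, and they biject with the ideals of <span class="math-container">$\mathbb{Z}/12\mathbb{Z}$</span>:</p>

```python
# Ideals of Z/12 found by brute force vs. divisors of 12.
n = 12
subsets_that_are_ideals = []
# every ideal of Z/n is principal, so enumerating the subgroup generated
# by each candidate generator d finds all of them
for d in range(1, n + 1):
    J = frozenset((d * k) % n for k in range(n))
    closed = all((a + b) % n in J and (a * r) % n in J
                 for a in J for b in J for r in range(n))
    if closed and J not in subsets_that_are_ideals:
        subsets_that_are_ideals.append(J)

divisors = [d for d in range(1, n + 1) if n % d == 0]
assert len(subsets_that_are_ideals) == len(divisors)  # 6 ideals <-> 6 divisors
```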
|
3,717,506 | <p>I am reading some text about even functions and found this snippet:</p>
<blockquote>
<p>Let <span class="math-container">$f(x)$</span> be an integrable even function. Then,</p>
<p><span class="math-container">$$\int_{-a}^0f(x)dx = \int_0^af(x)dx, \forall a \in \mathbb{R}$$</span></p>
<p>and therefore,</p>
<p><span class="math-container">$$\int_{-a}^af(x)dx = 2\int_0^af(x), \forall a \in \mathbb{R}$$</span></p>
</blockquote>
<p>Why does <span class="math-container">$dx$</span> disappear from <span class="math-container">$2\int_0^af(x)$</span>? Is it just a notation convention?</p>
| A. Kriegman | 649,089 | <p>You can model this as a Markov chain and there are known techniques for how to solve these problems. I'll explain how we can solve this example.</p>
<p>Let <span class="math-container">$p_n$</span> be the probability of having a total of <span class="math-container">$n$</span> after a large number of rolls. If we've gone on long enough then we should expect these probabilities to not change after our next roll. So, <span class="math-container">$$p_n = \frac{1}{6}(p_{n-3} + p_{n-2} + p_{n-1} + p_{n+1} + p_{n+2})$$</span>
except when <span class="math-container">$n=0$</span>, in which case <span class="math-container">$$p_0 = \frac{1}{6}(p_{-3} + p_{-2} + p_{-1} + p_{1} + p_{2} + 1)$$</span>.
That extra one represents the chance that we could come back to 0 from any number.</p>
<p>Note that if we didn't have the set 0 roll, then this technique wouldn't work because the solution would be that all the <span class="math-container">$p_n$</span>s are equal, but this is impossible because there are infinitely many of them and they sum to 1. In that case we would have to limit ourselves to questions like what happens after <span class="math-container">$t$</span> throws instead of what happens asymptotically. I believe in this case we can solve this system, but I'm not exactly sure how off the top of my head.</p>
<p>Once we do solve this system, the solution is called the stationary distribution because it stays stationary after the next roll. There is a handy theorem that for any Markov chain that has a stationary distribution, it will approach the stationary distribution given enough time. I'm not sure the exact statement, but I believe it holds in this case. So all you have to do is solve that infinite system of equations.</p>
|
2,305,689 | <blockquote>
<p>If $x^6-12x^5+ax^4+bx^3+cx^2+dx+64=0$ has positive roots then find $a,b,c,d$.</p>
</blockquote>
<p>I did something, but it doesn't deserve to be added here; what I thought of before doing that is the following:</p>
<ol>
<li>For us, Product and Sum of roots are given.</li>
<li>Roots are positive.</li>
<li>Hence I tried to use AM-GM-HM inequalities, as sum and product are given, but I failed to conclude something good.</li>
</ol>
<p>So please give some hints or a solution using the AM-GM-HM inequalities.</p>
| Michael Rozenberg | 190,319 | <p>Let $x_i$ be our roots. By Vieta's formulas, $\sum\limits_{i=1}^6x_i=12$ and $\prod\limits_{i=1}^6x_i=64$.</p>
<p>Now, by AM-GM $$2=\frac{\sum\limits_{i=1}^6x_i}{6}\geq\sqrt[6]{\prod_{i=1}^6x_i}=2,$$
which forces equality in AM-GM, so all $x_i=2$.</p>
<p>Thus, $$a=2^2\cdot\frac{6(6-1)}{2}=60,$$ $$b=-2^3\cdot\frac{6(6-1)(6-2)}{6}=-160,$$
$$c=2^4\cdot\frac{6(6-1)(6-2)(6-3)}{24}=240$$ and
$$d=-2^5\cdot\frac{6(6-1)(6-2)(6-3)(6-4)}{120}=-192.$$</p>
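<p>Equivalently, the coefficients are those of $(x-2)^6$; a quick expansion check (illustration only):</p>

```python
# Expand (x - 2)^6 by repeated multiplication with (x - 2).
coeffs = [1]                                   # highest-degree coefficient first
for _ in range(6):
    shifted = coeffs + [0]                     # multiply by x
    scaled = [0] + [-2 * c for c in coeffs]    # multiply by -2
    coeffs = [a + b for a, b in zip(shifted, scaled)]

assert coeffs == [1, -12, 60, -160, 240, -192, 64]
```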
|
2,247,968 | <blockquote>
<p>$a,b$ are elements in a group $G$. Let $o(a)=m$ which means that $a^m=e$, $\gcd(m,n)=1$ and $(a^n)*b=b*(a^n)$. Prove that $a*b=b*a$.</p>
</blockquote>
<p><em>Hint: try to solve for $m=5,n=3$.</em></p>
<p>I am stuck in this question and can't find an answer to it, can anyone give me some hints?</p>
<p>Thanks.</p>
| Community | -1 | <p>If the minimum of $f$ is at $0$, then there's nothing to prove. So let's suppose that the minimum of $f$ is attained at some $a \in (0,2]$. Suppose that $f(a) < 0$. By continuity, there is $\epsilon > 0$ such that $f(x) < 0$ for all $x \in (-\epsilon + a, a)$. We have $f'(x) > 0$ for all $x \in (-\epsilon + a, a)$, so $f$ is strictly increasing on this interval, hence $f\left( -\frac{\epsilon}{2} + a\right) > f(a)$, a contradiction. Therefore, $f(a) \ge 0$ and so $f\ge 0$ on $[0,2]$.</p>
|
4,043,787 | <p>I have <span class="math-container">$$f_n(x)=\begin{cases}
\frac{1}{n} & |x|\leq n, \\
0 & |x|>n .
\end{cases}$$</span></p>
<p>Why can this sequence not be dominated by an integrable function <span class="math-container">$g$</span>, as required by the Dominated Convergence Theorem? I am also wondering what exactly it means for a function <span class="math-container">$g$</span> to dominate a sequence of functions, since I believe it is my definition of this that I am understanding incorrectly.</p>
| WoolierThanThou | 686,397 | <p>Well, say <span class="math-container">$g(x)\geq |f_n(x)|$</span> for every <span class="math-container">$x$</span> and <span class="math-container">$n$</span>. Then,
<span class="math-container">$$
g(x)\geq \sup_n|f_n(x)|=\begin{cases} \frac{1}{n} & |x|\in (n-1,n] \\ 1 & x=0 \end{cases}
$$</span>
(at <span class="math-container">$x=0$</span> the supremum is <span class="math-container">$f_1(0)=1$</span>), and thus, since each set <span class="math-container">$\{x:|x|\in(n-1,n]\}$</span> has measure <span class="math-container">$2$</span>,
<span class="math-container">$$
\int_{\mathbb{R}}g(x)\textrm{d}x\geq 2\sum_{n=1}^{\infty} \frac{1}{n}=\infty
$$</span>
and hence, <span class="math-container">$g$</span> cannot be integrable.</p>
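<p>The divergence of <span class="math-container">$\int \sup_n|f_n|$</span> is visible numerically: the integral over <span class="math-container">$[-N,N]$</span> equals <span class="math-container">$2\sum_{n=1}^N \frac1n$</span>, which grows without bound (like <span class="math-container">$2\log N$</span>):</p>

```python
import math

def integral_of_sup_up_to(N):
    """Integral of sup_n |f_n| over [-N, N]: each band n-1 < |x| <= n
    has measure 2 and height 1/n."""
    return 2 * sum(1.0 / n for n in range(1, N + 1))

vals = [integral_of_sup_up_to(10**k) for k in (1, 2, 3, 4)]
assert vals == sorted(vals)                                  # growing
assert integral_of_sup_up_to(10**4) > 2 * math.log(10**4)    # ~ 2 log N
```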
|
2,076,908 | <blockquote>
<p><strong>Question:</strong> Prove that $e^x, xe^x,$ and $x^2e^x$ are linearly independent over $\mathbb{R}$.</p>
</blockquote>
<p>Generally we proceed by setting up the equation
$$a_1e^x + a_2xe^x+a_3x^2e^x=0_f,$$
which simplifies to $$e^x(a_1+a_2x+a_3x^2)=0_f,$$ and furthermore to
$$a_1+a_2x+a_3x^2=0_f.$$</p>
<p>From here I think it's obvious that the only choice to make the sum the zero function is to let each scalar equal 0, but this is very weak reasoning.</p>
<p>As an undergraduate we learned to test for independence by determining whether the Wronskian is not identically equal to 0. But I can only use this method if the functions are solutions to the same linear homogeneous differential equation of order 3. In other words, I cannot use this method for an arbitrary set of functions. I was not given a differential equation, so I determined it on my own and got that they satisfy $$y'''-3y''+3y'-y = 0.$$</p>
<p>I found the Wronskian, $2e^{3x}\neq0$ for any real number. Thus the set is linearly independent. But it took me some time to find the differential equation and even longer finding the Wronskian so I'm wondering if there is a stronger way to prove this without using the Wronskian Test for Independence.</p>
| copper.hat | 27,978 | <p>Suppose $a_1e^x + a_2xe^x+a_3x^2e^x=0 $ for all $x$.</p>
<p>Setting $x=0$ shows that $a_1 = 0$.
Now note that $a_2xe^x+a_3x^2e^x=0 $ for all $x$ and hence
$a_2e^x+a_3xe^x=0 $ for all $x \neq 0$. Taking limits as $x \to 0$ shows
that $a_2 = 0$, and setting $x=1$ shows that $a_3 = 0$.</p>
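<p>As a cross-check of the Wronskian route the asker mentioned (my addition, using sympy), the determinant can be computed symbolically:</p>

```python
import sympy as sp

x = sp.symbols('x')
fs = [sp.exp(x), x * sp.exp(x), x**2 * sp.exp(x)]

# Wronskian matrix: rows are the three functions and their
# first two derivatives.
W = sp.Matrix([[sp.diff(f, x, k) for f in fs] for k in range(3)])
wronskian = sp.simplify(W.det())
print(wronskian)   # 2*exp(3*x), which never vanishes
```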
|
1,665,064 | <p>I have a vector quadratic equation of the form
$\boldsymbol{x}^{T} \boldsymbol{A} \boldsymbol{x} + \boldsymbol{x}^{T} \boldsymbol{b} + c = 0$<br>
where $\boldsymbol{A}$ is symmetric and for my particular case, $\boldsymbol{x} \in \mathbb{R}^{2}$. I know that the solution for this system (if it exists) can be found using numerical methods like Newton's method. But I am interested in evaluating only if a real solution exists or not; I don't need to actually find the roots. Is there an analytical way of determining the existence of roots? </p>
| hardmath | 3,111 | <p>Since $A$ is (real) symmetric, it has real eigenvalues $\lambda_1$ and $\lambda_2$ which can be found analytically (quadratic formula). Therefore a real change of variables (change of basis to eigenvector components) can be made to put the equation into the form of a standard conic section:</p>
<p>$$ \lambda_1 u^2 + \lambda_2 v^2 + d_1 u + d_2 v + e_0 = 0 $$</p>
<p>Completing the squares in $u,v$ then tells us if the equation is that of a degenerate ellipse, asking for a sum of squares less than zero, which has no solutions in the real plane. Otherwise some real solution (a point on the conic section) will exist.</p>
<p>Since the OP is motivated by saving computational effort, let's suggest how best to exploit the analytic identification of solutions.</p>
<p>My first step would be computing $\det A = \lambda_1 \lambda_2$, at a cost in the $2\times 2$ case discussed here of two multiplications and one subtraction.</p>
<p>If $\det A \lt 0$, then the curve is a hyperbola and (even if degenerate) will contain real solutions (points in the plane). Consider, for illustration, this curve, which is nonempty no matter what value $c\in \mathbb{R}$ is given:</p>
<p>$$ u^2 - v^2 = c $$</p>
<p>If $\det A \gt 0$, then the matrix $A$ is either positive-definite or negative-definite (again making use of the restriction to the $2\times 2$ case, so that $\lambda_1,\lambda_2$ are both positive or both negative). We can distinguish which of these is the case by checking the sign of the entry $a_{1,1}$ of $A$. If $A$ were negative-definite, then we could easily reduce to the positive-definite case by multiplying the equation by $-1$ and treating $-A$ instead.</p>
<p>Now if $A$ is positive-definite, we may need to do some computation to see if a real solution exists (because potentially none does). First though we would check the sign of $c$. If $c \le 0$, then a real solution <em>does</em> exist by applying the intermediate value theorem. That is, as $||x||$ grows arbitrarily large, $x^T A x$ will become an arbitrarily large positive value and drive the left hand side of the equation positive (being quadratic, it grows faster than the linear term $x^T b$). By continuity, if $c \lt 0$, then a real solution (where the left hand side is exactly zero) must exist; and if $c = 0$, then $x = 0$ is itself a solution.</p>
<p>That leaves (for positive-definite $A$) the cases where $c \gt 0$, for which we need to do a smidge of computation to be sure a real solution exists. In an ideal implementation that work would be useful toward actually finding a real solution if one is found to exist, but for the sake of the present discussion I will suggest locating the <em>minimum</em> value of the left hand side:</p>
<p>$$ f(x) \equiv x^T A x + bx + c $$</p>
<p>It is well-known that the minimum value of this quadratic function occurs at the solution of the linear system:</p>
<p>$$ Ax_{\min} = -\frac{1}{2}b $$</p>
<p>Since we already have computed $\det A$, the formula for the inverse of $2\times 2$ matrix $A$ is simple enough to use to compute $x_{\min}$:</p>
<p>$$ x_{\min} = -\frac{1}{2} A^{-1} b $$</p>
<p>Now if $c \gt 0$ and $f(x_{\min}) \le 0$, then a real solution $f(x) = 0$ must exist by the same IVT argument as before. Moreover the converse is true, since if $f(x_{\min}) \gt 0$, there is no solution $f(x) = 0$.</p>
<p>That finishes the cases where $A$ is invertible, and we turn to the possibility that $\det A = 0$. The "trivial" case $A=0$ is easy (real solutions exist unless $b=0$ and $c\neq 0$), so we will focus on the case rank$(A)=1$.</p>
<p>Then one of the eigenvalues is zero and the other is not. In fact trace$(A) = \lambda_1 \neq 0$ and $\lambda_2=0$, so the conic section takes the form of a parabola:</p>
<p>$$ \lambda_1 u^2 + d_1 u + d_2 v + e_0 = 0 $$</p>
<p>There will be real solutions unless $d_2 = 0$ and the discriminant of the remaining quadratic equation is negative:</p>
<p>$$ d_1^2 - 4 \lambda_1 e_0 < 0 $$</p>
<p>We can address this possibility without explicitly making the change of matrix representation implicit in the conic section formulation. </p>
<p>This involves a new numeric difficulty, besides that already present in trying to distinguish whether $\det A = 0$ or just very close to zero. Now we must check whether rank one $A$ can be expressed as a scalar multiple of the form:</p>
<p>$$ A = \alpha yy^T \text{ where } ||y|| = 1 $$</p>
<p>such that also $b$ is a scalar multiple of $y$:</p>
<p>$$ b = \beta y $$</p>
<p>If not, there will be a real solution (the conic section is non-degenerately parabolic), and nothing further needs to be checked.</p>
<p>If so, then notice the original equation becomes:</p>
<p>$$ \alpha x^T yy^T x + \beta x^T y + c = 0 $$</p>
<p>Letting $r = x^T y$, this becomes the ordinary quadratic equation:</p>
<p>$$ \alpha r^2 + \beta r + c = 0 $$</p>
<p>and the existence of a real solution is equivalent to:</p>
<p>$$ \beta^2 - 4\alpha c \ge 0 $$</p>
<p>This check can be simplified since $b\neq 0$ determines what unit vector $y$ has to be. So in practice I would implement this as two cases:</p>
<p>(a) If $b=0$, check whether trace$(A)$ is opposite in sign to $c$ (or $c=0$) and thus a real solution exists (otherwise not).</p>
<p>(b) If $b\neq 0$, just compare if $A$ is a scalar multiple $\alpha bb^T$ and proceed using $r = x^T b$ as the variable in the quadratic, which gives us the slightly simpler discriminant test for a real solution:</p>
<p>$$ 1 - 4 \alpha c \ge 0 $$</p>
<p>This covers all the possibilities for a $2\times 2$ matrix $A$. Much of this can be generalized to larger symmetric matrices, although of course such steps as computing $\det A$ will be more difficult because of the larger size.</p>
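<p>To make the case analysis concrete, here is a hedged Python sketch (my addition; the function name, tolerances, and the use of a full eigen-decomposition are my own choices — the answer's logic is followed in spirit, not verbatim):</p>

```python
import numpy as np

def has_real_solution(A, b, c, tol=1e-9):
    """Existence test for x^T A x + b^T x + c = 0, 2x2 symmetric A."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    lam, V = np.linalg.eigh(A)      # real eigenvalues, orthonormal basis
    d = V.T @ b                      # linear coefficients in eigenbasis
    if lam[0] * lam[1] < -tol:       # det A < 0: hyperbola, never empty
        return True
    if lam[0] * lam[1] > tol:        # definite case
        if lam[0] < 0:               # reduce negative- to positive-definite
            lam, d, c = -lam, -d, -c
        if c <= 0:                   # IVT: f(0) <= 0 while f -> +infinity
            return True
        # minimum of sum_i (lam_i u_i^2 + d_i u_i) + c
        f_min = c - sum(di * di / (4 * li) for li, di in zip(lam, d))
        return bool(f_min <= tol)
    if abs(lam[0]) <= tol and abs(lam[1]) <= tol:   # A == 0: linear case
        return bool(np.linalg.norm(d) > tol or abs(c) <= tol)
    # rank 1: lam1 u^2 + d1 u + d2 v + c = 0 in eigen-coordinates
    (l1, d1), (_, d2) = sorted(zip(lam, d), key=lambda t: -abs(t[0]))
    if abs(d2) > tol:                # genuine parabola: solve for v
        return True
    return bool(d1 * d1 - 4 * l1 * c >= -tol)       # quadratic discriminant

print(has_real_solution([[1, 0], [0, 1]], [0, 0], -1))   # unit circle: True
```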
|
248,706 | <p>Let $X$ be a compact connected manifold. Since $\mathbb T^1$ is an Eilenberg-MacLane space $K(\mathbb Z,1)$, it follows that for every morphism $\varphi\colon\pi_1(X)\to\pi_1(\mathbb T^1)$ there is a continuous map $f\colon X\to\mathbb T^1$ such that $\varphi$ coincides with the induced morphism $f_*$.</p>
<p>Now assume that $G$ is a compact connected Lie group and $\varphi\colon\pi_1(G)\to\pi_1(\mathbb T^1)$ is a morphism. Does there exist a <em>morphism of Lie groups</em> $h\colon G\to\mathbb T^1$ with $h_*=\varphi$? If not, are there any necessary or sufficient conditions for this to be the case? What about non-compact Lie groups $G$?</p>
| William of Baskerville | 50,457 | <p>Finally I managed to work out a solution for compact Lie groups $G$.</p>
<p>By the structure theory of compact groups, $G$ can be expressed as a topological semi-direct product $G=\mathbb T^n\ltimes K$ of a torus $\mathbb T^n$ and a semi-simple Lie group $K$. Let $p\colon G\to\mathbb T^n$ be the projection morphism and $p_*\colon\pi_1(G)\to\pi_1(\mathbb T^n)$ be the induced morphism. It is known that $\pi_1(K)$ is a finite group. Also, there is the usual identification $\pi_1(G)=\pi_1(\mathbb T^n)\times\pi_1(K)$.</p>
<p>Now let $\varphi\colon \pi_1(G)\to\pi_1(\mathbb T^1)$ be a morphism. Since $\pi_1(\mathbb T^1)$ is torsion-free and $\pi_1(K)$ is a torsion group, $\varphi$ annihilates $\pi_1(K)$ and so $\varphi=\psi\circ p_*$ for some $\psi\in\text{Hom}(\pi_1(\mathbb T^n),\pi_1(\mathbb T^1))$. Clearly, we have $\psi=k_*$ for some $k\in\text{Hom}(\mathbb T^n,\mathbb T^1)$. Thus, $\varphi=k_*\circ p_*=(k\circ p)_*$ and so $k\circ p\in\text{Hom}(G,\mathbb T^1)$ is the desired morphism.</p>
|
1,549,138 | <p>I have a problem with this exercise:</p>
<p>Prove that if $R$ is a reflexive and transitive relation then $R^n=R$ for each $n \ge 1$ (where $R^n \equiv \underbrace {R \times R \times R \times \cdots \times R} _{n \ \text{times}}$).</p>
<p>This exercise comes from my logic excercise book. The problem is that I've proven $R^n=R$ is false for $n=2$ and non-empty $R$.</p>
<p>Here is how I've done it:</p>
<p>Let's take $n=2$. $R$ is a relation so it's a set. $R^2$ is, by definition, a set of ordered pairs where both of their elements belong to $R$. But $R$ is a set of elements that belong to $R$ - I mean it's not the set of pairs of elements from $R$. So $R^2\neq R$.</p>
<p>Please tell me something about my proof and this exercise. How would you solve the problem?</p>
| Bartłomiej Sługocki | 293,948 | <p>Sorry, that was my mistake. I haven't read my book carefully. There's small paragraph saying: for relations we define $R^1\equiv R$ and $R^{n+1} \equiv R^n\circ R$.</p>
<p>Thanks for help. Next time I will read all the definitions carefully before I post a problem here :)</p>
|
3,325,977 | <p>The function definition is as follows:</p>
<p><span class="math-container">$ f(z) = $</span> the unique v such that <span class="math-container">$|v-z|=|v-u|=|v|$</span> for some fixed <span class="math-container">$u=a+ib$</span>.</p>
<p>For this question, I'm able to understand the basic geometry of the question and understand intuitively that it should not be defined everywhere. As of now, I've only found appropriate values for <span class="math-container">$v = f(x,y)+i*g(x,y)$</span> (which is also in terms of <span class="math-container">$a$</span> and <span class="math-container">$b$</span>).</p>
<p>For reference, the values I've found:</p>
<p><span class="math-container">$f(x,y) = \dfrac{x^2b-y^2b-ya^2-yb^2}{4bx-4a}$</span></p>
<p><span class="math-container">$g(x,y) = \dfrac{a^2+b^2-(ax^2+y^2ba-ya^3-yb^2a)}{4b^2x-4ab}$</span></p>
<p>The problem though is that these calculations don't seem to hint at any domain being undefined, which leads me to believe it was unnecessary to calculate (I do suspect also that I've made an error in my algebra).</p>
<p>To get an idea of what I'm solving, I've drawn a fixed point u=a+ib on the cartesian plane. Then, for some arbitrary z=x+iy, a line that is equidistant between said z=x+iy and u=a+ib. Then what I'm trying to find is the v on this line that satisfies the relation that its distance from z and u is the same as its modulus. But even this is giving me troubles as nothing appears obvious.</p>
<p>Any help or guidance would be greatly appreciated.</p>
| nonuser | 463,553 | <p>Hint: </p>
<p>Remember that <span class="math-container">$$|z-\alpha| = |z-\beta|$$</span> is the set of <span class="math-container">$z$</span> that are equally distant from <span class="math-container">$\alpha $</span> and <span class="math-container">$\beta$</span>, i.e. the perpendicular bisector of the segment between <span class="math-container">$\alpha $</span> and <span class="math-container">$\beta $</span>. So <span class="math-container">$f(z) =$</span> the circumcenter of triangle <span class="math-container">$\Delta u0z$</span>.</p>
<p>Does it help?</p>
|
3,325,977 | <p>The function definition is as follows:</p>
<p><span class="math-container">$ f(z) = $</span> the unique v such that <span class="math-container">$|v-z|=|v-u|=|v|$</span> for some fixed <span class="math-container">$u=a+ib$</span>.</p>
<p>For this question, I'm able to understand the basic geometry of the question and understand intuitively that it should not be defined everywhere. As of now, I've only found appropriate values for <span class="math-container">$v = f(x,y)+i*g(x,y)$</span> (which is also in terms of <span class="math-container">$a$</span> and <span class="math-container">$b$</span>).</p>
<p>For reference, the values I've found:</p>
<p><span class="math-container">$f(x,y) = \dfrac{x^2b-y^2b-ya^2-yb^2}{4bx-4a}$</span></p>
<p><span class="math-container">$g(x,y) = \dfrac{a^2+b^2-(ax^2+y^2ba-ya^3-yb^2a)}{4b^2x-4ab}$</span></p>
<p>The problem though is that these calculations don't seem to hint at any domain being undefined, which leads me to believe it was unnecessary to calculate (I do suspect also that I've made an error in my algebra).</p>
<p>To get an idea of what I'm solving, I've drawn a fixed point u=a+ib on the cartesian plane. Then, for some arbitrary z=x+iy, a line that is equidistant between said z=x+iy and u=a+ib. Then what I'm trying to find is the v on this line that satisfies the relation that its distance from z and u is the same as its modulus. But even this is giving me troubles as nothing appears obvious.</p>
<p>Any help or guidance would be greatly appreciated.</p>
| Andrei | 331,661 | <p>Rewrite your condition as <span class="math-container">$$|v-z|=|v-u|=|v-0|$$</span>This means that <span class="math-container">$v$</span> is the point at equal distances from <span class="math-container">$z$</span>, <span class="math-container">$u$</span>, and <span class="math-container">$0$</span>. If <span class="math-container">$u\ne kz$</span> with <span class="math-container">$k\in \mathbb R$</span>, then <span class="math-container">$v$</span> is the circumcenter of the triangle with vertices <span class="math-container">$z$</span>, <span class="math-container">$u$</span>, and <span class="math-container">$0$</span>. Obviously, if the three points are collinear, you might have a problem to find the circumcenter, or find a unique solution.</p>
<p>Also note that you might have some errors in your calculations. For both <span class="math-container">$f$</span> and <span class="math-container">$g$</span> I've got expressions with <span class="math-container">$ay-bx$</span> at the denominator. This also hints that there might be a problem when <span class="math-container">$ay=bx$</span> or if you want, when <span class="math-container">$\frac ax=\frac by=k$</span></p>
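<p>A small computational sketch of this (my addition; <code>circumcenter</code> is an illustrative name): writing <span class="math-container">$|v-z|=|v|$</span> and <span class="math-container">$|v-u|=|v|$</span> as linear equations in <span class="math-container">$v=x+iy$</span> gives a <span class="math-container">$2\times 2$</span> system whose determinant vanishes exactly when <span class="math-container">$ay=bx$</span>, matching the condition above.</p>

```python
def circumcenter(z, u):
    """Circumcenter of the triangle with vertices 0, u, z, i.e. the v
    with |v - z| = |v - u| = |v|."""
    # |v - z| = |v|  <=>  2*(Re(z)*x + Im(z)*y) = |z|^2, and likewise
    # for u, so v = x + i*y solves a 2x2 linear system (Cramer's rule).
    det = 4 * (z.real * u.imag - z.imag * u.real)
    if abs(det) < 1e-12:
        return None       # 0, u, z collinear (a*y = b*x): no unique v
    rz, ru = abs(z) ** 2, abs(u) ** 2
    x = (2 * u.imag * rz - 2 * z.imag * ru) / det
    y = (2 * z.real * ru - 2 * u.real * rz) / det
    return complex(x, y)

v = circumcenter(1 + 0j, 1j)
print(v)                  # circumcenter of the triangle 0, 1, i
```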
|
2,668,447 | <p>Let <span class="math-container">$F$</span> be a subfield of a field <span class="math-container">$K$</span> and let <span class="math-container">$n$</span> be a positive integer. Show that a nonempty linearly-independent subset <span class="math-container">$D$</span> of <span class="math-container">$F^n$</span> remains linearly independent when considered as a subset of <span class="math-container">$K^n$</span>.</p>
<p>I'm not sure how to proceed; I tried to assume that <span class="math-container">$D$</span> is dependent in <span class="math-container">$F^n$</span> and then conclude that it is also dependent in <span class="math-container">$K^n$</span>. </p>
<p>It is Exercise 178 of Jonathan Golan, <em>The Linear Algebra a Beginning Graduate Student Ought to Know</em>.</p>
| Joshua Mundinger | 106,317 | <p>This result does not require determinants. Instead, we just need that <span class="math-container">$K$</span> is a vector space over <span class="math-container">$F$</span>, so that it has some basis <span class="math-container">$B$</span> over <span class="math-container">$F$</span>. Let <span class="math-container">$D \subseteq F^n$</span> be linearly independent over <span class="math-container">$F$</span>, and suppose that
<span class="math-container">$$ \sum_{\mathbf{x} \in D} \alpha_{\mathbf x} \mathbf x = 0$$</span>
for <span class="math-container">$\alpha_{\mathbf x} \in K$</span>. Then we may write uniquely
<span class="math-container">$$\alpha_{\mathbf x} = \sum_{b \in B} \gamma^b_{\mathbf x} b$$</span> for <span class="math-container">$\gamma^b_{\mathbf x} \in F$</span>. Combining these gives
<span class="math-container">$$ \sum_{\mathbf x \in D} \sum_{b \in B} \gamma^b_{\mathbf x} b \mathbf x = \sum_{b \in B} \sum_{\mathbf x \in D} b \gamma^b_{\mathbf x} \mathbf x = 0.$$</span>
Since <span class="math-container">$B$</span> is a basis for <span class="math-container">$K$</span> over <span class="math-container">$F$</span>, taking these equations entry-wise in <span class="math-container">$K^n$</span> gives
<span class="math-container">$$\sum_{\mathbf x \in D} \gamma_\mathbf{x}^b \mathbf{x} = 0$$</span>
for all <span class="math-container">$b \in B$</span>.
Since <span class="math-container">$D$</span> is linearly independent, we conclude <span class="math-container">$\gamma^b_\mathbf x = 0$</span> for all <span class="math-container">$b \in B$</span> and <span class="math-container">$\mathbf x \in D$</span>, which shows <span class="math-container">$\alpha_\mathbf x = 0$</span> for all <span class="math-container">$\mathbf x \in D$</span>, as desired.</p>
|
2,668,447 | <p>Let <span class="math-container">$F$</span> be a subfield of a field <span class="math-container">$K$</span> and let <span class="math-container">$n$</span> be a positive integer. Show that a nonempty linearly-independent subset <span class="math-container">$D$</span> of <span class="math-container">$F^n$</span> remains linearly independent when considered as a subset of <span class="math-container">$K^n$</span>.</p>
<p>I'm not sure how to proceed; I tried to assume that <span class="math-container">$D$</span> is dependent in <span class="math-container">$F^n$</span> and then conclude that it is also dependent in <span class="math-container">$K^n$</span>. </p>
<p>It is Exercise 178 of Jonathan Golan, <em>The Linear Algebra a Beginning Graduate Student Ought to Know</em>.</p>
| Qiaochu Yuan | 232 | <p>This is a comment. There is something interesting to consider here in the generalization of this question to the case of modules over commutative rings. That is, suppose <span class="math-container">$f : R \to S$</span> is a morphism of commutative rings, and suppose we are given a collection of linearly independent elements <span class="math-container">$v_1, \dots v_k \in R^n$</span>. Do they remain linearly independent when considered as vectors in <span class="math-container">$S^n$</span> after applying <span class="math-container">$f$</span> componentwise?</p>
<p>The answer is no in general (e.g. consider <span class="math-container">$f : \mathbb{Z} \to \mathbb{F}_2$</span> the usual quotient map and vectors <span class="math-container">$(2, 2), (2, -2) \in \mathbb{Z}^2$</span>). When is it yes? Here linear independence means that if <span class="math-container">$\sum r_i v_i = 0$</span> where <span class="math-container">$r_i \in R$</span> then each <span class="math-container">$r_i = 0$</span>; equivalently, the <span class="math-container">$v_i$</span> induce a map </p>
<p><span class="math-container">$$g : R^k \ni (r_1, \dots r_k) \mapsto \sum r_i v_i \in R^n$$</span></p>
<p>and linear independence means that this map is injective / a monomorphism. We want to know when tensoring <span class="math-container">$(-) \otimes_R S$</span> preserves this property. A sufficient condition is that <span class="math-container">$S$</span> is <a href="https://en.wikipedia.org/wiki/Flat_module" rel="nofollow noreferrer">flat</a> over <span class="math-container">$R$</span>: one way of characterizing flatness is precisely the condition that <span class="math-container">$(-) \otimes_R S$</span> always preserves monomorphisms. And <span class="math-container">$\mathbb{F}_2$</span> isn't flat as a <span class="math-container">$\mathbb{Z}$</span>-module because it's torsion. </p>
<p>The general fact, which produces the desired result as a special case, is that every module over a field is flat; this means that if <span class="math-container">$R$</span> is a field then the result is always true even if we don't require that <span class="math-container">$S$</span> is a field. The most explicit proof of this proceeds via choosing a basis of <span class="math-container">$S$</span> exactly as done in Joshua Mundinger's answer. </p>
<p>But flatness also tells us what to look for in more general cases; for example, if <span class="math-container">$R = \mathbb{Z}$</span> then the condition we want is exactly that <span class="math-container">$S$</span> is torsion-free. </p>
|
3,063,742 | <p>Consider the closed interval <span class="math-container">$[0, 1]$</span> in the real line <span class="math-container">$\mathbb{R}$</span> and the product space <span class="math-container">$([0, 1]^{\mathbb{N}}, τ ),$</span></p>
<p>where <span class="math-container">$τ$</span> is a topology on <span class="math-container">$[0, 1]^\mathbb{N} $</span>. Let <span class="math-container">$ D : [0, 1] \rightarrow [0, 1]^\mathbb{N} $</span> be the map defined by
<span class="math-container">$D(x) := (x, x, · · · , x, · · ·)$</span> for <span class="math-container">$x \in [0, 1].$</span>
. The map <span class="math-container">$D$</span> is</p>
<p>Choose the correct statements.</p>
<p><span class="math-container">$(a)$</span> not continuous if <span class="math-container">$τ$</span> is the box topology and also not continuous if <span class="math-container">$τ$</span> is the product
topology.</p>
<p><span class="math-container">$(b)$</span> continuous if <span class="math-container">$τ$</span> is the product topology and also continuous if <span class="math-container">$τ$</span> is the box
topology.</p>
<p><span class="math-container">$(c)$</span> continuous if <span class="math-container">$τ$</span> is the box topology and not continuous if <span class="math-container">$τ$</span> is the product
topology.</p>
<p><span class="math-container">$(d)$</span> continuous if <span class="math-container">$τ$</span> is the product topology and not continuous if <span class="math-container">$τ$</span> is the box
topology</p>
<p>My attempt: I think option <span class="math-container">$(c)$</span> is true because the box topology is finer than the product topology.</p>
<p>Is it true?</p>
<p>Any hints/solution</p>
| Adam Higgins | 454,507 | <p>Hint: It might be easier to show that both <span class="math-container">$A,B$</span> are closed in <span class="math-container">$X$</span>, and then since <span class="math-container">$X = A \cup B$</span>, we immediately have that <span class="math-container">$A,B$</span> are both open in <span class="math-container">$X$</span>.</p>
|
383,037 | <p>I was going through "Convergence of Probability Measures" by Patrick Billingsley. In Section 1: I encountered the following problem:</p>
<p><strong>Show that inequivalent metrics can give rise to the same class of Borel sets.</strong></p>
<p>My idea is that the 2 metrics generate different topologies but the Sigma algebra generated by them is the same. However I don't know how to go about proceeding to prove this. I guess I need a convincing example.</p>
<p><strong>My Background</strong>:
I read Topology from "Topology and modern analysis" by G.F Simmons, the Rudin texts and Billingsley "Probability and Measure". But this still boggles me.</p>
<p><strong>My Searches</strong>: I searched for "non equivalent metrics" and "inequivalent metrics" getting 85 and 3 results respectively. But neither helpful nor relevant.</p>
<p>I would appreciate any useful hints, tips and even complete answers (preferably the first two). </p>
| user642796 | 8,348 | <p><strong>Theorem.</strong> Given a separable completely metrisable (Polish) space $( X , \mathcal{T} )$, and any Borel $B \subseteq X$ you can define a new Polish topology $\mathcal{T}_B$ on $X$ which is finer than, and has the same Borel subsets as, the original topology, and in which $B$ is a clopen set. </p>
<p>Metrics witnessing that $\mathcal{T}$ and $\mathcal{T}_B$ are Polish are clearly inequivalent (as long as $B$ is a non-clopen subset of $X$).</p>
<p>The following outline comes from Kechris, <em>Classical Descriptive Set Theory</em>.</p>
<blockquote>
<p><strong>Claim 1.</strong> If $( X , \mathcal{T} )$ is Polish and $F \subseteq X$ is closed, then there is a Polish topology $\mathcal{T}_F$ on $X$ extending the original topology with the same Borel sets in which $F$ is clopen </p>
<p><em>proof sketch.</em> Consider the topology $\mathcal{T}_F$ on $X$ generated by the family $$\{ U \cap F : U \subseteq X \text{ is open} \} \cup \{ U \setminus F : U \subseteq X \text{ is open} \}.$$ This is easily seen to be the topological sum of the subspace topologies on $F$ and $X \setminus F$, which are themselves Polish, and so the sum is as well. It is easy to see that $\mathcal{T}$ and $\mathcal{T}_F$ have the same Borel sets. $\;$ $\Box$</p>
<p><strong>Claim 2.</strong> Suppose $( X , \mathcal{T} )$ is a Polish space and $\langle \mathcal{T}_n \rangle$ is a sequence of Polish topologies on $X$ such that each $\mathcal{T}_n$ is finer than, and has the same Borel sets as, the original topology $\mathcal{T}$. Then the topology $\mathcal{T}_\infty$ on $X$ generated by $\bigcup_n \mathcal{T}_n$ is Polish and has the same Borel sets as $\mathcal{T}$. </p>
<p><em>proof sketch.</em> The diagonal map $f : X \to X^{\mathbb{N}}$ ($f(x) = \langle x \rangle_{n \in \mathbb{N}}$) is clearly injective and has range a closed subset of the product space $\prod_{n \in \mathbb{N}} ( X , \mathcal{T}_n )$. It is easily seen that the topology on $X$ induced by $f$ is $\mathcal{T}_\infty$, and is Polish since $f[X]$ is a Polish subspace of $\prod_{n \in \mathbb{N}} ( X , \mathcal{T}_n )$. As $\mathcal{T}_\infty$ is generated by a countable family of Borel subsets of $( X , \mathcal{T} )$ it follows that the Borel subsets of $( X , \mathcal{T}_\infty )$ coincide with the Borel subsets of $( X , \mathcal{T} )$. $\;$ $\Box$</p>
<p>From these two is follows that the family $\mathcal{S}$ of all subsets $B \subseteq X$ for which there is a Polish topology on $X$ extending the original topology but with the same Borel sets, and in which $B$ is clopen forms a $\sigma$-algebra containing all closed (open) subsets of $X$. Thus the Theorem holds. $\;$ $\Box$</p>
</blockquote>
<hr>
<p>For an explicit example, given any closed $F \subseteq \mathbb{R}$ define a new metric $d$ on $\mathbb{R}$ by $$d ( x , y ) = \begin{cases}
\min( | x - y |, 1 ), &\text{if }x,y \in F \\
\min( | x - y |, 1 ), &\text{if }x,y \in \mathbb{R} \setminus F\\
1, &\text{if }| F \cap \{ x , y \} | = 1.
\end{cases}$$ The cap at $1$ inside each piece keeps the triangle inequality valid when $x$ and $y$ lie on opposite sides of $F$.</p>
|
785,188 | <p>I found a very simple algorithm that draws values from a Poisson distribution from <a href="http://www.akira.ruc.dk/~keld/research/javasimulation/javasimulation-1.1/docs/report.pdf" rel="nofollow">this project.</a></p>
<p>The algorithm's code in Java is:</p>
<pre><code>public final int poisson(double a) {
    // Multiply uniform variates together until the running product
    // drops below e^(-a). Since -ln(U) is Exp(1)-distributed, this
    // counts how many unit-rate exponential gaps fit into [0, a],
    // which is exactly a Poisson(a) draw (Knuth's algorithm).
    double limit = Math.exp(-a), prod = nextDouble();
    int n;
    for (n = 0; prod >= limit; n++)
        prod *= nextDouble();
    return n;
}
</code></pre>
<p><a href="http://docs.oracle.com/javase/7/docs/api/java/util/Random.html#nextDouble%28%29" rel="nofollow"><code>nextDouble()</code></a> is a function from the <code>Random</code> package in Java that returns a uniformly distributed random <code>double</code>, for example <code>0.885598042879084</code>.</p>
<p>I can't understand how this creates a Poisson distribution. </p>
<p>Can someone explain?</p>
| craig tovey | 460,645 | <p>Yes, the previous answer is correct. This is simply a version of Knuth's algorithm. It gets slower the larger the parameter lambda. </p>
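<p>A Python port with an empirical sanity check may make the mechanism clearer (my addition; the seed and sample size are arbitrary):</p>

```python
import math
import random

def poisson(a):
    """Direct Python port of the Java routine in the question."""
    limit = math.exp(-a)
    prod = random.random()
    n = 0
    while prod >= limit:
        prod *= random.random()
        n += 1
    return n

# Intuition: -ln(U_i) are i.i.d. Exp(1) variables, so the product of
# uniforms falls below e^(-a) exactly when the exponential "gaps" sum
# past a; the number of gaps that fit is Poisson(a)-distributed.
random.seed(0)
a = 4.0
samples = [poisson(a) for _ in range(100_000)]
print(sum(samples) / len(samples))   # should be close to a = 4.0
```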
|
265,537 | <p>I have a set of inequalities</p>
<pre><code>Cos[a]Cos[b]>=Cos[t-a]Cos[b]&&Cos[a]Cos[b]>=Cos[t/2]&&Cos[a]Cos[b]>=Sin[t/2]&&a<=t<=Pi
</code></pre>
<p>How to solve this to get a range of values for <code>a,b,t</code>?</p>
| user64494 | 7,152 | <p>Reducing trigonometry to algebra by <code>Reduce[ca*cb >= ((ct^2 - st^2)*ca + 2*ct*st*sa)*cb && cb*ca >= ct && cb*ca >= st && ca^2 + sa^2 == 1 && cb^2 + sb^2 == 1 && ct^2 + st^2 == 1, {ct, st}, Reals]</code>, one obtains a huge and useless output. In order to obtain a concrete result, you have to specify <code>a</code> and <code>b</code>, e.g.</p>
<pre><code>a = 1/20; b = Pi/6; Reduce[Cos[a] Cos[b] >= Cos[t - a]*Cos[b] && Cos[a]*Cos[b] >= Cos[t/2] &&
Cos[a]*Cos[b] >= Sin[t/2] && a <= t <= Pi, t, Reals]
</code></pre>
<blockquote>
<p><code>2 ArcCos[1/2 Sqrt[3] Cos[1/20]] <= t <= 2 ArcSin[1/2 Sqrt[3] Cos[1/20]]</code></p>
</blockquote>
<p>Try <code>a=0.05;b=Pi/6;</code> on your own.</p>
|
1,365,489 | <p>What is the value of the following expression?</p>
<p>$$\sqrt[3]{\ 17\sqrt{5}+38} - \sqrt[3]{17\sqrt{5}-38}$$</p>
| ajotatxe | 132,456 | <p>Let $u=\sqrt[3]{38+17\sqrt{5}}$, $v= \sqrt[3]{38-17\sqrt{5}}$</p>
<p>From the equation
$$u+v=n$$
and cubing,
we obtain
$$u^3+v^3+3uv(u+v)=n^3$$
that is
$$76-3n=n^3$$
A root of this equation is $4$. In fact, it has no more real roots.</p>
<p>So we know that $u+v=4$. Now note that since $17\sqrt 5>38$, we have $\sqrt[3]{17\sqrt 5-38}=-\sqrt[3]{38-17\sqrt 5}=-v$, so the expression in the question is
$$\sqrt[3]{17\sqrt 5+38}-\sqrt[3]{17\sqrt 5-38}=u-(-v)=u+v=4$$</p>
|
1,365,489 | <p>What is the value of the following expression?</p>
<p>$$\sqrt[3]{\ 17\sqrt{5}+38} - \sqrt[3]{17\sqrt{5}-38}$$</p>
| Community | -1 | <p>The expression strangely reminds of Cardano's formula for the cubic equation</p>
<p>$$x=\sqrt[3]{-\frac q2-\sqrt{\frac{q^2}4+\frac{p^3}{27}}}+\sqrt[3]{-\frac q2+\sqrt{\frac{q^2}4+\frac{p^3}{27}}}.$$</p>
<p>Identifying, we have </p>
<p>$$q=-76,p=3,$$</p>
<p>and $x$ is a root of $$x^3+3x-76=0.$$
By inspection (trying values close to $\sqrt[3]{76}$), $\color{green}4$ is a solution.</p>
<p>As $$x^3+3x-76=(x-4)(x^2+4x+19),$$ the other roots are complex.</p>
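<p>A quick numerical confirmation (my addition): since <span class="math-container">$17\sqrt 5>38$</span>, both cube roots are of positive reals.</p>

```python
# Evaluate the nested-radical expression and check the cubic's root.
r5 = 5 ** 0.5
expr = (17 * r5 + 38) ** (1 / 3) - (17 * r5 - 38) ** (1 / 3)
root = 4
print(expr)                      # approximately 4
print(root**3 + 3 * root - 76)   # 0: confirms 4 solves x^3 + 3x - 76 = 0
```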
|
126,739 | <p><strong>I changed the title and added revisions and left the original untouched</strong> </p>
<p>For this post, $k$ is defined to be the square root of some $n\geq k^{2}$. Out of curiousity, I took the sum of one of the factorials in the denominator of the binomial theorem;
$$\sum _{k=1}^{\infty } \frac{1}{k!} \equiv e-1$$
<a href="http://oeis.org/A091131" rel="nofollow">OEIS A091131</a></p>
<p>Because I need to show that only the contiguous non-overlapping sequences of size $k$ up to $k^{2}+2k$ are valid for my purpose, I took the same sum with the denominator multiplied by $k+2$:
$$\sum _{k=1}^{\infty } \frac{1}{(k+m) k!} \equiv \frac{1}{2}\text{ for $m=2$ }$$
<a href="http://oeis.org/A020761" rel="nofollow">OEIS A020761</a></p>
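<p>This value can be confirmed in exact arithmetic (my addition; <code>partial_sum</code> is an illustrative name):</p>

```python
from fractions import Fraction
from math import factorial

# Partial sums of sum_{k>=1} 1/((k+2) k!); each term equals
# (k+1)/(k+2)! = 1/(k+1)! - 1/(k+2)!, so the series telescopes to 1/2.
def partial_sum(K):
    return sum(Fraction(1, (k + 2) * factorial(k)) for k in range(1, K + 1))

for K in (1, 2, 5, 10):
    print(K, partial_sum(K))
print(partial_sum(10) == Fraction(1, 2) - Fraction(1, factorial(12)))  # True
```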
<p>This is not a sum that I expected.</p>
<p>When $m\neq2$ the convergence returns alternating values like $\frac{1}{k}(-x+y e)$ and $\frac{1}{k}(x^{\prime}-y^{\prime} e)$, so $\frac{1}{2}$ seems to be the only value constructed out of integers.</p>
<p>Two questions:</p>
<p>$1)$ Is there a proof technique that can use this specific convergence to show that $k+2$ is the natural limit to my sequences? And that those specific non-overlapping sequences are the only ones that apply?</p>
<p>$2)$ Is this convergence interesting enough to put into OEIS?</p>
<p>I need some hints for my next step.</p>
<p><strong>Edit</strong><br>
Q1 is answered. I have enough info to keep me going for a few months.<br>
Q2: if you look at the OEIS entries for constants like $\pi$ and $e$, you will see dozens of identities. The entry for $\frac{1}{2}$ has only two identities. I feel it should have many more. But, just because I find this series interesting, doesn't mean others do, therefore, the question. </p>
<p>My motivation is to prove <a href="http://en.wikipedia.org/wiki/Oppermann%27s_conjecture" rel="nofollow">Oppermann's conjecture</a>. Thanks for the great answers and comments, and your patience.</p>
<p><strong>Revised</strong></p>
<p>Original post revised to use $k=0$ as starting index. And we show an example of the underlying pattern. </p>
<p>$ e= \sum_{k=0}^{\infty} 1/k! $ <em>(revised)</em> </p>
<p>$ e-1= \sum_{k=0}^{\infty} 1/((k+m)k!)\text{ for }m=1$ </p>
<p>$ 1= \sum_{k=0}^{\infty} 1/((k+m)k!)\text{ for }m=2$ </p>
<p>$\sum_{k=0}^{\infty} 1/((k+m)k!)\not \in \textbf{Q} \text{ for }m>2$ </p>
<p>Example of underlying pattern for (say) $k=3$: </p>
<p>$(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12), (13, 14, 15)$<br>
$(1, 2, 3), (1, 2, 3), (1, 2, 3), (1, 2, 3), (1, 2, 3)$<br>
$(1, 2, 3), (2, 1, 2), (1, 2, 3), (2, 1, 2), (1, 2, 3)$ </p>
<p>Top: Number line partitioned into $k+2$ non-overlapping ordered lists<br>
Middle: Equivalence classes $((n-1) \bmod k)+1$<br>
Bottom: Least divisors. $1= p_{x}$ </p>
<p>What is it about these patterns that causes the convergence result for $m=2$ to be $\in \textbf{Q}$?</p>
<p><strong>Coda</strong></p>
<p>Removed the identities as not quite in step. Below I show the summand of my function on left, the summand of an 'instep' identity, and a variation of the identity.</p>
<p>$$\frac{1}{(k+2)k!} \equiv \frac{1}{(k+1)!+k!} \equiv \frac{1}{\Gamma(k+2)+k!}$$ </p>
<p>So, $\frac{1}{(k+2)k!}$ sums two consecutive factorials. Why? </p>
<p><strong>New</strong> This ratio equals $(e-1)^{-1}$ as shown <a href="http://mathworld.wolfram.com/ContinuedFraction.html" rel="nofollow">here</a>,</p>
<p>$$
\frac{\sum _{k=0}^{\infty } \frac{1}{(k+2) k!}}{\sum _{m=0}^{\infty } \left(\sum _{k=m}^{\infty } \frac{1}{(k+2) k!}\right)}=\frac{1}{1+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\frac{6}{6+\frac{7}{7+\frac{8}{8+\frac{9}{9+\frac{10}{10+11}}}}}}}}}}
$$</p>
<p><strong>Another interesting pattern for the series:</strong><br>
$$
11_2,22_3,33_4,44_5,55_6,66_7,77_8,88_9,99_{10},\text{AA}_{11},\text{BB}_{12},\text{CC}_{13}{}{}{}
$$</p>
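<p>A quick numeric sketch (my own check; truncating at 30 terms is harmless thanks to the factorial) of $S(m)=\sum_{k=0}^{\infty} \frac{1}{(k+m)k!}$ for the cases above:</p>

```python
from math import factorial, e

# Partial sums of S(m) = sum_{k>=0} 1/((k+m) k!); 30 terms is far past convergence.
def S(m, terms=30):
    return sum(1 / ((k + m) * factorial(k)) for k in range(terms))

print(S(1))  # ≈ e - 1
print(S(2))  # ≈ 1, so the k >= 1 tail is 1 - 1/2 = 1/2
print(S(3))  # ≈ 0.71828..., numerically e - 2: again of the form -x + y*e
```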
| Community | -1 | <p>A way to get this, and also to understand the behavior for other values, would be like so (though it may be overly indirect):</p>
<p>Recall that
$$
e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}
$$
so
$$
x^{m-1}e^x = \sum_{k=0}^{\infty} \frac{x^{k+m-1}}{k!}
$$
Now 'integrate', then
$$
F(x) = \sum_{k=0}^{\infty} \frac{x^{k+m}}{(k+m)k!}
$$
where $F$ is some antiderivative of $x^{m-1}e^x$.</p>
<p>For $m=2$ one has the antiderivatives $e^x(x-1) +c$.
Setting $x=0$ one finds that $F(x) = e^x(x-1) +1$.
Setting $x=1$ one finds $1=\sum_{k=0}^{\infty} \frac{1}{(k+2)k!}$.
Now subtract the term for $k=0$, which is $1/2$ to get your result. </p>
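<p>A quick numerical sanity check of these two values (a sketch; the truncation at 40 terms is an arbitrary choice):</p>

```python
import math

# Partial sum of sum_{k>=0} 1/((k+2) k!), which the antiderivative argument says is F(1) = 1.
total = sum(1.0 / ((k + 2) * math.factorial(k)) for k in range(40))
print(total)        # ≈ 1.0
print(total - 0.5)  # ≈ 0.5 after dropping the k = 0 term: the sum in the question
```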
<p>(Not sure this is on-topic, but it is the weekend and I was bored. Sorry in advance to those who might mind.)</p>
|
1,231,772 | <p>Motivated by Baby Rudin Exercise 6.9</p>
<p>I need to show that $\int_0^\infty \frac{|\cos x|}{1+x} \, dx$ diverges.</p>
<p>My attempt: </p>
<p>$\frac{|\cos x|}{1+x} \geq \frac{\cos^2 x}{1+x}$, and then $\int_0^\infty \frac{\cos^2 x}{1+x} \, dx + \int_0^\infty \frac{\sin^2 x}{1+x} \, dx = \int_0^\infty \frac{1}{1+x} \, dx$. </p>
<p>Since the right integral diverges, either or both of the integrals on the left must diverge. Since both diverge (at least I'm inclined to believe), now if I show that $\int_0^\infty \cos^2 x / (1+x) \, dx \geq \int_0^\infty \sin^2 x / (1+x) \, dx $ we'll be done. Here's where I am stuck.</p>
| Jack D'Aurizio | 44,121 | <p>The integrand function is positive and for every $x\in\mathbb{R}^+$ close to an element of $\pi\mathbb{Z}$ we have that $|\cos x\,|$ is close to one. For instance, if the distance between $x$ and $\pi\mathbb{Z}$ is $\leq\frac{\pi}{3}$ we have $|\cos x\,|\geq\frac{1}{2}$. That gives:
$$\int_{0}^{n\pi}\frac{\left|\cos x\right|}{1+x}\,dx = \pi \int_{0}^{n}\frac{\left|\cos(\pi x)\right|}{1+\pi x}\,dx \geq \frac{\pi}{2}\sum_{k=1}^{n-1}\int_{k-1/3}^{k+1/3}\frac{dx}{1+\pi x}$$
and the last sum is greater than:
$$\frac{\pi}{3}\sum_{k=1}^{n-1}\frac{1}{\left(1+\frac{\pi}{3}\right)+\pi k}\geq\frac{H_n-1}{3}.$$
We may also use integration by parts together with the identity:
$$ \int_{0}^{y}|\cos x\,|\,dx =\frac{2}{\pi} y +O(1) $$
that gives:</p>
<blockquote>
<p>$$ \int_{0}^{y}\frac{|\cos x\,|}{1+x}\,dx = O(1)+\frac{2}{\pi}\int_{0}^{y}\frac{x\,dx}{(1+x)^2} =\frac{2}{\pi}\,\log(y+1)+O(1).$$</p>
</blockquote>
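<p>A rough numerical check of this logarithmic growth (a midpoint-rule sketch; the grid size is an arbitrary choice):</p>

```python
import math

def partial_integral(y, n=200000):
    # midpoint rule for the integral of |cos x| / (1 + x) over [0, y]
    h = y / n
    return h * sum(abs(math.cos((i + 0.5) * h)) / (1 + (i + 0.5) * h) for i in range(n))

y = 1000.0
approx = partial_integral(y)
predicted = (2 / math.pi) * math.log(y + 1)
print(approx, predicted)  # the gap between the two stays bounded (the O(1) term)
```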
|
1,231,772 | <p>Motivated by Baby Rudin Exercise 6.9</p>
<p>I need to show that $\int_0^\infty \frac{|\cos x|}{1+x} \, dx$ diverges.</p>
<p>My attempt: </p>
<p>$\frac{|\cos x|}{1+x} \geq \frac{\cos^2 x}{1+x}$, and then $\int_0^\infty \frac{\cos^2 x}{1+x} \, dx + \int_0^\infty \frac{\sin^2 x}{1+x} \, dx = \int_0^\infty \frac{1}{1+x} \, dx$. </p>
<p>Since the right integral diverges, either or both of the integrals on the left must diverge. Since both diverge (at least I'm inclined to believe), now if I show that $\int_0^\infty \cos^2 x / (1+x) \, dx \geq \int_0^\infty \sin^2 x / (1+x) \, dx $ we'll be done. Here's where I am stuck.</p>
| Zarrax | 3,035 | <p>You're on the right track. Since $\cos^2(x + {\pi \over 2}) = \sin^2 x$,
$$\int_{0}^{\infty} {\cos^2 x \over 1 + x}\,dx = \int_{{\pi \over 2}}^{\infty} {\cos^2 (x - {\pi \over 2}) \over 1 + (x - {\pi \over 2})}\,dx $$
$$= \int_{{\pi \over 2}}^{\infty} {\sin^2 x \over 1 + (x - {\pi \over 2})}\,dx $$
$$> \int_{{\pi \over 2}}^{\infty} {\sin^2 x \over 1 + x}\,dx $$
Although the integral here starts at ${\pi \over 2}$, the first part will always be finite. So if $\int_{0}^{\infty} {\sin^2 x \over 1 + x}\,dx $ is
infinite, so is $\int_{{\pi \over 2}}^{\infty} {\sin^2 x \over 1 + x}\,dx$, and therefore by the above so is $\int_{0}^{\infty} {\cos^2 x \over 1 + x}\,dx $.</p>
|
446,456 | <p>Educators and Professors: when you teach first year calculus students that infinity isn't a number, how would you logically present to them $-\infty < x < +\infty$, where $x$ is a real number?</p>
| Asaf Karagila | 622 | <p>The symbols $+\infty,-\infty$ (and $\infty$) simply denote a formal symbol which means "larger/small than any real number".</p>
|
<p>I have the following question. Let <span class="math-container">$\phi_1$</span> and <span class="math-container">$\phi_2$</span> be a fundamental system of solutions on an interval <span class="math-container">$I$</span> for the second order equation
<span class="math-container">$$
y''+a(x)y= 0.
$$</span>
Prove that there exists a fundamental system of solutions <span class="math-container">$\{y_1,y_2\}$</span> such that the Wronskian <span class="math-container">$W[y_1,y_2]$</span> satisfies <span class="math-container">$W[y_1,y_2]=1$</span>.</p>
<p>So, I know that <span class="math-container">$\{y_1,y_2\}$</span> being a fundamental system of solutions means that any solution <span class="math-container">$y$</span> of the ODE can be written <span class="math-container">$y(x)= c_1 y_1(x)+ c_2 y_2(x)$</span>, where <span class="math-container">$c_1$</span> and <span class="math-container">$c_2$</span> are arbitrary constants. Then <span class="math-container">$W[y_1,y_2]= y_1(x)y'_2(x)-y_2(x)y_1'(x)$</span>. But I don't know how to resolve the question, or what the role of <span class="math-container">$\phi_1$</span> and <span class="math-container">$\phi_2$</span> is.</p>
<p>Thanks in advance for the help.</p>
| Lutz Lehmann | 115,115 | <p>You should have found out that the Wronskian is constant as the coefficient of the first derivative term is zero.</p>
<p>After that, it is just a matter of re-scaling one or both of the solutions to get the Wronski-determinant to have the value 1 at one and thus every point.</p>
<hr>
<p>(<em>Add</em>) Interpreting the term "fundamental" in "system of fundamental solutions" more strictly, it means that at some point <span class="math-container">$x_0$</span> you have initial values
<span class="math-container">$$
\pmatrix{y_1(x_0)&y_2(x_0)\\y_1'(x_0)&y_2'(x_0)}
=\pmatrix{1&0\\0&1}
$$</span>
so that no rescaling is necessary.</p>
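<p>For a concrete illustration (my own sketch, taking <span class="math-container">$a(x)=1$</span>, i.e. <span class="math-container">$y''+y=0$</span>, with <span class="math-container">$\phi_1=\cos$</span> and <span class="math-container">$\phi_2=2\sin$</span>, whose Wronskian is the constant <span class="math-container">$2$</span>):</p>

```python
import math

# Wronskian W[f, g](x) = f g' - g f', given the functions and their derivatives.
def W(f, fp, g, gp, x):
    return f(x) * gp(x) - g(x) * fp(x)

phi1, dphi1 = math.cos, lambda x: -math.sin(x)
phi2, dphi2 = lambda x: 2 * math.sin(x), lambda x: 2 * math.cos(x)

w = W(phi1, dphi1, phi2, dphi2, 0.0)  # constant Wronskian, here 2
y2 = lambda x: phi2(x) / w            # rescale one solution by the Wronskian
dy2 = lambda x: dphi2(x) / w
print(W(phi1, dphi1, y2, dy2, 1.3))   # 1.0, at this and every other point
```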
|
218,479 | <p>I am trying to evaluate the following integral with Mathematica:</p>
<p><span class="math-container">\begin{align}
I = \int_{0}^{\infty} da \, \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \mbox{sinc}\left(\tfrac{w}{2} a \right) \delta' \left( \frac{D^2}{a}- a \right),
\end{align}</span>
where the prime on the delta function denotes differentiation with respect to the argument of the Delta function. When I evaluate this integral with Mathematica as:</p>
<pre><code>Integrate[Exp[-a^2/(4 s^2)]/a^2 Sinc[w a / 2] Derivative[1][DiracDelta][D^2/a - a],{a,0,Infinity}, Assumptions -> s > 0 && w > 0 && D > 0]
</code></pre>
<p>I get the result:
<span class="math-container">\begin{align}
I_{Mathematica} = \frac{e^{-\frac{D^2}{4 s ^2}} }{4 D^4 s ^2 w } \left[\left(D^2+6 s ^2\right) \sin \left(\frac{D w }{2}\right)-D s ^2 w \cos \left(\frac{D w }{2}\right)\right].
\end{align}</span></p>
<p>However, if I evaluate this integral analytically, using the fact that
<span class="math-container">\begin{align}
\frac{d}{da} \delta\left( \frac{D^2}{a}- a \right) = - \delta' \left( \frac{D^2}{a}- a \right) \left(\frac{D^2}{a^2}+1\right) \implies \delta' \left( \frac{D^2}{a}- a \right) = - \left[\frac{d}{da} \delta\left( \frac{D^2}{a}- a \right) \right] \left(\frac{D^2}{a^2}+1\right)^{-1},
\end{align}</span>
I get the following result:
<span class="math-container">\begin{align}
I_{analytic} &= \int_{0}^{\infty} da \, \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \delta' \left( \frac{D^2}{a}- a \right) \\
&=- \int_{0}^{\infty} da \, \left[\frac{d}{da} \delta\left( \frac{D^2}{a}- a \right) \right] \left(\frac{D^2}{a^2}+1\right)^{-1} \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \\
&= \int_{0}^{\infty} da \, \delta\left( \frac{D^2}{a}- a \right) \left[\frac{d}{da} \left(\frac{D^2}{a^2}+1\right)^{-1} \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \right] \\
&= \int_{0}^{\infty} da \, \frac{\delta\left( D - a \right)}{2} \left[\frac{d}{da} \left(\frac{D^2}{a^2}+1\right)^{-1} \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \right] \\
&= - \frac{e^{-\frac{ D^2 }{ 4s^{2}}}}{4 D^{4} s^{2} w} \left[ \left(D^{2}+4 s^{2}\right)\sin\left( \frac{ Dw}{2} \right) - D s^{2} w \cos \left( \frac{ Dw}{2} \right) \right],
\end{align}</span>
which differs from <span class="math-container">$I_{Mathematica}$</span> by an overall negative sign and the prefactor in front of <span class="math-container">$s^2$</span> in the first term.</p>
<p>I'm not sure if the issue is with the way Mathematica handles the derivative of the delta function or if I've made a mistake in my analytic calculation. Any help would be much appreciated, I've been staring at this for days!</p>
| Ulrich Neumann | 53,677 | <p>Here is my attempt to solve the integral <code>Integrate[f[a] Derivative[1][DiracDelta][d^2/a - a],{a,0,Infinity}]</code>:</p>
<pre><code>f[a_] := Exp[-a^2/(4 s^2)]/a^2 Sinc[w a/2]
</code></pre>
<p>Substitution <code>u[a]=d^2/a-a</code> (the integration limits change to <code>u[0]=Infinity</code>, <code>u[Infinity]=-Infinity</code>)</p>
<pre><code>u[a_] := d^2/a - a
sola = Solve[u == d^2/a - a, a][[2]] (*solution a>0*)
</code></pre>
<p>Now Mathematica is able to solve the integral</p>
<pre><code>int=Integrate[f[a/.sola] Derivative[1][DiracDelta][u]/u'[a]/.sola ,{u, Infinity,-Infinity}]
(*(E^(-(d^2/(4 s^2))) (d s^2 w Cos[(d w)/2] - (d^2 + 4 s^2) Sin[(d w)/2]))/(4 d^3 Sqrt[d^2] s^2 w)*)
</code></pre>
<p>Hope it helps solving your problem!</p>
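<p>As an independent cross-check (a sketch in Python rather than Mathematica; the width <code>eps</code> and the grid are arbitrary choices), one can replace <span class="math-container">$\delta'$</span> by the derivative of a narrow Gaussian and integrate numerically for <span class="math-container">$D=s=w=1$</span>:</p>

```python
import math

eps = 0.01  # width of the Gaussian nascent delta

def g(a):  # the integrand without the delta factor (D = s = w = 1)
    return math.exp(-a * a / 4) * (math.sin(a / 2) / (a / 2)) / (a * a)

def ddelta(u):  # derivative of the Gaussian nascent delta
    return -u * math.exp(-u * u / (2 * eps * eps)) / (eps ** 3 * math.sqrt(2 * math.pi))

h, lo, n = 1e-5, 0.7, 70000  # midpoint grid on [0.7, 1.4], around the root a = D = 1
numeric = h * sum(g(lo + (i + 0.5) * h) * ddelta(1 / (lo + (i + 0.5) * h) - (lo + (i + 0.5) * h))
                  for i in range(n))

analytic = -math.exp(-0.25) / 4 * (5 * math.sin(0.5) - math.cos(0.5))
print(numeric, analytic)  # both ≈ -0.2959
```

<p>For these parameters the smoothed value lands on the closed form above (equivalently the question's <span class="math-container">$I_{analytic}$</span>), not on <span class="math-container">$I_{Mathematica}$</span>.</p>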
|
218,479 | <p>I am trying to evaluate the following integral with Mathematica:</p>
<p><span class="math-container">\begin{align}
I = \int_{0}^{\infty} da \, \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \mbox{sinc}\left(\tfrac{w}{2} a \right) \delta' \left( \frac{D^2}{a}- a \right),
\end{align}</span>
where the prime on the delta function denotes differentiation with respect to the argument of the Delta function. When I evaluate this integral with Mathematica as:</p>
<pre><code>Integrate[Exp[-a^2/(4 s^2)]/a^2 Sinc[w a / 2] Derivative[1][DiracDelta][D^2/a - a],{a,0,Infinity}, Assumptions -> s > 0 && w > 0 && D > 0]
</code></pre>
<p>I get the result:
<span class="math-container">\begin{align}
I_{Mathematica} = \frac{e^{-\frac{D^2}{4 s ^2}} }{4 D^4 s ^2 w } \left[\left(D^2+6 s ^2\right) \sin \left(\frac{D w }{2}\right)-D s ^2 w \cos \left(\frac{D w }{2}\right)\right].
\end{align}</span></p>
<p>However, if I evaluate this integral analytically, using the fact that
<span class="math-container">\begin{align}
\frac{d}{da} \delta\left( \frac{D^2}{a}- a \right) = - \delta' \left( \frac{D^2}{a}- a \right) \left(\frac{D^2}{a^2}+1\right) \implies \delta' \left( \frac{D^2}{a}- a \right) = - \left[\frac{d}{da} \delta\left( \frac{D^2}{a}- a \right) \right] \left(\frac{D^2}{a^2}+1\right)^{-1},
\end{align}</span>
I get the following result:
<span class="math-container">\begin{align}
I_{analytic} &= \int_{0}^{\infty} da \, \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \delta' \left( \frac{D^2}{a}- a \right) \\
&=- \int_{0}^{\infty} da \, \left[\frac{d}{da} \delta\left( \frac{D^2}{a}- a \right) \right] \left(\frac{D^2}{a^2}+1\right)^{-1} \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \\
&= \int_{0}^{\infty} da \, \delta\left( \frac{D^2}{a}- a \right) \left[\frac{d}{da} \left(\frac{D^2}{a^2}+1\right)^{-1} \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \right] \\
&= \int_{0}^{\infty} da \, \frac{\delta\left( D - a \right)}{2} \left[\frac{d}{da} \left(\frac{D^2}{a^2}+1\right)^{-1} \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \right] \\
&= - \frac{e^{-\frac{ D^2 }{ 4s^{2}}}}{4 D^{4} s^{2} w} \left[ \left(D^{2}+4 s^{2}\right)\sin\left( \frac{ Dw}{2} \right) - D s^{2} w \cos \left( \frac{ Dw}{2} \right) \right],
\end{align}</span>
which differs from <span class="math-container">$I_{Mathematica}$</span> by an overall negative sign and the prefactor in front of <span class="math-container">$s^2$</span> in the first term.</p>
<p>I'm not sure if the issue is with the way Mathematica handles the derivative of the delta function or if I've made a mistake in my analytic calculation. Any help would be much appreciated, I've been staring at this for days!</p>
| AestheticAnalyst | 71,610 | <p>Let's talk about the Dirac <span class="math-container">$\delta$</span>-"function". Strictly speaking, it's a linear functional
<span class="math-container">$$\delta:C^\infty(\mathbb R)\to\mathbb R\qquad\qquad\delta(f)=f(0).$$</span>
However, we usually use the notation
<span class="math-container">$$\int_{-\infty}^\infty\delta(x)f(x)dx$$</span>
to denote the evaluation <span class="math-container">$\delta(f)$</span>. The derivative of the <span class="math-container">$\delta$</span>-"function" is computed via formal integration by parts:
<span class="math-container">$$\delta'(f)=\int_{-\infty}^\infty\delta'(x)f(x)dx=-\int_{-\infty}^\infty\delta(x)f'(x)dx=-f'(0).$$</span>
Your integral has the additional complications that there is a function inside the argument of <span class="math-container">$\delta'(x)$</span>, and that the integral is not taken over all of <span class="math-container">$\mathbb R$</span>. Composing distributions with functions is, in general, not possible, but in this case we can appeal to a theorem of Hormander:</p>
<p><strong>Theorem:</strong> <em>Suppose <span class="math-container">$f:M\to N$</span> is a smooth function whose differential is everywhere surjective. Then there is a linear map <span class="math-container">$f^*:\mathscr D(N)\to\mathscr D(M)$</span> such that <span class="math-container">$f^*u=u\circ f$</span> for all <span class="math-container">$u\in C(N)$</span>.</em></p>
<p>For our purposes, this means <span class="math-container">$\int_{-\infty}^\infty\delta'(f(x))g(x)dx$</span> makes sense provided <span class="math-container">$f(x)$</span> is smooth and <span class="math-container">$f'(x)$</span> never vanishes. Similarly, reducing the domain of integration is, in general, not possible, but we have:</p>
<p><strong>Theorem</strong> <em>Suppose <span class="math-container">$E_1$</span> and <span class="math-container">$E_2$</span> are disjoint closed sets, and let <span class="math-container">$\mathscr D_{E_i}$</span> denote the set of distributions which coincide with a smooth function on <span class="math-container">$E_i^c$</span> for <span class="math-container">$i=1,2$</span>. Then there is a bilinear map
<span class="math-container">$$m:\mathscr D_{E_1}\times\mathscr D_{E_2}\to\mathscr D(\mathbb R^n)$$</span>
such that <span class="math-container">$m(u,v)=uv$</span> when <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are continuous.</em></p>
<p>In our case, we would like to compute the integral
<span class="math-container">$$\int_0^\infty\delta'\left(\frac{D^2}{x}-x\right)g(x)dx=\int_{-\infty}^\infty\chi_{(0,\infty)}(x)\delta'\left(\frac{D^2}{a}-a\right)g(x)dx,$$</span>
where <span class="math-container">$\chi_{(0,\infty)}$</span> is the characteristic function of the half-line <span class="math-container">$(0,\infty)$</span>. The theorem says that the product
<span class="math-container">$$\chi_{(0,\infty)}(x)\delta'\left(\frac{D^2}{x}-x\right)$$</span>
makes sense whenever the singular support of <span class="math-container">$\chi_{(0,\infty)}$</span>, namely <span class="math-container">$\{0\}$</span>, does not intersect the singular support of <span class="math-container">$\delta'\left(\frac{D^2}{x}-x\right)$</span>, namely <span class="math-container">$\{D,-D\}$</span>. Thus when <span class="math-container">$D\neq 0$</span>, our integral makes sense and
<span class="math-container">$$\int_0^\infty\delta'\left(\frac{D^2}{x}-x\right)g(x)dx=\begin{cases}g'(D),&D>0\\g(-D),&D<0\end{cases}.$$</span>
To compute your integral, just plug in your particular function <span class="math-container">$g(x)$</span>. When you're working with distributions (like <span class="math-container">$\delta$</span>) you need to be very careful about what you do with them. I don't know how Mathematica conceptualizes the <span class="math-container">$\delta$</span>-distribution, but I wouldn't be inclined to trust that it would go through the necessary analytical reasoning and get the right answer.</p>
<p>TL;DR: Do your distributional calculus by hand.</p>
|
218,479 | <p>I am trying to evaluate the following integral with Mathematica:</p>
<p><span class="math-container">\begin{align}
I = \int_{0}^{\infty} da \, \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \mbox{sinc}\left(\tfrac{w}{2} a \right) \delta' \left( \frac{D^2}{a}- a \right),
\end{align}</span>
where the prime on the delta function denotes differentiation with respect to the argument of the Delta function. When I evaluate this integral with Mathematica as:</p>
<pre><code>Integrate[Exp[-a^2/(4 s^2)]/a^2 Sinc[w a / 2] Derivative[1][DiracDelta][D^2/a - a],{a,0,Infinity}, Assumptions -> s > 0 && w > 0 && D > 0]
</code></pre>
<p>I get the result:
<span class="math-container">\begin{align}
I_{Mathematica} = \frac{e^{-\frac{D^2}{4 s ^2}} }{4 D^4 s ^2 w } \left[\left(D^2+6 s ^2\right) \sin \left(\frac{D w }{2}\right)-D s ^2 w \cos \left(\frac{D w }{2}\right)\right].
\end{align}</span></p>
<p>However, if I evaluate this integral analytically, using the fact that
<span class="math-container">\begin{align}
\frac{d}{da} \delta\left( \frac{D^2}{a}- a \right) = - \delta' \left( \frac{D^2}{a}- a \right) \left(\frac{D^2}{a^2}+1\right) \implies \delta' \left( \frac{D^2}{a}- a \right) = - \left[\frac{d}{da} \delta\left( \frac{D^2}{a}- a \right) \right] \left(\frac{D^2}{a^2}+1\right)^{-1},
\end{align}</span>
I get the following result:
<span class="math-container">\begin{align}
I_{analytic} &= \int_{0}^{\infty} da \, \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \delta' \left( \frac{D^2}{a}- a \right) \\
&=- \int_{0}^{\infty} da \, \left[\frac{d}{da} \delta\left( \frac{D^2}{a}- a \right) \right] \left(\frac{D^2}{a^2}+1\right)^{-1} \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \\
&= \int_{0}^{\infty} da \, \delta\left( \frac{D^2}{a}- a \right) \left[\frac{d}{da} \left(\frac{D^2}{a^2}+1\right)^{-1} \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \right] \\
&= \int_{0}^{\infty} da \, \frac{\delta\left( D - a \right)}{2} \left[\frac{d}{da} \left(\frac{D^2}{a^2}+1\right)^{-1} \frac{e^{-\frac{a ^2}{4s^2}} }{a^2} \frac{\sin \left(\tfrac{w}{2} a \right) }{\tfrac{w}{2} a} \right] \\
&= - \frac{e^{-\frac{ D^2 }{ 4s^{2}}}}{4 D^{4} s^{2} w} \left[ \left(D^{2}+4 s^{2}\right)\sin\left( \frac{ Dw}{2} \right) - D s^{2} w \cos \left( \frac{ Dw}{2} \right) \right],
\end{align}</span>
which differs from <span class="math-container">$I_{Mathematica}$</span> by an overall negative sign and the prefactor in front of <span class="math-container">$s^2$</span> in the first term.</p>
<p>I'm not sure if the issue is with the way Mathematica handles the derivative of the delta function or if I've made a mistake in my analytic calculation. Any help would be much appreciated, I've been staring at this for days!</p>
| SolutionExists | 70,331 | <p>My previous answer and comments were wrong. I didn't notice the argument of the δ function was not linear in the integration variable (and I wasn't even drunk).</p>
<p>In the <a href="https://en.wikipedia.org/wiki/Dirac_delta_function" rel="nofollow noreferrer">Wikipedia page</a>, there is this paragraph</p>
<blockquote>
<p>In the integral form the generalized scaling property may be written as
<span class="math-container">$∫_{-∞}^∞ f ( x ) δ ( g ( x ) ) d x = ∑_i f ( x_i ) / | g ′ ( x_i ) | $</span>.</p>
</blockquote>
<p>The Jacobian of the transformation is 1/g'(x). Please note the absolute value in the denominator.</p>
<p>Basically, find the zeroes of the argument of the δ, and integrate around them (by parts if necessary). Also, </p>
<blockquote>
<p>The distributional derivative of the Dirac delta distribution is the distribution δ′ defined on compactly supported smooth test functions φ by
<span class="math-container">$\delta'[\varphi] = -\delta[\varphi'] = -\varphi'(0)$</span>.</p>
</blockquote>
<p>(1) Finding the zeroes:</p>
<pre><code>Solve[-a + Δ^2/a == 0, a]
</code></pre>
<p>(2) Finding the Jacobian:</p>
<pre><code>jac = Solve[Dt[-a + Δ^2/a == u[a]], u'[a]] /. Dt[Δ] → 0 /. a → Δ // FullSimplify
</code></pre>
<p>(3) Evaluating the integral by parts (don't forget the minus sign in front):</p>
<pre><code>v1 = -D[(E^(-(a^2/(4 s^2))) Sinc[(a w)/2])/a^2, a] / Abs[jac] /. a → Δ // FullSimplify
</code></pre>
<p>(4) Divide the integral by the Jacobian (the previous division was because of the integration by parts, this one because of the scaling):</p>
<pre><code>v1 / Abs[ jac ]
</code></pre>
<p>The answer is the same as <span class="math-container">$I_{MMA}$</span>. By the way, MMA is simply using</p>
<pre><code>Integrate[ f[a] DiracDelta'[2 a], {a, -∞, ∞}]
(*-(1/4) f'[0]*)
</code></pre>
<p>Prove that analytically and you will find the error in your analytic calculation.</p>
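<p>That primitive rule can itself be checked with a smoothed delta (a sketch; <code>eps</code> and the test function are arbitrary choices):</p>

```python
import math

eps = 1e-3  # width of the Gaussian nascent delta
f = lambda a: (a + 1) * math.exp(-a * a)  # f'(0) = 1, so the integral should be -1/4

def ddelta(u):
    return -u * math.exp(-u * u / (2 * eps * eps)) / (eps ** 3 * math.sqrt(2 * math.pi))

h, lo, n = 1e-6, -0.1, 200000  # midpoint grid on [-0.1, 0.1]
val = h * sum(f(lo + (i + 0.5) * h) * ddelta(2 * (lo + (i + 0.5) * h)) for i in range(n))
print(val)  # ≈ -0.25 = -(1/4) f'(0)
```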
|
3,942,512 | <p>If X and Y are independent binomial random variables with identical parameters n and p, calculate the conditional expected value of X given X+Y = m.</p>
<p>The conditional pmf turned out to be a hypergeometric pmf, but I'm a bit unclear on how to relate that back into finding $E[X \mid X+Y=m]$.</p>
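<p>A quick Monte Carlo sketch (my own check; the parameters are arbitrary) is consistent with the hypergeometric mean $m\cdot n/(2n)=m/2$:</p>

```python
import random

random.seed(0)
n, p, m = 10, 0.3, 6
samples = []
while len(samples) < 10000:
    # draw independent X, Y ~ Bin(n, p) and keep X whenever X + Y = m
    x = sum(random.random() < p for _ in range(n))
    y = sum(random.random() < p for _ in range(n))
    if x + y == m:
        samples.append(x)
print(sum(samples) / len(samples))  # ≈ 3.0 = m / 2
```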
| Aphelli | 556,825 | <p>Another argument (with quite sophisticated machinery) is that if <span class="math-container">$\mathbb{Z}[C_n]$</span> has finite projective dimension, so has <span class="math-container">$R=\mathbb{F}_p[C_n]$</span> for <span class="math-container">$p|n$</span> (a finite exact sequence of free <span class="math-container">$\mathbb{Z}$</span>-modules remains exact after tensoring with anything). In particular, <span class="math-container">$R$</span> is a regular ring finite over a field, so is reduced. But <span class="math-container">$R \supset \mathbb{F}_p[C_p] \cong \mathbb{F}_p[x]/(x-1)^p$</span>.</p>
|
1,190,759 | <p>I was trying to show the following
$\int_{-\infty}^{\infty} x^{2n}e^{-x^2}dx = \frac{(2n)!\sqrt{\pi}}{4^n n!}$ by using $\int_{-\infty}^{\infty} e^{-tx^2}dx = \sqrt{\pi/t}$
thus</p>
<p>I differentiated this exponential integral n times to get the following. </p>
<p>$\int_{-\infty}^{\infty} \frac{d^n e^{-tx^2}}{dt^n}dx =\frac{2^{n}\sqrt{\pi}\, t^{\frac{2n-1}{2}}}{1\times 3\times 5 \times \cdots \times (2n-1)}$,
and after applying the limit $t\rightarrow 1$ I am not getting the desired result. Where am I going wrong?</p>
<p>Thanks </p>
| kobe | 190,421 | <p>The $n$th derivative of $\sqrt{\pi/t} = t^{-1/2}\sqrt{\pi}$ is </p>
<p>$$(-1)^n \left(\frac{1}{2}\right)\left(\frac{3}{2}\right)\cdots \cdot \left(\frac{2n-1}{2}\right)t^{-1/2 - n}\sqrt{\pi},$$</p>
<p>which can be written</p>
<p>$$\frac{(-1)^n(1)(3)(5)\cdots (2n-1) t^{-1/2 - n}\sqrt{\pi}}{2^n}.$$</p>
<p>This is the same as </p>
<p>$$\frac{(-1)^n (1)(2)(3)\cdots (2n-1)(2n)t^{-1/2 - n}\sqrt{\pi}}{2^n\cdot 2^n n!} = \frac{(-1)^n(2n)!t^{-1/2 - n}\sqrt{\pi}}{4^n n!}.$$</p>
<p>So since </p>
<p>$$\frac{d^n}{dt^n} e^{-tx^2} = (-1)^n x^{2n}e^{-tx^2}$$</p>
<p>we have</p>
<p>$$\int_{-\infty}^\infty (-1)^n x^{2n}e^{-tx^2}\, dx = \frac{(-1)^n(2n)!t^{-1/2 - n}\sqrt{\pi}}{4^n n!}.$$</p>
<p>Cancelling the $(-1)^n$ on both sides, then substituting $t = 1$, we deduce</p>
<p>$$\int_{-\infty}^\infty x^{2n}e^{-x^2}\, dx = \frac{(2n)!\sqrt{\pi}}{4^nn!}.$$</p>
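<p>A quick numeric spot-check of the closed form (a midpoint-rule sketch; truncating at $|x| = 10$ is harmless for small $n$):</p>

```python
import math

def moment(n, steps=100000, L=10.0):
    # midpoint rule for the integral of x^{2n} e^{-x^2} over [-L, L]
    h = 2 * L / steps
    return h * sum(((-L + (i + 0.5) * h) ** (2 * n)) * math.exp(-(-L + (i + 0.5) * h) ** 2)
                   for i in range(steps))

for n in range(4):
    exact = math.factorial(2 * n) * math.sqrt(math.pi) / (4 ** n * math.factorial(n))
    print(n, moment(n), exact)  # the two columns agree
```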
|
889,155 | <blockquote>
<p>There are $2n-1$ slots/boxes in all and two objects say A and B; total number of A's are $n$ and total number of B's are $n-1$. (All A's are identical and all B's are identical.) In how many ways can we arrange A's and B's in $2n-1$ slots.</p>
</blockquote>
<p>My approach: there are $2n-1$ boxes in total and for A, $n$ have to be selected, so the number of ways to select $n$ A's is $C(2n-1,n)$, and they can be permuted in $n!/n!$ ways, i.e., $1$. And similarly for B, $C(n-1,n-1)$ and $(n-1)!/(n-1)!$ permutations in total.</p>
<p>So total $$C(2n-1,n) \times 1 \times C(n-1,n-1) \times 1=C(2n-1,n).$$</p>
<p>Please help i am stuck.</p>
| user140591 | 140,591 | <p>The problem can be reduced to this: How many ways can we arrange the n A's into 2n-1 positions?
This is because we can say that once all A's have been placed, the rest must be B's. In a given arrangement of A's, this is the only arrangement of that type since all B's are identical.</p>
<p>Hence the answer is $\binom{2n-1}{n} = \binom{2n-1}{n-1}$.</p>
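<p>A brute-force check for small $n$ (a sketch using <code>itertools</code>):</p>

```python
from itertools import permutations
from math import comb

# Count distinct words with n A's and n-1 B's in 2n-1 slots; compare with C(2n-1, n).
for n in range(1, 6):
    word = "A" * n + "B" * (n - 1)
    count = len(set(permutations(word)))
    print(n, count, comb(2 * n - 1, n))  # the two counts agree for each n
```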
|
889,155 | <blockquote>
<p>There are $2n-1$ slots/boxes in all and two objects say A and B; total number of A's are $n$ and total number of B's are $n-1$. (All A's are identical and all B's are identical.) In how many ways can we arrange A's and B's in $2n-1$ slots.</p>
</blockquote>
<p>My approach: there are $2n-1$ boxes in total and for A, $n$ have to be selected, so the number of ways to select $n$ A's is $C(2n-1,n)$, and they can be permuted in $n!/n!$ ways, i.e., $1$. And similarly for B, $C(n-1,n-1)$ and $(n-1)!/(n-1)!$ permutations in total.</p>
<p>So total $$C(2n-1,n) \times 1 \times C(n-1,n-1) \times 1=C(2n-1,n).$$</p>
<p>Please help i am stuck.</p>
| Adi Dani | 12,848 | <p>$$\frac{(n+(n-1))!}{n!(n-1)!}=\frac{(2n-1)!}{n!((2n-1)-n)!}=\binom{2n-1}{n}$$</p>
|
<p>Let $\{v_1, v_2,\dots,v_n\}$ be the standard basis for $\mathbb R^n$. Prove for any two $m\times n$ matrices that their linear transformations are equal if and only if the two matrices are equal. I know what is needed for two linear transformations to be equal (same basis, domain and codomain), but how do I show that?</p>
| Belgi | 21,335 | <p><strong>Hint:</strong> $T(v_{i})$ is encoded into the matrices</p>
|
4,200,602 | <p>Let <span class="math-container">$\alpha$</span> be a class <span class="math-container">$\mathcal{K}$</span> function defined on <span class="math-container">$[0,a)$</span>. Then
<span class="math-container">\begin{equation}
\alpha(r_1+r_2) \leq \alpha(2r_1) + \alpha(2r_2), \quad \forall r_1,\,r_2 \in [0,\,a/2).
\end{equation}</span></p>
<p><strong>Definition</strong> (class <span class="math-container">$\mathcal{K}$</span> function): A continuous function <span class="math-container">$\alpha: [0, \,a) \rightarrow [0,\,\infty)$</span> is a class <span class="math-container">$\mathcal{K}$</span> function if it is strictly increasing and <span class="math-container">$\alpha(0) = 0$</span>.</p>
<p>I have seen this result in at least two resources and a proof was not provided in either of them. The authors explained that it is a direct consequence of the increasing property of class <span class="math-container">$\mathcal{K}$</span> functions, but there is one particular case that is not so obvious to me.</p>
<p>Here's my thoughts on it:</p>
<ul>
<li>Equality holds when <span class="math-container">$r_1 = r_2 = 0$</span>.</li>
<li>When <span class="math-container">$r_1 = r_2 \neq 0$</span>, the result is true because of its increasing nature.</li>
<li>For <span class="math-container">$r_1 \neq r_2$</span>, we know that <span class="math-container">$\alpha(r_1) < \alpha(2 r_1)$</span> and <span class="math-container">$\alpha(r_2) < \alpha(2 r_2)$</span> because they are increasing. Given that <span class="math-container">$\alpha(0) = 0$</span>, if either <span class="math-container">$r_1$</span> or <span class="math-container">$r_2$</span> is <span class="math-container">$0$</span>, then the inequality also holds.</li>
<li>But for the case where <span class="math-container">$r_1 \neq r_2$</span> and both are not <span class="math-container">$0$</span>, can we say anything about the relationship between <span class="math-container">$\alpha(r_1)+\alpha(r_2)$</span> and <span class="math-container">$\alpha(r_1+r_2)$</span>? Or is there anything else to be applied for this last case?</li>
</ul>
| Martin R | 42,969 | <p>Without loss of generality assume that <span class="math-container">$0 \le r_1 \le r_2$</span>. Then
<span class="math-container">$$
\begin{align}
0 \le 2 r_1 &\implies 0 = \alpha(0) \le \alpha(2r_1) \\
r_1 + r_2 \le 2 r_2 &\implies \alpha(r_1 +r_2) \le \alpha(2 r_2)
\end{align}
$$</span>
because <span class="math-container">$\alpha$</span> is increasing. Adding these inequalities gives the desired estimate:
<span class="math-container">$$
\alpha(r_1 +r_2) \le \alpha(2r_1) + \alpha(2 r_2) \, .
$$</span></p>
<p>Since <span class="math-container">$\alpha$</span> is strictly increasing, equality holds only if <span class="math-container">$0 = r_1 = r_2$</span>.</p>
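<p>A quick spot-check of the bound (a sketch with the concrete class-<span class="math-container">$\mathcal{K}$</span> function <span class="math-container">$\alpha(r)=e^r-1$</span> on <span class="math-container">$[0,\infty)$</span>; the sampling range is an arbitrary choice):</p>

```python
import math
import random

random.seed(1)
alpha = lambda r: math.exp(r) - 1  # strictly increasing with alpha(0) = 0

for _ in range(1000):
    r1, r2 = random.uniform(0, 5), random.uniform(0, 5)
    assert alpha(r1 + r2) <= alpha(2 * r1) + alpha(2 * r2)
print("bound holds on all sampled pairs")
```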
|
200,876 | <p>Is there a topological space $(C,\tau_C)$ and two points $c_0\neq c_1\in C$ such that the following holds?</p>
<blockquote>
<blockquote>
<p>A space $(X,\tau)$ is connected if and only if for all $x,y\in X$ there is a continuous map $f:C\to X$ such that $f(c_0) = x$ and $f(c_1) = y$.</p>
</blockquote>
</blockquote>
<p>Is there also a Hausdorff space satisfying the above?</p>
| Goldstern | 14,915 | <p>No such space $C$ can exist. </p>
<p>We will derive a contradiction from the assumption that $C,c_0,c_1$ as desired exist.</p>
<p>Let κ be any cardinal greater than $|C|$. View $\kappa$ as an ordinal. For each β in κ add a copy of the unit interval between β and β+1, and add a point ∞ at the end. The resulting "Very Long Line" $L_\kappa$ is dense and complete, hence connected as a topological space. </p>
<p>As $L_\kappa$ is connected, there must be a continuous map $f$ from $C$ into $L_\kappa$ whose image $L'$ contains $0$ and $\infty$. Let $b\notin L'$. Then the map $h$ that sends everything below $b$ to $0$ and everything above $b$ to $1$ is continuous from $L'$ to the discrete space $\{0,1\}$. </p>
<p>So the map $h\circ f:C\to \{0,1\}$ witnesses that $\{0,1\}$ is connected, a contradiction. </p>
<p>(This is just a variant of Helene Sigloch's earlier argument.)</p>
|
72,537 | <blockquote>
<p>Let $A\in M_{n}$ have Jordan canonical form $J_{n_1}(\lambda_{1})\oplus\cdots\oplus J_{n_k}(\lambda_{k})$. If $A$ is non-singular ($\lambda_i\neq 0$), what is the Jordan canonical form of $A^{2}$?</p>
</blockquote>
<p>I can prove that if the eigenvalues of $A$ are $\sigma(A)=\{\lambda_{1},\dots, \lambda_{n} \}$ then $\sigma(A^{2})=\{\lambda_{1}^{2},\dots, \lambda_{n}^{2} \}$, for this reason I have been trying to attack this problem using this fact, but I am getting nowhere. How should I proceed?</p>
| Edison | 11,857 | <p>Thank you @Mariano. Intuitively I believe this makes sense, but I just want to go through some details.</p>
<p>Given the Jordan canonical form of a matrix $A$, I want to show that an arbitrary Jordan block of $A$ corresponding to the eigenvalue $\lambda$, $J_{k}(\lambda)$, gives rise to precisely one Jordan block $J_{k}(\lambda^{2})$ for $J_{k}(\lambda)^{2}$.</p>
<p>Let $J_{k}(\lambda)=\lambda I + N$, where $N=J_{k}(0)$. Then $J_{k}(\lambda)^{2}=\lambda^{2}I+2\lambda N +N^{2}=\lambda^{2}I+N'$.</p>
<p>Now consider
$$\mathrm{rank}(J_{k}(\lambda)-\lambda I)=\mathrm{rank}(N).$$ </p>
<p>By construction, $N$ is the matrix with all zero entries except for $1$'s on the superdiagonal, so $\mathrm{rank}(N^{i})=k-i$ for $i=1,2,...,k$. Initially the rank of $N$ is $k-1^{(*)}$, because the first column consists of all zeros while the remaining $k-1$ columns are distinct standard basis vectors, hence linearly independent. Each successive power of $N$ reduces the rank by $1$. Similarly, </p>
<p>$$\mathrm{rank}(J_{k}(\lambda)^{2}-\lambda^{2} I)=\mathrm{rank}(N')$$
and for similar reasons $\mathrm{rank}(N'^{i})=k-i$ for $i=1,2,...,k$.</p>
<p>Therefore $$\mathrm{rank}(J_{k}(\lambda)-\lambda I)^{i}=\mathrm{rank}(J_{k}^{2}(\lambda)-\lambda^{2} I)^{i}$$ for $i=1,2,...,k$.</p>
<p>In particular, $$\mathrm{rank}(J_{k}(\lambda)^{2}-\lambda^{2} I)^{0}=k$$ and $$\mathrm{rank}(J_{k}(\lambda)^{2}-\lambda^{2} I)^{1}=k-1.$$
This tells us that the Jordan canonical form of the single Jordan block $J_{k}(\lambda)^{2}$ is $J_{k}(\lambda^{2})$, since the number of Jordan blocks for the eigenvalue $\lambda^{2}$ is $k-\mathrm{rank}(J_{k}(\lambda)^{2}-\lambda^{2} I)=1$. (If $\mathrm{rank}(J_{k}(\lambda)^{2}-\lambda^{2} I)$ were less than $k-1$, then the Jordan canonical form for $J_{k}(\lambda)^{2}$ would contain more than one block.)</p>
<p>$^{(*)}$ Note: I glossed over the proof that $N=J_{k}(0)$ has rank $k-1$. I think to prove this I can argue that since $I_{k-1}$ has rank $k-1$, by appending first a $1\times (k-1)$ zero row to $I_{k-1}$ and then a $k\times 1$ zero column, the rank remains unchanged. And by the property of nilpotent matrices, successive powers reduce the rank by $1$.</p>
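The rank pattern above is small enough to check by machine. This is an illustration only (not part of the answer): the size $k=5$ and the nonzero eigenvalue $\lambda=2$ are arbitrary choices, and exact rational arithmetic avoids floating-point rank issues.

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, e):
    n = len(A)
    R = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    for _ in range(e):
        R = mat_mul(R, A)
    return R

def rank(M):
    # Gaussian elimination over the rationals
    M = [row[:] for row in M]
    n, r = len(M), 0
    for c in range(n):
        piv = next((i for i in range(r, n) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(n):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

k, lam = 5, Fraction(2)  # arbitrary size and nonzero eigenvalue
J = [[lam if i == j else Fraction(i + 1 == j) for j in range(k)] for i in range(k)]
I = [[Fraction(i == j) for j in range(k)] for i in range(k)]
N = [[J[i][j] - lam * I[i][j] for j in range(k)] for i in range(k)]   # J - lam I
J2 = mat_mul(J, J)
Np = [[J2[i][j] - lam**2 * I[i][j] for j in range(k)] for i in range(k)]  # J^2 - lam^2 I

for i in range(k + 1):
    assert rank(mat_pow(N, i)) == rank(mat_pow(Np, i)) == max(k - i, 0)
print("rank pattern matches: J_k(lam)^2 has the single Jordan block J_k(lam^2)")
```

The matching rank sequences $k, k-1, \dots, 1, 0$ for both $(J-\lambda I)^i$ and $(J^2-\lambda^2 I)^i$ are exactly the criterion used in the answer.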
|
4,350,699 | <blockquote>
<blockquote>
<p><span class="math-container">$r:$</span> All prime numbers are either even or odd. Is it a true statement?</p>
</blockquote>
</blockquote>
<p>I was studying mathematical logic when I came across the above question.
Since the connecting word here is "OR",
if I separate it into two statements, they become:</p>
<p><span class="math-container">$p:$</span> All the prime numbers are even</p>
<p><span class="math-container">$q:$</span> All the prime numbers are odd</p>
<p>Because both statements <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are false, the final statement <span class="math-container">$r$</span> must be false, by the truth table for the "OR" connective.</p>
<p>But my intuition says <span class="math-container">$r$</span> is true.</p>
<p>Am I thinking correctly? Please help me with this.</p>
| ryang | 21,813 | <p><span class="math-container">$✔\quad$</span> All numbers are either even or odd.</p>
<p><span class="math-container">$✗\quad$</span> Either all numbers are even, or all numbers are odd.</p>
<p>In general, <span class="math-container">$$∀x \;\Big(A(x)\text{ or }B(x)\Big)\quad\text{does not imply}\quad∀x A(x)\;\text{ or }\;∀x B(x);$$</span> however, <span class="math-container">$$∃x \;\Big(A(x)\text{ or }B(x)\Big)\quad\text{is equivalent to}\quad∃x A(x)\;\text{ or }\;∃x B(x),$$</span> and <span class="math-container">$$∀x \;\Big(A(x)\text{ and }B(x)\Big)\quad\text{is equivalent to}\quad∀x A(x)\;\text{ and }\;∀x B(x).$$</span></p>
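These claims can be brute-force checked on a small finite domain. Taking $A$ = "is even" and $B$ = "is odd" (my illustrative choice, mirroring the prime example) over $\{0,\dots,9\}$:

```python
domain = range(10)
A = lambda x: x % 2 == 0   # "x is even"
B = lambda x: x % 2 == 1   # "x is odd"

# forall x (A or B) holds ...
assert all(A(x) or B(x) for x in domain)
# ... yet neither (forall x A) nor (forall x B) does:
assert not (all(A(x) for x in domain) or all(B(x) for x in domain))

# the two stated equivalences, checked for this particular A and B:
assert any(A(x) or B(x) for x in domain) == \
       (any(A(x) for x in domain) or any(B(x) for x in domain))
assert all(A(x) and B(x) for x in domain) == \
       (all(A(x) for x in domain) and all(B(x) for x in domain))
print("checks pass")
```

The first two assertions are precisely the ✔/✗ pair at the top of the answer.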
|
82,254 | <p>Consider the standard form polyhedron, and assume that the rows of the matrix A are linearly independent.</p>
<p>$$ \left \{ x | Ax = b, x \geq 0 \right \} $$</p>
<p>(a) Suppose that two different bases lead to the same basic solution. Show that the basic solution is degenerate (has fewer than $m$ nonzero entries).</p>
<p>(b) Consider a degenerate basic solution. Is it true that it corresponds to two or more distinct bases? Prove or give a counterexample.</p>
<p>(c) Suppose that a basic solution is degenerate. Is it true that there exists an adjacent basic solution which is degenerate? Prove or give a counterexample.</p>
<p><strong>Solution</strong></p>
<p>(a) I think it's obvious, but how do I build the proof? Two different bases lead to the same basic solution when the last entering variable cannot be increased at all, because its $b$-value equals $0$; as a result we get the same basic solution. But how do I prove that?</p>
<p>(b) No: a degenerate basic solution can also correspond to just one basis. But how do I prove that?</p>
<p><strong>Addendum</strong></p>
<p>I found a great description of (a) and (b), but the level of this text is much higher than I can comprehend. I would appreciate it if someone could shed light on this explanation.</p>
<p>(a) Every basic feasible solution is equivalent to an extreme point. However, there may exist more than one basis corresponding to the same basic feasible solution or extreme point. The case of degeneracy corresponds to that of an extreme point at which some $r > p \equiv n-m$ defining hyperplanes from $x\geq 0$ are binding. Hence, for any associated basis, $(r-p)$ of the $X_{B}$-variables are also zero. Consequently, the number of positive variables is $q = m-(r-p)<m$. In this case, each possible choice of a basis $B$ that includes the columns of these $q$ positive variables represents this point. Clearly, if there exists more than one basis representing an extreme point, then this extreme point is degenerate.</p>
<p>(b) Consider the example
$$x_{1} + x_{2} + x_{3} = 1$$</p>
<p>$$-x_{1} + x_{2} + x_{3} = 1$$</p>
<p>$$x_{1}, x_{2}, x_{3} \geq 0$$</p>
<p>Consider the solution $\bar{x}=(0,1,0)$. Observe that this is an extreme point or a basic feasible solution with a corresponding basis having $x_{1}$ and $x_{2}$ as basic variables. Moreover, this is a degenerate extreme point. There are four defining hyperplanes binding at $\bar{x}$. Moreover, there are three ways of choosing three linearly independent hyperplanes from this set that yield $\bar{x}$ as the (unique) solution. However, the basis associated with $\bar{x}$ is unique.
Consider a degenerate basic variable ${x_{B}}_{r}$ (with $\bar{b}_{r}=0$), which is such that $Ax=b$ does not necessarily imply that ${x_{B}}_{r}=0$. Given that such a variable exists, we will construct another basis representing this point. Let $x_{k}$ be some component of $x_{N}$ that has a nonzero coefficient $\theta_{r}$ in the row corresponding to ${x_{B}}_r$. Note that $x_{k}$ exists. Then consider a new choice of $(n-m)$ nonbasic variables given by ${x_{B}}_{r}$ and $x_{N-k}$, where $x_{N-k}$ represents the components of $x_{N}$ other than $x_{k}$. Putting ${x_{B}}_{r}= 0$ and $x_{N-k}=0$ above uniquely gives $x_{k}=\frac{\bar{b}_{r}}{\theta_{r}}=0$ from row $r$, and so ${x_{B}}_{i} = \bar{b}_{i}$ is obtained as before from the other rows. Hence, this corresponds to an alternative basis that represents the same extreme point. Finally, note that if no degenerate basic variable ${x_{B}}_{r}$ of this type exists, then there is only one basis that represents this extreme point.</p>
| Mike Spivey | 2,370 | <p>(Most of this was written before the recent addendum. It addresses the OP's original question, not the addendum.)</p>
<p>(a) Suppose we have distinct bases $B_1$ and $B_2$ that each yield the same basic solution ${\bf x}$. Now, suppose (we're looking for a contradiction) that ${\bf x}$ is nondegenerate; i.e., every one of the $m$ variables in ${\bf x}$ is nonzero. Thus every one of the $m$ variables in $B_1$ is nonzero, and every one of the $m$ variables in $B_2$ is nonzero. Since $B_1$ and $B_2$ are distinct, there is at least one variable in $B_1$ not in $B_2$. But this yields at least $m+1$ nonzero variables in ${\bf x}$, which is a contradiction. Thus ${\bf x}$ must be degenerate.</p>
<p>(b) No. The counterexample linked to by the OP involves the system $$
\begin{align}
x_1 + x_2 + x_3 = 1, \\
-x_1 + x_2 + x_3 = 1, \\
x_1, x_2, x_3 \geq 0.
\end{align}$$<br>
There are three potential bases in this system: $B_1 = \{x_1, x_2\}$, $B_2 = \{x_1, x_3\}$, $B_3 = \{x_2, x_3\}$. However, $B_3$ <em>can't actually be a basis because the corresponding matrix $\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$ isn't invertible</em>. $B_1$ yields the basic solution $(0,1,0)$, and $B_2$ yields the basic solution $(0,0,1)$. Both of these are degenerate, but there is only one basis corresponding to each. </p>
<p>(c) No. Look at the system
$$
\begin{align}
x_1 + x_2 = 1, \\
x_2 + x_3 = 1, \\
x_1, x_2, x_3 \geq 0.
\end{align}
$$
The basic solution $(0,1,0)$ corresponds to bases $\{x_1, x_2\}$ and $\{x_2, x_3\}$. The only other basis is $\{x_1, x_3\}$, which implies that the only other basic solution is $(1,0,1)$. Thus the degenerate basic solution $(0,1,0)$ is not adjacent to another degenerate basic solution.</p>
<p><HR>
(<em>More on part (a), addressing OP's questions in the comments</em>.)</p>
<p>Say there are $n$ total variables in the problem: $x_1, x_2, \ldots, x_n$. Every basis $B$ consists of some $m$ of these variables. The basic solution ${\bf x}$ corresponding to a given basis $B$ has the other $n-m$ variables equal to $0$. (Setting these to $0$ is partly how you determine the value of ${\bf x}$; see, for instance, the examples above). If ${\bf x}$ is degenerate it might have some of the variables in $B$ equal to $0$, too, but the point in terms of the argument is that ${\bf x}$ can have no more than $m$ nonzero variables. </p>
<p>Now, suppose $B_1$ and $B_2$ are distinct and each have $m$ nonzero variables, yet both correspond to ${\bf x}$. Let's say $B_2 = \{x_1, x_2, \ldots, x_m\}$. Since $B_1$ and $B_2$ are distinct, $B_1$ has at least one variable that's not in $B_2$. Let's say this variable is $x_{m+1}$. But since every variable in $B_1$ and $B_2$ is nonzero, that means that $x_1, x_2, \ldots, x_m, x_{m+1}$ are all nonzero. However, $B_1$ and $B_2$ both correspond to ${\bf x}$, which means that there are at least $m+1$ nonzero variables in ${\bf x}$. That cannot happen for a basic solution, and so we have a contradiction. </p>
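None of this is in the answer above, but the part-(c) counterexample is small enough to enumerate by machine: try every pair of columns as a candidate basis, solve the resulting $2\times 2$ system exactly by Cramer's rule, and flag degeneracy.

```python
from fractions import Fraction
from itertools import combinations

# system from part (c): x1 + x2 = 1, x2 + x3 = 1
A = [[Fraction(1), Fraction(1), Fraction(0)],
     [Fraction(0), Fraction(1), Fraction(1)]]
b = [Fraction(1), Fraction(1)]

solutions = {}
for cols in combinations(range(3), 2):
    (p, q), (r, s) = ([A[i][j] for j in cols] for i in range(2))
    det = p * s - q * r
    if det == 0:
        continue  # singular: these columns do not form a basis
    xB = [(s * b[0] - q * b[1]) / det, (p * b[1] - r * b[0]) / det]  # Cramer's rule
    x = [Fraction(0)] * 3         # nonbasic variables are set to zero
    for j, v in zip(cols, xB):
        x[j] = v
    solutions[cols] = tuple(x)
    tag = "degenerate" if sum(v != 0 for v in x) < 2 else "nondegenerate"
    print(f"basis {cols}: x = ({', '.join(map(str, x))}) ({tag})")
```

The run reproduces the text: bases $\{x_1,x_2\}$ and $\{x_2,x_3\}$ both give the degenerate solution $(0,1,0)$, while $\{x_1,x_3\}$ gives the nondegenerate $(1,0,1)$.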
|
2,166,897 | <blockquote>
<p>Let X be a complex Banach space. Let <span class="math-container">$T\in B(X)$</span> be a bounded linear operator on <span class="math-container">$X$</span>. Let <span class="math-container">$T^*\in B(X^*)$</span> be the adjoint of <span class="math-container">$T$</span>.</p>
<p>Prove: If <span class="math-container">$T^*$</span> is invertible, then for all elements <span class="math-container">$x\in X$</span>,
<span class="math-container">$$ \|Tx \| \geq \| (T^*)^{-1}\|^{-1}\| x \|$$</span>
and use the inequality to prove that <span class="math-container">$T$</span> is invertible</p>
</blockquote>
| Guy Fsone | 385,707 | <p>Obviously we have
$$\begin{split}
\|x\| &= \sup\{|\langle x,y\rangle| \,;\, \|y\|=1\}\\
&= \sup\{|\langle T^{-1}Tx,y\rangle| \,;\, \|y\|=1\}\\
&= \sup\{|\langle Tx,(T^{-1})^*y\rangle| \,;\, \|y\|=1\}\\
&\le \|Tx\|\sup\{\|(T^{-1})^*y\| \,;\, \|y\|=1\}\\
&= \|Tx\|\,\|(T^{-1})^*\|,
\end{split}$$
that is, $$\|x\|\,\|(T^{-1})^*\|^{-1}\le \|Tx\|.$$
Since $(T^{-1})^* = (T^*)^{-1}$, this is exactly the claimed inequality.</p>
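A finite-dimensional sanity check, with the assumptions made explicit: we work on $\mathbb{R}^2$ with the Euclidean norm, where the adjoint is the transpose and $\|(T^{-1})^*\|=\|T^{-1}\|$; since the operator norm is bounded by the Frobenius norm, $1/\|T^{-1}\|_F$ is a valid (weaker) constant, so the inequality below must still hold. The matrix $T$ is an arbitrary invertible choice.

```python
import math
import random

def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def frob(M):
    # Frobenius norm: an upper bound for the operator 2-norm
    return math.sqrt(sum(a * a for row in M for a in row))

T = [[2.0, 1.0], [1.0, 3.0]]        # invertible, det = 5
Tinv = [[0.6, -0.2], [-0.2, 0.4]]   # its inverse, (1/5) * [[3, -1], [-1, 2]]

# ||(T^-1)^*||_2 = ||T^-1||_2 <= ||T^-1||_F, so this constant is safe:
c = 1.0 / frob(Tinv)

random.seed(1)
for _ in range(1000):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    nx = math.sqrt(sum(t * t for t in x))
    nTx = math.sqrt(sum(t * t for t in mat_vec(T, x)))
    assert nTx >= c * nx - 1e-12    # ||Tx|| >= ||x|| / ||(T^-1)^*||
print("estimate holds on all 1000 random samples")
```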
|
1,708,996 | <p>If $x = a( \theta +\sin \theta)$ and $y = a(1-\cos \theta)$ then $\frac{dy}{dx}$ will be equal to : </p>
<p>$a) \sin \frac{\theta}{2}$</p>
<p>$b) \cos \frac{\theta}{2}$</p>
<p>$c) \tan \frac{\theta}{2}$</p>
<p>$d) \cot \frac{\theta}{2}$</p>
<p>I have solved up to $\frac{dy}{dx} = \frac{\sin \theta}{1 + \cos \theta}$, using $\frac{dy}{dx} = \frac{dy}{d \theta} \cdot \frac{d \theta}{dx}$.</p>
<p>How do I reduce this to one of the options' forms?</p>
| Rayees Ahmad | 249,254 | <p>$$\dfrac{\sin a}{1+\cos a}=\frac{2\sin\frac{a}{2}\cos\frac{a}{2}}{2\cos^2\frac{a}{2}}=\tan\frac{a}{2}$$</p>
<p>So $\tan\frac{a}{2}$ is the solution, using $\sin a = 2\sin\frac{a}{2}\cos\frac{a}{2}$ and $1+\cos a = 2\cos^2\frac{a}{2}$.</p>
<p>(A rotation of a cycloid?)</p>
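A numerical cross-check, independent of the algebra: differentiate the parametrization $x=a(\theta+\sin\theta)$, $y=a(1-\cos\theta)$ by central differences and compare with $\tan(\theta/2)$. The scale $a=2$ and the sample points are arbitrary choices.

```python
import math

a = 2.0
h = 1e-6
x = lambda s: a * (s + math.sin(s))
y = lambda s: a * (1 - math.cos(s))

for t in [0.3, 1.0, 2.0]:
    # central-difference approximation of dy/dx along the curve
    slope = (y(t + h) - y(t - h)) / (x(t + h) - x(t - h))
    assert abs(slope - math.tan(t / 2)) < 1e-6
print("numerical dy/dx matches tan(theta/2)")
```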
|
2,553,175 | <p>How can I verify that
$$1-2\sin^2x=2\cos^2x-1$$
is true for all $x$?</p>
<p>It can be proved through a couple of messy steps using the fact that $\sin^2x+\cos^2x=1$, solving for one of the trigonometric functions and then substituting, but the way I did it gets very messy very quickly and you end up with a bunch of factoring, etc.</p>
<p>What's the simplest way to solve this?</p>
| D.R. | 405,572 | <p>Given $$\sin^2(x)+\cos^2(x)=1$$
we rearrange to get
$$\sin^2(x)=1-\cos^2(x)$$
Substituting:
$$1-2\sin^2(x)=1-2(1-\cos^2(x))=2\cos^2(x)-2+1=2\cos^2(x)-1$$
Perhaps not as messy as you imagined.</p>
|
2,381,406 | <p>Somewhere I saw that </p>
<blockquote>
<p>To show that $x^2-y^3$ is irreducible in $k[x,y]$ it suffices to show that $x^2-y^3$ is irreducible in $k(y)[x]$.</p>
</blockquote>
<p>My question is what is the relation between $k[x,y]$ and $k(y)[x]$ ?</p>
<p>Also there is a confusion that if $k(y)$ is the smallest field containing $y$ and $k$ (by definition) then what will be the inverse of $y?$ Is it $1/y$ ?</p>
| Noah Schweber | 28,111 | <p>A space is first-countable if for each point $x$ there is a <em>single</em> sequence of neighborhoods such that <em>every</em> neighborhood of $x$ contains some neighborhood in the sequence. Although for each specific neighborhood $V$ we can easily find a sequence with some element contained in $V$, we can't necessarily find a single sequence which works for every such neighborhood. (This is similar to the reason why the reals are uncountable, even though any <em>specific</em> real can be put on a list.)</p>
<p>This just shows why first-countability is not obviously trivial. It turns out that it is <em>really</em> nontrivial: there are lots of non-first-countable spaces. For example, consider the <a href="https://en.wikipedia.org/wiki/Order_topology#Ordinals_as_topological_spaces" rel="nofollow noreferrer">order topology on $\omega_1+1$</a>. Every open neighborhood of $\omega_1$ here contains a neighborhood of the form $(\alpha, \omega_1]$ for $\alpha$ a countable ordinal. But the supremum of countably many countable ordinals is countable, so given any sequence $U_i$ ($i\in\mathbb{N}$) of neighborhoods of $\omega_1$, each $U_i$ contains a neighborhood of the form $(\alpha_i, \omega_1]$, and $\beta=\sup_i \alpha_i$ is a countable ordinal. But then the neighborhood $(\beta+1, \omega_1]$ doesn't contain any of the neighborhoods $U_i$.</p>
<hr>
<p>EDIT: here's a much better example, not requiring ordinals at all: the <a href="https://en.wikipedia.org/wiki/Cocountable_topology" rel="nofollow noreferrer"><em>cocountable topology</em></a> on any uncountable set $S$. For any sequence of neighborhoods $(U_i)_{i\in\mathbb{N}}$ of a point $x$, the intersection $\bigcap U_i$ is again cocountable, and hence uncountable (since $S$ is uncountable). Pick $s\in \bigcap U_i$, $s\not=x$; then $(\bigcap U_i)\setminus\{s\}$ is a neighborhood of $x$ not containing any of the $U_i$s.</p>
<hr>
<p>Interestingly, first countability is <strong>nontrivially nontrivial</strong>: showing that a space is not first countable generally requires using a principle of the form "the union of countably many "small" sets is "small,"" and such principles are generally not true without some amount of the axiom of choice. For instance, it is consistent with ZF that <em>every infinite set is a countable union of sets of strictly smaller cardinality</em>! In fact, I don't offhand know of an example of a space which is provably non-first-countable <em>in ZF alone</em> (and I've now <a href="https://math.stackexchange.com/questions/2381554/universal-first-countability-in-zf">asked a question about this</a>).</p>
<p>(Incidentally, there are some neat examples if choice fails sufficiently badly - e.g. the cofinite topology on any <a href="https://en.wikipedia.org/wiki/Amorphous_set" rel="nofollow noreferrer">amorphous set</a>, if such a set exists.)</p>
|
3,816,041 | <blockquote>
<p>How many ways <span class="math-container">$5$</span> identical green balls and <span class="math-container">$6$</span> identical red balls can be arranged into <span class="math-container">$3$</span> distinct boxes such that no box is empty?</p>
</blockquote>
<p>My attempt :</p>
<p>Finding the coefficient of <span class="math-container">$x^{11}$</span> in the expansion of <span class="math-container">$$( x + x^2 + x^3 + x^4 + x^5+x^6 )^3 ( x + x^2 + x^3 + x^4 + x^5 )^3$$</span> and then arranging the balls, which turned out to be wrong when I checked it.</p>
<p>Please help me out</p>
| nguyen quang do | 300,700 | <p>Your problem can be nicely generalized as follows:
Suppose that <em>p</em>, <em>q</em> are two distinct primes such that the class of <em>p</em> mod <em>q</em> is a generator of the multiplicative group <span class="math-container">$\mathbf F_q^{\times}$</span>; in other words, for any <span class="math-container">$d < q-1$</span>, <span class="math-container">$q$</span> does not divide <span class="math-container">$p^d - 1$</span>. Then <span class="math-container">$f_q(X):= (X^q - 1)/(X - 1)$</span> is irreducible over <span class="math-container">$\mathbf F_p$</span>. In your case, <span class="math-container">$q=5$</span> and it is immediately checked that the hypothesis
is equivalent to <span class="math-container">$p \neq \pm 1$</span> mod <span class="math-container">$5$</span>.</p>
<p><em>Proof</em>: It is a simple application of the Galois theory of finite fields. Let <span class="math-container">$\zeta_q$</span> be a primitive <span class="math-container">$q$</span>-th root of unity (in an algebraic closure of <span class="math-container">$\mathbf F_p$</span>). The extension <span class="math-container">$E=\mathbf F_p(\zeta_q)$</span>, the splitting field of <span class="math-container">$f_q(X)$</span> over <span class="math-container">$\mathbf F_p$</span>, has the form <span class="math-container">$\mathbf F_{p^d}$</span>, where <span class="math-container">$d$</span> is the degree of the minimal polynomial of <span class="math-container">$\zeta_q$</span> over <span class="math-container">$\mathbf F_p$</span>, in particular <span class="math-container">$d \le (q-1)$</span>. Besides, <span class="math-container">$E^{\times}$</span> is cyclic of order <span class="math-container">$p^d-1$</span> and contains the element <span class="math-container">$\zeta_q$</span> of order <span class="math-container">$q$</span>, hence <span class="math-container">$q$</span> divides <span class="math-container">$(p^d - 1)$</span>; by our hypothesis, it follows that <span class="math-container">$d \ge q-1$</span>, so finally <span class="math-container">$d=q-1$</span> and <span class="math-container">$f_q(X)$</span> is irreducible.</p>
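Not part of the proof, but the criterion is small enough to verify by machine for $q=5$, where $f_5(X)=(X^5-1)/(X-1)=X^4+X^3+X^2+X+1$. The helper `has_factor` below is my own brute-force trial division: a degree-4 polynomial is irreducible over $\mathbf F_p$ iff it has no monic divisor of degree 1 or 2. Here $p=2$ is a generator mod 5, while $p=19\equiv -1$ mod 5 is not.

```python
from itertools import product

def has_factor(f, p, d):
    """Does the monic polynomial f (coefficients listed low -> high)
    have a monic divisor of degree d over F_p?  Brute-force trial division."""
    for tail in product(range(p), repeat=d):
        g = list(tail) + [1]                # monic candidate of degree d
        r = [c % p for c in f]
        while len(r) >= len(g):             # polynomial long division mod p
            c, shift = r[-1], len(r) - len(g)
            for i, gi in enumerate(g):
                r[shift + i] = (r[shift + i] - c * gi) % p
            while len(r) > 1 and r[-1] == 0:
                r.pop()                     # strip the cancelled leading term
        if all(a == 0 for a in r):
            return True                     # zero remainder: g divides f
    return False

phi5 = [1, 1, 1, 1, 1]                      # x^4 + x^3 + x^2 + x + 1

assert not has_factor(phi5, 2, 1) and not has_factor(phi5, 2, 2)  # irreducible over F_2
assert has_factor(phi5, 19, 2)                                    # reducible over F_19
print("irreducible over F_2, reducible over F_19, as the criterion predicts")
```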
|
1,968 | <p>We're evaluating the feasibility of <strong>sponsoring a member of the math community to speak at a conference in 2011</strong>.</p>
<p>Speaking is a relatively big "ask", so this needs to be planned many months in advance. Let's get started! </p>
<p>We'd like the community to establish <strong>where</strong> ...</p>
<blockquote>
<p>What relevant math conferences are coming up in 2011 that have open speaker slots or calls for papers?</p>
</blockquote>
<p>... and then <strong>who</strong>.</p>
<blockquote>
<p>Which members of the community are strongly interested in being sponsored by Stack Exchange, Inc to speak at one of the above conferences in 2011?</p>
</blockquote>
<p>To be clear, the speaker is free talk about anything he or she wants so long as it would be roughly on topic for this site -- with a quick acknowledgement of support from Stack Exchange and a mention of the community here.</p>
| Willie Wong | 1,543 | <p>Well, the obvious suggestion for visibility is <a href="http://jointmathematicsmeetings.org/jmm" rel="nofollow">the annual joint maths meetings</a> of the American Mathematical Society, the Mathematical Association of America, and the Society for Industrial and Applied Mathematics; it is held every year in early January, so the next one is January of 2012 (I know you said 2011, but academics like to operate on the academic calendar...)</p>
<p>If you are willing to sponsor somebody, you may consider contacting directly the <a href="http://www.maa.org/" rel="nofollow">MAA</a> or the <a href="http://www.ams.org" rel="nofollow">AMS</a> for more than just a contributed report. I don't think the slots for open speakers and contributed papers (which are generally 10 - 15 minutes, sometimes shorter) have enough visibility for it to be worth your while. (This is, however, generally the case in larger, general audience mathematics conferences, that <strong>invited</strong> talks get 20 minutes to an hour, while contributed talks much shorter.)</p>
<p>An alternative to sponsoring just one speaker, and getting some publicity, is to try to sponsor a special event. Perhaps a "Math.SE meet and greet"? </p>
<hr>
<p>In general, I think given the scope and stated purpose for Math.SE, the best conferences would be those <a href="http://www.maa.org/subpage_4.html" rel="nofollow">run by the MAA</a>. </p>
|
3,301,696 | <p>Prove that if <span class="math-container">$R$</span> is a non-commutative ring with <span class="math-container">$1$</span> and if <span class="math-container">$a,b \in R$</span> and if <span class="math-container">$ab =1 $</span> but <span class="math-container">$ba \neq 1$</span> then <span class="math-container">$R$</span> is infinite</p>
<p>I'm not sure about this one.</p>
<p>I noticed that <span class="math-container">$ba$</span> is an idempotent, and thus <span class="math-container">$(1-ba)$</span> is idempotent... So far I have 5 elements, wooh! Only need <span class="math-container">$\infty$</span> more! Haha, but seriously, I'm pretty confused; insight appreciated!</p>
| Jens Hemelaer | 81,217 | <p>Hint: show that if <span class="math-container">$R$</span> is finite, then <span class="math-container">$a^n = 1$</span> for some natural number <span class="math-container">$n$</span>. Now compute <span class="math-container">$b$</span>.</p>
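As a concrete illustration that the hypothesis is satisfiable (this is the standard shift-operator example, not part of the hint): among the linear maps of an infinite sequence space, take $a$ = left shift and $b$ = right shift. Then $ab=1$ but $ba\neq 1$, and the construction visibly needs infinitely many coordinates. A minimal sketch, with sequences modeled as functions on $\mathbb N$:

```python
# model a sequence as a function from the naturals to numbers
def a(f):                       # "left shift": drops the first term
    return lambda n: f(n + 1)

def b(f):                       # "right shift": prepends a zero
    return lambda n: f(n - 1) if n >= 1 else 0

seq = lambda n: n + 1           # the sequence 1, 2, 3, ...

ab = a(b(seq))                  # a after b acts as the identity
ba = b(a(seq))                  # b after a kills the first term

assert [ab(n) for n in range(5)] == [1, 2, 3, 4, 5]
assert [ba(n) for n in range(5)] == [0, 2, 3, 4, 5]   # != seq
print("a*b = id, but b*a != id")
```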
|
<p>I would like to know if there is a formula to calculate the sum of the series of square roots $\sqrt{1} + \sqrt{2}+\dotsb+ \sqrt{n}$, like the one for the series $1 + 2 +\ldots+ n = \frac{n(n+1)}{2}$.</p>
<p>Thanks in advance.</p>
| Allan Henriques | 666,324 | <p><a href="https://i.stack.imgur.com/1YbEs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1YbEs.png" alt="enter image description here" /></a></p>
<p>Hopefully you can figure it out using this sketch.</p>
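The sketch itself doesn't survive in text form; assuming it depicts the usual comparison of the sum with the area under $y=\sqrt{x}$ (my assumption, not confirmed by the answer), the resulting integral bounds can be illustrated numerically: there is no closed form as simple as $n(n+1)/2$, but the sum is squeezed between $\tfrac23 n^{3/2}$ and $\tfrac23\bigl((n+1)^{3/2}-1\bigr)$.

```python
import math

for n in [10, 100, 1000]:
    s = sum(math.sqrt(i) for i in range(1, n + 1))
    lower = (2 / 3) * n ** 1.5               # integral of sqrt(x) over [0, n]
    upper = (2 / 3) * ((n + 1) ** 1.5 - 1)   # integral of sqrt(x) over [1, n+1]
    assert lower <= s <= upper
    print(f"n={n}: sum={s:.3f} in ({lower:.3f}, {upper:.3f})")
```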
|