qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
222,093 | <p>For what value of <span class="math-container">$m$</span> does the equation <span class="math-container">$y^2 = x^3 + m$</span> have no integral solutions?</p>
| PAD | 27,304 | <p>Here is the solution in Ireland and Rosen (page 270) for the case $m = 7$. </p>
<p>Suppose the equation $y^2 = x^3 + 7$ has a solution. Then $x$ is odd, for otherwise reduction modulo 4 would imply that 3 is a square modulo 4. Write the equation as
$$y^2+1=(x+2)(x^2-2x+4)=(x+2) ((x-1)^2+3) \ . \ \ \ (*)$$
Now since $(x-1)^2 +3$ is of the form $4n+3$ there is a prime $p$ of the form $4n+3$ dividing it and reduction of $(*)$ modulo $p$ implies that $-1$ is a square modulo $p$ which is a contradiction. </p>
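<p>A quick brute-force search (my own sketch, not part of Ireland and Rosen's argument; the search range is an arbitrary choice) is consistent with the theorem: $y^2 = x^3 + 7$ has no solutions with $|x| \le 1000$:</p>

```python
from math import isqrt

# search for integral solutions of y^2 = x^3 + 7 in a modest range of x
solutions = []
for x in range(-1000, 1001):
    rhs = x**3 + 7
    if rhs >= 0 and isqrt(rhs)**2 == rhs:
        solutions.append((x, isqrt(rhs)))

assert solutions == []  # none found, consistent with the theorem
```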
|
688,742 | <p>Given $P\colon\mathbb{R} \to \mathbb{R}$, an injective (one-to-one) polynomial function, I need to formally prove that $P$ is onto $\mathbb{R}$.</p>
<p>My strategy so far: a polynomial function is continuous, and since it is one-to-one it must be strictly monotonic. Now I have no idea what to do.</p>
<p>There is a theorem saying all continuous and monotonic functions have an inverse function, and another theorem saying a function has an inverse if and only if it is one-to-one and onto.</p>
<p>But for a formal proof of the onto property, I think I need to show that every element in the co-domain/target set has a "source" in the domain.</p>
<p>I don't think it's useful for this proof, but only polynomials of odd degree have the one-to-one property. </p>
| Community | -1 | <p>If we <em>continuously</em> extend the real exponential to have values $e^{+\infty} = +\infty$ and $e^{-\infty} = 0$, then this can be seen as taking the limit of a continuous function, which can be done by simply plugging in the limiting value:</p>
<p>$$\lim_{x\to +\infty} \frac{11 - e^{-x}}{7} = \frac{11 - e^{-(+\infty)}}{7}
= \frac{11 - e^{-\infty}}{7} = \frac{11 - 0}{7} = \frac{11}{7} $$</p>
|
688,742 | <p>Given $P\colon\mathbb{R} \to \mathbb{R}$, an injective (one-to-one) polynomial function, I need to formally prove that $P$ is onto $\mathbb{R}$.</p>
<p>My strategy so far: a polynomial function is continuous, and since it is one-to-one it must be strictly monotonic. Now I have no idea what to do.</p>
<p>There is a theorem saying all continuous and monotonic functions have an inverse function, and another theorem saying a function has an inverse if and only if it is one-to-one and onto.</p>
<p>But for a formal proof of the onto property, I think I need to show that every element in the co-domain/target set has a "source" in the domain.</p>
<p>I don't think it's useful for this proof, but only polynomials of odd degree have the one-to-one property. </p>
| Ilmari Karonen | 9,602 | <p><a href="https://math.stackexchange.com/a/688730">As EPAstor notes</a>, your reason (A) is closer to the truth, but it's not the <em>complete</em> answer.</p>
<p>The reason why $\displaystyle \lim_{x \to \infty} e^{-x} = 0$ implies $\displaystyle \lim_{x \to \infty} \frac{11 - e^{-x}}{7} = \frac{11}{7}$ is that</p>
<p>$$\lim_{x \to \infty} \frac{11 - e^{-x}}{7}
= \frac{\lim_{x \to \infty} \left(11 - e^{-x}\right)}{7}
= \frac{11 - (\lim_{x \to \infty} e^{-x})}{7}
= \frac{11 - 0}{7}
= \frac{11}{7}$$</p>
<p>and the reason we can move the limit inside the fraction like that is that the functions
$$z \mapsto 11 - z \quad \text{and} \quad z \mapsto \frac z 7$$
are both everywhere continuous. It is a rather straightforward consequence of the <a href="//en.wikipedia.org/wiki/Continuous_function#Definition_in_terms_of_limits_of_functions" rel="nofollow noreferrer">definition of continuity</a> (prove it!) that, if a function $f$ is continuous at $c = \displaystyle\lim_{x \to a} z(x)$, then
$$\lim_{x \to a} f(z(x)) = f(\lim_{x \to a} z(x)) = f(c).$$</p>
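<p>A quick numeric illustration (my own sketch): evaluating the function at large $x$ shows it settling at $11/7$, exactly as the continuity argument predicts:</p>

```python
from math import exp

def g(x):
    # the function whose limit is computed above
    return (11 - exp(-x)) / 7

# as x grows, e^{-x} -> 0 and g(x) -> 11/7
assert abs(g(20) - 11/7) < 1e-8
assert abs(g(50) - 11/7) < 1e-15
```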
|
1,946,438 | <p>I solved the equation $e^{e^z}=1$ and it seemed too easy, so I suspect I must be missing something.</p>
<blockquote>
<p>Would someone please check my answer?</p>
</blockquote>
<p>My original answer:</p>
<p>$e^{e^z}=1$ if and only if $e^z = 2\pi i k$ for $k\in \mathbb Z$ if and only if $z=\ln(2\pi i k)$ for $k\in \mathbb Z$.</p>
<p><strong>Edit</strong></p>
<p>After reading the comments and answers I tried to do it again. Unfortunately, I still do not get the same result as in the answers.</p>
<p><strong>My second attempt:</strong></p>
<p>We have </p>
<p>$$ e^x = 1 \iff x = 2 \pi i k$$</p>
<p>hence </p>
<p>$$ e^z = 2 \pi i k$$</p>
<p>for some $k$ in $\mathbb Z$. </p>
<p>Letting $e^z = e^x (\cos y + i \sin y)$ we get </p>
<p>$$ e^x \cos y + i e^x \sin y = 2 \pi k i$$</p>
<p>which implies that $\cos y = 0$ which happens if and only if $y_j = {\pi \over 2} + \pi j$ where $j\in \mathbb Z$. At $y_j$ we have
$\sin y = \pm 1$ hence if $j$ is even</p>
<p>$$ e^x = 2 \pi i k$$</p>
<p>and if $j$ is odd </p>
<p>$$ e^x = -2 \pi i k$$</p>
<p>Hence if $j$ is even,</p>
<p>$$ x = {\pi \over 2} + \ln(2 \pi k)$$</p>
<p>and if $j$ is odd,</p>
<p>$$ x = {3\pi \over 2} + \ln(2 \pi k)$$</p>
<p>So we see that the solutions are</p>
<p>$$
z_{t,k}=\begin{cases}
{\pi \over 2} + \ln(2 \pi k) + i ({\pi \over 2} + 2t \pi )\\
{3\pi \over 2} + \ln(2 \pi k) + i({\pi \over 2} + (2t+1) \pi )
\end{cases}
$$
for $k,t \in \mathbb Z$.</p>
<blockquote>
<p>What am I doing wrong?</p>
</blockquote>
| BCLC | 140,308 | <p>I believe you are making too many jumps.</p>
<p>Consider that in Approach 1 you said</p>
<blockquote>
<ol>
<li><p><span class="math-container">$e^{e^z}=1$</span> if and only if <span class="math-container">$e^z = 2\pi i k$</span></p>
</li>
<li><p><span class="math-container">$e^z = 2\pi i k$</span> for <span class="math-container">$k\in \mathbb Z$</span> if and only if <span class="math-container">$z=\ln(2\pi i k)$</span> for <span class="math-container">$k\in \mathbb Z$</span></p>
</li>
</ol>
</blockquote>
<p>and in Approach 2, you said</p>
<blockquote>
<p><span class="math-container">$$ e^x = 1 \iff x = 2 \pi i k$$</span></p>
<p>hence</p>
<p><span class="math-container">$$ e^z = 2 \pi i k$$</span></p>
</blockquote>
<hr />
<p>Of course we see now that <a href="https://math.stackexchange.com/a/1946466/140308">(2) is wrong</a>. While (1) is right (Or not?), (1) seems like <a href="http://hitchhikers.wikia.com/wiki/God#Final_Proof_of_the_Non-Existence_of_God" rel="nofollow noreferrer">cheating death (or cheating vanishment-in-a-puff-of-logic)</a>. You're assuming <span class="math-container">$e^{iy}$</span> and <span class="math-container">$e^z$</span> behave too similarly, if not outright the same, to <span class="math-container">$e^x$</span>. I suggest you expand <span class="math-container">$e^{e^z}$</span> by definition instead of making jumps as to which term should equal some multiple of <span class="math-container">$\pi$</span>.</p>
<p>Let's start with the <span class="math-container">$z$</span>:</p>
<p><span class="math-container">$$e^{e^z}= e^{e^{x+iy}}$$</span></p>
<p>and the inner <span class="math-container">$e^z = e^{x+iy}$</span>:</p>
<p><span class="math-container">$$e^{e^{x+iy}}=e^{e^{x}e^{iy}}$$</span></p>
<p>Now for the <span class="math-container">$e^{iy}$</span>:</p>
<p><span class="math-container">$$e^{e^{x}e^{iy}}= e^{e^{x}(\cos(y)+i\sin(y))}=e^{e^{x}\cos(y)+ie^{x}\sin(y)}$$</span></p>
<p>Let's proceed to the outer <span class="math-container">$e^{f(x,y)+ig(x,y)}$</span>:</p>
<p><span class="math-container">$$e^{e^{x}\cos(y)+ie^{x}\sin(y)}=e^{e^{x}\cos(y)}e^{ie^{x}\sin(y)} $$</span></p>
<p>Now for the <span class="math-container">$e^{ig(x,y)}$</span>:</p>
<p><span class="math-container">$$e^{e^{x}\cos(y)}e^{ie^{x}\sin(y)}= e^{e^{x}\cos(y)}(\cos(e^{x}\sin(y))+i\sin(e^{x}\sin(y)))$$</span></p>
<p>Finally, we can bring in the 1:</p>
<p><span class="math-container">$$e^{e^{x}\cos(y)}(\cos(e^{x}\sin(y))+i\sin(e^{x}\sin(y))) = e^{e^{x}\cos(y)}\cos(e^{x}\sin(y))+ie^{e^{x}\cos(y)}\sin(e^{x}\sin(y)) = 1+0i$$</span></p>
<p><span class="math-container">$$\iff e^{e^{x}\cos(y)}\cos(e^{x}\sin(y)) = 1, e^{e^{x}\cos(y)}\sin(e^{x}\sin(y)) = 0$$</span></p>
<p>Thus we reduce a complex equation to a system of two real equations, whose solutions are:</p>
<p><span class="math-container">$$(x,y) = (\ln(2m \pi),\frac{\pi}{2} + 2 l \pi)$$</span></p>
<p>This can be shown to be equivalent to <a href="http://www.wolframalpha.com/input/?i=e%5E%7Be%5Ez%7D%3D1" rel="nofollow noreferrer">WA's answer</a>:</p>
<p><span class="math-container">$$z = 2 \pi i n_2 + \ln(2 i \pi n_1)$$</span></p>
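<p>A numeric sketch of that closed form (my own check; $n_1$ must be a nonzero integer, and the sample values below are arbitrary) confirms that these $z$ satisfy $e^{e^z}=1$ to floating-point accuracy:</p>

```python
import cmath

def f(z):
    return cmath.exp(cmath.exp(z))

# z = 2*pi*i*n2 + log(2*i*pi*n1), n1 a nonzero integer, n2 any integer
residuals = []
for n1 in (1, -1, 2):
    for n2 in (0, 1, -2):
        z = 2j * cmath.pi * n2 + cmath.log(2j * cmath.pi * n1)
        residuals.append(abs(f(z) - 1))

assert max(residuals) < 1e-9
```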
|
4,151 | <p>Since it's currently summer break, and I've a bit more time than normal, I've been organizing my old notes. I seem to have an almost unwieldy amount of old handouts and tests from classes previously taught. I'm hesitant to get rid of these, as they may prove useful for some future course. Because I adjunct at a few different colleges, with slightly different course content, I find that a lot of these materials are useful for multiple classes. I've begun organizing them according to specific content, but I'm curious: does anyone have an organizational style that they have found useful?</p>
| SlyPuppy | 625 | <p>I use schoology.com to keep my notes and such.</p>
<p>It's great because it goes where I go plus it's free for both students and teachers.</p>
|
265,047 | <p>Let $X$ be a Banach space and let $T:X \rightarrow X$ be a bounded linear map. Show that if $T$ is surjective then its transpose $T':X' \rightarrow X'$ is bounded below.</p>
<p>My try: We know that $R^\perp_M = N_{M'}$, and since $T$ is surjective,
$R_M = X$, hence $R_M^\perp = N_{M'} = 0$, so $M'$ is invertible and bounded below.
Am I missing some details? Is "invertible and bounded below" true?</p>
| Community | -1 | <p>By expanding out the brackets you can split it into two series:
$$\sum_{n=1}^\infty\frac{\left(\ln n+(-1)^n\right)n^{1/2}}{n\cdot n^{1/2}} = \sum_{n=1}^\infty{\frac{\ln n}{n}}+{\sum_{n=1}^\infty{\frac{(-1)^n}{n}}}$$
The second series is the alternating harmonic series, widely known to be convergent; this can be verified by the <a href="http://en.wikipedia.org/wiki/Alternating_series_test">alternating series test</a>. For the first series, applying the <a href="http://en.wikipedia.org/wiki/Ratio_test">ratio test</a> gives
$$\frac{a_{n+1}}{a_n}=\frac{\ln{(n+1)}}{\ln{n}}\frac{n}{n+1},$$
which has limit $1$ as $n \to \infty$, so the ratio test is inconclusive. However, we can compare it term by term with the harmonic series:
$$\frac{1}{n}<\frac{\ln n}{n} \quad \forall n \geq 3,$$
since $\ln n > 1$ once $n > e$. Hence, apart from the first two terms,
$$\sum_{n=3}^\infty{\frac{\ln n}{n}} > \sum_{n=3}^\infty{\frac{1}{n}},$$
and the right-hand side diverges (<a href="http://en.wikipedia.org/wiki/Harmonic_series_%28mathematics%29">harmonic series</a>), so the first series diverges, and hence the entire series must diverge.</p>
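<p>A rough numeric illustration (not a proof; divergence only shows up as slow, unbounded growth, and the thresholds below are my own): partial sums of $\frac{\ln n}{n}$ keep climbing well past $n = 10^4$, roughly like $(\ln N)^2/2$:</p>

```python
import math

def partial_sum(N):
    # partial sum of ln(n)/n; grows roughly like (ln N)^2 / 2
    return sum(math.log(n) / n for n in range(1, N + 1))

# term-by-term: ln(n)/n > 1/n as soon as ln(n) > 1, i.e. n >= 3
assert all(math.log(n) > 1 for n in range(3, 10**4))

s4, s6 = partial_sum(10**4), partial_sum(10**6)
assert s6 - s4 > 30   # still climbing long after 10^4 terms
assert s6 > 90        # close to (ln 10^6)^2 / 2, about 95.4
```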
|
57,769 | <p>Consider a finitely axiomatized theory $T$ with axioms $\phi_1,...,\phi_n$ over a first-order language with relation symbols $R_1,...,R_k$ of arities $\alpha_1,...,\alpha_k$. Consider the atomic formulas written in the form $(x_1,...,x_{\alpha_j})\ \varepsilon R_j$.</p>
<p>Translate this theory into a (finite) set-theoretic definition</p>
<p>$T(X) :\equiv (\exists R_1)...(\exists R_k) R_i \subseteq X^{\alpha_i} \wedge \phi'_1 \wedge ... \wedge \phi'_n$</p>
<p>where $\phi'_i$ is $\phi_i$ with $(\forall x)$ replaced by $(\forall x \in X)$ and $(x_1,...,x_{\alpha_j})\ \varepsilon R_j$ replaced by $(x_1,...,x_{\alpha_j})\ \in R_j$ with $(x_1,...,x_{\alpha_j})$ an abbreviation for ordered tuples.</p>
<p>To show that $T$ has a model — i.e. to show that $T$ is consistent — is to prove the statement $(\exists x) T(x)$ from the axioms of set theory. </p>
<p>It is essential that the relations fulfill the conditions $\phi_i$ simultaneously. Thus it is not clear at first sight, how the existence of a model of a theory can be proved (or even be stated set-theoretically) that is <em>not</em> finitely axiomatizable, since it cannot be translated into a finite sentence.</p>
<p>Some other things are not clear (to me):</p>
<ol>
<li><p>In this setting, doesn't the consistency of every theory depend on the consistency of the chosen set theory? (If the set theory isn't consistent, every theory has a model.)</p></li>
<li><p>Furthermore, doesn't the consistency of a theory depend on the choice of the set theory in which $(\exists x) T(x)$ is proved? (In some set theories $(\exists x) T(x)$ can be proved, in others maybe not.)</p></li>
<li><p>What conditions has a theory to fulfill to be able to play the role of set theory in this setting? [It doesn't have to be the element relation $\in$ which $\varepsilon$ is mapped on. But one needs to be able to build ordered tuples of arbitrary length. What else? Something like powersets (since $R_i \subseteq X^{\alpha_i}$ is $R_i \in \mathcal{P}(X^{\alpha_i})$)? Is extensionality necessary? What is the general framework to discuss such questions?]</p></li>
</ol>
| bonnnnn2010 | 13,590 | <p>Fourier analysis has many applications in nonlinear PDEs, for example the nonlinear Schrödinger equation. A very often used method is Littlewood-Paley decomposition, used to obtain the well-posedness (existence, uniqueness, and some kind of continuous dependence on the initial data) of the solution. A good book is "Nonlinear Dispersive Equations: Local and Global Analysis" by Terence Tao.</p>
|
3,186,239 | <p>Two independent variables have the Bernoulli distribution:
<span class="math-container">$X_1$</span> with <span class="math-container">$b(n,p)$</span> and <span class="math-container">$X_2$</span> with <span class="math-container">$b(m,p)$</span>.
How can I find the conditional distribution <span class="math-container">$\mathbb P(X_1|X_1+X_2=t)$</span>?</p>
| drhab | 75,923 | <p>Guide (presuming that we are dealing with the binomial distribution; see my comment on the question):</p>
<p><span class="math-container">$$P(X_1=k\mid X_1+X_2=t)P(X_1+X_2=t)=P(X_1=k, X_2=t-k)=P(X_1=k)P(X_2=t-k)$$</span>where the second equality rests on independence.</p>
<p>Further, <span class="math-container">$X_1+X_2$</span> will have a binomial distribution with parameters <span class="math-container">$n+m$</span> and <span class="math-container">$p$</span> (this also rests on the independence of <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span>; for a proof of this see <a href="https://math.stackexchange.com/a/1176442/75923">here</a>), enabling you to find an expression for <span class="math-container">$P(X_1+X_2=t)$</span>.</p>
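<p>A small numeric sketch of this guide (function names are mine): multiplying out the binomial pmfs shows that the $p$'s cancel, leaving the hypergeometric probability $\binom{n}{k}\binom{m}{t-k}\big/\binom{n+m}{t}$:</p>

```python
from math import comb

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def cond_pmf(n, m, p, k, t):
    # P(X1 = k | X1 + X2 = t) = P(X1 = k) P(X2 = t - k) / P(X1 + X2 = t)
    return binom_pmf(n, p, k) * binom_pmf(m, p, t - k) / binom_pmf(n + m, p, t)

n, m, t, k = 5, 7, 4, 2
hyper = comb(n, k) * comb(m, t - k) / comb(n + m, t)   # hypergeometric pmf
assert abs(cond_pmf(n, m, 0.3, k, t) - hyper) < 1e-12  # p cancels out
assert abs(cond_pmf(n, m, 0.9, k, t) - hyper) < 1e-12
```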
|
663,435 | <p>Bob has an account with £1000 that pays 3.5% interest that is fixed for 5 years and he cannot withdraw that money over the 5 years</p>
<p>Sue has an account with £1000 that pays 2.25% for one year, and is also inaccessible for one year.</p>
<p>Sue wants to take advantage of better rates and so moves accounts each year to get the better rates.</p>
<p>How much does the interest rate need to increase per year (on average) for Sue to beat Bob's 5 year account?</p>
<p>Compound interest formula:
$A = P(1 + Q)^T$</p>
<p>Where:</p>
<p>$A$ = Amount Earned
$P$ = Amount Deposited
$R$ = Sue's Interest Rate
$T$ = Term of Account
$Q$ = Bob's Interest Rate
$I$ = Interest Increase Per Period</p>
<p>My method of working thus far:</p>
<p>\begin{align}
\text{First I calculate Bob's money at 5 years}\\
P(1 + Q)^T &= A \\
1000(1 + 0.035)^5 &= A \\
1187.686 &= A\\
1187.68 &= A (2DP)\\
\text{Now work out Sue's first year's interest}\\
1000(1 + 0.0225) ^ 1 &= A \\
1022.5 &= A\\
\text{Then I work out the next 4 years compound interest}\\
((1187.686/1022.5) ^ {1/4}) - 1 &= R \\
-0.7096122249388753 &= R\\
-0.71 &= R (2DP)\\
\text{Then I use the rearranged formula from Ross Millikan}\\
4/{10}R - 9/{10} &= I\\
4/{10}*-0.71 - 9/{10} &= I\\
0.0 &= I\\
\end{align}</p>
| Ross Millikan | 1,827 | <p>You should just be able to plug the values into your first equation to get the value of Bob's account at the end of 5 years. Note that A is the total value, not the amount earned. Also note that the interest rate needs to be expressed as a decimal. </p>
<p>For Sue, first calculate how much she has at the end of the first year: $1000\cdot (1+0.0225)=1022.50$. When she deposits that into the new account, that becomes P. You should be able to find what her R needs to be for $T=4$ to match Bob's value at the end of 5 years. To a good approximation, that is what her average needs to be over the four years, so you have $\frac 14[(2.25+I)+(2.25+2I)+(2.25+3I)+(2.25+4I)]=2.25+\frac {10I}4=R$</p>
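<p>A sketch of this calculation in Python (variable names are mine, and the final figure is only the approximation described above):</p>

```python
# Bob: 1000 pounds at 3.5% fixed for 5 years; Sue: 2.25% in year 1
bob = 1000 * 1.035**5               # Bob's balance after 5 years, ~1187.69
sue_year1 = 1000 * 1.0225           # Sue's balance after year 1
R = (bob / sue_year1)**(1 / 4) - 1  # rate Sue needs on average in years 2-5
I = (R * 100 - 2.25) * 4 / 10       # the averaging formula above, solved for I

assert abs(bob - 1187.69) < 0.01
assert 0.62 < I < 0.63              # roughly 0.63 percentage points per year
```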
|
3,869,237 | <p>I know this is quite weird and may not make much sense, but I was wondering: does <span class="math-container">$\int e^{dx}$</span> have any meaning, or does it make sense at all? If it does mean something, can it be integrated, and what is the result?</p>
| alienare 4422 | 834,770 | <p>Well, it makes sense when you think about it one way and it doesn't when you think about it another way. First of all, <span class="math-container">$dx$</span> is supposed to be extremely close to <span class="math-container">$0$</span>, so (as J.G. said) that's going to equal <span class="math-container">$1+dx$</span>, which is essentially 1. On the other hand, it's not supposed to make sense, because <span class="math-container">$e^{dx}$</span> itself doesn't have a value (you can't take it as <span class="math-container">$1$</span> because <span class="math-container">$dx$</span> is not <span class="math-container">$0$</span>, and you can't take it as some other value because <span class="math-container">$dx$</span> is super small), so how can you actually add up infinitely many of these? Integration is essentially zooming in, so that the area under a graph is infinitely many small rectangles, but with the <span class="math-container">$dx$</span> difference you can't zoom in infinitely, because there will always be that difference. You can zoom in on <span class="math-container">$e^2$</span> because it has a fixed value, but you can't do that for <span class="math-container">$e^{dx}$</span> because <span class="math-container">$dx$</span> doesn't really have a well-defined value.</p>
<p>Hope you got your answer!</p>
|
3,869,237 | <p>I know this is quite weird and may not make much sense, but I was wondering: does <span class="math-container">$\int e^{dx}$</span> have any meaning, or does it make sense at all? If it does mean something, can it be integrated, and what is the result?</p>
| Amaan | 814,546 | <p>Tool 1: Know, <span class="math-container">$\displaystyle \lim_{\Delta x\rightarrow 0} \frac{a^{\Delta x}-1}{\Delta x}=\ln(a) $</span>. Now, <span class="math-container">\begin{align*}
\int e^{\mathrm{d}x}&=\int e^{\mathrm{d}x}-1+1\\
&=\int \frac{e^{\mathrm{d}x}-1}{\mathrm{d}x}\mathrm{d}x+1\\
&=\int \ln(e)\,\mathrm{d}x+1\\
&=x\ln(e)+1\\
&=x+1 +\text{constant}
\end{align*}</span></p>
|
3,421,858 | <p><span class="math-container">$\sqrt{2}$</span> is irrational using proof by contradiction.</p>
<p>say <span class="math-container">$\sqrt{2}$</span> = <span class="math-container">$\frac{a}{b}$</span> where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are positive integers. </p>
<p><span class="math-container">$b\sqrt{2}$</span> is an integer. ----[Understood]</p>
<p>Let <span class="math-container">$b$</span> denote the smallest such positive integer.----[My understanding of this is </p>
<p>that we are going to assume b is the smallest possible integer such that <span class="math-container">$\sqrt{2}$</span> = <span class="math-container">$\frac{a}{b}$</span>, ... Understood]</p>
<p>Then <span class="math-container">$b^{*}$</span> := <span class="math-container">$b(\sqrt{2} - 1)$</span>----[I'm not sure I understand the point that's being made here,</p>
<p>from creating a variable <span class="math-container">$b^{*} = a - b$</span> ]</p>
<p>Next, <span class="math-container">$b^{*}$</span> := <span class="math-container">$b(\sqrt{2} - 1)$</span> is a positive integer such that <span class="math-container">$b^{*}\sqrt{2}$</span> is an integer.----[ I get that (<span class="math-container">$a - b$</span>) has to</p>
<p>be a positive integer; why does it then follow that <span class="math-container">$b^{*}\sqrt{2}$</span> is an integer?]</p>
<p>Lastly, <span class="math-container">$b^{*}<b$</span>, which is a contradiction.----[I can see that given <span class="math-container">$b^{*}$</span> := <span class="math-container">$b(\sqrt{2} - 1)$</span>, we then have </p>
<p><span class="math-container">$b^{*}<b$</span>, I don't get how that creates a contradiction]</p>
<p>Any help is appreciated, thank you. </p>
| Deepak | 151,732 | <p>Your initial premise is that <span class="math-container">$b$</span> is <em>defined</em> to be the smallest positive integer such that <span class="math-container">$b\sqrt 2$</span> is a positive integer. This is equivalent to reducing <span class="math-container">$\frac ab$</span> to its lowest terms (that is, make <span class="math-container">$a$</span> and <span class="math-container">$b$</span> coprime).</p>
<p>Obviously <span class="math-container">$b^\mathrm * = b\sqrt 2 - b = a-b$</span> will be a positive integer such that <span class="math-container">$b^*<b$</span>, since <span class="math-container">$1 < \sqrt 2 < 2$</span>. </p>
<p>And we now construct a new positive number <span class="math-container">$b^\mathrm {**}= b^\mathrm *\sqrt 2$</span>. We can show this new number is an integer because <span class="math-container">$b^\mathrm*\sqrt 2 = (b\sqrt 2 - b) \sqrt 2 = 2b - b\sqrt 2 = 2b-a$</span>.</p>
<p>But now we have a problem. If <span class="math-container">$b^\mathrm{**} = b^\mathrm*\sqrt 2$</span> is a positive integer as we've just shown, then <span class="math-container">$b$</span> cannot be the smallest positive integer that possesses the defining property of giving an integer when multiplied by <span class="math-container">$\sqrt 2$</span>. We've just found an even smaller positive integer with the same property. </p>
<p>We have arrived at a contradiction. Therefore it is not possible to have an integer <span class="math-container">$b$</span> and hence <span class="math-container">$\sqrt 2$</span> is not rational. </p>
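<p>The descent can be condensed into one algebraic identity: $(2b-a)^2 - 2(a-b)^2 = -(a^2 - 2b^2)$, so $a^2 = 2b^2$ would force $(2b-a)^2 = 2(a-b)^2$ with the strictly smaller $b^\mathrm* = a-b$. A quick check of the identity (a sketch; since both sides are quadratic polynomials, verifying on a grid of integers is enough):</p>

```python
# (2b - a)^2 - 2 (a - b)^2 == -(a^2 - 2 b^2) as a polynomial identity;
# both sides are quadratics, so agreement on a grid of points suffices
def lhs(a, b):
    return (2*b - a)**2 - 2*(a - b)**2

def rhs(a, b):
    return -(a**2 - 2*b**2)

assert all(lhs(a, b) == rhs(a, b) for a in range(-5, 6) for b in range(-5, 6))
```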
|
4,383,800 | <p>I can already see that the <span class="math-container">$\lim_\limits{n\to\infty}\frac{n^{n-1}}{n!e^n}$</span> converges by graphing it on Desmos, but I have no idea how to algebraically prove that with L’Hopital’s rule or induction. Where could I even start with something like this?</p>
<p>Edit: For context, I came across this limit while studying the series expansion for the Lambert W Function, <span class="math-container">$W(x)= \sum_{n=0}^{\infty}\frac{(-n)^{n-1}x^n}{n!}$</span> . By the ratio test, it is clear that <span class="math-container">$|x|<\frac1e$</span> in order to converge, but I needed to use the Alternating Series Test to see whether this series converges at <span class="math-container">$x= \pm\frac1e$</span>. Finding <span class="math-container">$\lim_\limits{n\to\infty}|a_n|$</span> is the first step of the test.</p>
| trancelocation | 467,003 | <p>You can use the simple fact that</p>
<p><span class="math-container">$$e^n = \sum_{k=0}^{\infty}\frac{n^k}{k!} \stackrel{k=n}{>} \frac{n^n}{n!}$$</span></p>
<p>Hence, you get</p>
<p><span class="math-container">$$0<\frac{n^{n-1}}{n!e^n} < \frac{n^{n-1}}{n!\frac{n^n}{n!}} = \frac 1n$$</span></p>
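<p>A numeric sketch of the resulting sandwich $0 < \frac{n^{n-1}}{n!\,e^n} < \frac 1n$ (working with logarithms to avoid overflow; the tested range is arbitrary):</p>

```python
from math import lgamma, log

def log_term(n):
    # log of n^(n-1) / (n! * e^n), using lgamma(n+1) = log(n!)
    return (n - 1) * log(n) - lgamma(n + 1) - n

# the bound e^n > n^n / n! gives term < 1/n, i.e. log_term(n) < -log(n)
for n in range(1, 300):
    assert log_term(n) < -log(n)
```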
|
703,031 | <p>In a sequence of integers, $A(n)=A(n-1)-A(n-2)$, where $A(n)$ is the $n$th term in the sequence, $n$ is an integer and $n\ge3$,$A(1)=1$,$A(2)=1$, calculate $S(1000)$, where $S(1000)$ is the sum of the first $1000$ terms.</p>
<p>How should I approach these types of questions? Which topics should I study?</p>
| 5xum | 112,884 | <p>$$A(1) + A(2) + A(3) +\dots= \\=A(1) + A(2) + (A(2) - A(1)) + (A(3) - A(2)) + (A(4) - A(3)) + \dots$$</p>
<p>Can you see a pattern?</p>
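<p>To make the hint concrete (a sketch of mine): the sequence is periodic with period 6, namely $1,1,0,-1,-1,0,\dots$, each period summing to $0$, and a direct computation gives $S(1000)$:</p>

```python
# A(n) = A(n-1) - A(n-2) with A(1) = A(2) = 1; sum the first 1000 terms
a, b = 1, 1        # A(1), A(2)
S = a + b
for _ in range(3, 1001):
    a, b = b, b - a
    S += b

assert S == 1      # each period of 6 sums to 0; only A(997..1000) = 1,1,0,-1 remain
```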
|
178,823 | <p>How would I prove the following trig identity? </p>
<blockquote>
<p><span class="math-container">$$\frac{ \cos (A+B)}{ \cos A-\cos B}=-\cot \frac{A-B}{2} \cot \frac{A+B}{2} $$</span></p>
</blockquote>
<p>My work thus far has been:
<span class="math-container">$$\dfrac{2\cos\dfrac{A+B}{2} \cos\dfrac{A-B}{2}}{-2\sin\dfrac{A+B}{2} \sin\dfrac{A-B}{2}} =-\cot\dfrac{A+B}{2} \cot\dfrac{A-B}{2} \ .$$</span></p>
| davidlowryduda | 9,754 | <p>First, note that $\dfrac{\cos(A + B)}{\cos A - \cos B} \neq -\cot \left( \dfrac{A-B}{2} \right) \cot \left(\dfrac{A+B}{2} \right)$</p>
<p>In particular, if we use something like $A = \pi/6, B = 2\pi/6$, then the left is $0$ as $\cos(\pi/2) = 0$ and the right side is a product of two nonzero things.</p>
<p>I suspect instead that you would like to prove:</p>
<p>$$\dfrac{\cos A + \cos B}{\cos A - \cos B} = -\cot \left( \dfrac{A-B}{2} \right) \cot \left(\dfrac{A+B}{2} \right)$$</p>
<p><strong>HINTS</strong></p>
<p>And this looks to me like an exercise in the sum-to-product and product-to-sum trigonometric identities (wiki <a href="http://en.wikipedia.org/wiki/List_of_trigonometric_identities#Product-to-sum_and_sum-to-product_identities" rel="nofollow">reference</a>). In fact, if you just apply these identities to the top and the bottom, you'll get the result.</p>
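<p>A quick numeric check of the corrected identity at a few arbitrary sample points (a sketch; any points avoiding the zeros of the denominators would do):</p>

```python
from math import cos, tan

def cot(x):
    return 1 / tan(x)

def check(A, B):
    # absolute difference between the two sides of the corrected identity
    lhs = (cos(A) + cos(B)) / (cos(A) - cos(B))
    rhs = -cot((A - B) / 2) * cot((A + B) / 2)
    return abs(lhs - rhs)

assert max(check(0.3, 1.1), check(2.0, 0.7), check(-1.2, 0.4)) < 1e-9
```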
|
225,866 | <p>If I define, for example,</p>
<pre><code>f[OptionsPattern[{}]] := OptionValue[a]
</code></pre>
<p>Then the output for <code>f[a -> 1]</code> is 1.</p>
<p>However, in my code, I have a function that must be called using the syntax <code>f[some parameters][some other parameters]</code>, and I want to add options to the <strong>second</strong> set of square brackets. So I tried:</p>
<pre><code>g[][OptionsPattern[{}]] := OptionValue[a]
</code></pre>
<p>But then, the output for <code>g[][a -> 1]</code> is <code>OptionValue[a]</code> instead of 1. I'm not sure why this is not working. Shouldn't <code>OptionsPattern[{}]</code> match <strong>any</strong> set of options, no matter where they are located?</p>
<p>How can I add options that can be provided in the second set of square brackets instead of the first?</p>
| kglr | 125 | <p>If you <em>have to</em> use <code>Array</code>:</p>
<pre><code>Array[Through @ {x, y, z} @ vl[[#]] &, Length @ vl]
</code></pre>
<blockquote>
<pre><code> {{x[6], y[6], z[6]}, {x[9], y[9], z[9]}, {x[10], y[10], z[10]}}
</code></pre>
</blockquote>
<p>Also:</p>
<pre><code>f = Through /@ # /@ #2 &;
f[{x, y, z}, vl]
</code></pre>
<blockquote>
<pre><code> {{x[6], y[6], z[6]}, {x[9], y[9], z[9]}, {x[10], y[10], z[10]}}
</code></pre>
</blockquote>
|
744,982 | <p>Could anyone let me know if the following optimization problem can be solved in polynomial time or should be NP-hard?</p>
<p>$\min c^Tx$</p>
<p>s.t. $x^TQx\geq C^2, x\in [0,1]^n,c\in \mathbb{R}_+^n,Q\in\mathbb{R}_+^{n\times n}$ is positive semi-definite matrix.</p>
| AndreaCassioli | 130,183 | <p>I am afraid you can't.</p>
<p>This is not a convex problem, due to the quadratic constraint. If your $Q$ matrix were negative semidefinite, then it would have been solvable in polynomial time. </p>
|
744,982 | <p>Could anyone let me know if the following optimization problem can be solved in polynomial time or should be NP-hard?</p>
<p>$\min c^Tx$</p>
<p>s.t. $x^TQx\geq C^2, x\in [0,1]^n,c\in \mathbb{R}_+^n,Q\in\mathbb{R}_+^{n\times n}$ is positive semi-definite matrix.</p>
| RoyS | 774,804 | <p>My guess is that this problem can be solved in polynomial time.</p>
<p>It is well known that <span class="math-container">$\min_x c'x$</span>, such that <span class="math-container">$x'Qx \geq C$</span> can be solved in polynomial time (since SDP relaxation is tight). Your problem has extra bound constraints <span class="math-container">$x_i$</span> in <span class="math-container">$[0,1]$</span>, which may or may not change the answer. But I think it is quite possible. </p>
|
623,810 | <p>$\omega = y dx + dz$ is a differential form in $\mathbb{R}^3$; what is ${\rm ker}(\omega)$? Is ${\rm ker}(\omega)$ integrable? Can you teach me about this question in detail? Many thanks!</p>
| Branimir Ćaćić | 49,610 | <p>In this context, I presume that $\ker(\omega)$ denotes the subbundle of $T \mathbb{R}^3$ with fibre $\ker(\omega)_p := \ker(\omega_p)$ at each $p \in \mathbb{R}^3$, where $\ker(\omega_p)$ is the kernel of the functional $\omega_p : T_p \mathbb{R}^3 \to \mathbb{R}$. What does this functional look like? Well, setting $x^1 = x$, $x^2 = y$, $x^3 = z$ for notational convenience, the standard basis for $T_p \mathbb{R}^3$ is
$$
\left\{ \left.\frac{\partial}{\partial x^1}\right|_p, \left.\frac{\partial}{\partial x^2}\right|_p, \left.\frac{\partial}{\partial x^3}\right|_p\right\},
$$
which satisfy
$$
dx^i_p \left( \left.\frac{\partial}{\partial x^j}\right|_p \right) = \begin{cases} 1 &\text{if $i=j$,}\\ 0 &\text{if $i \neq j$.} \end{cases}
$$
Now, in general, $\omega = x^2 dx^1 + dx^3$, so that at a given $p = (p^1,p^2,p^3) \in \mathbb{R}^3$,
$$
\omega_p = p^2 dx^1_p + dx^3_p.
$$
Hence, for $v \in T_p \mathbb{R}^3$,
$$
\omega_p(v) = (p^2 dx^1_p + dx^3_p)\left( v^1 \left.\frac{\partial}{\partial x^1}\right|_p + v^2 \left.\frac{\partial}{\partial x^2}\right|_p + v^3 \left.\frac{\partial}{\partial x^3}\right|_p \right) = \;?
$$
In particular, then, what is $\ker(\omega)_p := \ker(\omega_p)$?</p>
<p>Next, let's turn to the question of integrability. Recall that a subbundle $E$ of $T \mathbb{R}^3$ is integrable if and only if for any sections $v$, $w$ of $E$, the Lie bracket $[v,w]$ of $v$ and $w$ is also a section of $E$, where
$$
[v,w] = \left[v^1 \frac{\partial}{\partial x^1} + v^2 \frac{\partial}{\partial x^2} + v^3 \frac{\partial}{\partial x^3}, w^1 \frac{\partial}{\partial x^1} + w^2 \frac{\partial}{\partial x^2} + w^3 \frac{\partial}{\partial x^3} \right]\\ = \sum_{i=1}^3 \sum_{j=1}^3 \left(v^j \frac{\partial w^i}{\partial x^j} - w^j \frac{\partial v^i}{\partial x^j} \right) \frac{\partial}{\partial x^i}.
$$
Now, in the case of $\ker(\omega)$, by construction,
$$
\Gamma(\ker(\omega)) = \{\xi \in \Gamma(T \mathbb{R}^3) \mid \omega(\xi) = 0\},
$$
i.e., the sections of $\ker(\omega)$ are precisely the vector fields $\xi$ such that $\omega(\xi) = 0$, where, by essentially the same computation as above,
$$
\omega(\xi) = (x^2 dx^1 + dx^3)\left( \xi^1 \frac{\partial}{\partial x^1} + \xi^2 \frac{\partial}{\partial x^2} + \xi^3 \frac{\partial}{\partial x^3} \right) = \; ?
$$
Hence, what you need to do is use this computation to algebraically characterise the vector fields $\xi$ such that $\omega(\xi) = 0$, and then use this to show that for any vector fields $\xi$ and $\eta$ with $\omega(\xi) = 0$ and $\omega(\eta) = 0$, $\omega([\xi,\eta]) = 0$.</p>
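<p>Here is a sketch of that computation for $\omega = y\,dx + dz$, done symbolically with sympy (the spanning fields $\partial_y$ and $\partial_x - y\,\partial_z$ of $\ker(\omega)$ are my choice, read off from the condition $y\,\xi^1 + \xi^3 = 0$):</p>

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def omega(v):
    # omega = y dx + dz applied to a vector field v = (v^1, v^2, v^3)
    return sp.simplify(y * v[0] + v[2])

def bracket(v, w):
    # Lie bracket [v, w]^i = sum_j (v^j dw^i/dx^j - w^j dv^i/dx^j)
    return [sp.simplify(sum(v[j] * sp.diff(w[i], coords[j])
                            - w[j] * sp.diff(v[i], coords[j])
                            for j in range(3))) for i in range(3)]

xi = [sp.Integer(0), sp.Integer(1), sp.Integer(0)]  # d/dy
eta = [sp.Integer(1), sp.Integer(0), -y]            # d/dx - y d/dz

assert omega(xi) == 0 and omega(eta) == 0  # both are sections of ker(omega)
lie = bracket(xi, eta)                     # comes out to -d/dz
assert omega(lie) != 0                     # bracket leaves ker: not integrable
```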
|
1,597,891 | <p>Let $G$ be an abelian group of order $75=3\cdot 5^{2}$. Let $Aut(G)$ denote its group of automorphisms. Find all possible orders of $Aut(G)$.</p>
<p>My approach is to first study its Sylow 5-subgroup. Since $n_{5}\mid 3$ and $n_{5}\equiv 1\pmod{5}$, $n_{5}=1$. So $G$ has a unique Sylow 5-subgroup, denoted $F$. By Sylow's theorem, $F$ is characteristic. Then $\forall \sigma\in Aut(G)$, $\sigma(F)=F$. Define the canonical homomorphism $Aut(G)\rightarrow Aut(F)$. So $|Aut(G)|=\# (\text{Aut(G) that fixes F pointwise})\times (\text{image of homomorphism})$. </p>
<p>Since the image of the defined homomorphism is a subgroup of $Aut(F)$, its order is a factor of $|Aut(F)|=20$. I'm not sure how to compute the number of $Aut(G)$ that fix $F$. My understanding is this: since we leave $F$ fixed, all that is left to be permuted are the 25 Sylow 3-subgroups. Since each Sylow 3-subgroup is cyclic, it has 2 automorphisms. So altogether $|\text{Aut(G) that fixes F pointwise}|=25\times 3=75$, and all possible orders of $Aut(G)$ are $75\cdot x$ where $x$ divides 20. This seems incorrect. Could someone please help me?</p>
| p Groups | 301,282 | <p>$G$ is abelian, so Sylow subgroups are characteristic. Hence $G$ is product of characteristic Sylow subgroups, say $H_3$ and $H_5$. Then $Aut(G)\cong Aut(H_3)\times Aut(H_5)$. </p>
<p>$H_3$ is cyclic, whose automorphism group is well known. $H_5$ is either cyclic or $Z_5\times Z_5$. The automorphism group in both cases is well known: either $Z_5\times Z_4$ or $GL_2(Z_5)$, the group of $2\times 2$ invertible matrices over $Z_5$ (think about it, or see some book or on-line reference). You can obtain the order of $Aut(G)$ easily.</p>
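<p>As a sanity check, both candidate orders can be verified by brute force: $|Aut(Z_3)|=\varphi(3)=2$, $|Aut(Z_{25})|=\varphi(25)=20$, and $|Aut(Z_5\times Z_5)|=|GL_2(Z_5)|$ can simply be counted. A minimal Python sketch (not part of the original answer):</p>

```python
from itertools import product

def det_mod(a, b, c, d, p):
    # determinant of [[a, b], [c, d]] modulo p
    return (a * d - b * c) % p

# Count invertible 2x2 matrices over Z_5, i.e. |GL_2(Z_5)|
p = 5
gl2_order = sum(1 for a, b, c, d in product(range(p), repeat=4)
                if det_mod(a, b, c, d, p) != 0)

# |Aut(Z_3)| = phi(3) = 2, |Aut(Z_25)| = phi(25) = 20
possible_orders = {2 * 20, 2 * gl2_order}
print(gl2_order)        # 480, matching (5^2 - 1)(5^2 - 5)
print(possible_orders)  # the two possible orders: 40 and 960
```

<p>The count $480$ agrees with the standard formula $(5^2-1)(5^2-5)$ for $|GL_2(Z_5)|$.</p>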
|
4,224,417 | <p>I'm trying to learn a bit of Number Theory. And while I understand the definition of congruence relations modulo <span class="math-container">$n$</span> and that they are an equivalence relations, I fail to see the <em>motivation</em> for it. So what is congruence relation <span class="math-container">$\bmod n$</span> <em>intuitively</em>? (The "bold lines" below are my questions that I'm seeking answer to.)</p>
<p>Definition: For <span class="math-container">$a,b \in \mathbb{Z}$</span> and <span class="math-container">$n \in \mathbb{N}$</span>, <span class="math-container">$a\equiv b \bmod n \Leftrightarrow n|(a-b)$</span></p>
<hr />
<p>Okay, so let's start with the definition, what is really the point of "<span class="math-container">$n | (a-b)$</span>?" That <span class="math-container">$a=nq + b$</span>, for some <span class="math-container">$q \in \mathbb{Z}$</span>? So <strong>what do I do with this and why is it so important?</strong></p>
<p>Secondly, if <span class="math-container">$a$</span> and <span class="math-container">$b$</span> leave the same residue or remainder upon division by <span class="math-container">$n$</span> then <span class="math-container">$a \equiv b \bmod n$</span>, again I don't see <strong>why are we so interested in remainders?</strong></p>
<p>And lastly, I keep seeing examples of clocks, days of the week and months. That's good but is that all there's to it?</p>
<p>I have a strong feeling, I'm grossly underestimating congruence relations modulo <span class="math-container">$n$</span>, perhaps that's because I don't have the intuition for it and where should I should use it. So <strong>any intuitive explanations of it and where should one use them</strong> would be really really really nice. I'm desperately trying to figure this out. Thanks in advance.</p>
| Roddy MacPhee | 903,195 | <p>Modulo weakens equality, helps divisibility, and allows dealing with huge numbers. It's relevant to Cryptography, and therefore cybersecurity. It can even help with divisor form:</p>
<p>Assume a number is of the form <span class="math-container">$2^p-1$</span>; it's clear that any time <span class="math-container">$2^p\equiv 1\pmod q$</span>, we have <span class="math-container">$q\mid 2^p-1$</span>. Assume both are primes; then we get <span class="math-container">$p\mid q-1$</span> any time <span class="math-container">$p>2$</span> (Fermat's little theorem, and a bit more theory), but that means <span class="math-container">$q=jp+1$</span> with <span class="math-container">$j$</span> even (<span class="math-container">$j=2k$</span> for some integer <span class="math-container">$k$</span>) because <span class="math-container">$q$</span> must be odd.</p>
<p>It can also be looked at as linear polynomials with all integer variable, coefficients and constants as <span class="math-container">$y\equiv b\pmod m\implies y=mx+b$</span></p>
<p>Expanding more upon this, <span class="math-container">$$y\not\equiv b\pmod m\implies y\ne b$$</span> you can give <span class="math-container">$b$</span> certain properties and prove that <span class="math-container">$y$</span> cannot have them. This is used with Diophantine equations to show that no solution with a given property can exist.</p>
<p>An example is checking the conjecture that all natural numbers can be expressed as the sum of <span class="math-container">$3$</span> cubes of integers. You can show modulo <span class="math-container">$9$</span> that there are only <span class="math-container">$7$</span> possible residues for such a sum, so there are at least <span class="math-container">$2$</span> residues that will never be possible.</p>
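<p>The modulo-9 claim here is easy to verify exhaustively (a quick Python sketch):</p>

```python
# Residues mod 9 of a single cube: only {0, 1, 8}
cube_residues = {a**3 % 9 for a in range(9)}

# Residues mod 9 attainable by a sum of three cubes
three_cube_sums = {(a**3 + b**3 + c**3) % 9
                   for a in range(9) for b in range(9) for c in range(9)}

print(sorted(cube_residues))    # [0, 1, 8]
print(sorted(three_cube_sums))  # [0, 1, 2, 3, 6, 7, 8] -- 4 and 5 are impossible
```

<p>So a number congruent to 4 or 5 mod 9 is never a sum of three cubes.</p>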
|
2,844,902 | <p>Does $$\int_{[1,z]}\frac{1}{u}du=\log(z)$$ where $z\in\mathbb C$? I know that on a closed circle enclosing $0$ we have $$\int_C\frac{1}{z}dz=2i\pi=\log(1),$$</p>
<p>but for $$\int_{[1,z]}\frac{1}{u}du=\log(z)$$ I don't really know how to compute the integral.</p>
| copper.hat | 27,978 | <p>Since $H$ is constant (and hence bounded) on a trajectory, you know that for any initial condition
that there is some $K$ such that ${1 \over 2} v^2(t) + {1 \over q^4(t)} \le K$
for all $t$.</p>
<p>In particular ${1 \over 2} v^2(t) \le K$ and
$ {1 \over q^4(t)} \le K$ for all $t$,
or
$|v(t)| \le \sqrt{2K}$ and
$|q(t)| \ge {1 \over \sqrt[4]{K}}$ for all $t$.</p>
<p>The existence & uniqueness follow from the usual theorems for ODEs. For example,
Theorem 1 in Kantorovich & Akilov, "Functional Analysis", XVI.4.2, p.487.
The proof is straightforward, but technically tedious.</p>
<p>Pick some initial condition $(q(0),v(0))$, which gives a value $K= H(q(0),v(0)) >0$. Let $D=\{(q,v)\mid {1 \over 2} v^2 \le 2K, {1 \over q^4} \le 2K \}$. Let $f((q,v)) = (v,{4 \over q^5})$. Let $M= \sup_{(q,v) \in D} \|f((q,v))\|$, and
$L=\sup_{(q,v) \in D} \|Df((q,v)) \|$.</p>
<p>Let $D'=\{(q,v)| {1 \over 2} v^2 \le K, {1 \over q^4} \le K \} \subset D$ and
choose $\delta>0$ such that $B_\infty((q,v), \delta) \subset D$ for any
$(q,v) \in D'$ (this is what will allow us to extend the solution indefinitely.)</p>
<p>Suppose $(q(t_0),v(t_0))\in D'$. Then
the above theorem shows that there is a unique solution defined on the
interval $[t_0, t_0+\eta]$ where $\eta < \min ({\delta \over M}, {1 \over L})$.
Since $(q(t_0+\eta),v(t_0+\eta))\in D'$, we see that there is a unique
solution for all $t \ge t_0$.</p>
<p>In particular, there is a solution defined for $t \in [0,\infty)$.</p>
<p>The same analysis applies to the reverse time system $\dot{(q,v)} = - f((q,v))$,
hence we see the solution is defined on all of $\mathbb{R}$.</p>
<p><strong>Note</strong>: Some versions of the above theorem show that there is a unique solution
defined on $[t_0-\eta, t_0+\eta]$, in which case there is no need to look
at the reverse time dynamics. In fact, the proof in K&A can be easily modified
to obtain this result (I presume they were concerned with $t \ge t_0$ which is why they
did not include the $t<t_0$ situation).</p>
<p><strong>Note</strong>: Note that $D'$ consists of two connected slabs, one with $q>0$ and one with $q<0$. In particular, if $q(0)>0$, then $\dot{v(t)}>0$ for all $t$ and so $v$ is monotonically increasing. One can continue this qualitative analysis to
derive more characteristics of the solution.</p>
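<p>The a priori bounds $|v| \le \sqrt{2K}$ and $|q| \ge K^{-1/4}$ can be observed numerically by integrating the vector field $f((q,v)) = (v, 4/q^5)$ — a rough RK4 sketch (not part of the original argument), assuming the initial condition $q=1$, $v=0$, so $K=1$:</p>

```python
import math

def f(q, v):
    # Hamiltonian vector field for H = v^2/2 + 1/q^4
    return v, 4.0 / q**5

def rk4_step(q, v, dt):
    k1q, k1v = f(q, v)
    k2q, k2v = f(q + dt/2*k1q, v + dt/2*k1v)
    k3q, k3v = f(q + dt/2*k2q, v + dt/2*k2v)
    k4q, k4v = f(q + dt*k3q, v + dt*k3v)
    return (q + dt/6*(k1q + 2*k2q + 2*k3q + k4q),
            v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

q, v = 1.0, 0.0                   # initial condition; K = H(q(0), v(0)) = 1
K = v**2/2 + 1/q**4
dt, steps = 1e-3, 2000            # integrate over t in [0, 2]
for _ in range(steps):
    q, v = rk4_step(q, v, dt)
    H = v**2/2 + 1/q**4
    assert abs(H - K) < 1e-6                  # energy is (numerically) conserved
    assert abs(v) <= math.sqrt(2*K) + 1e-6    # |v| <= sqrt(2K)
    assert abs(q) >= K**(-0.25) - 1e-6        # |q| >= K^(-1/4)
print(q, v)  # q has grown well past 1; v approaches sqrt(2)
```

<p>Consistent with the qualitative analysis above, $v$ increases monotonically toward $\sqrt{2K}$ while $q$ stays bounded away from $0$.</p>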
|
4,073,757 | <p>Q: A coin is tossed until <span class="math-container">$k$</span> heads have appeared. If a mathematician knows how many heads appeared, can he figure out the probability that the coin was tossed <span class="math-container">$n$</span> times?</p>
<p>What I tried: The number of heads depends on the number of tosses. So I tried Bayes' theorem <span class="math-container">$P(X=k|N=n)P(N=n)/P(X=k)$</span>, where <span class="math-container">$X$</span> is the random variable for heads and <span class="math-container">$N$</span> for tosses. <span class="math-container">$P(X=k|N=n)$</span> is the binomial distribution's probability mass function, but I don't know what <span class="math-container">$P(N=n)$</span> and <span class="math-container">$P(X=k)$</span> are.</p>
<p>Because the last toss is known to be heads, we only focus on the the tosses before the last one. There is <span class="math-container">$\binom {n-1}{k-1}$</span> combinations for <span class="math-container">$k-1$</span> heads to appear before the last one. So the probability is</p>
<p><span class="math-container">$$P(N=n|X=k) = \binom{n-1}{k-1}p^k(1-p)^{n-k}$$</span></p>
<p>I believe this can be done also with Bayes' theorem. Can someone wiser show how it is done?</p>
| X. Li | 417,726 | <p>The coin toss process can be described in the context of independent and identically distributed (i.i.d.) Bernoulli trials.</p>
<p>Suppose we define a random variable corresponding to the number of trials required to have <span class="math-container">$k$</span> successes (i.e. <span class="math-container">$k$</span> heads in this setting). We say <span class="math-container">$N \sim NegBin(k, p)$</span> where <span class="math-container">$p$</span> is the probability of getting a head.</p>
<p><em>Sample space</em>: <span class="math-container">$\{k, k+1, k+2, \dots\}$</span></p>
<p><em>Probability mass function (pmf)</em>: For <span class="math-container">$n = k, k+1, k+2, \dots,$</span>
<span class="math-container">\begin{align*}
f(n; k, p) = P(N = n) &= {n - 1 \choose k - 1} p^{k-1} (1-p)^{n-k} \cdot p\\
&= {n - 1 \choose k - 1} p^k (1-p)^{n-k}.
\end{align*}</span>
Reasoning: If <span class="math-container">$n$</span> is the number of coin tosses (when <span class="math-container">$k$</span> heads were observed), then the first <span class="math-container">$(n-1)$</span> tosses resulted in <span class="math-container">$(k-1)$</span> heads and the last toss was a head.</p>
<p>Remark: More information on <strong>Negative binomial distribution</strong> can be found <a href="https://en.wikipedia.org/wiki/Negative_binomial_distribution" rel="nofollow noreferrer">here</a>. Be careful about the different formulations (see <em>1.3 Alternative formulations</em>).</p>
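<p>As a sanity check, the pmf above sums to 1 over its sample space and has mean <span class="math-container">$k/p$</span> — a quick Python sketch using the stated formula:</p>

```python
from math import comb

def negbin_pmf(n, k, p):
    # P(N = n): the k-th head occurs exactly on toss n
    return comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)

k, p = 3, 0.5
# Truncate the infinite sum; the tail beyond n = 400 is negligible here
total = sum(negbin_pmf(n, k, p) for n in range(k, 400))
mean  = sum(n * negbin_pmf(n, k, p) for n in range(k, 400))
print(round(total, 10))  # 1.0 : the pmf sums to 1
print(round(mean, 6))    # 6.0 : E[N] = k/p
```
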
|
1,088,338 | <p>There are at least a few things a person can do to contribute to the mathematics community without necessarily obtaining novel results, for example:</p>
<ul>
<li>Organizing known results into a coherent narrative in the form of lecture notes or a textbook</li>
<li>Contributing code to open-source mathematical software</li>
</ul>
<p>What are some other ways to make auxiliary contributions to mathematics?</p>
| Horst Grünbusch | 88,601 | <p>Find problems to solve. These can be open mathematical questions but (more important) new fields of application that may also need new mathematical concepts. Calculus e.g. was founded because it was needed for physics. So any scientist who dares to show his problems to mathematicians helps mathematics as well. </p>
|
2,628,220 | <p>Let $(a_{n})_{n \in \mathbb N_{0}}$ be a sequence in $\mathbb Z$, defined as follows:
$a_{0}:=0,
a_{1}:=2,
a_{n+1}:= 4(a_{n}-a_{n-1}) \forall n \in \mathbb N$. </p>
<p>Required to prove: $a_{n}=n2^{n} \forall n \in \mathbb N_{0}$</p>
<p>I have gone about it in the following: </p>
<p>Induction start: $n=0$ (condition fulfilled)</p>
<p>Induction premise: $a_{n}=n2^{n}$ for a specific $n \in \mathbb N_{0}$</p>
<p>Induction step: </p>
<p>$a_{n+1}=4(a_{n}-a_{n-1})$, and here the first problem arises, since I can say that (given the premise)
$4(a_{n}-a_{n-1})=4(n2^{n}-a_{n-1})$, yet how do I get rid of the $a_{n-1}$? Surely stating the $a_{n-1}=(n-1)2^{n-1}$ is false, given that my premise is only based on an $n$ and not $n-1$. </p>
| Cuija Gaming | 524,563 | <p>Show that your premise holds for $n = 0$ and $n = 1$.</p>
<p>Then the induction step is: $A(n) \wedge A(n+1) \to A(n+2)$</p>
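<p>The recurrence and the closed form can also be checked against each other numerically — a quick sketch:</p>

```python
# Check a_0 = 0, a_1 = 2, a_{n+1} = 4(a_n - a_{n-1}) against a_n = n * 2^n
a = [0, 2]
for n in range(1, 30):
    a.append(4 * (a[n] - a[n - 1]))
assert all(a[n] == n * 2**n for n in range(31))
print(a[:6])  # [0, 2, 8, 24, 64, 160]
```
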
|
1,707,132 | <blockquote>
<p>Let $X$ be a contractible space (i.e., the identity map is homotopic to the constant map). Show that $X$ is simply connected.</p>
</blockquote>
<p>Let $F$ be the homotopy between $\mathrm{id}_X$ and $x_0$, that is $F:X\times [0,1]\to X$ is a continuous map such that
$$ F(x,0)=x,\quad F(x,1)=x_0$$
for all $x\in X$.
Next let $f:[0,1]\to X$ be a loop based $x_0$, I need to show that $f$ is path homotopic to the constant path $e_{x_0}$, The continuous map $G:[0,1]\times [0,1]\to X$ defined by $G(s,t)=F(f(s),t))$ is a homotopy between $f$ and $e_{x_0}$, but it may not be a path homotopy (i.e., $G(0,t)=G(1,t)=x_0$ for all $t\in [0,1]$), how to get around this?</p>
| Ivin Babu | 704,464 | <p><img src="https://i.stack.imgur.com/JH7s6.jpg" alt="enter image description here" />
Using this lemma given in Munkres' Topology (which is not too difficult to understand once you've gone through the proof), one can show that any loop based at <span class="math-container">$x_0$</span> is path homotopic to the constant loop at <span class="math-container">$x_0$</span>.<br />
Taking h as the identity map and k as the constant map the lemma gives the identity<br />
<span class="math-container">$k_* = \hat\alpha \circ h_*$</span><br />
This says that if f is a loop based at <span class="math-container">$x_0$</span>,<br />
<span class="math-container">$[\bar\alpha] \ast [h\circ f] \ast [\alpha] = [k\circ f]$</span>, which is the class of the constant loop based at <span class="math-container">$x_0$</span>. <br />
Now since <span class="math-container">$α$</span> itself is a loop based at <span class="math-container">$x_0$</span> replacing f in the above statement by <span class="math-container">$α$</span> means that it is path homotopic to the constant map. Hence f itself is path homotopic to the constant map</p>
<p><img src="https://i.stack.imgur.com/LOvZd.jpg" alt="enter image description here" /><img src="https://i.stack.imgur.com/3uMcW.jpg" alt="enter image description here" />.<br />
Or we can use the idea of homotopy types.<br />
Spaces X and Y are said to be of the same homotopy type if there exist continuous functions <span class="math-container">$f:X \to Y$</span> and <span class="math-container">$g:Y\to X$</span> such that <span class="math-container">$f\circ g$</span> and <span class="math-container">$g\circ f$</span> are homotopic to the respective identity maps.
Since X is contractible, there exists a constant map to which the identity map is homotopic. Hence, taking Y to be the singleton set containing the image of the constant map, X and Y are of the same homotopy type, and hence their fundamental groups are isomorphic.</p>
|
195,333 | <p>I'm looking to make a graph with the x-axis reversed and a frame and tick marks that are thicker than default. However, the x-axis tick marks do not maintain the specified thickness once I reverse the x-axis. I can't restore the thickness of the tick marks using TicksStyle or FrameTicksStyle. How can I get around this?</p>
<pre><code>ListLinePlot[{},
Frame -> True,
FrameStyle ->
{{Black, Thickness[0.005]},
{Black, Thickness[0.005]},
{Black, Thickness[0.005]},
{Black, Thickness[0.005]}},
ScalingFunctions -> {"Reverse", Identity}
]
</code></pre>
<p>Notice how the tick marks on the y-axis are made thick (as desired), but not along the x-axis:</p>
<p><a href="https://i.stack.imgur.com/0a3gc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/0a3gc.png" alt="enter image description here"></a></p>
| MassDefect | 42,264 | <p>If your tick marks are being overridden, it might be that you're using <code>Ticks</code> rather than <code>FrameTicks</code>.</p>
<p>This is how I usually go about making my own tick marks. Unfortunately, that seems to happen a lot more often than I would like.</p>
<pre><code>ticks[min_, max_, stepsz_, majorstep_, minorlength_, majorlength_,
labels_] :=
Table[{
i,
If[
labels \[And] Mod[Rationalize[i], Rationalize[majorstep]] == 0,
ToString[NumberForm[i, {Infinity, 1}]],
""
],
If[
Mod[Rationalize[i], Rationalize[majorstep]] == 0,
{majorlength, 0},
{minorlength, 0}
]
},
{i, min, max, stepsz}
]
</code></pre>
<p>Both <code>Ticks</code> and <code>FrameTicks</code> expect a list in the form <code>{ {x1, "x1", {innerlength, outerlength}}, {x2, "x2", {innerlength, outerlength}}, ...}</code>, so my <code>Table</code> constructs a list with that format.</p>
<p>All of the arguments to the function should be numbers, except for <code>labels</code> which should be a boolean (<code>True</code> if you want tick labels on that axis, <code>False</code> otherwise). </p>
<p>One thing to watch out for is the <code>NumberForm</code> inside of <code>ToString</code>. I do that because otherwise MMA likes to output <code>1.</code> instead of <code>1.0</code>. So I'm currently forcing it to use 1 decimal place. The number of decimal places you want will probably vary from plot to plot. You could add it in as another argument to the function if you want.</p>
<p>Another tip that might be useful, is I have coded the outside tick length as zero, and my function only allows you to specify the inside length. If you want your ticks on the outside for some graph, you can change that.</p>
<p>This is how you would use the function:</p>
<pre><code>ListLinePlot[
{},
Frame -> True,
FrameTicks -> {{
ticks[-1, 1, 0.1, 0.5, 0.01, 0.02, True],
ticks[-1, 1, 0.1, 0.5, 0.01, 0.02, False]},
{ticks[-1, 1, 0.1, 0.5, 0.01, 0.02, True],
ticks[-1, 1, 0.1, 0.5, 0.01, 0.02, False]}},
FrameStyle -> {{Black, Thickness[0.005]}, {Black,
Thickness[0.005]}, {Black, Thickness[0.005]}, {Black,
Thickness[0.005]}},
ImageSize -> 400,
ScalingFunctions -> {"Reverse", Identity}
]
</code></pre>
<p><a href="https://i.stack.imgur.com/E7HJ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E7HJ0.png" alt="Example of manually adding frame ticks to plot."></a></p>
<p><code>FrameTicks -> {{left, right}, {bottom, top}}</code></p>
<p>I ended up using my custom ticks function for the left and right sides because Mathematica always shrinks my tick marks when I export (which is the reason I so often have to create my own ticks). This has the added benefit of making sure all the tick marks are the same length, but you could easily leave the left and right sides as <code>Automatic</code> if you like.</p>
|
2,666,568 | <p>I have a dynamical system: $\dot{\mathbf x}$= A$\mathbf x$ with $\mathbf x$=
$\bigl( \begin{smallmatrix} x \\ y\end{smallmatrix} \bigr)$ and A =
$\bigl( \begin{smallmatrix} 3 & 0 \\ \beta & 3 \end{smallmatrix} \bigr). \beta$ real, time-independent.</p>
<p>I calculated the eigenvalue $\lambda$ = 3 with the algebraic multiplicity of 2.</p>
<p>The <strong>first question</strong> is about eigenvectors when $\beta = 0$ and when $\beta \neq$ 0:</p>
<p>1) when $\beta$ = 0, I have $\bigl( \begin{smallmatrix} 0 & 0 \\ \beta & 0 \end{smallmatrix} \bigr) \bigl( \begin{smallmatrix} x \\ y\end{smallmatrix} \bigr)$ = $\bigl( \begin{smallmatrix} 0 & 0 \\ 0 & 0 \end{smallmatrix} \bigr) \bigl( \begin{smallmatrix} x \\ y\end{smallmatrix} \bigr)$ = $\bigl( \begin{smallmatrix} 0 \\ 0\end{smallmatrix} \bigr)$.
Does this allow for eigenvector calculation? Does it tell me anything at all?</p>
<p>2) when $\beta \neq$ 0, I have $\bigl( \begin{smallmatrix} 0 & 0 \\ \beta & 0 \end{smallmatrix} \bigr) \bigl( \begin{smallmatrix} x \\ y\end{smallmatrix} \bigr)$ = $\bigl( \begin{smallmatrix} 0 \\ 0\end{smallmatrix} \bigr)$, so my eigenvector is $\bigl( \begin{smallmatrix} 0 \\ 1\end{smallmatrix} \bigr)$ and A is defective? Is there any other eigenvector?</p>
<p>The <strong>other question</strong> is further on case 2) when $\beta$ = 3. I am to find any fixed points + their stability, but first I am wondering whether I did the above correctly. I am not sure how to approach it, thought about a trajectory expression, but I am confused by the defective A.</p>
<p>Thanks!</p>
| Carlos | 207,930 | <p>The eigenvectors for $\beta\ne0$ are the multiples of $[0,1]^T$; they are linearly dependent, therefore they don’t form a basis (not diagonalizable), thus the matrix is defective.</p>
<p>The eigenvectors for $\beta=0$ are $[0,1]^T$, $[1,0]^T$, i.e. ,the standard basis. </p>
<p>The fixed points do not change for $\beta\ne0$, and they are clearly unstable since the eigenvalues are positive.</p>
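<p>A brute-force check of the eigenvector claim — a Python sketch over small integer vectors (the helper name is mine):</p>

```python
def is_eigenvector(beta, v, lam=3):
    # Check A v = lam * v for A = [[3, 0], [beta, 3]]
    x, y = v
    return (3*x, beta*x + 3*y) == (lam*x, lam*y)

vecs = [(x, y) for x in range(-5, 6) for y in range(-5, 6) if (x, y) != (0, 0)]

# beta != 0 (here beta = 3): every eigenvector has first component 0,
# i.e. only multiples of (0, 1) -- the matrix is defective
eig_b3 = [v for v in vecs if is_eigenvector(3, v)]
assert all(x == 0 for x, _ in eig_b3)

# beta == 0: A = 3I, so every nonzero vector is an eigenvector
assert all(is_eigenvector(0, v) for v in vecs)
print(eig_b3)  # multiples of (0, 1) only
```
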
|
1,093,396 | <p>I've been working on a problem from a foundation exam which seems totally straightforward but for some reason I've become stuck:</p>
<p>Let $f: \mathbb{ R } \rightarrow \mathbb{ R } ^n$ be a differentiable mapping with $f^\prime (t) \ne 0$ for all $t \in \mathbb{ R } $, and let $p \in \mathbb{ R } ^n$ be a point NOT in $f(\mathbb{ R }) $. </p>
<ol>
<li>Show that there is a point $q = f(t)$ on the curve $f(\mathbb{ R }) $ which is closest to $p$. </li>
<li>Show that the vector $r := (p - q)$ is orthogonal to the curve at $q$. </li>
</ol>
<p>Hint: Consider the function $t \mapsto |p - f(t)|$ and its derivative. </p>
<p>I can solve the second part of the problem assuming that I've found the point $q$ requested in the first part: consider the squared distance function $\varphi(t) = |p - f(t)|^2 = (p - f(t)) \cdot (p - f(t))$, differentiate, and at $q = f(t_0)$ we will have $\varphi ^\prime(t_0) = 0$. How do we prove the existence of the point $q$? The squared distance function $\varphi(t)$ is a function from $\mathbb{ R } \rightarrow \mathbb{ R } $, so maybe something like Rolle's Theorem? Not too sure about this. Any help will be greatly appreciated! </p>
| kremerd | 205,133 | <p>The first statement is actually wrong! Consider for example the function $f\colon\mathbb R\to\mathbb R^2,\; t\mapsto (0,e^t)$ with the point $p=(0,0)$.</p>
<p>The problem here is that the domain of $f$ is not compact. If you consider a function on a closed interval $f\colon[a,b]\to\mathbb R^n$ the statement becomes true.</p>
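<p>The counterexample is easy to probe numerically: the distance $|p - f(t)| = e^t$ is always positive yet has infimum $0$, so no closest point exists — a quick sketch:</p>

```python
import math

# f(t) = (0, e^t), p = (0, 0): the distance |p - f(t)| = e^t
def dist(t):
    return math.hypot(0 - 0.0, 0 - math.exp(t))

samples = [dist(t) for t in range(0, -60, -10)]
assert all(d > 0 for d in samples)                       # the distance 0 is never attained
assert all(a > b for a, b in zip(samples, samples[1:]))  # strictly decreasing as t -> -inf
assert dist(-50) < 1e-20                                 # the infimum is 0
print(samples[:3])
```
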
|
4,231,509 | <p>I'm trying to prove that the group <span class="math-container">$(\mathbb{R}^*, \cdot)$</span> is not cyclic (similar to [1]). My efforts until now have culminated in the following sentence:</p>
<blockquote>
<p>If <span class="math-container">$(\mathbb{R}^*,\cdot)$</span> is cyclic, then <span class="math-container">$\exists x \in \mathbb{R}^*$</span> such that <span class="math-container">$x \cdot x \neq x \in \mathbb{R}^*$</span>.</p>
</blockquote>
<p>The assumption written above is only not true for the neutral element on <span class="math-container">$\mathbb{R}^*$</span>. Are there any follow-ups that I should do to improve that sentence?</p>
<p>[1] <a href="https://math.stackexchange.com/questions/1491510/show-that-q-and-r-are-not-cyclic-groups">Show that (Q, +) and (R, +) are not cyclic groups.</a></p>
| Robert Shore | 640,080 | <p>Note that for any <span class="math-container">$n \in \Bbb N, -1$</span> has no <span class="math-container">$n$</span>th root in <span class="math-container">$\Bbb R^*$</span> except (when <span class="math-container">$n$</span> is odd) <span class="math-container">$-1$</span> itself. Therefore, if <span class="math-container">$-1 \neq x \in \Bbb R^*, -1 \notin \langle x \rangle$</span>. Since <span class="math-container">$-1$</span> also does not generate <span class="math-container">$\Bbb R^*, \Bbb R^*$</span> is not cyclic.</p>
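<p>The key step — that <span class="math-container">$-1$</span> is a power of <span class="math-container">$x$</span> only when <span class="math-container">$x=-1$</span> — can be illustrated numerically. A sketch (the helper name is mine, and a finite exponent range of course only samples the group <span class="math-container">$\langle x\rangle$</span>): for <span class="math-container">$x>0$</span> every power is positive, and for <span class="math-container">$x<0$</span> with <span class="math-container">$|x|\ne 1$</span> no power has absolute value <span class="math-container">$1$</span>.</p>

```python
def minus_one_among_powers(x, exponents=range(-20, 21)):
    # Does -1 appear among x^n for small nonzero integers n?
    return any(abs(x**n + 1.0) < 1e-12 for n in exponents if n != 0)

assert minus_one_among_powers(-1.0)        # <-1> = {1, -1} does contain -1
for x in [2.0, 0.5, -2.0, -0.5, 3.7, -3.7]:
    assert not minus_one_among_powers(x)   # no other sampled x reaches -1
print("only x = -1 has -1 among its sampled powers")
```
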
|
1,722,287 | <p>So far I know that when matrices A and B are multiplied, with B on the right, each column of the result, AB, is a linear combination of the columns of A, but I'm not sure what to do with this. </p>
| xxxxxxxxx | 252,194 | <p>The image of $B$ is a subspace of dimension $\mathrm{rank}(B)$. Left multiplication by $A$ transforms this into a new subspace, which is the image of $AB$ having dimension $\mathrm{rank}(AB)$; this linear transformation cannot increase the dimension of the subspace.</p>
<p>Put in other words, $\mathrm{rank}(B)$ is the dimension of the space generated by the rows of $B$, and $\mathrm{rank}(AB)$ is the dimension of the space generated by the rows of $AB$. The rows of $AB$ are linear combinations of the rows of $B$, so the dimension of this space is restricted by $\mathrm{rank}(B)$ (the columns of $AB$ are also linear combinations of the columns of $A$ as you mention, but this fact is less helpful here).</p>
|
1,722,287 | <p>So far I know that when matrices A and B are multiplied, with B on the right, each column of the result, AB, is a linear combination of the columns of A, but I'm not sure what to do with this. </p>
| copper.hat | 27,978 | <p>If $AB$ has rank $r$ then there are vectors $v_1,...,v_r$ such that
$ABv_1,...,AB v_r$ are linearly independent.</p>
<p>Hence $Bv_1,...,B v_r$ must be linearly independent (or an immediate contradiction), and so $B$ must have rank $\ge r$.</p>
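<p>Both arguments predict $\mathrm{rank}(AB) \le \min(\mathrm{rank}(A), \mathrm{rank}(B))$, which can be checked on a concrete example — a Python sketch with an exact rational row reduction (the helper functions are mine):</p>

```python
from fractions import Fraction

def rank(M):
    # Row-reduce over the rationals and count the pivots
    M = [[Fraction(x) for x in row] for row in M]
    r, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [4, 5, 6]]      # rank 2
B = [[1, 1], [1, 1], [1, 1]]    # rank 1
AB = matmul(A, B)
print(rank(A), rank(B), rank(AB))  # 2 1 1
assert rank(AB) <= min(rank(A), rank(B))
```
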
|
1,007,399 | <p>I came across following problem</p>
<blockquote>
<p>Evaluate $$\int\frac{1}{1+x^6} \,dx$$</p>
</blockquote>
<p>When I asked my teacher for a hint, he said to first evaluate</p>
<blockquote>
<p>$$\int\frac{1}{1+x^4} \,dx$$</p>
</blockquote>
<p>I've tried to factorize $1+x^6$ as</p>
<p>$$1+x^6=(x^2 + 1)(x^4 - x^2 + 1)$$
and then writing</p>
<p>$$I=\int\frac{1}{1+x^6} \,dx=\int\frac{1}{(x^2 + 1)(x^4 - x^2 + 1)} \,dx=\int\frac{1+x^2-x^2}{(x^2 + 1)(x^4 - x^2 + 1)} \,dx$$
$$I=\int\frac{1}{x^4 - x^2 + 1} \,dx-\int\frac{x^2}{(x^2 + 1)(x^4 - x^2 + 1)} \,dx$$</p>
<p>However $$x^4-x^2+1=\left(x^2-\frac12\right)^2+\frac{3}{4}$$
But I can't see how it helps</p>
<p>I've also tried to reverse engineer the <a href="http://www.wolframalpha.com/input/?i=integrate+%281%29%2F%281%2Bx%5E6%29" rel="noreferrer">solution given by Wolfram Alpha</a></p>
<p>And I need to have terms similar to<br>
$$\frac{x^2-1}{x^4-x^2+1} \quad , \quad \frac{1}{1+x^2} \quad , \quad \frac{1}{(x+c)^2+1}\quad , \quad \frac{1}{(x+c)^2+1}$$ in integrand, How can I transform my cute looking integrand into these huge terms?</p>
<p>Since in exams I will neither have access to WA nor time to reverse-engineer the solution, and moreover it does not seem intuitive, is there any way to solve this problem with some nice tricks or maybe substitutions?</p>
| Dylan | 135,643 | <p>Here's a nice "trick" my former professor taught me</p>
<p>$$ \int\frac{dx}{1+x^6} = \frac{1}{2} \int \frac{(1-x^2+x^4)+x^2+(1-x^4)}{(1+x^2)(1-x^2+x^4)} dx \\
= \frac{1}{2}\int \frac{dx}{1+x^2} + \frac{1}{2} \int \frac{x^2}{1+x^6} dx + \frac{1}{2} \int \frac{1-x^2}{1-x^2+x^4} dx \\
= \frac{1}{2}\int \frac{dx}{1+x^2} + \frac{1}{2} \int \frac{x^2}{1+x^6} dx - \frac{1}{2} \int \frac{1-\frac{1}{x^2}}{x^2-1+\frac{1}{x^2}} dx $$</p>
<p>The first integral is simply the arctangent of $x$. The second can be solved by substituting $u = x^3$. The third can be solved by substituting $t = x + \frac{1}{x}$</p>
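<p>Before integrating, the splitting itself can be verified numerically — a quick sketch of the identity used above (note $1-x^2+x^4 = (x^2-\tfrac12)^2+\tfrac34 > 0$, so there are no real poles):</p>

```python
# Numeric check of the splitting
#   1/(1+x^6) = (1/2) [ 1/(1+x^2) + x^2/(1+x^6) + (1-x^2)/(1-x^2+x^4) ]
for x in [-2.5, -1.0, -0.3, 0.0, 0.7, 1.0, 3.2]:
    lhs = 1 / (1 + x**6)
    rhs = 0.5 * (1 / (1 + x**2)
                 + x**2 / (1 + x**6)
                 + (1 - x**2) / (1 - x**2 + x**4))
    assert abs(lhs - rhs) < 1e-12, x
print("identity holds at all sample points")
```
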
|
1,262,322 | <p>Suppose that virus transmissions in 500 acts of intercourse are mutually independent events and that the probability of transmission in any one act is $\frac{1}{500}$. What is the probability of infection?</p>
<p>So I do know that one way to solve this is to find the probability of the complement of the event we are trying to solve. Letting $C_1,C_2,C_3,\ldots,C_{500}$ denote the events that transmission does not occur during encounters $1,2,\ldots,500$, the probability of no infection is:</p>
<p>$$P(C_1\cap C_2 \cap....\cap C_{500}) = (1 - (\frac{1}{500}))^{500} = 0.37$$</p>
<p>then to find the probability of infection I would just do : $1 - 0.37 = 0.63$</p>
<p>But my question is: how would I find the probability without using the complement? I would have thought that since the events are independent, each with probability $\frac{1}{500}$, I could obtain the value by multiplying the probabilities of the individual events, but that is not the case. What am I forgetting to consider if I wanted to calculate it this way? I'm asking more so to have a fuller understanding of both sides of the coin.</p>
<p>Edit: I think I may have figured out what I'm missing in my thinking. In the case of trying to figure out the probability of infection I have to take into account that infection could occur on the first transmission, or the second, or the third,...etc. Also transmission could occur on every interaction or on a few interactions but not all. So in each of these scenarios I would encounter some sort of combination of probabilities like $(\frac{499}{500})(\frac{499}{500})(\frac{1}{500})(\frac{499}{500})......(\frac{1}{500})$ as an example of one possible combination.</p>
| Karl | 203,893 | <p>You could compute the geometric series:
$$\frac{1}{500}+\left(\frac{499}{500}\right)\left(\frac{1}{500}\right)+\left(\frac{499}{500}\right)^2\left(\frac{1}{500}\right)+\cdots+\left(\frac{499}{500}\right)^{499}\left(\frac{1}{500}\right)\\=\frac{\left(\frac{1}{500}\right)\left(1-\left(\frac{499}{500}\right)^{500}\right)}{1-\left(\frac{499}{500}\right)} $$
<p>This is the sum of the probability of getting infected first time, on the second time up to the last time.</p>
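<p>The term-by-term sum, the closed form, and the complement computation from the question all agree numerically — a quick sketch:</p>

```python
p, n = 1/500, 500

term_sum   = sum((1 - p)**i * p for i in range(n))      # infection first on toss i+1
closed     = p * (1 - (1 - p)**n) / (1 - (1 - p))       # geometric-series formula
complement = 1 - (1 - p)**n                             # 1 - P(no infection)

assert abs(term_sum - closed) < 1e-12
assert abs(term_sum - complement) < 1e-12
print(round(term_sum, 2))  # 0.63
```
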
|
85,470 | <p>We decided to do secret Santa in our office. And this brought up a whole heap of problems that nobody could think of solutions for - bear with me here.. this is an important problem.</p>
<p>We have 4 people in our office - each with a partner that will be at our Christmas meal.</p>
<p>Steve,
Christine,
Mark,
Mary,
Ken,
Ann,
Paul(me),
Vicki</p>
<p>Desired outcome</p>
<blockquote>
<p>Nobody can know who is buying a present for anybody else. But we each
want to know who we are buying our present for before going to the
Christmas party. And we don't want to be buying presents for our partners.
Partners are not in the office.</p>
</blockquote>
<p>Obvious solution is to put all the names in the hat - go around the office and draw two cards.</p>
<p>And yes - sure enough I drew myself and Mark drew his partner. (we swapped)</p>
<p>With that information I could work out that Steve had a 1/3 chance of having Vicki (he didn't have himself or Christine, nor the two cards I had acquired, Ann or Mary), and I knew that Mark was buying my present. Unacceptable result.</p>
<p>Ken asked the question: "What are the chances that we will pick ourselves or our partner?"</p>
<p>So I had a stab at working that out.</p>
<p>First card drawn -> 2/8
Second card drawn -> 12/56</p>
<p>Adding them together makes 26/56 = 13/28, i.e. roughly 1/2.</p>
<p>i.e. This method won't ever work... half chances of drawing somebody you know means we'll be drawing all year before we get a solution that works.</p>
<p>My first thought was that we attach two cards to our backs... put on blindfolds and stumble around in the dark grabbing the first cards we came across... However, this is a little impractical and I'm pretty certain we'd end up knowing who grabbed what anyway.</p>
<p>Does anybody have a solution for distributing cards that results in our desired outcome?</p>
<hr>
<p><em><strong>I'd prefer a solution without a third party..</strong></em></p>
| Daniel | 20,082 | <p>You take your own name and your partner's name out of the hat. You then draw a card, which you keep. Hand the hat to your partner. They then draw a card, which they keep. You can now put your own name and your partner's name back in the hat. Hand the hat to someone else. Rinse and repeat.</p>
|
85,470 | <p>We decided to do secret Santa in our office. And this brought up a whole heap of problems that nobody could think of solutions for - bear with me here.. this is an important problem.</p>
<p>We have 4 people in our office - each with a partner that will be at our Christmas meal.</p>
<p>Steve,
Christine,
Mark,
Mary,
Ken,
Ann,
Paul(me),
Vicki</p>
<p>Desired outcome</p>
<blockquote>
<p>Nobody can know who is buying a present for anybody else. But we each
want to know who we are buying our present for before going to the
Christmas party. And we don't want to be buying presents for our partners.
Partners are not in the office.</p>
</blockquote>
<p>Obvious solution is to put all the names in the hat - go around the office and draw two cards.</p>
<p>And yes - sure enough I drew myself and Mark drew his partner. (we swapped)</p>
<p>With that information I could work out that Steve had a 1/3 chance of having Vicki(he didn't have himself or Christine - nor the two cards I had acquired Ann or Mary) and I knew that Mark was buying my present. Unacceptable result.</p>
<p>Ken asked the question: "What are the chances that we will pick ourselves or our partner?"</p>
<p>So I had a stab at working that out.</p>
<p>First card drawn -> 2/8
Second card drawn -> 12/56</p>
<p>Adding them together makes 28/56 i.e. 1/2.</p>
<p>i.e. This method won't ever work... half chances of drawing somebody you know means we'll be drawing all year before we get a solution that works.</p>
<p>My first thought was that we attach two cards to our backs... put on blindfolds and stumble around in the dark grabbing the first cards we came across... However this is a little unpractical and I'm pretty certain we'd end up knowing who grabbed what anyway.</p>
<p>Does anybody have a solution for distributing cards that results in our desired outcome?</p>
<hr>
<p><em><strong>I'd prefer a solution without a third party.</strong></em></p>
| Wolfgang Brehm | 223,307 | <p><strong>Edit:</strong> I just realized this method does not produce all possible derangements. Each pairing is random, but not as independent as it could be, because this method forces the derangement to be a single cycle.</p>
<p>Dr. Hannah Fry describes in her Book "The Indisputable Existence of Santa Claus" the following method:</p>
<p>You make as many cards as there are people. Each card has the same number twice written on it and each card has a different number. You then turn the cards around, shuffle and arrange them in a line. Then you cut them in half so that you now have two lines of cards with equal numbers each facing each other. Then, still hidden, you shift one of the lines by one. Each person can now pick one pair of cards each from one of the lines. The card from the first line tells them which number they are and the card from the second line tells them whom they are supposed to give a gift to. The only thing that remains is to tell everyone what number they are, best by filling in a list with a mapping from number to name.</p>
<p>For example:</p>
<pre><code>1.
|1| |2| |3| |4| |5| |6|
|1| |2| |3| |4| |5| |6|
2.
|3| |6| |4| |2| |5| |1|
|3| |6| |4| |2| |5| |1|
3.
|3| |6| |4| |2| |5| |1|
|1| |3| |6| |4| |2| |5|
</code></pre>
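<p>The shift in step 3 is what guarantees nobody draws their own number. A minimal sketch of the procedure (names and numbering are assumptions for illustration; as noted in the edit, this forces one big gift-giving cycle):</p>

```python
import random

def secret_santa(names):
    """Fry's card method: shuffle numbered cards, split into two equal lines,
    shift one line by one, and pair the columns. Each person gives to the
    next person along a single random cycle, so nobody draws themselves."""
    order = list(range(len(names)))
    random.shuffle(order)                  # step 2: shuffle the cards
    shifted = order[1:] + order[:1]        # step 3: shift one line by one
    return {names[g]: names[r] for g, r in zip(order, shifted)}

assignment = secret_santa(["Steve", "Christine", "Mark", "Mary",
                           "Ken", "Ann", "Paul", "Vicki"])
```

<p>Note this only rules out self-draws; the partner constraint from the question would still need e.g. rejection sampling on top.</p>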
|
1,798,261 | <p>What is a multilinear coefficient? I have heard the term a couple of times and tried to google it, but all I get is multiple linear regression. I am confused at this point.</p>
| Sri-Amirthan Theivendran | 302,692 | <p>The subspace is the null space of the matrix
$$\begin{bmatrix}
1&1&1
\end{bmatrix}$$
and hence is a $2$ dimensional subspace by the rank nullity theorem. One can check that $(1,-1,0)^T$ and $(0,1,-1)^T$ are two linearly independent vectors in the null space so that they form a basis for $W$.</p>
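<p>A quick check of the claim (a sketch, with vectors as plain tuples):</p>

```python
v1, v2 = (1, -1, 0), (0, 1, -1)
row = (1, 1, 1)

# both vectors lie in the null space of the row [1 1 1]
in_null = (sum(a * b for a, b in zip(row, v1)) == 0 and
           sum(a * b for a, b in zip(row, v2)) == 0)

# a nonzero cross product means v1 and v2 are linearly independent
cross = (v1[1] * v2[2] - v1[2] * v2[1],
         v1[2] * v2[0] - v1[0] * v2[2],
         v1[0] * v2[1] - v1[1] * v2[0])
```

<p>As a bonus sanity check, the cross product comes out to $(1,1,1)$ — the normal vector of the plane $x_1+x_2+x_3=0$, as it should.</p>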
|
685,682 | <p>Should I worry about the following appearance of circularity in ZFC set theory? In constructing the universe of sets, you start with the empty set and then keep taking the power set over and over.</p>
<p>But this process continues through all the ordinals. The ordinals are there in the beginning to allow the construction of the universe, but in the end we discover that ordinals are certain kinds of sets.</p>
<p>What gives?</p>
<p>Dave</p>
| Asaf Karagila | 622 | <p>There is no circularity.</p>
<p>When a universe is given, all the sets there exist, including the ordinals. However, when we are given a universe of $\sf ZFC$, we can prove that the sets $V_\alpha$ form a strictly increasing and definable hierarchy, and every set is a member of some $V_\alpha$.</p>
<p>On the other hand, though, suppose you are given a universe where all the axioms of $\sf ZFC$ hold except the axiom of regularity, in that case you can actually say that you construct the universe of $\sf ZFC$ using the $V_\alpha$'s, but these sets and the ordinals already exist, you just show that $V=\bigcup_{\alpha\in\sf Ord}V_\alpha$ is definable in the given universe, and that it satisfies all the axioms of $\sf ZFC$.</p>
|
685,682 | <p>Should I worry about the following appearance of circularity in ZFC set theory? In constructing the universe of sets, you start with the empty set and then keep taking the power set over and over.</p>
<p>But this process continues through all the ordinals. The ordinals are there in the beginning to allow the construction of the universe, but in the end we discover that ordinals are certain kinds of sets.</p>
<p>What gives?</p>
<p>Dave</p>
| mbsq | 20,990 | <p>Logically, you just prove things about the cumulative hierarchy from the ZFC axioms and it's not circular.</p>
<p>Conceptually, the way I like to think of it, the iterated powerset operation and the ordinals that you use to iterate build each other as you go. It's only by applying powerset to $V_\omega$ that we get an uncountable set. Using the well-orderings that you have, Replacement then translates those into ordinals to keep iterating along. As you go, you build bigger and bigger cardinalities and this allows the construction to continue forever.</p>
|
3,534,254 | <blockquote>
<p><span class="math-container">$r\gt0$</span>, Compute
<span class="math-container">$$\int_0^{2\pi}\frac{\cos^2\theta }{ |re^{i\theta} -z|^2}d\theta$$</span>
when <span class="math-container">$|z|\ne r$</span></p>
</blockquote>
<p>The problem is related to Poisson kernel and harmonic function, but I don't know how to start, <span class="math-container">$\cos^2\theta =1/2 (1+\cos 2\theta )$</span>.</p>
| Quanto | 686,284 | <p>Express <span class="math-container">$z$</span> also in polar form, <span class="math-container">$z = se^{i\alpha}$</span>. Then, </p>
<p><span class="math-container">$$|re^{i\theta} -z|^2=r^2+s^2-2rs\cos(\theta-\alpha)$$</span></p>
<p>and, with the variable change <span class="math-container">$t= \theta - \alpha$</span>, the integral reads,</p>
<p><span class="math-container">$$I=\int_0^{2\pi}\frac{\cos^2\theta }{ |re^{i\theta} -z|^2}d\theta
=\int_0^{2\pi}\frac{\cos^2(t+\alpha)}{ r^2+s^2-2rs\cos t}dt$$</span></p>
<p>Expand the numerator,
<span class="math-container">$$\cos^2(t+\alpha)= \frac12+\frac12 \cos2\alpha \cos2t+\frac12\sin2\alpha\sin 2t$$</span></p>
<p>to decompose the integral into three manageable pieces,
<span class="math-container">$$I=I_1+I_2+I_3\tag 1$$</span></p>
<p>where </p>
<p><span class="math-container">$$I_1=\frac12\int_0^{2\pi}\frac{ dt}{ r^2+s^2-2rs\cos t}=\frac{\pi}{|r^2-s^2|}
$$</span></p>
<p><span class="math-container">$$I_2=\frac12\int_0^{2\pi}\frac{\cos2\alpha\cos 2t\> dt}{ r^2+s^2-2rs\cos t}
=\frac {\pi\cos2\alpha}{2r^2s^2|r^2-s^2|}\left(r^4+s^4-|r^4-s^4|\right)$$</span></p>
<p><span class="math-container">$$I_3= \sin2\alpha\int_0^{2\pi}\frac{\cos t\sin tdt}{ r^2+s^2-2rs\cos t}
=0$$</span></p>
<p>(see derivations at end.) Plug the results into (1) to obtain</p>
<p><span class="math-container">$$I=\int_0^{2\pi}\frac{\cos^2\theta d\theta }{ |re^{i\theta} -z|^2}
=\frac{\pi}{|r^2-s^2|}\left( 1+\cos2\alpha\frac{r^4+s^4-|r^4-s^4|}{2r^2s^2}\right)$$</span></p>
<p>Note that the results for <span class="math-container">$r>s$</span> and <span class="math-container">$r<s$</span> are respectively,</p>
<p><span class="math-container">$$I_{r>s} =\frac{\pi}{r^2-s^2}\left( 1+\frac{s^2}{r^2}\cos2\alpha \right),\>\>\>\>\>
I_{r<s} =\frac{\pi}{s^2-r^2}\left( 1+\frac{r^2}{s^2}\cos2\alpha \right)$$</span></p>
<hr>
<p>PS: Use <span class="math-container">$u = \tan\frac t2$</span> to integrate <span class="math-container">$I_1$</span> and use the result in <span class="math-container">$I_2$</span>,</p>
<p><span class="math-container">$$I_1=\frac12\int_0^{2\pi}\frac{ dt}{ r^2+s^2-2rs\cos t}
=\int_0^{\infty}\frac{2\,du}{ (r-s)^2+(r+s)^2u^2}$$</span>
<span class="math-container">$$=\frac{2}{r^2-s^2}\tan^{-1}\left(\frac{(r+s)u}{r-s}\right)\bigg|_0^\infty
=\frac{\pi}{|r^2-s^2|}
$$</span></p>
<p><span class="math-container">$$I_2=\frac12\int_0^{2\pi}\frac{\cos2\alpha\cos2t\> dt}{ r^2+s^2-2rs\cos t}
=\cos2\alpha\left(\int_0^{2\pi}\frac{\cos^2t\> dt}{ r^2+s^2-2rs\cos t}-I_1\right)$$</span>
<span class="math-container">$$=\cos2\alpha\left[\left(\frac{(r^2+s^2)^2}{2r^2s^2}-1\right) I_1 -\frac{1}{4r^2s^2} \int_0^{2\pi} (r^2+s^2-2rs\cos t)dt\right]$$</span></p>
<p><span class="math-container">$$=\cos2\alpha\left[\frac{r^4+s^4}{2r^2s^2}\frac{\pi}{|r^2-s^2|}
-\frac{(r^2+s^2)\pi}{2r^2s^2}\right]$$</span></p>
<p><span class="math-container">$$=\frac {\pi\cos2\alpha}{2r^2s^2|r^2-s^2|}\left(r^4+s^4-|r^4-s^4|\right)\tag 3
$$</span></p>
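<p>The closed form is easy to sanity-check numerically (a sketch; the midpoint rule converges extremely fast for smooth periodic integrands):</p>

```python
import math

def I_numeric(r, s, alpha, m=20000):
    # midpoint-rule approximation of the integral over [0, 2*pi]
    h = 2 * math.pi / m
    return h * sum(math.cos(t) ** 2 / (r * r + s * s - 2 * r * s * math.cos(t - alpha))
                   for t in (h * (k + 0.5) for k in range(m)))

def I_closed(r, s, alpha):
    # the answer's formula, covering both r > s and r < s
    big, small = max(r, s), min(r, s)
    return math.pi / (big ** 2 - small ** 2) * (1 + (small / big) ** 2 * math.cos(2 * alpha))
```
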
|
2,403,741 | <blockquote>
<p>Solve for $\theta$ the following equation.
$$\sqrt {3} \cos \theta - 3 \sin \theta = 4 \sin 2\theta \cos 3\theta.$$</p>
</blockquote>
<p>I tried writing sin and cos expansions but it is becoming too long.Please help me.</p>
| John Hughes | 114,036 | <p>Look at positive integer triples whose product is 36:</p>
<pre><code>1 1 36
1 2 18
1 3 12
1 4 9
1 6 6
2 2 9
2 3 6
3 3 4
</code></pre>
<p>For each, compute the sum:</p>
<pre><code>1 1 36 -> 38
1 2 18 -> 21
1 3 12 -> 16
1 4 9 -> 14
1 6 6 -> 13
2 2 9 -> 13
2 3 6 -> 11
3 3 4 -> 10
</code></pre>
<p>The only sum that appears twice is 13. For any other sum, you'd know the actual factors, but for sum 13, you cannot know. </p>
<p>I think your main problem here was failing to read the problem carefully, but I cannot be certain of that. </p>
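<p>The case analysis above is small enough to confirm by brute force (a sketch):</p>

```python
from itertools import combinations_with_replacement

# all unordered positive-integer triples with product 36
triples = [t for t in combinations_with_replacement(range(1, 37), 3)
           if t[0] * t[1] * t[2] == 36]
sums = [sum(t) for t in triples]
# sums that fail to identify the triple uniquely
ambiguous = {s for s in sums if sums.count(s) > 1}
```
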
|
76,558 | <p>Reading books and papers on complexity theory, I am struck by the extreme degree to which proofs are stated in an intuitive, hand-wavy way. The alternative is to give a lot of details about the coding of complicated Turing machines
which simulate each other. Even proofs which basically take one line for a person to understand would take pages and pages if you actually specified the Turing machines.</p>
<p>Have formal verifications of complexity theory results been written? If so, to what extent does this require coding all the horrible Turing Machine details and how much does this blow up the proofs by?</p>
| Timothy Chow | 3,106 | <p>The same comment and question could be applied to just about any area of mathematics. To my knowledge, nobody has really worked on formalizing computational complexity theory. Formalization is still hard work; an experienced person will take about a week to formalize about a page of an undergraduate textbook. Therefore, people have mostly focused on "flashy" theorems, for which you can get some kind of recognition for the work that you put into writing up the formal proof. Formalizing routine undergraduate material is a time-consuming task for which almost nobody will thank you (or pay you).</p>
<p>If you do write a formal proof, then of course you need to give all the details. However, sometimes you can do better than the most naive approach of literally mimicking the original human proof. For example, when Gonthier formalized the proof of the four-color theorem, he figured out ways to slicken the proof and make it easier to formalize, as he explained in his <a href="http://www.ams.org/notices/200811/tx081101382p.pdf">Notices article</a>. Most likely, if a serious effort were made to formalize large chunks of computational complexity theory, shortcuts would be devised so that (to address your specific question) Cook- or Karp-reductions from one problem to another could be coded up in a more human-friendly form, with the computer automating certain parts of the process. This is not a trivial matter, though, which is why writing formal proofs is still too tedious for most mathematicians to want to contemplate.</p>
|
76,558 | <p>Reading books and papers on complexity theory, I am struck by the extreme degree to which proofs are stated in an intuitive, hand-wavy way. The alternative is to give a lot of details about the coding of complicated Turing machines
which simulate each other. Even proofs which basically take one line for a person to understand would take pages and pages if you actually specified the Turing machines.</p>
<p>Have formal verifications of complexity theory results been written? If so, to what extent does this require coding all the horrible Turing Machine details and how much does this blow up the proofs by?</p>
| Kaveh | 7,507 | <p>Generally complexity theorist prefer to use as little formalism as possible. $\mathsf{IP}=\mathsf{PSpace}$ is on the list <a href="http://www.cs.ru.nl/~freek/100/" rel="nofollow">here</a> but it doesn't seem that it has been verified with a proof assistant. </p>
<p>I doubt that complexity theorists would be interested in writing formal proofs verifiable by proof assistants unless there is very good incentive to do it. Making proofs more formal might make it easier for proof assistants to verify them, but it makes them much less readable by humans.</p>
<p>But if you want more detailed analysis of the proofs, some results and theorems have been studied in bounded reverse mathematics, you can find more on this in the papers by Steve Cook and some of his students, in particular Nguyen's thesis. Also Alexander Razborov has a few papers where he studies the required theories to prove complexity results like switching lemma and Smolensky's proof of Razborov-Smolensky.</p>
|
978,384 | <p>The following picture is constructed by connecting each corner of a square with the midpoint of a side from the square that is not adjacent to the corner. These lines create the following red octagon:</p>
<p><img src="https://i.stack.imgur.com/PZyGa.jpg" alt="enter image description here"></p>
<p>The question is, what is the ratio between the area of the octagon and the area of the square. One is supposed to find the solution without a ruler. </p>
<p>By removing some lines, I find it easy to see that the ratio between the yellow area and the square is 1/4. But I am not sure if this helps.</p>
<p><img src="https://i.stack.imgur.com/lpIZF.jpg" alt="enter image description here"></p>
| Shadow | 256,351 | <p>Let us name all the points on the outer square like this:</p>
<p><a href="https://i.stack.imgur.com/PjeU9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PjeU9.png" alt="enter image description here"></a></p>
<p>So AB, BC, CD, DA are sides of the square, each of length say $\mathrm{a}$, and E, F, G and H are midpoints of the sides of the square. Now consider line segments AF, GC, BH, ED. AF and GC intersect BH and ED to form another square.</p>
<p><a href="https://i.stack.imgur.com/jPM1C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jPM1C.png" alt="enter image description here"></a></p>
<p>The area of this square is $\frac{a^2}{5}$ <a href="https://math.stackexchange.com/questions/2518341/area-of-a-square-inside-a-square-created-by-connecting-point-opposite-midpoint">as proved from answers to this question</a>. So the side of this square is $\frac{a}{\sqrt{5}}$. The sides of this square act as the span of the octagon (S). (See <a href="https://en.wikipedia.org/wiki/Octagon" rel="nofollow noreferrer">octagon</a>.)</p>
<p><a href="https://i.stack.imgur.com/a297o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a297o.png" alt="enter image description here"></a> Line HC, AE, GD, FB not drawn for clarity.</p>
<p>Area of octagon in terms of span = $\mathrm{2(\sqrt{2}-1)S^2 = 2(\sqrt{2}-1)\frac{a^2}{5} = \frac{2(\sqrt{2}-1)}{5}{a^2}}$</p>
<p>Therefore, ratio between the area of the octagon and the area of the square = $\mathrm{\frac{2(\sqrt{2}-1)}{5}}$</p>
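<p>As an independent cross-check (a sketch; a side-6 square is assumed so every corner-to-midpoint line has convenient integer endpoints), one can intersect consecutive lines exactly and apply the shoelace formula:</p>

```python
from fractions import Fraction

def intersect(p, q):
    # exact intersection of the line through p[0], p[1] with the line through q[0], q[1]
    (x1, y1), (x2, y2) = p
    (x3, y3), (x4, y4) = q
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return (Fraction(a * (x3 - x4) - (x1 - x2) * b, d),
            Fraction(a * (y3 - y4) - (y1 - y2) * b, d))

# side-6 square; each segment joins a corner to the midpoint of a non-adjacent
# side, listed so that consecutive segments meet at consecutive octagon vertices
lines = [((6, 0), (3, 6)), ((0, 6), (6, 3)), ((0, 3), (6, 6)), ((0, 0), (3, 6)),
         ((0, 6), (3, 0)), ((6, 0), (0, 3)), ((0, 0), (6, 3)), ((3, 0), (6, 6))]
verts = [intersect(lines[i], lines[(i + 1) % 8]) for i in range(8)]
area = abs(sum(verts[i][0] * verts[(i + 1) % 8][1]
               - verts[(i + 1) % 8][0] * verts[i][1] for i in range(8))) / 2
ratio = area / 36
```

<p>This gives area 6 on a square of area 36, i.e. an exact ratio of $1/6\approx 0.1667$, slightly different from $\frac{2(\sqrt{2}-1)}{5}\approx 0.1657$: the octagon's sides alternate in length, so the regular-octagon span formula is only an approximation here.</p>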
|
754,888 | <p>The letters that can be used are A, I, L, S, T. </p>
<p>The word must start and end with a consonant. Exactly two vowels must be used. The vowels can't be adjacent.</p>
| MJD | 25,554 | <p><a href="http://cr.yp.to/papers/powers-ams.pdf" rel="nofollow">This paper</a> of Daniel J. Bernstein, “Detecting Perfect Powers in Essentially Linear Time” describes a fast algorithm which does exactly what you asked for: given a positive integer $n$, it finds positive integers $x$ and $k$ with $n = x^k$ and $k$ as large as possible. It does so in an amount of time that is (almost) proportional to the number of digits in $n$, which is almost as fast as one could hope for.</p>
<p>Note that this does not require factoring $n$, which would take much longer.</p>
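<p>For contrast with Bernstein's essentially-linear-time method, a naive sketch of the same task — find the largest $k$ with $n = x^k$ — is straightforward, if much slower:</p>

```python
def perfect_power(n):
    """Return (x, k) with n == x**k and k as large as possible, for n >= 2.
    Naive trial of every exponent -- nothing like Bernstein's algorithm."""
    for k in range(n.bit_length(), 1, -1):   # x >= 2 forces k <= log2(n)
        x = round(n ** (1.0 / k))
        for cand in (x - 1, x, x + 1):       # guard against float rounding
            if cand >= 2 and cand ** k == n:
                return cand, k
    return n, 1
```

<p>The float root-guess only works for moderate $n$; Bernstein's paper handles arbitrary sizes with careful fixed-precision arithmetic.</p>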
|
2,797,902 | <p>AFAIK, every mathematical theory (by which I mean e.g. the theory of groups, topologies, or vector spaces), started out (historically speaking) by formulating a set of axioms that generalize a specific structure, or a specific set of structures. </p>
<p>For example, when people think of a “field” they AFAIK usually think of $\mathbb R$, or $\mathbb C$. A topology started out as a concept defined on $\mathbb R^n$ if I’m not mistaken. </p>
<p>But I’ve also seen cases where a certain structure has a natural topological structure, such as certain sets of propositions in first order logic. As far as I know, the people who formulated the axioms of a topology had no idea of this application. <strong>And the topological structure of a set of FOL statements is certainly conceptually vastly different from one on $\mathbb R^n$, certainly not two structures I would have expected to have such a deep commonality.</strong></p>
<p><strong>I would like to make a list of examples of mathematical structures that</strong></p>
<ol>
<li><p>Are interesting and well-behaved structures (e.g. not mere pathological counter examples)</p></li>
<li><p>satisfy the axioms of some mathematical theory in an interesting and nontrivial way,</p></li>
<li><p>But whose emergence is (conceptually/historically) very different from the structure of which those axioms were originally intended as a generalization.</p></li>
</ol>
| Ethan Bolker | 72,858 | <p>Several significant examples from mathematical physics:</p>
<ul>
<li><p>The usefulness of Hilbert space in the formalization of quantum
mechanics.</p></li>
<li><p>Riemannian manifolds as the appropriate language for general
relativity.</p></li>
<li><p><a href="https://en.wikipedia.org/wiki/Calabi%E2%80%93Yau_manifold" rel="nofollow noreferrer">Calabi -Yau manifolds</a> come up in string theory.</p></li>
</ul>
<p>Calculus for Newtonian mechanics doesn't count because the wish to formalize mechanics was much of what led Newton to invent calculus.</p>
|
2,797,902 | <p>AFAIK, every mathematical theory (by which I mean e.g. the theory of groups, topologies, or vector spaces), started out (historically speaking) by formulating a set of axioms that generalize a specific structure, or a specific set of structures. </p>
<p>For example, when people think of a “field” they AFAIK usually think of $\mathbb R$, or $\mathbb C$. A topology started out as a concept defined on $\mathbb R^n$ if I’m not mistaken. </p>
<p>But I’ve also seen cases where a certain structure has a natural topological structure, such as certain sets of propositions in first order logic. As far as I know, the people who formulated the axioms of a topology had no idea of this application. <strong>And the topological structure of a set of FOL statements is certainly conceptually vastly different from one on $\mathbb R^n$, certainly not two structures I would have expected to have such a deep commonality.</strong></p>
<p><strong>I would like to make a list of examples of mathematical structures that</strong></p>
<ol>
<li><p>Are interesting and well-behaved structures (e.g. not mere pathological counter examples)</p></li>
<li><p>satisfy the axioms of some mathematical theory in an interesting and nontrivial way,</p></li>
<li><p>But whose emergence is (conceptually/historically) very different from the structure of which those axioms were originally intended as a generalization.</p></li>
</ol>
| Ethan Bolker | 72,858 | <p>Number theory (the "structure of the integers") had no applications for years - a fact that particularly pleased G. H. Hardy. </p>
<p>Now it's central to cryptography: prime factorization, discrete logarithms, elliptic curves.</p>
<p>See <a href="https://crypto.stackexchange.com/questions/59537/how-come-public-key-cryptography-wasnt-discovered-earlier">https://crypto.stackexchange.com/questions/59537/how-come-public-key-cryptography-wasnt-discovered-earlier</a></p>
<p>(Not sure this counts.)</p>
|
2,623,324 | <p>Assume that the measure space is finite for this to make sense. Also, we know that $L^p$ spaces satisfy log convexity, that is -
$$\|f\|_r \leq \|f\|_p^\theta \|f\|_q^{1-\theta}$$
where $\frac{1}{r}=\frac{\theta}{p} +\frac{1-\theta}{q}$.
The text which I am following says 'Indeed this is trivial when $q=\infty$, and the general case then follows by convexity'. I understand that it is true when $q=\infty$, however I am unable to use that it is true for $q=\infty$ for proving the general case. I have found a proof which uses log-convexity and the fact that $\lim_{p\rightarrow 0}\|f\|_p^p=\mu(\text{supp}f)$. Is there some way that I can do this by using that it is true for $q=\infty$?</p>
| Atmos | 516,446 | <p>Use that
$$
\left(n+1\right)^{1/\sqrt{n}}=e^{\left(1/\sqrt{n}\right)\ln\left(1+n\right)}
$$
And
$$
\ln\left(1+n\right)=\ln\left(n\right)+\ln\left(1+\frac{1}{n}\right)
$$
So it becomes
$$
\left(n+1\right)^{1/\sqrt{n}}=e^{\frac{\ln(n)}{\sqrt{n}}+\frac{\ln\left(1+\frac{1}{n}\right)}{\sqrt{n}}}
$$
Using exponential properties
$$
\left(n+1\right)^{1/\sqrt{n}}=e^{\frac{\ln(n)}{\sqrt{n}}}\times e^{\frac{\ln\left(1+\frac{1}{n}\right)}{\sqrt{n}}}
$$
First term tends to $e^{0}=1$ because the power $\sqrt{n}$ is " stronger " than logarithm. And for the second use that
$$
\ln\left(1+\frac{1}{n}\right)=\frac{1}{n}+o\left(\frac{1}{n}\right)
$$
Hence
$$
\frac{\displaystyle \ln\left(1+\frac{1}{n}\right)}{\sqrt{n}}=\frac{1}{n^{3/2}}+o\left(\frac{1}{n^{3/2}}\right)
$$
Hence</p>
<blockquote>
<p>$$
\left(n+1\right)^{1/\sqrt{n}}\underset{(+\infty)}{\sim}e^{1/n^{3/2}}\underset{n \rightarrow +\infty}{\rightarrow}1
$$</p>
</blockquote>
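<p>A quick numerical illustration of the convergence (a sketch):</p>

```python
def a(n):
    # (n + 1)^(1 / sqrt(n))
    return (n + 1) ** (n ** -0.5)

samples = [a(10 ** k) for k in (2, 4, 6, 8)]   # decreases toward 1
```
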
|
2,449,581 | <p>There is a brick wall that forms a rough triangle shape and at each level, the amount of bricks used is two bricks less than the previous layer. Is there a formula we can use to calculate the amount of bricks used in the wall, given the amount of bricks at the bottom and top levels?</p>
| Alex F | 485,563 | <p>$S = \frac{a_1 + a_n}{2}\cdot\frac{a_n-a_1 + 2}{2}$</p>
<p>The sum of the elements of an arithmetic progression is $S = (a_1+a_n)\cdot n/2$. In case you do not know what $n$ is equal to, you can use the following formula:<br>
$S = \frac{a_1 + a_n}{2}\cdot\frac{a_n-a_1 + x}{x}$, where $a_1$ is the first element, $a_n$ is the last element, and $x$ is the "step", the number by which you increment each element of the progression in order to get its next element.</p>
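<p>In code the formula reads (a sketch; <code>step</code> is the per-layer decrease, 2 for the wall in the question):</p>

```python
def brick_total(bottom, top, step=2):
    """Total bricks when layers shrink by `step` from `bottom` up to `top`."""
    layers = (bottom - top) // step + 1     # n in S = (a_1 + a_n) * n / 2
    return (bottom + top) * layers // 2     # arithmetic-series sum
```

<p>For example, a wall with layers 10, 8, 6, 4, 2 gives <code>brick_total(10, 2) == 30</code>.</p>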
|
4,218,729 | <p>You have a database of <span class="math-container">$25,000$</span> potential criminals. The probability that this database includes the art thief is <span class="math-container">$0.1$</span>. In a stroke of luck, you found a DNA sample of this thief from the crime scene. You compare this sample with a database of <span class="math-container">$25,000$</span> men. And lo and behold, a match is found! You are well aware that DNA matches are not always perfect: if you pick two different persons at random, the chance that their DNA samples would match using the current testing techniques is <span class="math-container">$1$</span> in <span class="math-container">$10,000$</span>. What's the probability that the database includes the art thief, given that a DNA match has been found?</p>
<p>I'm sure this question uses Bayes' theorem where <span class="math-container">$A=$</span> the database includes the art thief, <span class="math-container">$B=$</span> a DNA match has been found.
I need to find <span class="math-container">$P(A|B)= P(B|A)*P(A)/P(B)$</span>
to calculate <span class="math-container">$P(B)$</span>, there are different cases,
1). the database does include the thief, and a person matches with the DNA from the scene.
2). the database doesn't include the thief, but a person matches with the DNA.</p>
<p>I'm not really sure how to calculate 1) and 2). Please help me.</p>
<p>I also want to know if the probability of two people matching is <span class="math-container">$1/10000$</span>, is the probability of two people not matching <span class="math-container">$9999/10000$</span>?</p>
| lonza leggiera | 632,373 | <p>Given the wording of the question, I think your formulation of the problem is reasonable. However you're not really given enough information to evaluate <span class="math-container">$\ P(B)\ $</span> properly. Was the sample from the crime scene merely tested against random entries in the database <em>until</em> a match was found, for instance, or was it tested against <em>every</em> entry in the database? In the latter case, <em>the exact number</em> of matches found is vital information which should be used to evaluate the posterior probability that the thief's DNA is in the database. Since you're not given that information, I think you can reasonably assume that something like the former procedure was used. In that case, <span class="math-container">$\ B=$</span> <em>at least</em> one entry in the database matches the thief's DNA.</p>
<p>If you also assume that when the thief's DNA is in the database it is certain to match the DNA from the crime scene, then you have
<span class="math-container">$$
P\big(B\,\big|A\big)=1\ .
$$</span>
On the other hand, when the thief's DNA is not in the database, each entry in the database presumably has an independent probability of <span class="math-container">$\ \frac{1}{10000}\ $</span> of matching the DNA from the crime scene, and a probability of <span class="math-container">$\ \frac{9999}{10000}\ $</span> of not matching it. Therefore,
<span class="math-container">\begin{align}
P\big(B\,\big|A^c\big)&=1-P\big(B^c\,\big|A^c\big)\\
&=1-\Big(\frac{9999}{10000}\Big)^{25000}\\
&\approx0.918\ ,\\
P(B)&=P(B\,|A)P(A)+P\big(B\,|A^c\big)P\big(A^c\big)\\
&\approx1\times0.1+0.918\times0.9\\
&=0.9262\ ,\\
P(A|B)&=\frac{P(B\,|A)P(A)}{P(B)}\\
&\approx\frac{0.1}{0.9262}\\
&\approx{0.108}\ .
\end{align}</span></p>
<p>For completeness, here is the calculation for the case when the thief's DNA was tested against the whole database, and exactly <span class="math-container">$\ n\ $</span> matches were found. Call this event <span class="math-container">$\ B_n\ $</span>.</p>
<p>If the thief's DNA profile is in the database, then the probability that exactly <span class="math-container">$\ n\ge1\ $</span> matches will be found is just the probability that exactly <span class="math-container">$\ n-1\ $</span> matches will be found with the <span class="math-container">$\ 24999\ $</span> other potential criminals in the database. Thus,
<span class="math-container">$$
P\big(B_n\big|A\big)={24999\choose n-1}\frac{1}{10000^{n-1}}\Big(\frac{9999}{10000}\Big)^{25000-n}\ .
$$</span>
If the thief's DNA profile is not in the database, then the probability that exactly <span class="math-container">$\ n\ge1\ $</span> matches will be found is just the probability that exactly <span class="math-container">$\ n\ $</span> matches will be found with the <span class="math-container">$\ 25000\ $</span> potential criminals in the database (none of whom is the thief). Thus
<span class="math-container">$$
P\big(B_n\big|A^c\big)={25000\choose n}\frac{1}{10000^n}\Big(\frac{9999}{10000}\Big)^{25000-n}\ .
$$</span>
With a little bit of elementary arithmetic, it follows that
<span class="math-container">\begin{align}
\frac{P\big(B_n\big|A\big)}{P\big(B_n\big|A^c\big)}&=\frac{n}{2.5}\ ,\\
\frac{P\big(B_n\big|A\big)P(A)}{P\big(B_n\big|A^c\big)P\big(A^c\big)}&=\frac{n}{9\times2.5}\\
&=\frac{n}{22.5}\ ,
\end{align}</span>
and hence
<span class="math-container">\begin{align}
P\big(A\,\big|B_n\big)&=\frac{P\big(B_n\,\big|A\big)P(A)}{P\big(B_n\big)}\\
&=\frac{P\big(B_n\,\big|A\big)P(A)}{P\big(B_n\,\big|A\big)P(A)+P\big(B_n\,\big|A^c\big)P(A^c)}\\
&=\frac{P\big(B_n\big|A\big)P(A)}{P\big(B_n\big|A^c\big)P\big(A^c\big)}\Bigg(\frac{P\big(B_n\big|A\big)P(A)}{P\big(B_n\big|A^c\big)P\big(A^c\big)}+1\Bigg)^{-1}\\
&=\frac{n}{22.5+n}\ .
\end{align}</span>
For <span class="math-container">$\ n=0\ $</span>, <span class="math-container">$\ P\big(B_0\,\big|\,A\big)=0\ $</span>, and <span class="math-container">$\ P\big(B_0\big)\ne0\ $</span>, so <span class="math-container">$\ P\big(A\,\big|\,B_0\big)=0\ $</span> also.</p>
|
2,867,479 | <p>From <a href="https://math.stackexchange.com/questions/2867457">ETS Major Field Test in Mathematics</a></p>
<blockquote>
<p>A student is given an exam consisting of
8 essay questions divided into 4 groups of
2 questions each. The student is required to
select a set of 6 questions to answer,
including at least 1 question from each of
the 4 groups. How many sets of questions
satisfy this requirement? </p>
</blockquote>
<p>I'm thinking $$\binom{2}{1}^4 \binom{4}{2}$$</p>
<p>because we have to pick 1 from each of the 4 groups of 2 and then from the remaining 4 questions we pick 2.</p>
| BCLC | 140,308 | <p>$$\binom{8}{6}=28$$</p>
<p>Then let's take away the choices where we don't cover all 4 groups. There are only 4. Why? To not cover all 4 groups but to choose 6 means to pick all questions but 2 questions of the same group. That is, from 4 groups, pick 1 group to exclude or equivalently pick 3 groups to include $$\binom{4}{3}=\binom{4}{1}=4$$</p>
<p>$$\therefore \binom{8}{6} - \binom{4}{3} = \binom{8}{6} - \binom{4}{1} = 28-4=24$$</p>
<p>Please suggest how my original approach could have been improved. I think the essence of the original approach is that we first pick 1 from each, and I guess that could be done in 16 ways. I'm just not sure how to reach 24 from there. We could still go to 96 but then I'm not sure how to reach 24 from 96. I guess something about how order doesn't matter twice so we would divide 96 by 2!2!.</p>
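<p>Both counts above are easy to confirm by brute force (a sketch):</p>

```python
from itertools import combinations

groups = [i // 2 for i in range(8)]        # questions 0..7 in 4 groups of 2
valid = [c for c in combinations(range(8), 6)
         if {groups[q] for q in c} == {0, 1, 2, 3}]
total = sum(1 for _ in combinations(range(8), 6))   # all 6-question sets
```
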
|
2,867,479 | <p>From <a href="https://math.stackexchange.com/questions/2867457">ETS Major Field Test in Mathematics</a></p>
<blockquote>
<p>A student is given an exam consisting of
8 essay questions divided into 4 groups of
2 questions each. The student is required to
select a set of 6 questions to answer,
including at least 1 question from each of
the 4 groups. How many sets of questions
satisfy this requirement? </p>
</blockquote>
<p>I'm thinking $$\binom{2}{1}^4 \binom{4}{2}$$</p>
<p>because we have to pick 1 from each of the 4 groups of 2 and then from the remaining 4 questions we pick 2.</p>
| Xiaonan | 336,263 | <p>Because we have four groups and need to pick 6 questions, we must take both questions from two of the groups and one question from each of the other two. So the final result is</p>
<p>${4\choose2} \times 2 \times 2 = 24$.</p>
|
2,844,060 | <p>How to find the points of discontinuity of the following function $$f(x) = \lim_{n\to \infty} \sum_{r=1}^n \frac{\lfloor2rx\rfloor}{n^2}$$ </p>
| Jakobian | 476,484 | <p>Using Stolz theorem:
$$ \lim_{n\to \infty} \sum_{r=1}^n \frac{\lfloor 2rx\rfloor}{n^2} = \lim_{n\to \infty} \frac{\lfloor2nx\rfloor}{n^2-(n-1)^2} = \lim_{n\to \infty} \frac{\lfloor2nx\rfloor}{2n-1} = x $$
Note that for the last limit you use squeeze theorem.</p>
<p>So the function is continuous</p>
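<p>The limit is easy to check numerically (a sketch):</p>

```python
import math

def f_approx(x, n=4000):
    # partial sum of floor(2 r x) / n^2 for r = 1..n
    return sum(math.floor(2 * r * x) for r in range(1, n + 1)) / n ** 2
```
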
|
1,183,185 | <p>i) Show that for a particle moving with velocity $v(t), $if $ v(t)·v′(t) = 0$ for all $t$ then the speed $v$ is constant.
</p>
<p>I did $(v(t))^2=|v(t)|^2=(v(t)\bullet(v(t)))$. </p>
<p>Therefore $\frac{d}{dt}(v(t))^2=2v(t)$</p>
<p>Also, $\frac{d}{dt}(v(t)\bullet(v(t))=2(v(t)·v′(t))$ </p>
<p>I'm stuck here.</p>
<p>ii) A particle of mass m with position vector r(t) at time t is acted on by a total force
F (t) = λr(t) × v(t),
where λ is a constant and v(t) is the velocity of the particle. Show that the speed v of the particle is con- stant. (Note that Newton’s second law of motion in its vector form is F = ma.)</p>
<p>Therefore, ma(t)=λ(r(t) × v(t)) after which I don't know what to do.</p>
| heropup | 118,193 | <p>$$\begin{align*} \Pr[A] &= \Pr\left[\left(t_0 < \min_i X_i\right) \cap \left(\max_i X_i \le t_1\right)\right] \\ &= \Pr[t_0 < X_{(1)} \le X_{(N)} \le t_1] \\ &= \Pr[t_0 < X_1, X_2, \ldots, X_N \le t_1] \\ &= \Pr\left[\bigcap_{i=1}^N t_0 < X_i \le t_1 \right] \\ &\overset{\rm ind}{=} \prod_{i=1}^N \Pr[t_0 < X_i \le t_1] \\ &\overset{\rm i.d.}{=} \Pr[t_0 < X_1 \le t_1]^N \\ &= \left( \Pr[X_1 \le t_1] - \Pr[X_1 \le t_0] \right)^N \\ &= \left(F_X(t_1) - F_X(t_0)\right)^N. \end{align*}$$</p>
|
1,002,719 | <p>If we have</p>
<p>$f: \{1, 2, 3\} \to \{1, 2, 3\}$</p>
<p>and</p>
<p>$f \circ f = id_{\{1,2,3\}}$</p>
<p>is the following then always true for every function?</p>
<p>$f = id_{\{1,2,3\}}$</p>
| egreg | 62,967 | <p>Hint: consider $f$ defined by $f(1)=2$, $f(2)=1$ and $f(3)=3$.</p>
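<p>The hint pins down a counterexample, and checking it is immediate (a sketch):</p>

```python
f = {1: 2, 2: 1, 3: 3}                 # swap 1 and 2, fix 3
composed = {x: f[f[x]] for x in f}     # f composed with itself
is_identity = composed == {1: 1, 2: 2, 3: 3}
f_is_identity = f == {1: 1, 2: 2, 3: 3}
```
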
|
3,623,277 | <p>Show that the cycles <span class="math-container">$(1, 2, \ldots, n)$</span>, <span class="math-container">$(n, \ldots, 2, 1)$</span> are inverse permutations. </p>
| Bill Dubuque | 242 | <p><strong>Hint</strong> <span class="math-container">$ $</span> It's always true mod <span class="math-container">$3,\,$</span> so <a href="https://math.stackexchange.com/a/1864763/242">by CRT</a> we need only combine all roots <span class="math-container">$\{0,\pm1\}$</span> mod <span class="math-container">$5$</span> and <span class="math-container">$7,\,$</span> and <span class="math-container">$\,x\equiv a\pmod{\!5},\,x\equiv b\pmod{\!7}\!\iff\! x\equiv b+14(b-a)\pmod{\!35}.\,$</span> For <span class="math-container">$\,a,b\in \{0,\pm1\}$</span> this yields <span class="math-container">$\,x\equiv \pm \{0,1,6,14,15\}\pmod{\!35}$</span></p>
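A small script verifying the CRT recombination and the resulting residue set (illustrative Python; it only re-derives what the hint states):

```python
def crt_5_7(a, b):
    """The unique x mod 35 with x = a (mod 5) and x = b (mod 7),
    via the recombination x = b + 14(b - a) from the hint."""
    return (b + 14 * (b - a)) % 35

roots = sorted({crt_5_7(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)})
```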
|
2,359,372 | <blockquote>
<p>Given that
$$\log_a(3x-4a)+\log_a(3x)=\frac2{\log_2a}+\log_a(1-2a)$$
where $0<a<\frac12$, find $x$.</p>
</blockquote>
<p>My question is how do we find the value of $x$ but we don't know the exact value of $a$? </p>
| Hypergeometricx | 168,053 | <p>Put $u=3x-2a$ to exploit the symmetry and note that $\frac 1{\log_2 a}=\log_a 2$:
$$\begin{align}
\log_a(3x-4a)+\log_a(3x)&=\frac2{\log_2a}+\log_a(1-2a)\\
\log_a (\underbrace{3x-4a}_{u-2a})(\underbrace{3x}_{u+2a})&=\log_a 2^2(1-2a)\\
(u-2a)(u+2a)&=4(1-2a)\\
u^2-4a^2&=4(1-2a)\\
u^2&=4(a-1)^2\\
u=3x-2a&=\pm 2(a-1)\\
x&=\frac {4a-2}3 \text{ or }\frac 23 \\
\end{align}$$</p>
<p>Note that
$$0<a<\frac 12\\
0<4a<2\\
-2<4a-2<0\\
-\frac 23<\frac {4a-2}3<0$$</p>
<p>Hence $-\frac 23<\frac{4a-2}3<0$, so the root $x=\frac{4a-2}3$ would make $3x$ negative and $\log_a(3x)$ undefined; it must be rejected, leaving $$\color{red}{x=\frac 23}$$</p>
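A numerical cross-check (illustrative Python; the sample values of $a$ are my own): $x=\frac23$ satisfies the equation for any $0<a<\frac12$, while the other candidate root is negative, so $\log_a(3x)$ is undefined there:

```python
import math

def log_base(a, v):
    return math.log(v) / math.log(a)

def lhs_minus_rhs(a, x):
    """log_a(3x - 4a) + log_a(3x) - (2/log_2(a) + log_a(1 - 2a))."""
    return (log_base(a, 3 * x - 4 * a) + log_base(a, 3 * x)
            - (2 / math.log2(a) + log_base(a, 1 - 2 * a)))
```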
|
1,855,650 | <p>Need to solve:</p>
<p>$$2^x+2^{-x} = 2$$</p>
<p>I can't use substitution in this case. Which is the best approach?</p>
<p>Even in this form I do not have any clue:</p>
<p>$$2^x+\frac{1}{2^x} = 2$$</p>
| Zain Patel | 161,779 | <p>Elucidate the problem by using the substitution $u = 2^x$, then you have $$u + \frac{1}{u} = 2$$</p>
<p>Multiply throughout by $u \neq 0$ to get $$u^2 +1 = 2u \iff u^2 - 2u + 1 = 0$$</p>
<p>This is an easy quadratic to solve, you should get $u = 1$ and hence you need only solve $2^x = 1 \iff x = 0$. </p>
|
1,855,650 | <p>Need to solve:</p>
<p>$$2^x+2^{-x} = 2$$</p>
<p>I can't use substitution in this case. Which is the best approach?</p>
<p>Even in this form I do not have any clue:</p>
<p>$$2^x+\frac{1}{2^x} = 2$$</p>
| H. Potter | 289,192 | <p>Substitute $y=2^x$. Hence, the equation is: $y+y^{-1}=2$ which is equivalent to: $y^2-2y+1=0$. Can you take it from here ? Once you find which $y$('s) satisfy the equation, try to find $x$ such that $2^x=y$.</p>
|
1,415,752 | <p>I test my answer using wolfram alpha pro but it gets a different result to what I am getting. This is homework.</p>
<p>My result is z= 2(y-1)</p>
<p>partial derivative with respect to y is </p>
<pre><code> x.y^x-1
</code></pre>
<p>partial derivative with respect to x is
ln(y).y^x</p>
<p>ln(1) is zero, so the x term is zero, leaving 2^(2-1) = 2</p>
<p>so z = 2(y-1) </p>
| 5xum | 112,884 | <p>It is undefined.</p>
<p>The function $x\mapsto \sqrt{1-x}$ is only defined for $1-x\ge 0$, so only for $x\le 1$.</p>
<p>In your case, the limit only takes undefined values, since it is a limit when $x$ is <strong>decreasing</strong> to $1$.</p>
<p>The limit $$\lim_{x\to 1^-}\sqrt{1-x}$$
on the other hand <em>does</em> exist and is equal to $0$.</p>
|
161,029 | <p>I have not seen a problem like this so I have no idea what to do.</p>
<p>Find an equation of the tangent to the curve at the given point by two methods, with and without eliminating the parameter.</p>
<p>$$x = 1 + \ln t,\;\; y = t^2 + 2;\;\; (1, 3)$$</p>
<p>I know that $$\dfrac{dy}{dx} = \dfrac{\; 2t\; }{\dfrac{1}{t}}$$</p>
<p>But this gives a very wrong answer. I am not sure what a parameter is or how to eliminate it.</p>
| Mohamed | 33,307 | <p>2nd method: eliminating parameter:</p>
<p>$x=1+ \ln t , y=t^2+2 \Leftrightarrow t=\exp(x-1), y=2+\exp(2x-2)$</p>
<p>Consider the function: $f(x)=2+ \exp(2x-2)$, then: $f'(x)=2 \exp(2x-2)$</p>
<p>The tangent at $x=1$ has equation: $Y=f'(1)(X-1)+ f(1)$, thus: $Y=2(X-1)+3$, thus:$$Y=2X+1$$</p>
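A numerical check that both routes give the same tangent line (illustrative Python; the finite-difference step is an arbitrary small value):

```python
import math

def f(x):                 # y after eliminating the parameter: y = 2 + e^(2x-2)
    return 2 + math.exp(2 * x - 2)

def slope_param(t):       # without eliminating: dy/dx = (dy/dt)/(dx/dt) = 2t/(1/t) = 2t^2
    return 2 * t ** 2

h = 1e-6
numeric_slope = (f(1 + h) - f(1 - h)) / (2 * h)   # slope of y = f(x) at x = 1
```

Both slopes equal $2$ at the point $(1,3)$, consistent with $Y=2X+1$.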
|
206,712 | <p>Warning: very new to Mathematica
Currently I am trying to solve for <code>x</code>, but the variable <code>K</code> changes. So I'd like a value for <code>x</code>, for every <code>K</code>. However, this equation below gives me 2 complex numbers and 2 real, of which 1 real value is positive. I'm not sure how possible it would be to get an output of either all possible solutions (both complex and real) or just the one positive value that I want. </p>
<pre><code>((x^4) / (3906250*K)) - (1 - x) == 0
</code></pre>
<p>Different values for K:</p>
<pre><code>{5.16 E - 16, 5.29 E - 16, 5.43 E - 16, 5.57 E - 16, 5.72 E - 16,
5.87 E - 16, 6.02 E - 16}
</code></pre>
| kglr | 125 | <p>try <a href="https://reference.wolfram.com/language/ref/Part.html" rel="nofollow noreferrer"><code>Part</code></a>:</p>
<pre><code>data[[All, All, {2, 3}]]
</code></pre>
<blockquote>
<p>{{{x1, y1}, {x1, y1}, {x1, y1}, {x1, y1}, {x1, y1}}, {{x2, y2}, {x2,
y2}, {x2, y2}}, {{x3, y3}, {x3, y3}, {x3, y3}, {x3, y3}}}</p>
</blockquote>
|
206,712 | <p>Warning: very new to Mathematica
Currently I am trying to solve for <code>x</code>, but the variable <code>K</code> changes. So I'd like a value for <code>x</code>, for every <code>K</code>. However, this equation below gives me 2 complex numbers and 2 real, of which 1 real value is positive. I'm not sure how possible it would be to get an output of either all possible solutions (both complex and real) or just the one positive value that I want. </p>
<pre><code>((x^4) / (3906250*K)) - (1 - x) == 0
</code></pre>
<p>Different values for K:</p>
<pre><code>{5.16 E - 16, 5.29 E - 16, 5.43 E - 16, 5.57 E - 16, 5.72 E - 16,
5.87 E - 16, 6.02 E - 16}
</code></pre>
| Coolwater | 9,754 | <p>You can write <code>positions = data[[All, All, {2, 3}]]</code>, or by adjusting your code:</p>
<pre><code>positions = Table[{data[[j]][[i]][[2]], data[[j]][[i]][[3]]},
{j, 1, Length[data]}, {i, 1, Length[data[[j]]]}]
</code></pre>
|
192,883 | <p>Can anyone please give an example of why the following definition of $\displaystyle{\lim_{x \to a} f(x) =L}$ is NOT correct?:</p>
<p>$\forall$ $\delta >0$ $\exists$ $\epsilon>0$ such that if $0<|x-a|<\delta$ then $|f(x)-L|<\epsilon$</p>
<p>I've been trying to solve this for a while, and I think it would give me a greater understanding of why the limit definition is what it is, because this alternative definition seems quite logical and similar to the real one, yet it supposedly shouldn't work.</p>
| Ilmari Karonen | 9,602 | <p>There are two problems with your "backwards" definition, which I'll illustrate with examples:</p>
<ol>
<li><p>Let $f(x) = \sin x$ and let $a$, $L$ and $\delta$ be arbitrary real numbers. Then $\epsilon = |L| + 2$ satisfies your definition.</p></li>
<li><p>Let $f(x) = 1/x$ (for $x \ne 0$, and let $f(0) = 0$, just to make $f(x)$ defined everywhere), and let $a = 1$. Then, for any $L$ (including $L = 1$), your definition fails for all $\delta \ge 1$, since for any $\epsilon$ we can choose $x=1/(L+\epsilon)$ if $L+\epsilon > 1$ and $x=\tfrac12$ otherwise (so that $f(x)=2\ge L+\epsilon$); in either case $0<|x-a|<\delta$ and $f(x)-L \ge \epsilon$.</p></li>
</ol>
|
192,883 | <p>Can anyone please give an example of why the following definition of $\displaystyle{\lim_{x \to a} f(x) =L}$ is NOT correct?:</p>
<p>$\forall$ $\delta >0$ $\exists$ $\epsilon>0$ such that if $0<|x-a|<\delta$ then $|f(x)-L|<\epsilon$</p>
<p>I've been trying to solve this for a while, and I think it would give me a greater understanding of why the limit definition is what it is, because this alternative definition seems quite logical and similar to the real one, yet it supposedly shouldn't work.</p>
| TonyK | 1,508 | <p>Your condition can be translated into words as:</p>
<p>$f$ is bounded in every open ball around $a$</p>
<p>This is quite different from being continuous!</p>
<p>PS Not every function satisfies this condition, but the ones that don't are rather fierce. (In fact, in a strict sense, <em>almost no</em> function satisfies the condition, but the functions arising in analysis normally do.)</p>
|
440,242 | <p>I'm pretty sure almost all mathematicians have been in a situation where they found an interesting problem; they thought of many different ideas to tackle the problem, but in all of these ideas, there was something missing- either the "middle" part of the argument or the "end" part of the argument. They were stuck and couldn't figure out what to do.</p>
<blockquote>
<ol>
<li>In such a situation what do you do?</li>
<li>Is the reason for the "missing part" the incompleteness in the theory of the topic that the problem is related to? What can be done to find the "missing part"?</li>
</ol>
</blockquote>
<p>For tenure-track/tenure professors, maybe this is not a big deal because they have "enough" time and can let the problem "stew" in the "back-burner" of their mind, but what about limited-time positions, e.g. PhD students, postdocs, etc., where the student/employee has to prove their capability to do "independent" research so that they can be hired for their next position? I think for these people it is quite a bit of a problem because they can't really afford to spend a "lot" of time thinking about the same problem.</p>
| Phil Harmsworth | 106,467 | <p>I think there's some good advice on how to conduct research in J.E. Littlewood's "The Mathematician's Art of Work", included in <em>Littlewood's miscellany</em>, CUP, 1986.</p>
<blockquote>
<p>"A <em>sine qua non</em> is an intense conscious curiosity about the subject, with a craving to exercise the mind on it, quite like physical hunger ... Given the strong drive, it communicates itself in some form to the subconscious, which does all the real work, and would seem to be always on duty. Lacking the drive, one sticks."</p>
</blockquote>
<blockquote>
<p>"<em>Minor depressions</em> will occur, and most of a mathematician's life is spent in frustration, punctuated with rare inspirations. A beginner can't expect quick results; if they are quick they are pretty sure to be poor."</p>
</blockquote>
<p>Littlewood includes a section of research strategy, too long to quote here, which I believe is very worthwhile reading.</p>
|
3,424,259 | <p>The system in question is <span class="math-container">$$\begin{cases} x_1 -x_2 + x_3 = -1 \\ -3x_1 +5x_2 + 3x_3 = 7 \\ 2x_1 -x_2 + 5x_3 = 4 \end{cases}$$</span></p>
<p>After writing this in matrix-form and performing row-operations we can show that</p>
<p><span class="math-container">$$
\begin{matrix}
-1 & 4 & 8 &| 11 \\
0 & 1 & 3 &|6\\
0 & 1 & 3&|10 \\
\end{matrix}
$$</span></p>
<p>Substitute back our variables and we get</p>
<p><span class="math-container">$$\begin{cases} x_2 + 3x_3 = 6 \\ x_2 +3x_3 = 10 \end{cases}$$</span></p>
<p>Which is a contradictory statement that shows our system has no solutions. Is this a suffciently rigorous way of answering the question : 'Does this system have a solution?'.</p>
| José Carlos Santos | 446,262 | <p>Yes, that is sufficient, since at that point you can assert that, no matter what the values of <span class="math-container">$x_2$</span>, and <span class="math-container">$x_3$</span> are, the number <span class="math-container">$x_2+3x_3$</span> cannot be equal to both <span class="math-container">$6$</span> and <span class="math-container">$10$</span>.</p>
|
3,424,259 | <p>The system in question is <span class="math-container">$$\begin{cases} x_1 -x_2 + x_3 = -1 \\ -3x_1 +5x_2 + 3x_3 = 7 \\ 2x_1 -x_2 + 5x_3 = 4 \end{cases}$$</span></p>
<p>After writing this in matrix-form and performing row-operations we can show that</p>
<p><span class="math-container">$$
\begin{matrix}
-1 & 4 & 8 &| 11 \\
0 & 1 & 3 &|6\\
0 & 1 & 3&|10 \\
\end{matrix}
$$</span></p>
<p>Substitute back our variables and we get</p>
<p><span class="math-container">$$\begin{cases} x_2 + 3x_3 = 6 \\ x_2 +3x_3 = 10 \end{cases}$$</span></p>
<p>Which is a contradictory statement that shows our system has no solutions. Is this a suffciently rigorous way of answering the question : 'Does this system have a solution?'.</p>
| AlvinL | 229,673 | <p>Using the <a href="https://www.encyclopediaofmath.org/index.php/Kronecker-Capelli_theorem" rel="nofollow noreferrer">Kronecker-Capelli</a> criterion, the system of equations is solvable if and only if the rank of the system matrix is equal to the rank of the augmented matrix. The rank of the system matrix is clearly <span class="math-container">$2$</span> and of the augmented matrix is <span class="math-container">$3$</span>. Therefore, there can be no solution.</p>
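The two ranks can be verified exactly with a short script (illustrative Python; row reduction over exact rationals, no external libraries assumed):

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a small matrix of exact rationals and count the pivots."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A  = [[1, -1, 1], [-3, 5, 3], [2, -1, 5]]            # coefficient matrix
Ab = [[1, -1, 1, -1], [-3, 5, 3, 7], [2, -1, 5, 4]]  # augmented matrix
```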
|
3,306,341 | <p>Let <span class="math-container">$f\in Hom(R,R')$</span> be a surjective map and let <span class="math-container">$I$</span> be an ideal of <span class="math-container">$R$</span></p>
<p>Assume that <span class="math-container">$Ker(f)\subseteq I$</span> , prove that <span class="math-container">$f^{-1}(f(I))=I$</span></p>
<p>My work , </p>
<p><span class="math-container">$Ker(f)\subseteq I \Rightarrow f^{-1}(\left \{ 0_{R'} \right \})\subseteq I \Rightarrow \left \{ 0_{R'} \right \} \subseteq f(I)\Rightarrow f^{-1}(\left \{ 0_{R'} \right \})\subseteq f^{-1}(f(I))\Rightarrow Ker(f)\subseteq f^{-1}(f(I))$</span></p>
<p>Here I'm stuck.</p>
<p>(<span class="math-container">$R$</span> and <span class="math-container">$R'$</span> are commutative and unitaty rings and <span class="math-container">$f$</span> is unital)</p>
| k.stm | 42,242 | <p>More generally, show that <span class="math-container">$f^{-1}(f(I)) = \ker f + I$</span>. For <span class="math-container">$x ∈ R$</span>,
<span class="math-container">\begin{align*}
x ∈ f^{-1}(f(I)) &\iff f(x) ∈ f(I) \\
&\iff ∃y ∈ I\colon~f(x) = f(y) \\
&\iff ∃y ∈ I\colon~x - y ∈ \ker f\\
&\iff …?
\end{align*}</span></p>
|
668,959 | <p>I don't know if this is an already existing conjecture, or has been proven: There is at least one prime number between <span class="math-container">$N$</span> and <span class="math-container">$N-\sqrt{N}$</span>.</p>
<p>Some examples:
<span class="math-container">$N=100$</span></p>
<p><span class="math-container">$\sqrt{N}=10$</span>
Between 90 and 100, there is a prime: 97</p>
<p><span class="math-container">$N=36$</span></p>
<p><span class="math-container">$\sqrt{N}=6$</span>
Between 30 and 36, there is a prime: 31</p>
<p><span class="math-container">$N=64$</span></p>
<p><span class="math-container">$\sqrt{N}=8$</span>
Between 56 and 64, there are the primes: 59 and 61</p>
<p>N=12</p>
<p><span class="math-container">$\sqrt{N}=3.46..$</span>
Between 8.54 and 12, there is a prime: 11</p>
<p>If this hasn't been brought up before, I'm calling this the Dwyer Conjecture.</p>
| Will Jagy | 10,400 | <p>The usual form of such things is to say that something is true for sufficiently large numbers. That is likely to be the case, but actual proofs, including the "sufficiently large" condition, have still not reached what you need. The best results are something like $x + x^{0.525};$ for large enough positive real number $x,$ there is guaranteed to be a prime between $x$ and $x + x^{0.525}.$</p>
<p><a href="http://en.wikipedia.org/wiki/Prime_gap#Upper_bounds">http://en.wikipedia.org/wiki/Prime_gap#Upper_bounds</a> </p>
<p>So, while everyone believes there is a prime between $x$ and $x + \sqrt x$ for sufficiently large $x,$ we may never be sure. Some very believable conjectures suggest that "sufficiently large" is $x \geq 5504. $ Does not make it true, just reasonable. I can tell you that the $x + \sqrt x$ thing is true for $5504 \leq x \leq 4 \cdot 10^{18}.$</p>
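A small computational check of both halves of this (illustrative Python; the upper limit 20000 is simply where I stopped, far below the $4\cdot 10^{18}$ mentioned above):

```python
import math

def prime_table(n):
    """Sieve of Eratosthenes: is_p[k] == 1 iff k is prime."""
    is_p = bytearray([1]) * (n + 1)
    is_p[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = bytearray(len(is_p[i * i :: i]))
    return is_p

is_p = prime_table(20_500)

def has_prime_in_gap(x):
    """Is there a prime p with x < p <= x + sqrt(x)?"""
    hi = int(x + math.sqrt(x))
    return any(is_p[p] for p in range(x + 1, hi + 1))
```

The "sufficiently large" caveat is genuinely needed: $x=114$ fails, since $113$ and $127$ are consecutive primes and $127 > 114+\sqrt{114}$.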
|
1,680,269 | <p>Here $\mathbb{Z}_{n}^{*}$ means $\mathbb{Z}_{n}\setminus\{[0]_{n}\}$</p>
<p>My attempt:</p>
<p>$(\leftarrow )$</p>
<p>$p$ is a prime, then, for every $[x]_{n},[y]_{n},[z]_{n}$ $\in (\mathbb{Z}_{n}^{*},.)$ are verified the following:</p>
<p>1) $[x]_{n}.([y]_{n}.[z]_{n}) = ([x]_{n}.[y]_{n}).[z]_{n}$, since from the operation . we have $[a]_{n}.[b]_{n}=[a.b]_{n}$ and . is associative in $\mathbb{Z}$.</p>
<p>2) There is an element $e$ such that $[x]_{n}.e = e.[x]_{n} = [x]_{n}$, since $ [x]_{n}.[1]_{n} = [x.1]_{n} = [x]_{n} = [x.1]_{n} = [x]_{n}[1]_{n}$</p>
<p>But I don't know how to check the inverse property, neither how to do the $(\rightarrow)$ part. </p>
<p>Thanks!</p>
| GiantTortoise1729 | 219,849 | <p>Given a non-empty set $A$ of a metric space $X$, note the function $$\textrm{dist}_A(x) = \inf_{a\in A} |a-x|$$ is continuous and $\textrm{dist}_A(x) = 0$ iff $x \in \textrm{cl}(A)$. Also recall continuous functions on compact sets attain their infimums. Hopefully you can put the pieces together.</p>
|
2,166,917 | <p>$20$ questions in a test. The probability of getting correct first $10$ questions is $1$. The probability of getting correct next $5$ questions is $\frac 13$. The probability of getting correct last $5$ questions is $\frac 15$. What is the probability of getting exactly $11$ questions correctly?</p>
<p>This is the question. I don't know how to calculate this question.</p>
<p>I tried $$1* {}^5C_1*{\frac 13}\left(\frac 23\right)^4*\left(\frac 45\right)^5+1*\left(\frac 23\right)^5*{}^5C_1*\left(\frac 15\right)\left(\frac 45\right)^4$$</p>
<p>But I am not sure the answer.</p>
<p>What if asking futher about expectation and variance? that will be a mess</p>
| user247327 | 247,327 | <p>The fact that "The probability of getting correct first 10 questions is 1" means that you need to get exactly one of the last 10 questions correct.</p>
<p>You can do that in either of two ways:
1) Get exactly one of the next 5 questions correct and get all of last 5 incorrect.
or
2) Get all of the next 5 questions incorrect and get exactly one of the last 5 correct.</p>
<p>1: The probability of getting any of the next 5 questions correct is 1/3, the probability of getting any incorrect is 2/3. The probability of getting exactly one correct and the other 4 incorrect, using the binomial probability formula, is $5(1/3)(2/3)^4= \frac{80}{243}$. The probability of getting any one of the last 5 questions correct is 1/5, the probability of getting a question incorrect is 4/5. The probability of getting all 5 wrong is $(4/5)^5= \frac{1024}{3125}$.</p>
<p>The probability of (1), of getting exactly one of the next 5 questions correct <strong>and</strong> getting all 5 of the last 5 questions incorrect is $\frac{80}{243}\frac{1024}{3125}$</p>
<p>2) The probability of getting all 5 of the next 5 questions incorrect is $(2/3)^5= \frac{32}{243}$ and the probability of getting exactly one of the last 5 correct is $5(1/5)(4/5)^4= (4/5)^4 = \frac{256}{625}$.</p>
<p>The probability of (2), of getting all of the next 5 questions wrong and exactly one of the last 5 correct, is $\frac{32}{243}\cdot\frac{256}{625}$.</p>
<p>The probability of one of those two happening is the sum of those two probabilities.</p>
|
2,231,949 | <p>To find the minimal polynomial of $i\sqrt{-1+2\sqrt{3}}$, I need to prove that
$x^4-2x^2-11$ is irreducible over $\Bbb Q$. And I am stuck. Could someone please help? Thanks so much!</p>
| ancient mathematician | 414,424 | <p>As there are no rational roots the only possible factorisation is into two quadratics whose coefficients we may assume to be integral by Gauss. As 11 is prime and as there is no $X^3$ term we must have
$$
(X^2 +\alpha X +\epsilon)(X^2 -\alpha X -11\epsilon)
$$
where $\epsilon=\pm1$.
Now look at the $X$ term and get $\alpha=0$, and a contradiction.</p>
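The argument can be double-checked by brute force (illustrative Python; the search bound on the linear coefficients is my own, justified because the roots of $X^4-2X^2-11$ have modulus below $3$, so each quadratic factor's middle coefficient has absolute value well under $6$):

```python
def poly_mul(p, q):
    """Multiply polynomials given as low-to-high integer coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

target = [-11, 0, -2, 0, 1]   # X^4 - 2X^2 - 11

# a monic integer factorization (X^2 + aX + b)(X^2 + cX + d) needs b*d = -11
found = [(a, b, c, d)
         for b, d in ((1, -11), (-1, 11), (11, -1), (-11, 1))
         for a in range(-6, 7)
         for c in range(-6, 7)
         if poly_mul([b, a, 1], [d, c, 1]) == target]
```

The search finds no factorization, matching the contradiction above.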
|
3,328,387 | <p>Suppose I have two positive semi-definite <span class="math-container">$n$</span>-by-<span class="math-container">$n$</span> matrices <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and an <span class="math-container">$n$</span>-by-<span class="math-container">$n$</span> identity matrix <span class="math-container">$I$</span>, and I'm looking for a way to compute, approximate or bound the following quantity:</p>
<p><span class="math-container">$$(A\otimes I + I \otimes A)^{-1} \text{vec}B$$</span></p>
<p>Concretely I'm dealing with matrices with <span class="math-container">$n$</span> ranging from <span class="math-container">$100$</span> to <span class="math-container">$4000$</span>, so <span class="math-container">$A$</span> is easy to invert, while <span class="math-container">$A \otimes I$</span> is too large, so need a way to compute this using operations on <span class="math-container">$n$</span>-by-<span class="math-container">$n$</span> matrices</p>
<p>Additionally, I found the following to give a decent approximation when <span class="math-container">$A$</span>, <span class="math-container">$B$</span> can be Kronecker-factored, wondering if there's a reason for that.</p>
<p><span class="math-container">$$0.5 A^{-0.5} B A^{-0.5}$$</span></p>
<p>Any tips or literature pointers are appreciated!</p>
| Rodrigo de Azevedo | 339,790 | <p>Given symmetric matrices <span class="math-container">$\mathrm A \succ \mathrm O_n$</span> and <span class="math-container">$\mathrm B \succeq \mathrm O_n$</span>, let</p>
<p><span class="math-container">$$\mathrm X (t) := \int_0^t e^{-\tau \mathrm A} \mathrm B e^{-\tau \mathrm A} \,\mathrm d \tau$$</span></p>
<p>Using integration by parts,</p>
<p><span class="math-container">$$\mathrm A \mathrm X (t) = \cdots = -\dot{\mathrm X} (t) + \mathrm B - \mathrm X (t) \mathrm A$$</span></p>
<p>and, thus, we obtain the following matrix differential equation</p>
<p><span class="math-container">$$\boxed{\dot{\mathrm X} = - \mathrm A \mathrm X - \mathrm X \mathrm A + \mathrm B}$$</span></p>
<p>with initial condition <span class="math-container">$\mathrm X (0) = \mathrm O_n$</span>. Assuming <span class="math-container">$\rm X$</span> converges to a steady-state, <span class="math-container">$\dot{\mathrm X} = \mathrm O_n$</span>, we obtain the following linear matrix equation</p>
<p><span class="math-container">$$\mathrm A \mathrm X + \mathrm X \mathrm A = \mathrm B$$</span></p>
<p>whose solution we denote by <span class="math-container">$\bar{\rm X}$</span>. This is the <em>desired</em> solution.</p>
<p>Using a <em>numerical</em> ODE solver, we then integrate the matrix differential equation above and find an <em>approximate</em> value of matrix <span class="math-container">$\bar{\rm X}$</span>. Since matrix <span class="math-container">$\rm X (t)$</span> is symmetric for all <span class="math-container">$t \geq 0$</span>, we can <a href="https://en.wikipedia.org/wiki/Vectorization_(mathematics)#Half-vectorization" rel="nofollow noreferrer">half-vectorize</a> both sides of the matrix differential equation in order to obtain a <span class="math-container">$\binom{n+1}{2}$</span>-dimensional state vector (rather than a <span class="math-container">$n^2$</span>-dimensional state vector, which would be wasteful).</p>
<hr />
<h3>Appendix</h3>
<p>Note that the solution of the matrix differential equation is given by</p>
<p><span class="math-container">$$\mathrm X (t) = \bar{\mathrm X} - e^{-t \mathrm A} \bar{\mathrm X} e^{-t \mathrm A}$$</span></p>
<p>Since <span class="math-container">$\mathrm A \succ \mathrm O_n$</span>,</p>
<p><span class="math-container">$$\lim_{t \to \infty} \mathrm X (t) = \bar{\mathrm X}$$</span></p>
<p>If matrix <span class="math-container">$\rm A$</span> were allowed to have an eigenvalue at zero, it would be unpleasant.</p>
|
720,969 | <p>While studying real analysis, I got confused on the following issue.</p>
<p>Suppose we construct real numbers as equivalence classes of cauchy sequences. Let $x = (a_n)$ and $y= (b_n)$ be two cauchy sequences, representing real numbers $x$ and $y$.</p>
<p>Addition operation $x+y$ is defined as $x+y = (a_n + b_n)$.</p>
<p>To check if this operation is well defined, we substitute $x = (a_n)$ with some real number $x' = (c_n)$ and verify that $x+y = x'+y$. We also repeat it for $y$. i.e. we verify that $x+y = x+y'$.</p>
<p><strong>Question:</strong></p>
<blockquote>
<p>Instead of checking that $x+y = x+y'$ and $x'+y = x+y$ seperately, would it suffice to check that $x+y = x' + y'$ in a single operation in order to show that addition is well defined for real numbers. Would it hurt to checking well definedness? Can any one explain me the logic behind ?</p>
</blockquote>
| Jose Antonio | 84,164 | <p>You can certainly check that the operation is well-defined by checking $x+y=x'+y'$ where $x$ and $x'$ are equivalent and $y$ and $y'$ are equivalent. </p>
<p>Let $(x_n)$, $(x_n')$, $(y_n)$ and $(y'_n)$ where $x_n-x_n'\to 0$ and $y_n-y_n' \to 0$, i.e., are equivalents. </p>
<p>So in order to prove the claim we need to show that $(x_n+y_n) -(x_n'+y_n')\to 0$, i.e., $x_n+y_n$ is equivalent to $x_n'+y_n'$. Given $\varepsilon>0$, choose $N$ such that $d(x_n,x'_n)<\varepsilon/2$ and $d(y_n,y_n')<\varepsilon/2$ at the same time for all $n\ge N$. Thus </p>
<p>\begin{align}|x_n+y_n-(x_n'+y_n')|=|x_n-x_n'+y_n-y_n'|\\\le |x_n-x_n'|+|y_n-y_n'|\\< \varepsilon \end{align}</p>
<p>Therefore, $(x_n+y_n)$ and $(x_n'+y_n')$ are equivalent and the addition is independent of choice of representatives.</p>
|
1,550,841 | <p>$\int e^{2\theta}\ \sin 3\theta\ d\theta$</p>
<p>After Integrating by parts a second time, It seems that the problem will repeat for ever. Am I doing something wrong. I would love for someone to show me using the method I am using in a clean and clear fashion. Thanks. <a href="https://i.stack.imgur.com/FipiE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FipiE.jpg" alt="enter image description here"></a></p>
| Mirko | 188,367 | <p>You got $\displaystyle\int e^{2\theta}\sin(3\theta)d\theta=...=\sin(3\theta)\frac12e^{2\theta}-\frac32\Bigl(\frac12e^{2\theta}\cos(3\theta)+\frac32\int e^{2\theta}\sin(3\theta)d\theta\Bigr)$. </p>
<p>Now you could denote the integral you are looking for by $x$: </p>
<p>$\displaystyle x=\int e^{2\theta}\sin(3\theta)d\theta$, then copy your equation as </p>
<p>$\displaystyle x=\sin(3\theta)\frac12e^{2\theta}-\frac32\Bigl(\frac12e^{2\theta}\cos(3\theta)+\frac32x\Bigr)$. </p>
<p>Then solve for $x$. </p>
<p>$\displaystyle x=\sin(3\theta)\frac12e^{2\theta}-\frac34e^{2\theta}\cos(3\theta)-\frac94x$. </p>
<p>$\displaystyle \frac{13}4x=\sin(3\theta)\frac12e^{2\theta}-\frac34e^{2\theta}\cos(3\theta)$. </p>
<p>$\displaystyle \int e^{2\theta}\sin(3\theta)d\theta=x=\frac4{13}\Bigl(\sin(3\theta)\frac12e^{2\theta}-\frac34e^{2\theta}\cos(3\theta)\Bigr)=
\frac2{13} \sin(3\theta) e^{2\theta}-\frac3{13}e^{2\theta}\cos(3\theta)+C$. </p>
<p>This is what you claimed was the answer in another comment, and which is confirmed too by <a href="http://www.wolframalpha.com/input/?i=integral+e%5E%7B2%5Ctheta%7D%5Csin%283%5Ctheta%29d%5Ctheta" rel="nofollow">wolframalpha</a> </p>
|
34,557 | <p>I just came across a spam answer which is extremely vulgar (sexual). I flagged it for moderator attention, and then it occurred to me that I could edit it and erase its contents by blanking it till a moderator gets to look at it. Is this an acceptable thing to do?</p>
| peterh | 111,704 | <p>Rude/abusive/spam deletions cause a -100 rep loss to the OP, and the post content is defaced. People having access to the deleted content (OP + mods + 10k+ users) can still see it, but even they need an extra click to see its edit history.</p>
<p><a href="https://i.stack.imgur.com/EMWKL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EMWKL.png" alt="enter image description here" /></a></p>
<p>These deletions can be caused by at least 6 spam/rude/abusive flags, and possibly also by mod. The deletion will happen as if it had been done by the community bot. Mods get notified and they decide, what to do to the perpetrator. The most likely consequence is either suspension, or account destruction (if it is a low-rep spammer).</p>
<p>Defacing the post would remove evidence. Vote down, flag and wait. If you get back your lost rep, you know that it is done. Normally it happens quickly.</p>
|
2,239,240 | <p>I'm looking to do some independent reading and I haven't been able to find rough prerequisites for Differential Topology at the level of Milnor or Guillemin and Pollack.</p>
<p>Is a semester of analysis (Pugh) and a semester of topology (Munkres) enough to make sense of most of it or should I take a second semester of analysis first?</p>
| Hagen von Eitzen | 39,174 | <p>By the rules of precedence, we compute
$$ (2+3)^2 = 5^2 = 25.$$
By the same rules, we compute
$$2^2+2\cdot 2\cdot 3+3^2=4+12+9=25. $$
The very fact that both computations produce the same result justifies us to write down the interesting fact
$$ (2+3)^2=2^2+2\cdot 2\cdot 3+3^2.$$</p>
|
107,399 | <p>Let's say we have a set of associations:</p>
<pre><code>dataset = {
<|"type" -> "a", "subtype" -> "I", "value" -> 1|>,
<|"type" -> "a", "subtype" -> "II", "value" -> 2|>,
<|"type" -> "b", "subtype" -> "I", "value" -> 1|>,
<|"type" -> "b", "subtype" -> "II", "value" -> 2|>
}
</code></pre>
<p>where every entry is unique in terms of <code>{#type, #subtype}</code>, </p>
<h3>I'd like to build a nested association for more handy querying, e.g. I would like to have:</h3>
<pre><code>nested["a", "II", "value"]
</code></pre>
<blockquote>
<pre><code>2
</code></pre>
</blockquote>
<h3>I can start with</h3>
<pre><code>GroupBy[dataset, {#type &, #subtype &}]
</code></pre>
<blockquote>
<pre><code><|
"a" -> <|
"I" -> {<|"type" -> "a", "subtype" -> "I", "value" -> 1|>},
"II" -> {<|"type" -> "a", "subtype" -> "II", "value" -> 2|>}|>,
"b" -> <|
"I" -> {<|"type" -> "b", "subtype" -> "I", "value" -> 3|>},
"II" -> {<|"type" -> "b", "subtype" -> "II", "value" -> 4|>}
|>|>
</code></pre>
</blockquote>
<p>But <code>nested["a", "I"]</code> points to a <strong>list with one association</strong>, what is expected but I would like to drop that list. </p>
<p>It seems that the third argument of <code>GroupBy</code> isn't generalized to handle nested grouping... </p>
<p>So basically I would like to have <code>... "I" -> <|"type" -> "a", ...</code>.</p>
<h3>What is a generic way to go?</h3>
<p>I can do:</p>
<ul>
<li><p>nested <code>GroupBy</code>: </p>
<pre><code>GroupBy[dataset, #type &, GroupBy[#, #subtype &, First] &]
</code></pre></li>
<li><p><code>Map</code> later:</p>
<pre><code>GroupBy[dataset, {#type &, #subtype &}] // Map[First, #, {-3}] &
</code></pre></li>
</ul>
<p>But the first is not handy in general while the second is ugly (and not general either). </p>
<hr>
<p>Acceptable outputs are:</p>
<pre><code><|
"a" -> <|
"I" -> <|"type" -> "a", "subtype" -> "I", "value" -> 1|>,
...
|>
</code></pre>
<p>or</p>
<pre><code><|
"a" -> <|
"I" -> <|"value" -> 1|>,
...
|>
</code></pre>
<p>or</p>
<pre><code><|
"a" -> <|
"I" -> 1 ,
...|>
</code></pre>
<p>but the first is the most desired one because we may have more that one ("value") key left.</p>
| Edmund | 19,542 | <p>Instead of a nested association solution, would <code>Query</code> and <code>Select</code> be acceptable.</p>
<pre><code>Query[Select[#type == "a" && #subtype == "I" &], "value"]@dataset
(* {1} *)
</code></pre>
<p>This form is more descriptive on what is happening and does not require reshaping of the list of associations. </p>
<p>If your data is such that there is only ever one item intersecting a particular <code>"type"</code> and <code>"subtype"</code> then tack on <code>First</code>.</p>
<pre><code>First@Query[Select[#type == "a" && #subtype == "I" &], "value"]@
dataset
(* 1 *)
</code></pre>
<p>Hope this helps.</p>
<hr>
<p><strong>Extension</strong> </p>
<p>You can extend this to a more general case in which you parametrise the filter by both key and value.</p>
<pre><code>filterBy[filter_] := Function[Evaluate[And @@ ReleaseHold[Hold[Slot][First@#] == Last@# & /@ filter]]]
</code></pre>
<p>then with </p>
<pre><code>target = {{"type", "a"}, {"subtype", "I"}};
Query[SelectFirst[filterBy[target]], "value"]@dataset
(* 1 *)
</code></pre>
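For comparison, the same select-then-project pattern on a plain list of dicts (illustrative Python, not Wolfram Language; `filter_by` here is a hypothetical analogue of the `filterBy` helper above):

```python
dataset = [
    {"type": "a", "subtype": "I",  "value": 1},
    {"type": "a", "subtype": "II", "value": 2},
    {"type": "b", "subtype": "I",  "value": 1},
    {"type": "b", "subtype": "II", "value": 2},
]

def filter_by(target):
    """Build a predicate from {key: value} pairs."""
    return lambda row: all(row[k] == v for k, v in target.items())

# select the first row matching both keys, then project out "value"
match = next(filter(filter_by({"type": "a", "subtype": "I"}), dataset))
```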
|
107,399 | <p>Let's say we have a set of associations:</p>
<pre><code>dataset = {
<|"type" -> "a", "subtype" -> "I", "value" -> 1|>,
<|"type" -> "a", "subtype" -> "II", "value" -> 2|>,
<|"type" -> "b", "subtype" -> "I", "value" -> 1|>,
<|"type" -> "b", "subtype" -> "II", "value" -> 2|>
}
</code></pre>
<p>where every entry is unique in terms of <code>{#type, #subtype}</code>, </p>
<h3>I'd like to build a nested association for more handy querying, e.g. I would like to have:</h3>
<pre><code>nested["a", "II", "value"]
</code></pre>
<blockquote>
<pre><code>2
</code></pre>
</blockquote>
<h3>I can start with</h3>
<pre><code>GroupBy[dataset, {#type &, #subtype &}]
</code></pre>
<blockquote>
<pre><code><|
"a" -> <|
"I" -> {<|"type" -> "a", "subtype" -> "I", "value" -> 1|>},
"II" -> {<|"type" -> "a", "subtype" -> "II", "value" -> 2|>}|>,
"b" -> <|
"I" -> {<|"type" -> "b", "subtype" -> "I", "value" -> 3|>},
"II" -> {<|"type" -> "b", "subtype" -> "II", "value" -> 4|>}
|>|>
</code></pre>
</blockquote>
<p>But <code>nested["a", "I"]</code> points to a <strong>list with one association</strong>, what is expected but I would like to drop that list. </p>
<p>It seems that the third argument of <code>GroupBy</code> isn't generalized to handle nested grouping... </p>
<p>So basically I would like to have <code>... "I" -> <|"type" -> "a", ...</code>.</p>
<h3>What is a generic way to go?</h3>
<p>I can do:</p>
<ul>
<li><p>nested <code>GroupBy</code>: </p>
<pre><code>GroupBy[dataset, #type &, GroupBy[#, #subtype &, First] &]
</code></pre></li>
<li><p><code>Map</code> later:</p>
<pre><code>GroupBy[dataset, {#type &, #subtype &}] // Map[First, #, {-3}] &
</code></pre></li>
</ul>
<p>But the first is not handy in general while the second is ugly (and not general either). </p>
<hr>
<p>Acceptable outputs are:</p>
<pre><code><|
"a" -> <|
"I" -> <|"type" -> "a", "subtype" -> "I", "value" -> 1|>,
...
|>
</code></pre>
<p>or</p>
<pre><code><|
"a" -> <|
"I" -> <|"value" -> 1|>,
...
|>
</code></pre>
<p>or</p>
<pre><code><|
"a" -> <|
"I" -> 1 ,
...|>
</code></pre>
<p>but the first is the most desired one because we may have more than one ("value") key left.</p>
| Mr.Wizard | 121 | <p>I believed this question to be a duplicate of <a href="https://mathematica.stackexchange.com/q/86578/121">Create Nested List from tabular data</a> and proposed, with minor variation, the same answer:</p>
<pre><code>fn[x_List] := GroupBy[x, First -> Rest, fn]
fn[{a_}] := a
nested = fn[dataset]
</code></pre>
<blockquote>
<pre><code><|"a" -> <|"I" -> <|"value" -> 1|>, "II" -> <|"value" -> 2|>|>,
"b" -> <|"I" -> <|"value" -> 1|>, "II" -> <|"value" -> 2|>|>|>
</code></pre>
</blockquote>
<pre><code>nested["a", "II", "value"]
</code></pre>
<blockquote>
<pre><code>2
</code></pre>
</blockquote>
<p>WReach suggested a different reading of this question however. To that end I propose:</p>
<pre><code>ClearAll[fn]
fn[p_, r___][x_List] := GroupBy[x, Lookup[p] -> KeyDrop[p], fn[r]]
fn[][{x_}] := x
nested = dataset // fn["type", "subtype"]
</code></pre>
<blockquote>
<pre><code><|"a" -> <|"I" -> <|"value" -> 1|>, "II" -> <|"value" -> 2|>|>,
"b" -> <|"I" -> <|"value" -> 1|>, "II" -> <|"value" -> 2|>|>|>
</code></pre>
</blockquote>
<pre><code>nested["a", "II", "value"]
</code></pre>
<blockquote>
<pre><code>2
</code></pre>
</blockquote>
|
1,613,185 | <p>There are five red balls and five green balls in a bag. Two balls are taken out at random. What is the probability that both the balls are of the same colour?</p>
| drhab | 75,923 | <p>Hint:</p>
<p>If you have taken one ball out then in the bag there are $4$ balls that have the color of the drawn ball and $5$ that do <em>not</em> have the color of the drawn ball.</p>
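<p>The hint leads directly to the exact answer $\frac49$; a quick Python sketch (an added check, not part of the original answer):</p>

```python
from fractions import Fraction
from math import comb

# Count same-colour pairs among the C(10, 2) equally likely pairs:
# C(5, 2) red pairs plus C(5, 2) green pairs.
p_same = Fraction(comb(5, 2) + comb(5, 2), comb(10, 2))

# The hint's route: whatever the first ball is, 4 of the 9
# remaining balls match its colour.
p_hint = Fraction(4, 9)

assert p_same == p_hint
```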
|
2,916,037 | <p>Some cute results have every digit doubled. </p>
<p>\begin{align}
99225500774400 = {} & \frac{40!}{31!} \\[8pt]
33554433 = {} & 2^{25} +1 \\[8pt]
222277 = {} & -22^{2^2}+77^3 \\[8pt]
8811551199 = {} & 95^5 + 64^5 \\[8pt]
7755660000 = {} & 95^5 + 65^4 \\[8pt]
334444448888 = {} & 6942^3 - 10^8 \\[8pt]
11881133 = {} & 26^5 - 3^5 \\[8pt]
0.0011223344556677\ldots = {} & \frac 1 {891} \\[8pt]
22116600446655008888446677444 = {} & 148716510336462^2 \\[8pt]
\end{align}</p>
<p>The last is from <a href="http://oeis.org/A079036" rel="nofollow noreferrer">A079036</a>. What are other cute examples?</p>
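<p>All of these are easy to check by machine; a quick Python sketch (added for verification, using a loose "every digit sits in a run of at least two" test so that the final square, which ends in three 4s, also passes):</p>

```python
from itertools import groupby
from math import factorial

def every_digit_doubled(s: str) -> bool:
    """Every maximal run of equal digits has length at least 2."""
    return all(len(list(run)) >= 2 for _, run in groupby(s))

checks = {
    "99225500774400": factorial(40) // factorial(31),
    "33554433": 2**25 + 1,
    "222277": -(22**(2**2)) + 77**3,
    "8811551199": 95**5 + 64**5,
    "7755660000": 95**5 + 65**4,
    "334444448888": 6942**3 - 10**8,
    "11881133": 26**5 - 3**5,
    "22116600446655008888446677444": 148716510336462**2,
}

for s, value in checks.items():
    assert int(s) == value and every_digit_doubled(s)
```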
| joriki | 6,622 | <p>$$\frac15=0.001100110011\ldots_2\;,$$</p>
<p>$$\frac1{20}=0.001100110011\ldots_3\;,$$</p>
<p>and generally</p>
<p>$$\frac1{101(10-1)}=0.001100110011\ldots$$</p>
<p>in all bases.</p>
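<p>A short Python sketch (added) checking the general claim, reading $101$ and $10-1$ in base $b$, i.e. $\frac1{(b^2+1)(b-1)}$:</p>

```python
from fractions import Fraction

def base_digits(x: Fraction, base: int, n: int):
    """First n digits after the point of x in [0, 1) in the given base."""
    out = []
    for _ in range(n):
        x *= base
        d = int(x)        # next digit is the integer part
        out.append(d)
        x -= d
    return out

for b in range(2, 8):
    x = Fraction(1, (b * b + 1) * (b - 1))
    assert base_digits(x, b, 8) == [0, 0, 1, 1, 0, 0, 1, 1]
```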
|
2,055,559 | <blockquote>
<p>Let <span class="math-container">$a,b,c$</span> be the length of sides of a triangle then prove that:</p>
<p><span class="math-container">$a^2b(a-b)+b^2c(b-c)+c^2a(c-a)\ge0$</span></p>
</blockquote>
<p>Please help me!!!</p>
| math110 | 58,742 | <p>$$a^2b(a-b)+b^2c(b-c)+c^2a(c-a)
=\dfrac{1}{2}\left[(a+b-c)(b+c-a)(a-b)^2+(b+c-a)(a+c-b)(b-c)^2+(a+c-b)(a+b-c)(c-a)^2\right]
\ge 0$$</p>
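<p>Both sides of this identity are polynomials of degree at most $4$ in each variable, so agreement on a $5\times5\times5$ integer grid proves it; a quick Python sketch (added for verification):</p>

```python
def lhs(a, b, c):
    return a*a*b*(a - b) + b*b*c*(b - c) + c*c*a*(c - a)

def rhs2(a, b, c):
    # twice the right-hand side, kept in integers
    return ((a + b - c)*(b + c - a)*(a - b)**2
            + (b + c - a)*(a + c - b)*(b - c)**2
            + (a + c - b)*(a + b - c)*(c - a)**2)

# Degree <= 4 in each variable, so a 5x5x5 grid settles the identity.
assert all(2 * lhs(a, b, c) == rhs2(a, b, c)
           for a in range(5) for b in range(5) for c in range(5))

# On genuine triangles each bracket is nonnegative, hence the inequality.
triangles = [(a, b, c) for a in range(1, 7) for b in range(1, 7)
             for c in range(1, 7) if a < b + c and b < a + c and c < a + b]
assert all(lhs(a, b, c) >= 0 for a, b, c in triangles)
```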
|
2,140,192 | <p>I want to show that $C^1[0,1]$ isn't a Banach Space with the norm:</p>
<p>$$||f||=\max\limits_{y\in[0,1]}|f(y)|$$</p>
<p>Therefore, I want to show that the sequence $\left \{ |x-\frac{1}{2}|^{1+\frac{1}{n}} \right \}$ converges to $|x-\frac{1}{2}|$, but I can't find $N$ in the definition of convergence. Could anyone give me an idea?</p>
<p>Thank You. </p>
| Martin Argerami | 22,857 | <p>You have, using the Mean Value Theorem, the inequality
$$
1-e^t\leq |t|,\ \ \ \ t\leq0.
$$
Then, for $t\in[0,1]$ and $x>0$,
$$
|t^{1+x}-t|=|t|\,|t^x-1|\leq |t|\,|e^{x\log t}-1|\leq x\,|t\log t|\leq \frac xe.
$$
So
$$
\left|\,\left|x-\tfrac12\right|^{1+\tfrac1n}-\left| x-\tfrac12\right|\,\right |\leq\frac1{ne}
$$</p>
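<p>A quick numerical sanity check (added) of the resulting bound $\sup_{t\in[0,1]}\left|t^{1+x}-t\right|\le \frac xe$, which gives the asker a uniform $O(1/n)$ rate instead of an explicit $N$:</p>

```python
import math

for n in (1, 2, 5, 10, 100):
    x = 1.0 / n
    # Grid maximum over [0, 1] is a lower bound for the true supremum,
    # so it must also stay below x / e.
    sup = max(abs(t**(1 + x) - t) for t in (i / 10000 for i in range(10001)))
    assert sup <= x / math.e + 1e-12
```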
|
89,188 | <p>Norimatsu's lemma says that on a smooth projective complex variety $X$ of dimension $n$ we have $H^i(X,\mathcal O_X(-A-E))=0$ for $i < n$ when $A$ is an ample divisor and $E$ is a simple normal crossings (SNC) divisor. Does this statement remain true if $E$ is an effective divisor with SNC support? In other words, if $\sum E_i$ is SNC, do we still have these vanishings for $\mathcal O_X(-A-\sum a_i E_i)$ where $a_i >0$? Or is there a counterexample?</p>
| Sándor Kovács | 10,076 | <p>Let $X$ be an arbitrary smooth projective surface, $A$ an arbitrary ample divisor and $E\subset X$ a smooth proper curve. Consider the short exact sequence
$$
0\to \mathscr O_X(-A-(a+1)E)\to \mathscr O_X(-A-aE)\to \mathscr O_X(-A-aE)|_E\to 0
$$
If $H^0(X, \mathscr O_X(-A-aE))=0$, then
$H^1(X, \mathscr O_X(-A-(a+1)E))= 0$ <em>only if</em>
$H^0(E, \mathscr O_X(-A-aE)|_E)= 0$. Therefore, if $(A+aE)\cdot E\ll 0$, then
the desired vanishing will fail.</p>
<p>So, for example if $E$ is such that $E^2<0$, then this will happen for $a\gg 0$. </p>
<p>If, as in Francesco's example, $X$ is a Del Pezzo surface, $A=-K_X$, and $E$ is an exceptional curve, then $(-A-aE)\cdot E=K_X\cdot E -aE^2=-1+a$ and hence $\mathscr O_X(-A-aE)|_E\simeq \mathscr O_E(a-1)$, so if $a>0$, then $H^0(E, \mathscr O_X(-A-aE)|_E)\neq 0$ and hence this gives another proof that in Francesco's example the desired vanishing fails as soon as the coefficient of $E$ is at least $2$. </p>
<hr>
<p>A similar example works in higher dimensions as long as $-E|_E$ is ample, for instance if it is the exceptional divisor of the blow up of a point (probably even under more general conditions). In other words, you cannot expect the vanishing you would like unless $E$ has some semi-positivity property (say $E$ is nef), but in that case you probably get it directly (since for example if $E$ is nef, then $A+aE$ is ample).</p>
|
2,484 | <p>It's been quite a while since I was tutoring a high school student and even longer since not a gifted one.</p>
<p>However, this time, something was amiss. I have asked him to show me how he does some exercise, and then another and the only thing I wanted to do was to shout:</p>
<blockquote>
<p><strong>You are doing it <em>wrong</em>!</strong></p>
</blockquote>
<p>I hope I didn't let it show and tried to work it out in small steps. However, it was like he didn't want to learn, he just wanted to get the problem set done.</p>
<p>One mistake he would frequently do, is to mess the signs in trigonometric reductions (e.g. $\cos(90^\circ + x)$ would be $\sin x$ instead of $-\sin x$). In fact, he memorized all the formulas, but couldn't recall them properly. I've tried to teach him how to recover formulas from graphs of $\sin$ and $\cos$, but the response was along the lines of "I don't need to understand it" and "I would like to do it without thinking".</p>
<p>For example, there was a pair of tasks 1) prove that $X(\alpha) = Y(\alpha)$ and 2) calculate $X(30^\circ)$, where $Y$ was simple and $X$ was not. However, in the class they covered such calculations, so he evaluated the $X$ directly. Yet, pointing out the simpler way didn't change his approach.</p>
<p>Another example could be an expression with non-round numbers, where I suggested to hide them behind $\alpha$ and its variations (esp. since such substitution made the necessary formulas and reductions easily visible). Nevertheless, he tirelessly rewrote them all the way through calculations and clung to their concreteness like to some kind of lifeline.</p>
<p>I am aware that an experienced tutor would do much better than I did, but that's not what I'm after. What shook me most strongly was this strange quality of his approach to mathematics that would produce a strong feeling of dissatisfaction (similar to the one you have when seeing something ugly). What I would like to ask about is:</p>
<p><strong>How can I change such a student's approach to math?</strong></p>
<p>It seems like a Catch-22. He sees math as horrendous, but won't learn otherwise, because he doesn't want to deal with it. On the other hand, when he does, it's so wrong that no wonder he sees math this way.</p>
<p>For a change I wanted to show him something engaging, e.g. how math allows us to build nice things using 3D animation software, but he would not make a connection. It was just a few cool pics useful only as long as they can be used to raise his status (e.g. his friends think them awesome).</p>
<p>I have no illusions on whether I could teach him to enjoy math. But, is it possible to make him not hurt mathematics? Right now, I admit defeat.</p>
| Toscho | 439 | <ol>
<li>JvR's answer is very good.</li>
<li>I'd like to point out a missing aspect:</li>
</ol>
<h1>Emotion</h1>
<p>Your student has developed highly negative emotions concerning math. He has lost motivation and is in despair about math. He feels he'll never understand math, although he knows that it's important. He does <em>not</em> understand what he's doing at the moment, but he also doesn't want to do something new, for three reasons:</p>
<ol>
<li>Learning something new has the risk of forgetting or losing the little he thinks he knows or understands. (That's wrong, but he can't see it the right way.)</li>
<li>Trying to learn something new has the risk of failure. That'd increase the negative feelings towards math.</li>
<li>The new stuff is gonna be harder than the current stuff. As he doesn't even understand the current stuff well, he won't understand the next stuff at all.</li>
</ol>
<p>You cannot win against these emotions. You have to reduce them.</p>
<h2>Bring in good emotions</h2>
<p>Forget about the current subject; if you must deal with it, give him the recipes he wants, but don't try to change him.</p>
<p>Take the next subject or an alternate subject to teach him something new that <em>really works for him</em>. He needs success with math. The way of solving that you choose to teach him need not be the "right" way. He need not understand it fully. It only must work and bring him from a given task to its solution.</p>
<h2>Let him stay with that successful way for a while</h2>
<p>He must gain confidence in math. He must gain confidence in you. He must gain confidence in himself being able to do math. He only has this one working way. Don't take that away from him. His school teacher is already doing that regularly.</p>
<p>When he's so confident that he himself thinks some development might be due, then you can show him better ways. He'll trust you then.</p>
|
3,141,510 | <p>Calculate sum
<span class="math-container">$$ \sum_{k=2}^{2^{2^n}} \frac{1}{2^{\lfloor \log_2k \rfloor} \cdot 4^{\lfloor \log_2(\log_2k )\rfloor}} $$</span></p>
<p>I hope to solve this using Iverson notation:</p>
<h2>my try</h2>
<p><span class="math-container">$$ \sum_{k=2}^{2^{2^n}} \frac{1}{2^{\lfloor \log_2k \rfloor} \cdot 4^{\lfloor \log_2(\log_2k )\rfloor}} = \sum_{k,l,m}2^{-l}4^{-m} [2^l \le k < 2^{l+1}][2^{2^m} \le k < 2^{2^m+1}] $$</span></p>
<p>and now:
<span class="math-container">$$ [2^l \le k < 2^{l+1}][2^{2^m} \le k < 2^{2^m+1}] \neq 0 $$</span> if and only if <span class="math-container">$$2^l \le k < 2^{l+1} \wedge 2^{2^m} \le k < 2^{2^m+1} $$</span></p>
<p>I can assume that <span class="math-container">$l$</span> is const (we know value of <span class="math-container">$l$</span>) and treat <span class="math-container">$m$</span> as variable depence from <span class="math-container">$l$</span>. Ok so: <br>
<span class="math-container">$$2^l \le 2^{2^m} \wedge 2^{2^m+1} \le 2^{l+1} $$</span>
but it gives me that <span class="math-container">$l=2^m$</span>
I think that it is not true (but I also don't see the mistake). Even if it is true, how can it be finished? </p>
| TheSilverDoe | 594,484 | <p>Let's write
<span class="math-container">$$\sum_{k=2}^{2^{2^n}} \frac{1}{2^{\lfloor \log_2(k) \rfloor}4^{\lfloor \log_2(\log_2(k))\rfloor}} = \sum_{i=0}^{n-1} \sum_{k=2^{2^i}}^{2^{2^{i+1}}-1} \frac{1}{2^{\lfloor \log_2(k) \rfloor}4^{\lfloor \log_2(\log_2(k))\rfloor}} + \frac{1}{2^{2^n}4^{n}}$$</span> </p>
<p><span class="math-container">$$= \sum_{i=0}^{n-1} \sum_{k=2^{2^i}}^{2^{2^{i+1}}-1} \frac{1}{2^{\lfloor \log_2(k) \rfloor}4^{i}} + \frac{1}{2^{2^n}4^{n}}$$</span></p>
<p>Moreover for all <span class="math-container">$i=0, ..., n-1$</span>,
<span class="math-container">$$\sum_{k=2^{2^i}}^{2^{2^{i+1}}-1} \frac{1}{2^{\lfloor \log_2(k) \rfloor}} = \sum_{j=2^i}^{2^{i+1}-1} \sum_{k=2^j}^{2^{j+1}-1} \frac{1}{2^{\lfloor \log_2(k) \rfloor}} = \sum_{j=2^i}^{2^{i+1}-1} \sum_{k=2^j}^{2^{j+1}-1} \frac{1}{2^j} = \sum_{j=2^i}^{2^{i+1}-1} \frac{2^j}{2^j} = 2^i $$</span></p>
<p>You deduce that
<span class="math-container">$$S = \sum_{i=0}^{n-1} \frac{1}{2^i} + \frac{1}{2^{2^n}4^{n}} = 2 - \frac{1}{2^{n-1}} + \frac{1}{2^{2^n}4^{n}}$$</span></p>
|
2,805,192 | <p><a href="https://i.stack.imgur.com/OhGi4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OhGi4.png" alt="enter image description here"></a></p>
<p>Definition:</p>
<p><a href="https://i.stack.imgur.com/mV4NU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mV4NU.png" alt="enter image description here"></a></p>
<p>Call the set in the hint E. I've already proven that E is finite and X can be covered by finitely many neighborhoods of radius $\delta$.</p>
<p>What I'm not sure of is the next part, where I need to show that E is dense. I want to show that every point of X can be a point of E. This happens because for a point x $\in$ X such that x $\in$ $N_\delta(q)$ for some q $\in$ E, we can then choose another smaller $\delta$ so that E also counts x amongst its members.</p>
<p>So for $\delta = 1/n \ (n = 1,2,3,...)$, every point of X can be a point of E. Hence E is dense.</p>
<p>This argument seems to go in the direction of the hint, but I'm not really sure about it. After all, wouldn't we need to prove that E is dense for a fixed $\delta$?</p>
| Henno Brandsma | 4,280 | <p>The sketched argument is as follows: let $\delta >0$ be given.
Do the recursion: pick $x_0 \in X$ and having picked $x_0,\ldots,x_n$, pick $x_{n+1}$ such that $$\forall i\le n: d(x_i, x_{n+1}) \ge \delta \tag 1$$</p>
<p>If this process never stops, it's easy to verify that the set $\{x_n: n \ge 0\}$ is infinite and has no limit point.</p>
<p>So for each fixed $\delta>0$ there is a finite stage $n$ where this process breaks down, so we have $x_0,\ldots,x_n$ but for <em>all</em> $x \in X$ we have that $d(x, x_i)< \delta$ for some $i \in \{0,\ldots,n\}$ (or we could have picked $x$ as the $x_{n+1}$ and the process would have gone on). This can be reformulated, as your text does, as </p>
<blockquote>
<p>If $(X,d)$ satisfies the hypothesis that every infinite set has a limit point, then for every $\delta>0$ there are $x_0,\ldots, x_n \in X$ such that $X = \cup \{B(x_i, \delta): i = 0,\ldots, n\}$.</p>
</blockquote>
<p>(or in general topology terms: a limit point compact metric space is totally bounded.)</p>
<p>Now apply the above lemma to $\delta=1,\frac12, \frac13,\ldots$ and every time collect the promised centres of the balls as the (finite) subset $F_n$ of $X$ (for $\delta=\frac1n$), and define $D = \cup_n F_n$. The claim now is that $D$ is dense: to see this, pick any $x\in X$ and any $\varepsilon>0$ and we want to show that $B(x,\varepsilon) \cap D \neq \emptyset$: pick $N$ so large that $\frac1N < \varepsilon$ and note that $x \in B(x_i, \frac1N)$ for some $x_i \in F_N$, so that $x_i \in D$ and $d(x,x_i) < \frac1N < \varepsilon$ and so $D \cap B(x,\varepsilon) \neq \emptyset$ is witnessed by this $x_i$.</p>
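<p>The recursion in this answer is just greedy packing; a small Python sketch (added) running it on a finite point set in the plane, where the process must break down and the chosen points form a $\delta$-net:</p>

```python
import math

def greedy_net(points, delta):
    """Pick centers one by one, each at distance >= delta from all
    previous ones, until no such point remains (the process 'breaks down')."""
    centers = []
    for p in points:
        if all(math.dist(p, c) >= delta for c in centers):
            centers.append(p)
    return centers

points = [(i / 10, j / 10) for i in range(11) for j in range(11)]
delta = 0.25
centers = greedy_net(points, delta)

# Every point is within delta of some chosen center, as in the lemma.
assert all(any(math.dist(p, c) < delta for c in centers) for p in points)
```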
|
8,816 | <p>What is the result of multiplying several (or perhaps an infinite number) of binomial distributions together?</p>
<p>To clarify, an example.</p>
<p>Suppose that a bunch of people are playing a game with k (to start) weighted coins, such that heads appears with probability p < 1. When the players play a round, they flip all their coins. For each heads, they get a coin to flip in the next round. This is repeated every round until they have a round with no heads.</p>
<p>How would I calculate the probability distribution of the number of coins a player will have after n rounds? Especially if n is extremely high and p extremely close to 1?</p>
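<p>An added sketch (not from the original post): since each head yields exactly one coin in the next round, every starting coin survives $n$ rounds independently with probability $p^n$, so the count after $n$ rounds is simply Binomial$(k, p^n)$. The Python below verifies this exactly for small parameters by iterating the round-by-round distribution:</p>

```python
from fractions import Fraction
from math import comb

def step(dist, p):
    """One round: a player holding i coins gets Binomial(i, p) coins next round."""
    out = [Fraction(0)] * len(dist)
    for i, prob in enumerate(dist):
        if prob:
            for j in range(i + 1):
                out[j] += prob * comb(i, j) * p**j * (1 - p)**(i - j)
    return out

k, p, n = 5, Fraction(9, 10), 4
dist = [Fraction(0)] * (k + 1)
dist[k] = Fraction(1)          # start with exactly k coins
for _ in range(n):
    dist = step(dist, p)

q = p**n  # survival probability of a single starting coin over n rounds
expected = [comb(k, j) * q**j * (1 - q)**(k - j) for j in range(k + 1)]
assert dist == expected
```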
| BCnrd | 3,927 | <p>This is really a comment on Pete's comment for Mikhail's answer, but I am making it an answer because it raises a question which I think should be more widely known.</p>
<p>The construction of Aut-scheme uses the entire Hilbert scheme, which has countably many components (due to varying Hilbert polynomials), and it is not obvious how many of these intervene in the construction. Mikhail's answer illustrates the basic example showing it can be infinite. But this leads to the following problem (suggested by what is known about Neron-Severi groups, whose construction encounters the same issue via construction of Pic schemes in terms of Hilbert schemes, at least in the original Grothendieck construction): </p>
<p>Q. Does the Aut scheme of a proper scheme over an alg. closed field have finitely generated component group? (This is a more basic question than finiteness, which is really Pete's comment to Mikhail's answer.) </p>
<p>Incredibly, even for smooth projective varieties over C this appears to be a wide open problem!! I have mentioned this to several experts in alg. geom. (including Oort), and nobody has an idea. If anyone has an idea or a solution, please let me know right away. </p>
|
1,459,334 | <p>How can we show that the additive inverse of a real number equals the number multiplied by -1, i.e. how can we show that $(-1)*u = -u$ for all real numbers $u$?</p>
| Community | -1 | <p>$-u$ is the unique additive inverse of $u$. By distributivity we have
$$(-1)u+u=(-1)u+(1)u=(-1+1)u=0u=0$$
and hence $(-1)u=-u.$ To prove $0u=0$, observe $0u=(0+0)u=0u+0u$ and the additive identity element is unique.</p>
|
1,459,334 | <p>How can we show that the additive inverse of a real number equals the number multiplied by -1, i.e. how can we show that $(-1)*u = -u$ for all real numbers $u$?</p>
| cr001 | 254,175 | <p>$(-1)*u = (-1)*u + u - u = (-1)*u + (1)*u - u = (-1+1)*u - u = -u$</p>
|
2,431,375 | <p>A continuous function $f$ on $[a,b]$, differentiable in $(a,b)$, has only 1 point where its derivative vanishes. What is true about this function?</p>
<p>A. $f$ cannot have an even number of extrema.</p>
<p>B. $f$ cannot have a maximum at one endpoint and minimum at the other.</p>
<p>C. $f$ might be monotonically increasing.</p>
<p>I think $A$ is true, since it has three extrema, but the right answer should be $C$. Why?</p>
| gen-ℤ ready to perish | 347,062 | <p>I argue that the truthfulness of A depends on whether “extrema” means “relative extrema,” “global extrema,” or both. After all, a function has to have a minimum and maximum value on an interval. For example, if $f(x)=x$ for $a\leq x\leq b$ only, then the extrema of $f$ are simply $a$ and $b$.</p>
<p>The prior example also shows that $B$ is false for $a$ and $b$ not equal to one another.</p>
<p>The function $f(x)=x$ for $a\leq x\leq b$ only also satisfies C, as its derivative is always positive.</p>
|
232,562 | <p>Ax-Grothendieck Theorem states that if $\mathbf K$ is an algebraically closed field, then any injective polynomial map $P:\mathbf K^n\longrightarrow \mathbf K^n$ is bijective.</p>
<blockquote><b>Question 1.</b> What does the inverse map of $P$ look like ? What kind of map is that ?</blockquote>
<p>$P^{-1}$ need not be polynomial, as the example $x^p$ in $\mathbf F_p^{alg}$ shows.</p>
<blockquote><b>Question 2.</b> Are there conditions under which $P^{-1}$ is polynomial ?</blockquote>
| Tsemo Aristide | 80,891 | <p>If $P$ has a polynomial inverse, then the Jacobian determinant of $P$ is a constant function. There is a conjecture, known as the Jacobian conjecture, which says that if the characteristic of $K$ is zero, then $P$ has a polynomial inverse if and only if its Jacobian determinant is a nonzero constant.</p>
<p><a href="https://en.wikipedia.org/wiki/Jacobian_conjecture" rel="nofollow">https://en.wikipedia.org/wiki/Jacobian_conjecture</a></p>
|
186,878 | <p>Is there a way to find the geoposition of a given distance from start in a <code>GeoPath</code>? I want to mark equidistant positions along a track, for example, a mark every 500 km along the path given by</p>
<pre><code>path=GeoGraphics[
GeoPath[{
Entity["City", {"Boston", "Massachusetts", "UnitedStates"}],
Entity["City", {"Rochester", "NewYork", "UnitedStates"}],
Entity["City", {"Chicago", "Illinois", "UnitedStates"}]
}, "Rhumb"]
]
</code></pre>
<p>Is there a way to find the pos that gives <code>GeoDistance[Entity["City", {"Boston", "Massachusetts", "UnitedStates"}],pos]==500 km</code> along the path? </p>
| ZaMoC | 46,583 | <p>Here is a function for finding GeoPositions between 2 cities with a certain step size.</p>
<pre><code>city1 = Entity["City", {"Boston", "Massachusetts", "UnitedStates"}];
city2 = Entity["City", {"Rochester", "NewYork", "UnitedStates"}];
city3 = Entity["City", {"Chicago", "Illinois", "UnitedStates"}];
geopath[c1_, c2_, step_] :=
 Module[{s = First@GeoDistance[c1, c2], a = GeoDirection[c1, c2]},
  GeoPath[NestList[GeoDestination[#, {1000 step, a}] &, c1, Floor[s/step]], "Rhumb"]]
</code></pre>
<p>Now if you want to find positions from city1 to city2 every 100 km, type</p>
<pre><code>geopath[city1, city2, 100]
</code></pre>
<p>and you will get the positions </p>
<blockquote>
<p>GeoPath[{Entity["City", {"Boston", "Massachusetts", "UnitedStates"}],
GeoPosition[{42.5124, -72.2106}], GeoPosition[{42.6928, -73.4044}],
GeoPosition[{42.8732, -74.6017}], GeoPosition[{43.0535, -75.8026}],
GeoPosition[{43.2338, -77.0069}]}, "Rhumb"]</p>
</blockquote>
|
1,611,506 | <blockquote>
<p>$$\int (2x^2+1)e^{x^2} \, dx$$</p>
</blockquote>
<p>It's part of my homework, and I have tried a few things but it seems to lead to more difficult integrals. I'd appreciate a hint more than an answer but all help is valued.</p>
| Future | 299,525 | <p>Let $u = xe^{x^2}$. Then $du = (2x^2+1) e^{x^2}\,dx$.
Thus,</p>
<p>$$\int (2x^2 + 1)e^{x^2}\,dx = \int du = u + C = xe^{x^2} + C$$</p>
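<p>A quick numerical check (added) that $\frac{d}{dx}\left(xe^{x^2}\right) = (2x^2+1)e^{x^2}$, using central differences:</p>

```python
import math

def F(x):           # proposed antiderivative
    return x * math.exp(x * x)

def f(x):           # integrand
    return (2 * x * x + 1) * math.exp(x * x)

h = 1e-6
for x in (-1.5, -0.5, 0.0, 0.7, 1.2):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - f(x)) < 1e-4 * max(1.0, abs(f(x)))
```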
|
3,455,009 | <p>In the proof of the expectation of the binomial distribution,</p>
<p><span class="math-container">$$E[X]=\sum_{k=0}^{n}k \binom{n}{k}p^kq^{n-k}=p\frac{d}{dp}(p+q)^n=pn(p+q)^{n-1}=np$$</span></p>
<p>Why is <span class="math-container">$\sum_{k=0}^{n}k \binom{n}{k}p^kq^{n-k}= p\frac{d}{dp}(p+q)^n$</span>?</p>
<p>I know that by the binomial theorem <span class="math-container">$(p+q)^n=\sum_{k=0}^{n}\binom{n}{k}p^kq^{n-k}$</span>. </p>
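<p>One way to see the identity, treating $q$ as a constant that does not depend on $p$, is term-by-term differentiation of the binomial expansion:</p>

```latex
p\frac{d}{dp}(p+q)^n
  = p\frac{d}{dp}\sum_{k=0}^{n}\binom{n}{k}p^{k}q^{n-k}
  = p\sum_{k=0}^{n}\binom{n}{k}\,k\,p^{k-1}q^{n-k}
  = \sum_{k=0}^{n}k\binom{n}{k}p^{k}q^{n-k}.
```

<p>Only after differentiating does one substitute $p+q=1$, giving $pn(p+q)^{n-1}=np$.</p>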
| user | 505,767 | <p>We have for <span class="math-container">$z\neq 0$</span></p>
<p><span class="math-container">$$(z+1)^3=z^3 \iff \left(1+\frac1z\right)^3=1 \implies 1+\frac1z=1,e^{i\frac23 \pi},e^{-i\frac23 \pi}$$</span></p>
<p>therefore</p>
<p><span class="math-container">$$\frac1z =-1+e^{i\frac23 \pi},-1+e^{-i\frac23 \pi} \implies z=\frac{\bar z}{z\bar z}=\frac1{2}(-1+e^{-i\frac23 \pi}),\frac1{2}(-1+e^{i\frac23 \pi})$$</span></p>
<p>that is</p>
<p><span class="math-container">$$z_1= -\frac32-i\frac32 \sqrt 3, \quad z_2= -\frac32+i\frac32 \sqrt 3$$</span></p>
|
19,880 | <p>I want to write down $\ln(\cos(x))$ Maclaurin polynomial of degree 6. I'm having trouble understanding what I need to do, let alone explain why it's true rigorously.</p>
<p>The known expansions of $\ln(1+x)$ and $\cos(x)$ give:</p>
<p>$$\forall x \gt -1,\ \ln(1+x)=\sum_{n=1}^{k} (-1)^{n-1}\frac{x^n}{n} + R_{k}(x)=x-\frac{x^2}{2}+\frac{x^3}{3}+R_{3}(x)$$
$$\cos(x)=\sum_{n=0}^{k} (-1)^{n}\frac{x^{2n}}{(2n)!} + T_{2k}(x)=1-\frac{x^2}{2}+\frac{x^4}{24}+T_{4}(x)$$</p>
<p>Writing $\ln(1+x)$ with $t=x-1$ gives:</p>
<p>$$\forall t \gt 0,\ \ln(t)=\sum_{n=1}^{k} (-1)^{n-1}\frac{(t+1)^n}{n} + R_{k}(t)$$</p>
<p>But now I'm clueless. </p>
<ul>
<li>Do I just 'plug' $\cos(x)$ expansion in $\ln(t)$? Can I even do that?</li>
<li>Isn't it a problem that $\ln(x)$ isn't defined for $x\leq 0$ but $|\cos(x)| \leq 1$?</li>
</ul>
| Ofir | 2,125 | <p>Since you know the polynomial of $\ln(1+t)$ and you know that $\cos(x)= 1 + (-\frac {x^2}{2!}+\frac{x^4}{4!}+x^4\epsilon(x))$ then you can "plug" it in the polynomial of $\ln(1+t)$. You now have that $\ln(\cos(x)) = P(x) + R(x)$ where P is a polynomial and R is the error such that $\lim _{x\rightarrow 0} \frac{R(x)}{x^k} = 0$ and $k=deg(P)$. </p>
<p>Every function that has a Taylor polynomial actually has a unique polynomial (for a fixed degree).
If $f(x)=P_1 (x) +x^k \epsilon_1 (x) = P_2 (x)+x^k \epsilon_2 (x)$ are two representations, then taking the limit as x approaches 0 shows that $P_1 (0)=P_2 (0)=a_0$. Subtract $a_0$ from both sides, divide by x, and again take the limit as x approaches 0 to see that the coefficients of x in $P_1$ and $P_2$ are the same, and continue this by induction.</p>
<p>This shows that the polynomial that you found is the Taylor polynomial.</p>
<p>In your case, when x is near zero, $\cos(x)$ is near 1, so $\ln(\cos(x))$ is well defined.</p>
|
19,880 | <p>I want to write down $\ln(\cos(x))$ Maclaurin polynomial of degree 6. I'm having trouble understanding what I need to do, let alone explain why it's true rigorously.</p>
<p>The known expansions of $\ln(1+x)$ and $\cos(x)$ give:</p>
<p>$$\forall x \gt -1,\ \ln(1+x)=\sum_{n=1}^{k} (-1)^{n-1}\frac{x^n}{n} + R_{k}(x)=x-\frac{x^2}{2}+\frac{x^3}{3}+R_{3}(x)$$
$$\cos(x)=\sum_{n=0}^{k} (-1)^{n}\frac{x^{2n}}{(2n)!} + T_{2k}(x)=1-\frac{x^2}{2}+\frac{x^4}{24}+T_{4}(x)$$</p>
<p>Writing $\ln(1+x)$ with $t=x-1$ gives:</p>
<p>$$\forall t \gt 0,\ \ln(t)=\sum_{n=1}^{k} (-1)^{n-1}\frac{(t+1)^n}{n} + R_{k}(t)$$</p>
<p>But now I'm clueless. </p>
<ul>
<li>Do I just 'plug' $\cos(x)$ expansion in $\ln(t)$? Can I even do that?</li>
<li>Isn't it a problem that $\ln(x)$ isn't defined for $x\leq 0$ but $|\cos(x)| \leq 1$?</li>
</ul>
| J126 | 2,838 | <p>Another way to do the problem is to realize that</p>
<p>$$
\frac{d}{dx}\ln(\cos x)=-\tan x
$$</p>
<p>And so the series will just be $\ln(\cos 0)=0$ plus a slightly adjusted series for $-\tan x$. This also might help you see that the series is valid, since you are doing it out the long way.</p>
|
1,973,686 | <p>I am stuck on two questions :</p>
<ol>
<li>If $f,g\in C[0,1]$ where $C[0,1]$ is the set of all continuous functions in $[0,1]$ then is the mapping $id:(C[0,1],d_2)\to (C[0,1],d_1)$ continuous ? where $id$ denotes the identity mapping.</li>
</ol>
<p>where $d_2(f,g)=\left(\int _0^1 |f(t)-g(t)|^2\,dt \right)^{\frac{1}{2}}$ and $d_1(f,g)=\int _0^1 |f(t)-g(t)|\,dt$?</p>
<p>2.If $f\in L^2(\Bbb R)$ does it imply that $f\in L^1(\Bbb R)$.</p>
<p><strong>My try</strong>:</p>
<p>1. The first question reduces to proving that if $(\int _0^1 |f(t)-g(t)|^2dt )^{\frac{1}{2}} <\infty$ then $\int _0^1 |f(t)-g(t)|\,dt<\infty$, which I am unable to prove.</p>
<p>I am also unable to conclude anything for the 2nd one.</p>
<p>Please give some hints</p>
| Alexis Olson | 11,246 | <p>@marwalix got the first question.</p>
<p>For the second, consider the function $f(x) = \min\left\{1,\frac{1}{|x|}\right\}$.</p>
<p>$$\int_{\Bbb R} |f|^2 = \int_{-\infty}^{-1} \frac{dx}{x^2} + \int_{-1}^1 dx + \int_{1}^{\infty} \frac{dx}{x^2}= 1+2+1 = 4$$</p>
<p>but $\int_{\Bbb R}|f|$ doesn't converge.</p>
|
2,663,537 | <p>Suppose G is a group with x and y as elements. Show that $(xy)^2 = x^2 y^2$ if and only if x and y commute.</p>
<p>My very basic thought is that we expand such that $xxyy = xxyy$, then multiply each side by $x^{-1}$ and $y^{-1}$, such that $x^{-1} y^{-1} xxyy = xxyy x^{-1}$ , and therefore $xy=xy$.</p>
<p>I realize that this looks like a disproportionate amount of work for such a simple step, but that is what past instruction has looked like and that is perhaps why I am confused. Moreover, "if and only if" clauses have always been tricky for me since I took Foundations of Math years ago, but if I remember correctly, the goal here should be to basically do the proof from right to left and then left to right, so to speak. Anyhow, I think that I am overthinking this problem.</p>
| operatorerror | 210,391 | <p>If $x$ and $y$ commute, clearly we have
$$
xyxy=xxyy=x^2y^2
$$
if instead
$$
xyxy=x^2y^2
$$
then multiplying on the left by $x^{-1}$ and on the right by $y^{-1}$ yields
$$
xy=yx
$$</p>
|