1,422,666
<p>I've just run into this problem, and was only able to follow the induction step up to the bolded section. The last part, which I found in the back of my book (italicized), I can't understand.</p> <p>Use induction to prove: $n^2\geq 2n + 1$ for all $n\in \mathbb{N}$ and $n\geq 3$.</p> <p>Base: $3^2\geq 6+1$.</p> <p>Induction Hypothesis: Assume that $k^2\geq 2k+1$ for $k\geq 3$.</p> <p>Induction Step: $(k + 1)^2=k^2+2k+1\geq 2k+1 + 2k + 1 = 2(k + 1) + 2k \geq 2(k + 1) + 1$</p> <p>I got as far as $(k + 1)^2 \geq 2(k+1) + 1$; how is the last inequality justified?</p> <p>Any help, much appreciated.</p>
2'5 9'2
11,123
<p>You have <em>assumed</em> that $\color{blue}{k^2}\geq\color{#af5500}{2k+1}$ for some specific $k$. So $$\begin{align}(k+1)^2&amp;=\color{blue}{k^2}+2k+1\\ &amp;\geq\color{#af5500}{2k+1}+2k+1\\&amp;=2(k+1)+2k\\&amp;\geq2(k+1)+1\end{align}$$ where the last inequality is because $2k&gt;1$.</p>
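The chain of inequalities can also be sanity-checked by brute force (a quick sketch, not a substitute for the induction):

```python
# Brute-force check of n^2 >= 2n + 1 for n >= 3, and of the chain used
# in the induction step: whenever k^2 >= 2k + 1 holds, each link of
# (k+1)^2 = k^2 + 2k + 1 >= (2k+1) + 2k + 1 = 2(k+1) + 2k >= 2(k+1) + 1
# holds as well.
def check(limit=1000):
    for n in range(3, limit):
        assert n * n >= 2 * n + 1                    # the statement itself
        lhs = (n + 1) ** 2
        assert lhs == n * n + 2 * n + 1              # expand the square
        assert lhs >= (2 * n + 1) + 2 * n + 1        # uses the hypothesis
        assert (2 * n + 1) + 2 * n + 1 == 2 * (n + 1) + 2 * n
        assert 2 * (n + 1) + 2 * n >= 2 * (n + 1) + 1   # since 2n > 1
    return True

print(check())  # -> True
```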
4,621,227
<p>This is a very soft and potentially naive question, but I've always wondered about this seemingly common phenomenon where a theorem has some method of proof which makes the statement easy to prove, but where other methods of proof are incredibly difficult.</p> <p>For example, proving that every vector space has a basis (this may be a bad example). This is almost always done via an existence proof with Zorn's lemma applied to the poset of linearly independent subsets ordered on set inclusion. However, if one were to suppose there exists a vector space <span class="math-container">$V$</span> with no basis, it seems (to me) that coming up with a contradiction given so few assumptions would be incredibly challenging.</p> <p>With that said, I had a few questions:</p> <ol> <li>Are there any other examples of theorems like this?</li> <li>Is this phenomenon simply due to the logical structure of the statements themselves, or is it something deeper? Is this something one can quantize in some way? That is, is there any formal way to study the structure of a statement, and determine which method of proof is ideal, and which is not ideal?</li> <li>With (1) in mind, are there ever any efforts to come up with proofs of the same theorem using multiple methods for the sake of interest?</li> </ol>
Qiaochu Yuan
232
<p>Very often the first proof of a result which appears in the literature is extremely messy because the mathematician who proved it is working at the very edge of what is possible with the tools of the day; then it gets simplified over time as other mathematicians better understand what is going on and develop better machinery for streamlining the proofs. These first proofs are typically not presented to students because they are terrible, but the disadvantage of not knowing them is that you don't see how valuable the machinery that streamlines the modern proofs is.</p> <p>There are many examples of this sort of thing, some of which you can find <a href="https://mathoverflow.net/questions/43820/extremely-messy-proofs">at this MO question</a>; here's one that I came across while writing a <a href="https://qchu.wordpress.com/2020/11/01/meditation-on-the-sylow-theorems-i/" rel="noreferrer">blog post about the Sylow theorems</a>. It is about</p> <blockquote> <p><strong><a href="https://en.wikipedia.org/wiki/Cauchy%27s_theorem_(group_theory)" rel="noreferrer">Cauchy's theorem</a></strong>: if a finite group <span class="math-container">$G$</span> has the property that its order <span class="math-container">$|G|$</span> is divisible by a prime <span class="math-container">$p$</span>, then <span class="math-container">$G$</span> has an element of order <span class="math-container">$p$</span>.</p> </blockquote> <p>There is an extremely slick proof of this theorem which comes from considering the set of solutions</p> <p><span class="math-container">$$\{ (g_1, \dots g_p) \in G^p : g_1 g_2 \dots g_p = e \}$$</span></p> <p>and then considering the action of the cyclic group by rotation <span class="math-container">$(g_1, g_2 \dots g_{p-1}, g_p) \mapsto (g_2, g_3, \dots g_p, g_1)$</span>, which you can see in the link. It takes maybe three sentences to give.</p> <p>By contrast, Cauchy's original proof took 9 pages. 
He does it by explicitly constructing the <a href="https://en.wikipedia.org/wiki/Sylow_theorems" rel="noreferrer">Sylow <span class="math-container">$p$</span>-subgroups</a> of the symmetric group, then (I believe Cauchy was working at a time when &quot;finite group&quot; always meant &quot;finite group of permutations&quot;, so for him all finite groups were already embedded into symmetric groups) using a clever counting argument to show that if a finite group <span class="math-container">$G$</span> has the property that <span class="math-container">$p \mid |G|$</span> and also embeds into another finite group which has Sylow <span class="math-container">$p$</span>-subgroups, then <span class="math-container">$G$</span> has an element of order <span class="math-container">$p$</span>; you can see the details in the link. I give a very abbreviated sketch of the proof; the full construction of the Sylow <span class="math-container">$p$</span>-subgroups of the symmetric group is very tedious (I have never seen anyone give it in full, and tried doing it in a follow-up blog post but gave up because it was too tedious).</p> <p>This is a good example of what I mean; Cauchy was working at a very early time in group theory, before anyone had even defined an abstract group, and people just didn't understand group theory that well yet. There was not even the notion of a quotient group at the time. Once group theory was better understood, better proofs became possible. Actually I have no idea who the above slick proof of Cauchy's theorem is due to, nor how many decades it took after Cauchy's original proof for someone to find it.</p> <p>Cauchy's original proof does have the advantage that it is much closer to being a proof of the <a href="https://en.wikipedia.org/wiki/Sylow_theorems#Theorems" rel="noreferrer">first Sylow theorem</a>. 
It has a generalization due to Frobenius which shows that if a finite group <span class="math-container">$G$</span> embeds into a finite group <span class="math-container">$H$</span> which has a Sylow <span class="math-container">$p$</span>-subgroup, then <span class="math-container">$G$</span> must have a Sylow <span class="math-container">$p$</span>-subgroup. And then you can prove Sylow I by exhibiting the Sylow <span class="math-container">$p$</span>-subgroups of the symmetric groups, or somewhat more easily, the general linear groups <span class="math-container">$GL_n(\mathbb{F}_p)$</span>, then invoking <a href="https://en.wikipedia.org/wiki/Cayley%27s_theorem" rel="noreferrer">Cayley's theorem</a>.</p>
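For concreteness, here is the slick counting argument carried out by brute force in the small case <span class="math-container">$G = S_3$</span>, <span class="math-container">$p = 3$</span> (an illustration of the mechanism, not a proof of the general theorem):

```python
from itertools import permutations, product

# The counting proof of Cauchy's theorem, enumerated for G = S_3, p = 3:
# look at X = {(g1, g2, g3) : g1 g2 g3 = e} and the rotation action of
# Z/3 on X.  Permutations are stored as tuples.

def compose(f, g):                 # (f o g)(i) = f[g[i]]
    return tuple(f[g[i]] for i in range(len(g)))

def inverse(f):
    inv = [0] * len(f)
    for i, v in enumerate(f):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))   # S_3, order 6
e = (0, 1, 2)

# X has |G|^(p-1) elements, since g3 is forced to be (g1 g2)^(-1):
X = [(g1, g2, inverse(compose(g1, g2))) for g1, g2 in product(G, G)]
assert len(X) == 6 ** 2            # 36, divisible by p = 3

# Fixed points of the rotation (g1,g2,g3) -> (g2,g3,g1) are exactly the
# constant tuples (g,g,g) with g^3 = e.  Non-fixed orbits have size 3,
# so the number of fixed points is divisible by 3; since (e,e,e) is
# fixed, there are at least 3 of them, i.e. some g != e with g^3 = e.
fixed = [t for t in X if t[1:] + t[:1] == t]
assert len(fixed) % 3 == 0 and (e, e, e) in fixed
order_p = [t[0] for t in fixed if t[0] != e]
print(len(fixed), order_p)         # 3 fixed points: e and the two 3-cycles
```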
486,239
<p>I was really stuck and tried many times to differentiate the following series, trying to convince myself that the derivative of a triangular wave is a square wave.</p> <p>But I couldn't work it out, as I found the sines and cosines don't match up.</p> <p>The square wave has this form:</p> <p><img src="https://i.stack.imgur.com/ch26L.gif" alt="square wave"></p> <p>The triangular wave has the following form:</p> <p><img src="https://i.stack.imgur.com/3tD46.gif" alt="triangular"></p> <p>Can anyone show me how to differentiate the triangular wave to get the square wave? They are both sine series.... I would imagine that after differentiation you would get a cosine series for the triangular wave. Thanks everyone for helping!</p>
Anthony Carapetis
28,513
<p>Start with some graphical reasoning. When you plot these square and triangular waves, you'll notice that they need to be $\pi/2$ out of phase in order for the square wave to match up with the slope of the triangular wave; there will also be some vertical scaling you need to apply. This phase shift will explain why they are both $\sin$ series, and the scaling explains the change from $8$ to $4$.</p> <p>Alternatively, just compute the derivative of the triangular wave series and show that it is a transformed square wave.</p>
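A numeric sanity check of the second suggestion, assuming the usual unit-amplitude, period-$2\pi$ conventions (the $8/\pi^2$ triangle series and the $4/\pi$ square series):

```python
import math

# Take the standard Fourier series (a sketch under common conventions):
#   square(x)   = (4/pi)   * sum sin((2n+1)x)/(2n+1)
#   triangle(x) = (8/pi^2) * sum (-1)^n sin((2n+1)x)/(2n+1)^2
# and verify that the term-by-term derivative of the triangle series
# equals (2/pi) * square(x + pi/2): a scaled, quarter-period-shifted
# square wave, written as a cosine series.

N = 20000  # number of series terms

def square(x):
    return 4 / math.pi * sum(math.sin((2 * n + 1) * x) / (2 * n + 1)
                             for n in range(N))

def d_triangle(x):  # term-by-term derivative of the triangle series
    return 8 / math.pi ** 2 * sum((-1) ** n * math.cos((2 * n + 1) * x)
                                  / (2 * n + 1) for n in range(N))

for x in (0.3, 1.0, -0.7):   # sample points away from corners and jumps
    assert abs(d_triangle(x) - 2 / math.pi * square(x + math.pi / 2)) < 1e-6

# On the rising part of the triangle the slope really is 2/pi:
assert abs(d_triangle(0.3) - 2 / math.pi) < 1e-2
```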
486,239
Ross Millikan
1,827
<p>The key observation is that a sine wave is the same as a cosine wave, but shifted by $\frac \pi 2$. As the triangle wave is odd, its derivative is even (plot it), so the derivative should be a sum of cosines.</p>
237,960
<p>How could I solve this problem?</p> <blockquote> <p>Find the first digit of $2^{4242}$ without using a calculator.</p> </blockquote> <p>I know how to find the last digit with modular arithmetic, but I can't use that here.</p>
picakhu
4,728
<p>Presenting an alternate method (no logs, but it needs the fact that doubling time is ~ $70/\text{rate}$). </p> <p>Starting with </p> <p>$$2^{10}=1024$$ which is $$1000 \times 1.024,$$ or a 2.4% increase. </p> <p>Then $$70/2.4 \approx 29$$ implies that $$2^{290}\sim 2 \times 10^k$$ for some $k$. </p> <p>Then, $$2^{4242} = \left(2^{290}\right)^{14} \times 2^{182}$$</p> <p>So, it suffices to calculate $$2^{14} \approx 1.6 \times 10^l,$$ and to get $2^{182}$, first note that $1.024^{29} \approx 2$, so $1.024^{18} \approx 1.5$ (pure hand waving, but it sounds logical). From that, $$2^{182} \approx 6 \times 10^m,$$ and thus the leading part is close to $1.6 \times 6 = 9.6$, so the first digit is $9$.</p>
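The estimate can be cross-checked directly (a quick sketch):

```python
import math

# Cross-check the first digit of 2**4242 two ways: by exact integer
# arithmetic, and via the fractional part of 4242*log10(2) (the method
# the "no logs" answer is approximating).
exact = str(2 ** 4242)[0]
frac = 4242 * math.log10(2) % 1     # fractional part ~ 0.969
by_log = str(10 ** frac)[0]         # mantissa ~ 9.3
print(exact, by_log)                # both give 9
assert exact == by_log == '9'
```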
3,783,186
<p>I am trying to prove that <span class="math-container">$$2≤\int_{-1}^1 \sqrt{1+x^6} \,dx ≤ 2\sqrt{2} $$</span> I learned that the equation <span class="math-container">$${d\over dx}\int_{g(x)}^{h(x)} f(t)\,dt = f(h(x))h'(x) - f(g(x))g'(x) $$</span> is true due to the Fundamental Theorem of Calculus and the Chain Rule, and I was thinking about taking the derivative of all sides of the inequality, but I am not sure that is the correct way to prove this. Can I ask for help proving the inequality correctly? Any help would be appreciated! Thanks!</p>
mathguy23123
367,767
<p>So there are two inequalities to be proved. You can use that <span class="math-container">$\sqrt{1+x^6} \leq \sqrt{2}$</span> for all <span class="math-container">$x \in [-1,1]$</span> for the upper bound, as it follows that <span class="math-container">$\int_{[-1,1]} \sqrt{1+x^6}\, dx\leq \int_{[-1,1]} \sqrt{2}\, dx = 2 \sqrt{2}$</span>. The lower bound follows very similarly, using <span class="math-container">$\sqrt{1+x^6} \geq 1$</span> on <span class="math-container">$[-1,1]$</span>.</p>
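A numeric sanity check of both bounds with a midpoint rule (a sketch, not a proof):

```python
import math

# Midpoint-rule estimate of integral_{-1}^{1} sqrt(1 + x^6) dx; the
# value should land strictly between the bounds 2 and 2*sqrt(2).
n = 20000
h = 2 / n
total = sum(math.sqrt(1 + (-1 + (i + 0.5) * h) ** 6) * h for i in range(n))
print(total)                      # ~ 2.13
assert 2 < total < 2 * math.sqrt(2)
```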
2,709,878
<p>How do I know that an equation will have an extraneous solution? </p> <p>For example, in this question: $2\log_9(x) = \log_9(2) + \log_9(x + 24)$</p>
lab bhattacharjee
33,337
<p>First identify where each of the terms remains defined.</p> <p>For real domain, we need $x,x+24&gt;0\implies x&gt;0$</p> <p>$$\implies\log_9(x^2)=\log_92+\log_9(x+24)=\log_92(x+24)$$</p> <p>$$\iff x^2=2(x+24)$$</p>
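Carrying the hint to the end numerically (a quick check):

```python
import math

# x^2 = 2(x + 24), i.e. x^2 - 2x - 48 = 0, has roots 8 and -6; the
# domain condition x > 0 discards x = -6 as extraneous.
disc = 2 ** 2 + 4 * 48                                   # discriminant = 196
roots = [(2 + math.sqrt(disc)) / 2, (2 - math.sqrt(disc)) / 2]
valid = [x for x in roots if x > 0]
assert valid == [8.0]

x = valid[0]
lhs = 2 * math.log(x, 9)
rhs = math.log(2, 9) + math.log(x + 24, 9)
assert abs(lhs - rhs) < 1e-12     # x = 8 really solves the original equation
```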
4,474,095
<p>There is one thing I can't grasp about the proof given in the Linear Algebra Done Right book by Sheldon Axler (attached below).</p> <p>In the last part it says that <span class="math-container">$(T - \lambda_1I)...(T - \lambda_mI)v = 0$</span>, hence <span class="math-container">$T - \lambda_jI$</span> is not injective for some <span class="math-container">$j$</span>.</p> <p>What I don't understand is why the following reasoning is not correct:</p> <ul> <li>the factors in the equation can be reordered.</li> <li>suppose <span class="math-container">$\lambda_j$</span> is the only eigenvalue <span class="math-container">$T$</span> has. Let's put it at the end: <span class="math-container">$(T - \lambda_1I)...(T - \lambda_mI)(T - \lambda_jI)v = 0$</span></li> <li>the only way for the expression above to be equal to <span class="math-container">$0$</span> is if <span class="math-container">$(T - \lambda_jI)v = 0$</span> (because <span class="math-container">$\lambda_j$</span> is the only eigenvalue, so the other <span class="math-container">$T - \lambda_iI$</span> are injective).</li> <li>hence <span class="math-container">$v$</span> is an eigenvector of <span class="math-container">$T$</span> corresponding to the eigenvalue <span class="math-container">$\lambda_j$</span>. But <span class="math-container">$v$</span> was chosen arbitrarily, so it can't be true.</li> </ul> <p>I know that my logic is flawed but I can't see where. Would appreciate it if someone pointed out to me where I'm wrong.</p> <p><a href="https://i.stack.imgur.com/Iwl4U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iwl4U.png" alt="proof" /></a></p>
CyclotomicField
464,974
<p>If <span class="math-container">$\lambda_j$</span> is the only eigenvalue, then <span class="math-container">$(T−\lambda_jI)^m$</span> is the full expression, and reordering doesn't change anything: every factor you could move to the end is the same factor, so none of the remaining factors is injective.</p>
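A tiny $2\times 2$ example (a sketch, not from the book) of where the reasoning breaks when $\lambda_j$ is the only eigenvalue:

```python
# Take T = [[3, 1], [0, 3]], whose only eigenvalue is 3.  Then
# (T - 3I)^2 = 0, so (T - 3I)^2 v = 0 for EVERY v, yet v = (0, 1) is
# not an eigenvector: only (T - 3I) applied to the vector (T - 3I)v
# vanishes, not (T - 3I)v itself.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

N = [[0, 1], [0, 0]]                      # T - 3I
assert matmul(N, N) == [[0, 0], [0, 0]]   # (T - 3I)^2 = 0
v = [0, 1]
assert matvec(N, v) != [0, 0]             # v is NOT an eigenvector
assert matvec(N, matvec(N, v)) == [0, 0]  # but the product kills it
```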
123,712
<p>How can I prove that the function $$ f(x) = \left\{\begin{array}{l l} x &amp;\text{if }x \in \mathbb{Q} \\ -x &amp; \text{if } x \notin \mathbb{Q} \end{array} \right. $$ is discontinuous for $x \neq 0$, using $\epsilon$'s and $\delta$'s?</p> <p>I see that it is true, but I cannot prove it using only $\epsilon$'s and $\delta$'s.</p>
Davide Giraudo
9,849
<p>Let $x&gt;0$, and assume that we can find a $\delta&gt;0$ such that if $|y-x|\leq\delta$ then $|f(x)-f(y)|&lt;x/2$. If $x$ is rational, it means $|x-f(y)|&lt;x/2$, so $x/2&lt;f(y)&lt;3x/2$ for all $y\in (x-\delta,x+\delta)$. In particular, $y\in\mathbb Q$ for all $y\in (x-\delta,x+\delta)$. If $x$ is irrational, then $|x+f(y)|&lt;x/2$, so $x/2&lt;-f(y)&lt;3x/2$, and $y$ has to be irrational if it is in $(x-\delta,x+\delta)$.</p> <p>Do you see why it leads to a contradiction?</p>
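An illustration of the underlying phenomenon (a sketch only: floats cannot literally encode irrationality, so rationality is passed as an explicit flag):

```python
from fractions import Fraction
import math

# Approach the irrational point x = sqrt(2) by rational numbers y.
# Since f(y) = y > 0 while f(x) = -x < 0, the gap |f(y) - f(x)| tends
# to 2*sqrt(2) rather than to 0, so no delta can work for eps < 2*sqrt(2).
def f(val, is_rational):
    return val if is_rational else -val

x = math.sqrt(2)                            # stands in for the irrational point
for k in range(1, 12):
    y = Fraction(round(x * 10 ** k), 10 ** k)   # rational approximation of x
    gap = abs(f(float(y), True) - f(x, False))
    assert gap > 2.8                        # stays near 2*sqrt(2) ~ 2.828
```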
1,344,001
<p>I need to solve an expression of this kind (solve for $x$):</p> <p>$e^{\pi i x} -e^{-\pi ix} = 2yi$</p> <p>Both $x$ and $y$ are real numbers, $y$ is given. I have no clue on how to solve it analytically.</p> <p>All I know is that I can rewrite this as:</p> <p>$\sin(x\pi) = y$</p> <p>so:</p> <p>$x=\frac{\arcsin(y)}{\pi}$</p> <p>But I don't know how to generate the complex solutions from this form (neither form actually).</p>
corindo
40,305
<p>$\sin(x\pi) = y$. And you have </p> <p>$$ x = \frac{(-1)^{k} \arcsin y + k\pi}{\pi} = \frac{(-1)^{k} \arcsin y}{\pi} + k, \quad k \in \mathbb{Z} $$</p>
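A quick check of the solution family, using the standard sign convention $(-1)^{k}\arcsin y$:

```python
import math

# Verify that x = ((-1)^k * asin(y) + k*pi) / pi solves sin(pi*x) = y
# for a sample real y in [-1, 1] and a range of integers k.
y = 0.4
for k in range(-5, 6):
    x = ((-1) ** k * math.asin(y) + k * math.pi) / math.pi
    assert abs(math.sin(math.pi * x) - y) < 1e-12
```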
2,430,529
<p>How should I define a vector that has equal angles to the vectors $\vec{i}, \vec{i} + \vec{j}$ and $\vec{i} + \vec{j} + \vec{k}$?</p> <p>After looking at the problem graphically, I tried taking the average of $\vec{i}$ and $\vec{i} + \vec{j} + \vec{k}$, rescaling $\vec{i}$ to be of the same length as $\vec{i} + \vec{j} + \vec{k}$. Unfortunately the answer does not seem to be correct. </p>
user247327
247,327
<p>The length is, of course, irrelevant to the angles. Take the vector to be $a\vec{i}+ b\vec{j}+ c\vec{k}$. The angle it makes with $\vec{i}$ is $\cos^{-1}\left(\frac{a}{\sqrt{a^2+ b^2+ c^2}}\right)$. The angle it makes with $\vec{i}+\vec{j}$ is $\cos^{-1}\left(\frac{a+ b}{\sqrt{2}\sqrt{a^2+ b^2+ c^2}}\right)$. The angle it makes with $\vec{i}+\vec{j}+\vec{k}$ is $\cos^{-1}\left(\frac{a+ b+ c}{\sqrt{3}\sqrt{a^2+ b^2+ c^2}}\right)$. Setting those equal gives you two equations. Since the length is irrelevant, solving for two of $a$, $b$, and $c$ in terms of the third is sufficient.</p>
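Carrying this out numerically (with the free scale fixed by $a=1$; the values below come from setting the three cosines equal, which gives $b=\sqrt2-1$ and $c=\sqrt3-\sqrt2$):

```python
import math

# With a = 1, equating a = (a+b)/sqrt(2) = (a+b+c)/sqrt(3) forces
# b = sqrt(2) - 1 and c = sqrt(3) - sqrt(2); check the three angles agree.
a, b = 1.0, math.sqrt(2) - 1
c = math.sqrt(3) - math.sqrt(2)
v = (a, b, c)

def angle(u, w):
    dot = sum(x * y for x, y in zip(u, w))
    nu = math.sqrt(sum(x * x for x in u))
    nw = math.sqrt(sum(x * x for x in w))
    return math.acos(dot / (nu * nw))

t1 = angle(v, (1, 0, 0))       # with i
t2 = angle(v, (1, 1, 0))       # with i + j
t3 = angle(v, (1, 1, 1))       # with i + j + k
assert abs(t1 - t2) < 1e-12 and abs(t2 - t3) < 1e-12
```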
416,407
<blockquote> <p>What examples are there of habitual but unnecessary uses of the axiom of choice, in any area of mathematics except topology?</p> </blockquote> <p>I'm interested in standard proofs that use the axiom of choice, but where choice can be eliminated via some judicious and maybe not quite obvious rephrasing. I'm less interested in proofs that were originally proved using choice and where it took some significant new idea to remove the dependence on choice.</p> <p>I exclude topology because I already know lots of topological examples. For instance, Andrej Bauer's <a href="https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-01556-4/" rel="noreferrer">Five stages of accepting constructive mathematics</a> gives choicey and choice-free proofs of a standard result (Theorem 1.4): every open cover of a compact metric space has a Lebesgue number. Todd Trimble told me about some other topological examples, e.g. a compact subspace of a Hausdorff space is closed, or the product of two compact spaces is compact. There are more besides.</p> <p>One example per answer, please. And please sketch both the habitual proof using choice and the alternative proof that doesn't use choice.</p> <p>To show what I'm looking for, here's an example taken from that paper of Andrej Bauer. 
It would qualify as an answer except that it comes from topology.</p> <p><strong>Statement</strong> Every open cover <span class="math-container">$\mathcal{U}$</span> of a compact metric space <span class="math-container">$X$</span> has a Lebesgue number <span class="math-container">$\varepsilon$</span> (meaning that for all <span class="math-container">$x \in X$</span>, the ball <span class="math-container">$B(x, \varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>).</p> <p><strong>Habitual proof using choice</strong> For each <span class="math-container">$x \in X$</span>, choose some <span class="math-container">$\varepsilon_x &gt; 0$</span> such that <span class="math-container">$B(x, 2\varepsilon_x)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. Then <span class="math-container">$\{B(x, \varepsilon_x): x \in X\}$</span> is a cover of <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_{x_1}), \ldots, B(x_n, \varepsilon_{x_n})\}$</span>. Put <span class="math-container">$\varepsilon = \min_i \varepsilon_{x_i}$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p> <p><strong>Proof without choice</strong> Consider the set of balls <span class="math-container">$B(x, \varepsilon)$</span> such that <span class="math-container">$x \in X$</span>, <span class="math-container">$\varepsilon &gt; 0$</span> and <span class="math-container">$B(x, 2\varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. This set covers <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_1), \ldots, B(x_n, \varepsilon_n)\}$</span>. 
Put <span class="math-container">$\varepsilon = \min_i \varepsilon_i$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
Ege Erdil
131,052
<p>I've seen Tychonoff's theorem be used to prove that the <span class="math-container">$ p $</span>-adic integers are compact. The proof is easy: there is a natural embedding</p> <p><span class="math-container">$$ \mathbb Z_p \to \prod_{k=1}^{\infty} (\mathbb Z/p^k \mathbb Z) $$</span></p> <p>whose image is closed, and the infinite product is compact by Tychonoff, so in particular we deduce that <span class="math-container">$ \mathbb Z_p $</span> is compact. (This strategy is used in general to show other profinite objects are compact, for instance, infinite Galois groups under the Krull topology.)</p> <p>The use of Tychonoff (and by extension the axiom of choice) is unnecessary: we can simply adapt the usual proof of Heine-Borel over <span class="math-container">$ \mathbb R $</span> to show that <span class="math-container">$ \mathbb Z_p $</span> is compact. If there is an infinite open cover with no finite subcover, we can find an infinite descending chain of closed balls in <span class="math-container">$ \mathbb Z_p $</span> intersecting at a single point that need infinitely many open balls to cover them, and since an open ball including the single point will cover all sufficiently small closed balls including that point, we get a contradiction.</p>
416,407
Samuel Mimram
39,955
<p><a href="https://arxiv.org/abs/math/0605779" rel="nofollow noreferrer">You can divide sets by two</a> (or more): in classical ZF, if <span class="math-container">$A\sqcup A\simeq B\sqcup B$</span> then <span class="math-container">$A \simeq B$</span>.</p>
1,481,106
<p>I am having a difficult time solving this problem. I have tried this several different ways, and I get a different result, none of which is correct, every time. I've derived an answer geometrically and cannot replicate it with a double integral.</p> <p>Here's the problem: Use a double integral to find the area between two circles $$x^2+y^2=4$$ and $$(x−1)^2+y^2=4.$$</p> <p>Here is how I have tried to go about this problem:</p> <p>First, I graphed it to get a good idea visually of what I was doing. <a href="https://i.stack.imgur.com/i7cNa.png" rel="nofollow noreferrer">Here's the graph I scribbled on.</a> The region I'm interested is where these two circles overlap. This region can easily be divided into two separate areas. There are clearly a number of ways to go about solving this...but the one I opted for is to find the shaded region. The bounds for $x$ in this case are between $D$ and $C$. D can be found by setting $C_1=C_2$, and $x$ turns out to be $\frac{1}{2}$. On the right, $x$ is where $C_1(y)=0$, $x=\pm2$, so $x=2$ at point $C$. $y$ is greater than $B_y$ and less than $A_y$, which are also found where $C_1=C_2$, and $y$ turns out to be $\pm\sqrt{\frac{15}{4}}$. So far so good. Now I know my limits of integration. But here's what I don't understand. What am I actually integrating? $x$ has constant bounds, and $y$ does not, and looking at other double integral problems, that would lead me to believe that I should integrate $y$ first as a function of $x$, evaluate it at its bounds, and then integrate $x$ and evaluate it at its bounds giving me half the area I am looking for. However, when I try to do this, I get utter nonsense for an answer, or I get lost trying to set up the problem.</p> <p>I could really use the help, I've spent entirely too much time trying to puzzle through this. Thank you in advance!</p> <p>P.s. I determined the area geometrically using a CAD program to calculate the area, and it should be approximately $8.46$.</p>
E.H.E
187,799
<p>Using Cartesian coordinates, the area is $$\int_{-\frac{\sqrt{15}}{2}}^{\frac{\sqrt{15}}{2}}\int_{1-\sqrt{4-y^2}}^{\sqrt{4-y^2}}dx\,dy$$ </p>
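As a sanity check, this iterated integral agrees numerically with the standard closed-form lens area $8\cos^{-1}(1/4)-\sqrt{15}/2 \approx 8.61$ (a sketch; note this is slightly larger than the $\approx 8.46$ quoted in the question):

```python
import math

# Evaluate the double integral by a midpoint rule in y (the inner
# x-integral is just the strip width) and compare against the
# closed-form area of the lens cut out by two radius-2 circles whose
# centers are 1 apart.
ymax = math.sqrt(15) / 2
n = 4000
hy = 2 * ymax / n
area = 0.0
for i in range(n):
    y = -ymax + (i + 0.5) * hy
    width = math.sqrt(4 - y * y) - (1 - math.sqrt(4 - y * y))
    area += width * hy

closed = 8 * math.acos(0.25) - math.sqrt(15) / 2
print(area)                     # ~ 8.61
assert abs(area - closed) < 1e-3
```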
2,306,122
<p>Show $X=\{n \in \mathbb{N}: \text{n is odd and} \ n = k(k+1) \text{for some} \ k \in \mathbb{N}\}=\emptyset$</p> <p>My proof is as follow, please point if I have made any mistake. </p> <p><strong>proof:</strong></p> <p>we have $\emptyset \subseteq X$ suppose $X≠\emptyset$ pick $n \in X$</p> <p>Then there are 2 cases 1st case: n is odd then n=(k+1)k</p> <p>Then suppose k is odd $\implies$ k+1 is even $\implies$ n is even</p> <p>2nd case: consider k is even $\implies k+1$ is odd</p> <p>then n=(k+1)k for some $k \in \mathbb{N}=\emptyset \implies n$ is even</p> <p>Therefore, n is neither even nor odd, so $k \in \mathbb{N} \implies n \not\in X$ and $\implies X= \emptyset$ </p> <p>Q.E.D</p>
avs
353,141
<p>An even shorter proof (not that yours is wrong): </p> <p>One of $k, k+1$ must be even, therefore so is the product $k(k+1)$.</p>
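A brute-force confirmation for small $k$ (a sketch):

```python
# Exhaustive check for small k: every number of the form k*(k+1) is
# even, so no odd n can be written this way.
products = {k * (k + 1) for k in range(1, 1000)}
assert all(p % 2 == 0 for p in products)
```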
2,645,406
<p>I am reading Rudin's real and complex analysis. On page 14 we see $\underset{n} {\text{sup }} f_n$, and on page 15 we see $\max\{f,g\}$. These two are obviously different. I searched some answers and found an example: </p> <p>$\sup \{f,g\}(x) = f(x)$ when $f(x)\geq g(x)$, and $g(x)$ when $f(x) &lt; g(x)$ </p> <p>But then I saw $\max\{f,g\}$, which makes me confused. Are they the same? What are the differences?</p> <p>Can someone also give some explanation of $(\underset{n\to \infty}{\text{lim sup }} f_n)(x)$? How to visualize/understand it?</p>
Jay Zha
379,853
<p>If $x = \max A$ for some set $A \subset \mathbb R$, then $x \ge y$ for any $y \in A$, and $x \in A$.</p> <p>But $\sup A$ does not need to be in $A$. By definition, if $x = \sup A$ for some set $A \subset \mathbb R$, then $x \ge y$ for any $y \in A$, but $x \le y$ whenever $y$ is an upper bound of $A$.</p> <p>For two functions the distinction is invisible: the supremum of a two-element set $\{f(x),g(x)\}$ is attained, so $\sup\{f,g\} = \max\{f,g\}$ pointwise; the difference only matters for infinite families such as $\sup_n f_n$. As for $(\underset{n\to \infty}{\text{lim sup }} f_n)(x)$: it is defined pointwise as $\lim_{n\to\infty}\sup_{k\ge n} f_k(x)$, the limit of the decreasing sequence of tail suprema at $x$.</p>
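A tiny numerical illustration of the gap between $\sup$ and $\max$ (a sketch): the set $A=\{1-1/n : n \in \mathbb N\}$ has supremum $1$, but $1$ is never attained.

```python
# A = {1 - 1/n}: every finite truncation has a maximum strictly below
# the supremum 1, yet those maxima get arbitrarily close to 1.
A = [1 - 1 / n for n in range(1, 10001)]
assert max(A) < 1                 # the sup is never attained
assert abs(max(A) - 1) < 1e-3     # ...but is approached arbitrarily closely
```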
248,325
<p>is there some connection between a curve in the algebraic geometry sense, e.g.</p> <blockquote> <p>Separated scheme of finite type over spec($k$)</p> </blockquote> <p>for a field $k$</p> <p>and a curve in the sense of a smooth map from an interval in $\mathbb R$ to $\mathbb R^n$?</p>
Bin Wang
88,343
<p>When you mention a smooth map, you are working in the category of differential geometry, which is quite different. The concept of a one-dimensional separated scheme (since we consider curves) is in fact a generalization of algebraic curves (not necessarily smooth). More precisely, an algebraic curve is a geometric object together with some algebraic data, for example the sheaf of regular functions on it; schemes in some sense generalize the notion of a space with a sheaf of regular functions.</p> <p>By the way, even a smooth algebraic curve may not be the image of a single map as you describe. This answer is somewhat conceptual; I hope it is useful. </p>
4,417,353
<p><a href="https://i.stack.imgur.com/td05G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/td05G.png" alt="Here is the question. I sent it as a image since I could not find how to write a1 as below." /></a></p> <p>My first approach is that I need to use induction on that. However the problem is that what is the upper bound limit for this series?</p> <p>Also, let's say we showed that it converges. Doesn't it directly imply that that convergence point is the limit of the series? What is the difference?</p> <p>Note: This is non-grading homework question. Since it is explained in image, I feel that I need to explain.</p>
Hagen von Eitzen
39,174
<p>Once you know <em>that</em> the sequence converges, what can you infer about the actual <em>value</em> of <span class="math-container">$a:=\lim a_n$</span>? Since <span class="math-container">$f\colon x\mapsto \sqrt2^x$</span> is continuous on <span class="math-container">$\Bbb R$</span>, we must have <span class="math-container">$$\tag1a=\sqrt 2^a.$$</span> As <span class="math-container">$f$</span> is also strictly increasing, <span class="math-container">$(1)$</span> cannot have more than one solution. One (and hence <em>the</em>) solution we can immediately spot is <span class="math-container">$$a=2.$$</span></p>
972,191
<p>Consider the linear transformation of $\mathbb{R}^3$ given by $Ax = (a \cdot x)a+ |a|^2x$. Is $A$ symmetric? Is it positive?</p> <p>I know that a matrix is symmetric IFF $(Ax,y) = (x,Ay)$ and positive if $x \cdot Ax &gt; 0$ for all vectors $x \not = 0$.</p>
Timbuc
118,527
<p>Take the usual, canonical basis of $\;\Bbb R^3\;$ :</p> <p>$$u_1=\begin{pmatrix}1\\0\\0\end{pmatrix}\;,\;\;u_2=\begin{pmatrix}0\\1\\0\end{pmatrix}\;,\;\;u_3=\begin{pmatrix}0\\0\\1\end{pmatrix}$$</p> <p>and suppose $\;a=\begin{pmatrix}a_1\\a_2\\a_3\end{pmatrix}\;$ , so that</p> <p>$$\begin{align*}&amp;Au_1:=a_1a+(a_1^2+a_2^2+a_3^2)\,u_1=\begin{pmatrix}2a_1^2+a_2^2+a_3^2\\a_1a_2\\a_1a_3\end{pmatrix}\\{}\\ &amp;Au_2:=a_2a+(a_1^2+a_2^2+a_3^2)\,u_2=\begin{pmatrix}a_1a_2\\a_1^2+2a_2^2+a_3^2\\a_2a_3\end{pmatrix}\\{}\\ &amp;Au_3:=a_3a+(a_1^2+a_2^2+a_3^2)\,u_3=\begin{pmatrix}a_1a_3\\a_2a_3\\a_1^2+a_2^2+2a_3^2\end{pmatrix}\end{align*}$$</p> <p>From the above, the matrix representation of $\;A\;$ is</p> <p>$$\begin{pmatrix}2a_1^2+a_2^2+a_3^2&amp;a_1a_2&amp;a_1a_3\\ a_1a_2&amp;a_1^2+2a_2^2+a_3^2&amp;a_2a_3\\ a_1a_3&amp;a_2a_3&amp;a_1^2+a_2^2+2a_3^2\end{pmatrix}$$</p> <p>Thus, clearly the matrix is symmetric (and so is $\;A\;$ ), and its principal minors are</p> <p>$$\begin{align*}A_{11}=2a_1^2+a_2^2+a_3^2\ge0\\{}\\A_{22}=(2a_1^2+a_2^2+a_3^2)(a_1^2+2a_2^2+a_3^2)-(a_1a_2)^2=\\ =(\text{sum of products of squares})+5a_1^2a_2^2-a_1^2a_2^2\ge 0\end{align*}$$</p> <p>I leave it to you to check that the minor $\;A_{33}=\det A\;$ is also non-negative, and thus $\;A\;$ is positive semidefinite (in fact positive definite when $\;a\neq0\;$).</p> <p>Or directly, for any $\;x^t=(x_1,x_2,x_3)\in\Bbb R^3\;$:</p> <p>$$x^tAx=(x_1\;x_2\;x_3)\left((a\cdot x)a+||a||^2 x\right)=$$</p> <p>$$=(x_1\;x_2\;x_3)\begin{pmatrix}(2a_1^2+a_2^2+a_3^2)x_1+a_1a_2x_2+a_1a_3x_3\\ a_1a_2x_1+(a_1^2+2a_2^2+a_3^2)x_2+a_2a_3x_3\\ a_1a_3x_1+a_2a_3x_2+(a_1^2+a_2^2+2a_3^2)x_3\end{pmatrix}=$$</p> <p>$$=(2a_1^2+a_2^2+a_3^2)x_1^2+a_1a_2x_1x_2+a_1a_3x_1x_3+ a_1a_2x_1x_2+(a_1^2+2a_2^2+a_3^2)x_2^2+a_2a_3x_2x_3+ a_1a_3x_1x_3+a_2a_3x_2x_3+(a_1^2+a_2^2+2a_3^2)x_3^2$$</p> <p>Continuing this seems like it is going to get annoyingly ugly and messy, but in fact life is pretty:</p> <p>$$=(2a_1^2+a_2^2+a_3^2)x_1^2+2a_1a_2x_1x_2+ (a_1^2+2a_2^2+a_3^2)x_2^2+2a_2a_3x_2x_3+ 2a_1a_3x_1x_3+(a_1^2+a_2^2+2a_3^2)x_3^2=$$</p> <p>$$=(a_1x_1+a_2x_2+a_3x_3)^2+(a_1^2+a_2^2+a_3^2)(x_1^2+x_2^2+x_3^2)\ge 0$$</p>
972,191
<p>Consider the linear transformation of $\mathbb{R}^3$ given by $Ax = (a \cdot x)a+ |a|^2x$. Is $A$ symmetric? Is it positive?</p> <p>I know that a matrix is symmetric IFF $(Ax,y) = (x,Ay)$ and positive if $x \cdot Ax &gt; 0$ for all vectors $x \not = 0$.</p>
Yiorgos S. Smyrlis
57,021
<p>Checking symmetry of $A$: $$ ( y,Ax)= ( y,(a \cdot x)a+ |a|^2x)=(a\cdot x)(a\cdot y)+\lvert a\rvert^2(x,y)=( x,Ay)=(Ay,x). $$ Hence $A$ is indeed symmetric.</p> <p>Checking positive definiteness: $$ ( x,Ax)=(a\cdot x)^2+\lvert a\rvert^2(x,x)\ge \lvert a\rvert^2\lvert x\rvert^2, $$ hence $A$ is non-negative definite, and if $a\ne 0$, it is positive definite.</p>
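Both claims are easy to sanity-check numerically; the sketch below (the vector $a$ is an arbitrary choice for illustration) builds the matrix of $x \mapsto (a\cdot x)a + |a|^2 x$ and inspects its eigenvalues:

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0])                 # arbitrary sample vector, |a|^2 = 14
A = np.outer(a, a) + np.dot(a, a) * np.eye(3)  # matrix of x -> (a.x)a + |a|^2 x

symmetric = np.allclose(A, A.T)
eigvals = np.linalg.eigvalsh(A)                # ascending; real since A is symmetric
positive_definite = bool(np.min(eigvals) > 0)
```

Here $A = aa^T + |a|^2 I$, so its eigenvalues are $2|a|^2$ (eigenvector $a$) and $|a|^2$ (twice), consistent with the bound $(x,Ax)\ge \lvert a\rvert^2\lvert x\rvert^2$.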
3,841,535
<p>I'm having a lot of trouble solving this question via the differentiating with respect to a parameter method. I can get the correct result for the integral containing sine, but I'm totally lost when it comes to evaluating the integral containing cosine. Here's the problem statement:</p> <p>Given:</p> <p><span class="math-container">$$\int_{0}^{\infty} e^{-ax} \sin(kx) \ dx = \frac{k}{a^2+k^2}$$</span></p> <p>evaluate <span class="math-container">$\int_{0}^{\infty} xe^{-ax}\sin(kx) \ dx$</span> and <span class="math-container">$\int_{0}^{\infty} xe^{-ax} \cos(kx) \ dx$</span>.</p> <p>This is the last question in the 2nd chapter of 'Basic Training in Mathematics' by Shankar. Any help would be appreciated, I've been tearing my hair out all day with this.</p>
Aadhaar Murty
826,105
<p>As suggested by @Abhi, differentiating w.r.t. <span class="math-container">$k$</span> will give you a direct answer. We have,</p> <p><span class="math-container">$$\frac {d}{dk} \left(\frac {k}{a^{2}+ k^{2}}\right) = \int_{0}^{\infty} e^{-ax} \cdot \partial_{k}\sin(kx) dx = \int_{0}^{\infty}xe^{-ax} \cos(kx) dx = \boxed{\frac {a^{2}-k^{2}}{(a^{2}+k^{2})^{2}}}$$</span></p> <p>Differentiating with respect to <span class="math-container">$a$</span> gives the value of the other integral -</p> <p><span class="math-container">$$-\frac {d}{da}\left(\frac {k}{a^{2}+ k^{2}}\right) = -\int_{0}^{\infty} \sin(kx) \cdot \partial_{a}(e^{-ax})dx = \int_{0}^{\infty} xe^{-ax} \sin(kx) dx = \boxed {\frac{2ak}{(a^{2}+k^{2})^{2}}}$$</span></p>
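Both boxed formulas can be checked numerically; the sketch below uses the sample values $a=2$, $k=1$ (arbitrary choices) and a plain trapezoid rule, which is accurate here because the integrands decay like $e^{-ax}$:

```python
import math

a, k = 2.0, 1.0                          # sample parameter values, a > 0

def integrate(f, upper=40.0, n=200_000):
    # composite trapezoid rule on [0, upper]; the e^{-ax} tail beyond is negligible
    h = upper / n
    s = 0.5 * (f(0.0) + f(upper))
    for i in range(1, n):
        s += f(i * h)
    return s * h

I_cos = integrate(lambda x: x * math.exp(-a * x) * math.cos(k * x))
I_sin = integrate(lambda x: x * math.exp(-a * x) * math.sin(k * x))

cos_exact = (a**2 - k**2) / (a**2 + k**2) ** 2    # boxed result: 3/25
sin_exact = 2 * a * k / (a**2 + k**2) ** 2        # boxed result: 4/25
```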
674,621
<p>I am trying to figure out what the three possibilities of $z$ are such that </p> <p>$$ z^3=i $$</p> <p>but I am stuck on how to proceed. I tried algebraically but ran into rather tedious polynomials. Could you solve this geometrically? Any help would be greatly appreciated.</p>
Gilles 'SO- stop being evil'
1,853
<p>You can solve this geometrically if you know <a href="http://en.wikipedia.org/wiki/Polar_coordinates" rel="noreferrer">polar coordinates</a>.</p> <p>In polar coordinates, multiplication goes $(r_1, \theta_1) \cdot (r_2, \theta_2) = (r_1 \cdot r_2, \theta_1 + \theta_2)$, so cubing goes $(r, \theta)^3 = (r^3, 3\theta)$. The cube roots of $(r, \theta)$ are $\left(\sqrt[3]{r}, \frac{\theta}{3}\right)$, $\left(\sqrt[3]{r}, \frac{\theta+2\pi}{3}\right)$ and $\left(\sqrt[3]{r}, \frac{\theta+4\pi}{3}\right)$ (recall that adding $2\pi$ to the argument doesn't change the number). In other words, to find the cubic roots of a complex number, take the cubic root of the absolute value (the radius) and divide the argument (the angle) by 3.</p> <p>$i$ is at a right angle from $1$: $i = \left(1, \frac{\pi}{2}\right)$. Graphically:</p> <p><img src="https://i.stack.imgur.com/gpBqi.png" alt=""></p> <p>A cubic root of $i$ is $A = \left(1, \frac{\pi}{6}\right)$. The other two are $B = \left(1, \frac{5\pi}{6}\right)$ and $\left(1, \frac{9\pi}{6}\right) = -i$.</p> <p>Recalling basic trigonometry, the rectangular coordinates of $A$ are $\left(\cos\frac{\pi}{6}, \sin\frac{\pi}{6}\right)$ (the triangle OMA is right-angled at M). Thus, $A = \cos\frac{\pi}{6} + i \sin\frac{\pi}{6} = \frac{\sqrt{3}}{2} + i\frac{1}{2}$.</p> <p>If you don't remember the values of $\cos\frac{\pi}{6}$ and $\sin\frac{\pi}{6}$, you can find them using geometry. The triangle $OAi$ has two equal sides $OA$ and $Oi$, so it is isosceles: the angles $OiA$ and $OAi$ are equal. The sum of the angles of the triangle is $\pi$, and we know that the third angle $iOA$ is $\frac{\pi}{2} - \frac{\pi}{6} = \frac{\pi}{3}$; therefore $OiA = OAi = \dfrac{\pi - \frac{\pi}{3}}{2} = \dfrac{\pi}{3}$. So $OAi$ is an equilateral triangle, and the altitude AN is also a median, so N is the midpoint of $[Oi]$: $\sin\frac{\pi}{6} = AM = ON = \frac{1}{2}$. 
By the Pythagorean theorem, $OM^2 + AM^2 = OA^2 = 1$ so $\cos\frac{\pi}{6} = \sqrt{1 - \left(\frac{1}{2}\right)^2} = \dfrac{\sqrt{3}}{2}$.</p>
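A quick check of the three roots with Python's `cmath`, using the angles $\frac{\pi/2 + 2\pi m}{3}$ from the recipe above:

```python
import cmath

# cube roots of i: radius 1, angles (pi/2 + 2*pi*m)/3 for m = 0, 1, 2
roots = [cmath.exp(1j * (cmath.pi / 2 + 2 * cmath.pi * m) / 3) for m in range(3)]
A, B, C = roots   # should be sqrt(3)/2 + i/2, -sqrt(3)/2 + i/2, and -i
```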
1,601,427
<blockquote> <p>Let $a,b,c$ be three nonnegative real numbers. Prove that $$a^2+b^2+c^2+3\sqrt[3]{a^2b^2c^2} \geq 2(ab+bc+ca).$$</p> </blockquote> <p>It seems that the inequality $a^2+b^2+c^2 \geq ab+bc+ca$ will be of use here. If I use that then I will get $a^2+b^2+c^2+3\sqrt[3]{a^2b^2c^2} \geq ab+bc+ca+3\sqrt[3]{a^2b^2c^2}$. Then do I use the rearrangement inequality similarly on $3\sqrt[3]{a^2b^2c^2}$?</p>
ki3i
202,257
<p>Yet another way in which this can be shown using <a href="https://en.wikipedia.org/wiki/Schur%27s_inequality">Schur's inequality</a> in tandem with the AM-GM inequality is as follows: $$ a^2+b^2+c^2+3(a^2b^2c^2)^{1/3}\geqslant a^{2/3}b^{4/3} + a^{4/3}b^{2/3} +b^{2/3}c^{4/3} + b^{4/3}c^{2/3} + a^{2/3}c^{4/3} + a^{4/3}c^{2/3} \\[2ex]= 2\left({a^{2/3}b^{4/3} + a^{4/3}b^{2/3}\over 2} + {b^{2/3}c^{4/3} + b^{4/3}c^{2/3}\over 2} + {a^{2/3}c^{4/3} + a^{4/3}c^{2/3}\over 2}\right)\\[2ex] \geqslant 2(ab + bc + ac) $$</p>
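Since the whole chain of inequalities is easy to test numerically, here is a quick randomized check (sampling uniform nonnegative triples; the sample count and range are arbitrary):

```python
import random

random.seed(0)

def chain_holds(a, b, c, tol=1e-9):
    # lhs >= symmetric 6-term middle expression >= rhs, as in the Schur/AM-GM chain
    lhs = a**2 + b**2 + c**2 + 3 * (a**2 * b**2 * c**2) ** (1 / 3)
    mid = sum(p ** (2 / 3) * q ** (4 / 3) + p ** (4 / 3) * q ** (2 / 3)
              for p, q in [(a, b), (b, c), (a, c)])
    rhs = 2 * (a * b + b * c + a * c)
    return lhs >= mid - tol and mid >= rhs - tol

ok = all(chain_holds(*(random.uniform(0, 10) for _ in range(3)))
         for _ in range(1000))
```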
3,828,205
<blockquote> <p>Find all integers <span class="math-container">$x$</span> and <span class="math-container">$y$</span> that satisfy <span class="math-container">$$x^4-12x^2+x^2y^2+30 &lt; 0$$</span></p> </blockquote> <p>Letting <span class="math-container">$a = x^2$</span> and <span class="math-container">$b = y^2$</span> I got <span class="math-container">$$a^2-12a+ab+30 &lt; 0$$</span></p> <p>from which I managed to get <span class="math-container">$$a^2-12a+30+ab &lt;0 \Rightarrow (a-6)^2+ab &lt; 6.$$</span></p> <p>However I'm not sure how to proceed from here. What should I do?</p>
markvs
454,915
<p>You have <span class="math-container">$(a-6)^2&lt; 6$</span> and <span class="math-container">$a=x^2$</span> is a non-negative integer, so <span class="math-container">$(a-6)^2\in \{0,1,4\}$</span>. If <span class="math-container">$(a-6)^2=0$</span>, then <span class="math-container">$a=6$</span>, impossible since <span class="math-container">$a=x^2$</span>.</p> <p><span class="math-container">$(a-6)^2=1$</span> implies <span class="math-container">$a-6=\pm 1$</span>, <span class="math-container">$a=5 $</span> or <span class="math-container">$a=7$</span>, impossible again.</p> <p>Finally <span class="math-container">$(a-6)^2=4$</span> implies <span class="math-container">$a=6\pm 2$</span>; since <span class="math-container">$8$</span> is not a perfect square, the only possibility is <span class="math-container">$a=4$</span>. But then <span class="math-container">$(a-6)^2+ab&lt;6$</span> means <span class="math-container">$ab&lt;2$</span>, so <span class="math-container">$b=0$</span>.</p> <p>Conclusion: <span class="math-container">$x=\pm 2, y=0$</span>.</p>
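The conclusion can be confirmed by brute force: for $|x|\ge 4$ the $x^4$ term dominates and the expression is positive, so a small search window suffices (the bound $10$ below is just a safe choice):

```python
solutions = sorted(
    (x, y)
    for x in range(-10, 11)
    for y in range(-10, 11)
    if x**4 - 12 * x**2 + x**2 * y**2 + 30 < 0
)
```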
1,165,828
<p>I ran into a problem when trying to prove that $\tan^2x+1 = \sec^2x$, and $1+\cot^2x=\csc^2x$</p> <p>I understand that $\sin^2x+\cos^2x = 1$. (To my understanding 1 is the Hypotenuse, please correct me if I'm wrong). If referring to a Pythagorean triangle, let's say a triangle where $a=3$, $b=4$, and $c=5$, or $a=\cos$ $b=\sin$ and $c=\text{hypotenuse}$.</p> <p>$3^2 + 4^2 = 5^2$. Which is true and proves that this identity works. To my understanding, that is how this identity works.</p> <p>However, when I try to make sense of the $\tan^2x+1 = \sec^2x$ and $1+\cot^2x=\csc^2x$ identity, using the triangle example from above, it doesn't work. For example, here's how I did it on paper,</p> <p>$\tan^2x + 1 = {\sin^2x\over \cos^2x} + 1$, so I would get using the triangle above, ${4^2\over 3^2} + 1 = {16\over 9} + 1 = 2.777777778$</p> <p>Now on the right hand side, $\sec^2x = {1\over \cos^2x} = {1\over 3^2} = .1111111111$. The answer I get from $\tan^2x+1$ DOES NOT EQUAL the answer I get from $\sec^2x$. I think I may be misunderstanding a critical part here that I can't really pinpoint.</p> <p>However, I do know that if I prove $\tan^2x+1 = \sec^2x$ using just the identity itself it does work, for example,</p> <p>$\tan^2x+1 = {\sin^2x\over \cos^2x} + 1 = (\text{after some simplification}) = {\sin^2x + \cos^2x \over \cos^2x} = {1\over \cos^2x} = \sec^2x$.</p> <p>The same issue happens with $1+\cot^2x=\csc^2x$.</p> <p>To my understanding, $\sin^2+\cos^2=1$ is the same as $a^2+b^2=c^2$. Am I right or wrong? I think there is a huge concept that I am missing between the UNIT CIRCLE and just TRIANGLES. </p>
turkeyhundt
115,823
<p>The first identity works because $\left(\frac{3}{5}\right)^2+\left(\frac{4}{5}\right)^2=\left(\frac{5}{5}\right)^2$. When thinking of these trig ratios, you have to think of them in terms of both triangle sides you are interested in, not merely one leg being equal to $\sin$, for example.</p> <p>The beauty of the "unit" circle is that with the hypotenuse being $1$, in this case, we can think of the legs of the triangle being the actual value of $\sin$ and $\cos$, because the ratio of that length to $1$ is just that length.</p> <p>You can work out the other identities if you express $\tan$ as $\frac{4}{3}$, $\csc$ as $\frac{5}{4}$, etc.</p>
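In code, with the ratios written out for the 3-4-5 triangle (sides divided by the hypotenuse, as stressed above):

```python
sin_x, cos_x = 4 / 5, 3 / 5                  # ratios, not the raw legs 4 and 3
tan_x, sec_x = sin_x / cos_x, 1 / cos_x      # tan = 4/3, sec = 5/3
cot_x, csc_x = cos_x / sin_x, 1 / sin_x      # cot = 3/4, csc = 5/4

pythagorean = sin_x**2 + cos_x**2            # should be 1
tan_identity_gap = (tan_x**2 + 1) - sec_x**2 # should vanish
cot_identity_gap = (1 + cot_x**2) - csc_x**2 # should vanish
```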
1,756,567
<p>Let $A$ be a $3\times 3$ matrix and $A^{2014}=0$. Must $A^3$ be the zero matrix? I can work out that $I-A$ is invertible, but I don't know how to proceed further.</p>
Michael Burr
86,421
<p>This is very much overkill for this problem (Wojowu's suggestion in the comments is a very nice and clean solution). This answer classifies the possible $A$'s.</p> <p>Since $A^{2014}=0$, all of the eigenvalues of $A$ must be $0$ (for if $\lambda\not=0$ were an eigenvalue with eigenvector $v$, then $A^{2014}v=\lambda^{2014}v\not=0$). Considering the Jordan form of $A$, there are three possibilities: three independent blocks, two blocks, or one block: $$ \begin{bmatrix}0&amp;0&amp;0\\0&amp;0&amp;0\\0&amp;0&amp;0\end{bmatrix},\begin{bmatrix}0&amp;1&amp;0\\0&amp;0&amp;0\\0&amp;0&amp;0\end{bmatrix},\begin{bmatrix}0&amp;1&amp;0\\0&amp;0&amp;1\\0&amp;0&amp;0\end{bmatrix}. $$ Since $A=PJP^{-1}$, we know that all matrices of the desired form are similar to one of these three matrices. We see that $A^3=PJ^3P^{-1}$, but $J^3=0$ for all of these matrices, so $A^3=0$.</p>
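A small numerical illustration of the last step: for the largest Jordan block $J$ (and hence for anything similar to it), the cube vanishes even though the square does not. The conjugating matrix $P$ below is an arbitrary well-conditioned choice:

```python
import numpy as np

J = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])            # largest nilpotent Jordan block

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # arbitrary, almost surely invertible
A = P @ J @ np.linalg.inv(P)                      # any matrix similar to J

J_cubed = np.linalg.matrix_power(J, 3)
A_cubed = np.linalg.matrix_power(A, 3)
```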
1,565,406
<p>Which interpolation method should I use for complicated "smooth" curves such as </p> <p>$\frac{\sin(x)}{x}$ for $x&gt;0$.</p>
bubba
31,744
<p>The approximation method should generally be tuned to the function (or at least the type of function) that you're trying to approximate. There are general-purpose one-size-fits-all methods, but a specialized method will typically be much better.</p> <p>If I had to choose one particular class of methods, I suppose it would be those implemented in <a href="http://www.chebfun.org" rel="nofollow">the chebfun system</a>. The basic idea is to do Lagrange interpolation at Chebyshev nodes, and split the curve into several smaller ones if this runs into trouble. Basically, this package seems to be able to approximate anything you throw at it, with very high precision. There is some discussion of approximating very wiggly functions <a href="http://www.chebfun.org/examples/approx/ResolutionWiggly.html" rel="nofollow">here</a>.</p>
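The core idea — interpolation at Chebyshev nodes — is easy to reproduce with NumPy alone; the degree and interval below are arbitrary illustrative choices:

```python
import numpy as np

def f(x):
    return np.sinc(x / np.pi)   # np.sinc(t) = sin(pi*t)/(pi*t), so this is sin(x)/x

deg, lo, hi = 30, 0.01, 20.0    # illustrative degree and interval
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))   # Chebyshev nodes in [-1, 1]
x_nodes = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)              # nodes mapped to [lo, hi]
coeffs = np.polynomial.chebyshev.chebfit(nodes, f(x_nodes), deg)

x_test = np.linspace(lo, hi, 1000)
u_test = 2 * (x_test - lo) / (hi - lo) - 1                       # map back to [-1, 1]
approx = np.polynomial.chebyshev.chebval(u_test, coeffs)
max_err = float(np.max(np.abs(approx - f(x_test))))
```

Because $\sin(x)/x$ is entire, the Chebyshev coefficients decay super-exponentially and even this modest degree resolves the curve to high accuracy.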
269,062
<p>Consider the following integral $$ pv\int_0^{\infty}e^{N(-2Ax+A\log x)}\frac{e^{-B\log x}}{1-2x}dx $$ where $A,B&gt;0$ and we take the Cauchy principal value at $x=1/2$. I am interested in obtaining the asymptotics when $N$ is very big. The first thing I thought of was some variant of Laplace's method but I am unsure if I can proceed here, because of the singularity at $x=1/2$. So, my question is, is some version of the Laplace's method applicable here to obtain the big $N$ asymptotics? and if so, how should I proceed?</p>
Fedor Petrov
4,312
<p>You may estimate the integral on $[1,\infty)$ somehow (it does not affect the asymptotics); after that, consider $\int_0^{1/2}\left(f_N(1/2-t)+f_N(1/2+t)\right)dt$, where $f_N$ is your integrand. This combination no longer has a singularity at $t=0$, and you may apply Laplace's method. </p>
194,312
<p>I'm preparing myself for a combinatorics test. A part of it will concentrate on the pigeonhole principle. Thus, I need some hard to very hard problems on the subject to solve. I would be thankful if you can send me links, books, or just a lone problem.</p>
Holdsworth88
22,437
<p><a href="http://www.math.auckland.ac.nz/~olympiad/Training/Numbers/Pigeonhole.pdf" rel="nofollow">This</a> turned up in a routine google search of the phrase "pidgeonhole principle exercise" and appears to be training problems for the New Zealand olympiad team. It contains numerous problems and has some solutions in the back.</p>
3,719,575
<p>For <span class="math-container">$\lambda =0$</span> and <span class="math-container">$\lambda &lt;0$</span> the solution is the trivial solution <span class="math-container">$x\left(t\right)=0$</span></p> <p>So we have to calculate for <span class="math-container">$\lambda &gt;0$</span></p> <p>The general solution here is</p> <p><span class="math-container">$x\left(t\right)=C_1\cos\left(\sqrt{\lambda }t\right)+C_2\sin\left(\sqrt{\lambda \:}t\right)$</span></p> <p>Because <span class="math-container">$0=C_1\cdot \cos\left(0\right)+C_2\cdot \sin\left(0\right)=C_1$</span> we know that</p> <p><span class="math-container">$x\left(t\right)=C_2\sin\left(\sqrt{\lambda }t\right)$</span></p> <p><span class="math-container">$\sqrt{\lambda }t=n\pi$</span></p> <p><span class="math-container">$\sqrt{\lambda }=\frac{n\pi }{t}$</span></p> <p>But does there exist a solution for <span class="math-container">$\lambda$</span> which is not dependent on <span class="math-container">$t$</span>?</p>
Disintegrating By Parts
112,478
<p>There only exist solutions for specific values of <span class="math-container">$\lambda$</span>, and those <span class="math-container">$\lambda$</span> are the eigenvalues of the underlying problem. So the goal of finding non-trivial solutions of the following equations is to (a) find the <span class="math-container">$\lambda$</span> for which there exists a non-trivial solution, and (b) find the actual non-trivial solutions. In order to normalize the problem, first find the solutions of <span class="math-container">$$ x''+\lambda x = 0,\;\;\; x(0)=0,\; x'(0)=1. $$</span> Every non-zero solution of <span class="math-container">$x''+\lambda x=0$</span> subject to <span class="math-container">$x(0)=0$</span> must be a non-zero scalar multiple of the above solution. <span class="math-container">$x'(0)=1$</span> can be imposed because <span class="math-container">$x$</span> can be scaled to achieve that, unless <span class="math-container">$x(0)=x'(0)=0$</span>, which is only the case for <span class="math-container">$x\equiv 0$</span>, which is the case we're not interested in. The solution of the above equation is <span class="math-container">$$ x(t)= \frac{\sin(\sqrt{\lambda}\,t)}{\sqrt{\lambda}} $$</span> This is valid even when <span class="math-container">$\lambda=0$</span>, if you take the limit as <span class="math-container">$\lambda\rightarrow 0$</span>, which gives <span class="math-container">$x(t)=t$</span>. Finally in order to solve the full equation where <span class="math-container">$x(L)=0$</span>, you must solve the following equation in <span class="math-container">$\lambda$</span>: <span class="math-container">$$ \frac{\sin(\sqrt{\lambda}L)}{\sqrt{\lambda}} = 0. $$</span> <span class="math-container">$\lambda=0$</span> is not a solution because <span class="math-container">$x(t)=t$</span> does not satisfy <span class="math-container">$x(L)=0$</span>. So <span class="math-container">$\lambda=0$</span> does not work. 
However, <span class="math-container">$\sqrt{\lambda}L=n\pi$</span> for <span class="math-container">$n=1,2,3,\cdots$</span> or <span class="math-container">$\lambda=n^2\pi^2/L^2$</span> for <span class="math-container">$n=1,2,3,\cdots$</span> are valid values of <span class="math-container">$\lambda$</span> for which the original system has a solution, and that solution is <span class="math-container">$$ x_n(t) = \frac{\sin(n\pi t/L)}{n\pi/L} $$</span> These are valid solutions for <span class="math-container">$n=1,2,3,\cdots$</span>, and there are no others.</p>
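A quick check, for an arbitrary interval length $L$, that $\lambda_n=n^2\pi^2/L^2$ with $x_n(t)=\frac{L}{n\pi}\sin(n\pi t/L)$ satisfies both boundary conditions and the ODE:

```python
import math

L = 2.5                                  # arbitrary interval length
checks = []
for n in range(1, 6):
    lam = (n * math.pi / L) ** 2         # eigenvalue n^2 pi^2 / L^2
    def x(t, n=n):                       # eigenfunction (L / n pi) sin(n pi t / L)
        return L / (n * math.pi) * math.sin(n * math.pi * t / L)
    def xpp(t, n=n):                     # its second derivative
        return -(n * math.pi / L) ** 2 * x(t, n)
    residual = max(abs(xpp(t) + lam * x(t)) for t in (0.3, 1.1, 2.0))
    checks.append(max(abs(x(0.0)), abs(x(L)), residual))
```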
1,453,682
<p>I'm having trouble classifying the solution set of the systems $Ax = b$ and $Ax = 0$.</p> <p>Let A be an ($m \times n$) matrix.</p> <p><strong>Case I:</strong></p> <p><em>For $Ax = 0$, the system has a unique solution (the trivial one) when A is invertible, and infinitely many solutions when A is not.</em> </p> <ul> <li>We can scratch off the "no solution" case because there is always the zero matrix solution correct?</li> </ul> <p><strong>Case II:</strong> </p> <p><em>For $Ax = b$, if A is invertible, then for all ($n \times 1$) vector b, the matrix equation has a unique solution given by $x = A^{-1}b$. Else, there are two remaining cases: infinitely many solutions or no solutions.</em> </p> <ul> <li>How would I know which it is? Can we not tell until we have row reduced the augmented matrix?</li> </ul> <p>Finally is the following true or false?</p> <p><strong>If y and z are solutions of the system Ax = b then any linear combination of y and z is also a solution.</strong></p> <p>My thoughts are that it is correct but I am not too sure.</p>
Bernard
202,857
<p>In case II, note that in general, $A$ will not be invertible, as this requires $m=n$.</p> <p>The condition for having solutions is that the rank of the augmented matrix is equal to $\operatorname{rank}A$ (in all cases it is $\ge\operatorname{rank}A$). It is the case if $A$ has rank $m$, which implies $n\ge m$ and $A$ is <em>right</em>-invertible – in particular if $A$ is invertible.</p> <p>Concerning the last question, the solution set is not a subspace, but it is an <em>affine subspace</em>, which means any <em>weighted mean</em> of solutions is again a solution.</p>
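The rank criterion is simple to demonstrate with NumPy for a rank-deficient $2\times 3$ example (the matrices here are illustrative choices):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])          # rank 1, with m = 2, n = 3

def solvable(A, b):
    """Ax = b is consistent iff rank[A | b] equals rank A."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

b_good = np.array([1.0, 2.0])            # lies in the column space of A
b_bad = np.array([1.0, 3.0])             # does not
```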
742,160
<p>Answer true or false to each of the following questions. If a statement is true, prove it. If a statement is false, give a counterexample.</p> <ol> <li>For all sets $A$,$B$ and $C$: IF $A ⊆ B$ and $A ⊆ C$, Then $A ⊆ (B ∩ C)$</li> <li>For all sets $A$ and $B$, if $|A| \le |B|$, then $A ⊆ B$</li> </ol>
hmakholm left over Monica
14,366
<p>An injection $A\to B$ provides a correspondence between $A$ and some subset of $B$ -- that is, an INjection points to a copy of $A$ INside $B$.</p>
1,027,707
<p>In how many ways can 100 identical chairs be divided among 4 different rooms so that each room will have 10,20,30,40 or 50 chairs?</p> <p>I'm having problems coming up with the generating function for this question. </p> <p>The answer given is 68. </p>
Qiaochu Yuan
232
<p>You are correct that the category of finite-dimensional vector spaces is not cartesian closed. However, it is <a href="http://ncatlab.org/nlab/show/closed+monoidal+category" rel="nofollow">closed monoidal</a>: this is like being cartesian closed, but with the cartesian product replaced with an arbitrary monoidal product, here the tensor product. The essential adjunction now takes the form</p> <p>$$\text{Hom}(U \otimes V, W) \cong \text{Hom}(U, [V, W])$$</p> <p>where $[V, W]$ denotes the vector space of linear maps $V \to W$. In the particular case of finite-dimensional vector spaces, but not in more general settings, it is furthermore the case that $[V, W] \cong V^{\ast} \otimes W$. </p>
35,877
<p>I got stuck while I was working out whether $$\frac{4n-3}{n+43}$$ converges. I would be pleased if I could get a hint on the above question.</p>
Dennis Gulko
6,948
<p>What are the tools that you can use? One possible way is to use the $\varepsilon$-$N$ definition of the limit: show that for any given $\varepsilon&gt;0$ there exists $N&gt;0$ such that for all $n&gt;N$ we have $\left|\frac{4n-3}{n+43}-4\right|&lt;\varepsilon$</p>
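Concretely, $\left|\frac{4n-3}{n+43}-4\right| = \frac{175}{n+43}$, so any $N > 175/\varepsilon - 43$ works; a quick check in code:

```python
def a(n):
    return (4 * n - 3) / (n + 43)

eps = 1e-3
N = int(175 / eps - 43) + 1              # any N > 175/eps - 43 witnesses the limit 4
tail_ok = all(abs(a(n) - 4) < eps for n in range(N, N + 1000))
```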
2,783,224
<p>$$\lim_{x \rightarrow 0}\frac{x^2 \sin(Kx)}{x - \sin x} =1$$</p> <p>Here K is a constant whose value I want to find,</p> <p>I got it by writing the series expansion of $\sin(\theta)$, but couldn't by L'hospital rule or standard limits,</p> <p>This is what I tried:</p> <p>$$\lim_{x \rightarrow 0}\frac{x^2 \sin(Kx)}{x - \sin x} =1$$</p> <p>$$\implies \lim_{x \rightarrow 0}\frac{x \sin(Kx)}{1 - \frac{\sin x} x} =1$$</p> <p>$$\implies \lim_{x \rightarrow 0}\frac{Kx^2 \frac{\sin(Kx)}{Kx}}{1 - \frac{\sin x} x} =1$$ So this gives a 0 in the denominator which doesn't help, L'hospital gives a huge mess, applying it once more didn't help</p> <p>The answer given is K =1/6.</p>
Dave
334,366
<p>L'Hopital's rule certainly works: you can apply L'Hopital's rule three times to get the limiting function to be continuous at $x=0$.</p> <p>The third derivative of the numerator is $$-k(6kx\sin(kx)+(k^2x^2-6)\cos(kx))$$ and the third derivative of the denominator is $\cos(x)$. Hence, after three applications of L'Hopital's rule, we get $$1=\lim_{x\to 0}\frac{-k(6kx\sin(kx)+(k^2x^2-6)\cos(kx))}{\cos(x)}=\frac{-k(0+(0-6)\cdot 1)}{1}=6k$$ </p> <p>I leave it to you to perform all the necessary steps, and in particular you must check that after the first and second applications of L'Hopital's rule, we are still in the "$\frac{0}{0}$" indeterminate form which actually allows us to apply L'Hopital.</p>
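A numerical sanity check that $k=1/6$ indeed makes the ratio tend to $1$ (the sample points are arbitrary small values):

```python
import math

k = 1 / 6

def f(x):
    return x**2 * math.sin(k * x) / (x - math.sin(x))

values = [f(x) for x in (0.1, 0.05, 0.01)]   # should approach 1 as x -> 0
```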
540,029
<p>I know that for any matrix <span class="math-container">$A$</span>, <span class="math-container">$AA^{\ast}$</span> is positive semidefinite (where <span class="math-container">$A^\ast$</span> is <span class="math-container">$\overline{A}^T$</span>). Please help me show the following statement</p> <blockquote> <p>Any positive semidefinite can be written as <span class="math-container">$AA^{\ast}$</span>.</p> </blockquote>
copper.hat
27,978
<p>If $A \ge 0$ and $A = A^*$, then $A$ is unitarily diagonalizable and all eigenvalues are real and non-negative. That is, for some unitary $U$, we have $U^*AU = \Lambda$, where $\Lambda = \operatorname{diag}(\lambda_1,...,\lambda_n)$, and $\lambda_k \ge 0$.</p> <p>If we let $\Lambda^{\frac{1}{2}} = \operatorname{diag}(\sqrt{\lambda_1},...,\sqrt{\lambda_n})$, then we note that $(\Lambda^{\frac{1}{2}})^* = \Lambda^{\frac{1}{2}}$ and see that $A = U \Lambda U^* = U \Lambda^{\frac{1}{2}} \Lambda^{\frac{1}{2}} U^*= U \Lambda^{\frac{1}{2}} (\Lambda^{\frac{1}{2}})^* U^* = U \Lambda^{\frac{1}{2}} ( U \Lambda^{\frac{1}{2}})^*$, as desired.</p> <p><strong>Aside</strong>: The Hermitian assumption is necessary. The matrix $A= \begin{bmatrix} 1 &amp; 1 \\ 0 &amp; 1 \end{bmatrix}$ satisfies $\langle x, Ax \rangle \ge 0$ for all real $x$, but cannot be written as $B B^*$ for any matrix $B$. (This follows since $B B^*$ is self-adjoint, but $A$ is not.)</p>
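The construction translates directly into NumPy (real case, so $*$ is just the transpose; the test matrix is a random positive definite example):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M @ M.T + np.eye(3)                  # a random symmetric positive definite matrix

w, U = np.linalg.eigh(A)                 # A = U diag(w) U^T with w >= 0
B = U @ np.diag(np.sqrt(w))              # B = U Lambda^{1/2}
reconstructed = B @ B.T                  # should recover A
```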
1,277,827
<p>I came across a question from Munkres: Problem 24.11: If A is a connected subspace of X, does it follow that IntA and BdA are connected? Does the converse hold?</p> <p>Thinking about the converse, I know that in general, if Bd A is connected, it doesn't necessarily mean that A is connected (e.g. the rationals). But what if X is connected, A is closed and the Bd A is connected? Would that then imply A is also connected?</p>
Andreas Blass
48,510
<p>Suppose that $A$ is closed in $X$, that $X$ and Bd$(A)$ are connected, but (toward a contradiction) that $A$ is disconnected. So $A$ is the union of two disjoint, nonempty subsets $P$ and $Q$ that are closed in $A$ and therefore in $X$. Now consider any point $x$ in Bd$(A)$. Since $A$ is closed, it follows that $x\in A$ and therefore either $x\in P$ or $x\in Q$; without loss of generality, assume $x\in P$. Furthermore, $x$ is not in the interior of $A$ and therefore not in the interior of $P$. So $x\in \text{Bd}(P)$. (Here and throughout this answer, boundaries and interiors are with respect to $X$.) This shows that Bd$(A)$ is covered by Bd$(P)$ and Bd$(Q)$. </p> <p>I claim that Bd$(P)$ is actually included in Bd$(A)$; then the same will be true for Bd$(Q)$. To verify the claim, consider any $y\in\text{Bd}(P)$. It's in $P$ and therefore in $A$, so the only way it could avoid being in Bd$(A)$ would be if it's in the interior of $A$. So suppose $y$ had an open neighborhood $U\subseteq A$. Since $y$ is not in the interior of $P$, there must be points in $U$ from $A-P=Q$. The same goes for every neighborhood $V$ with $y\in V\subseteq U$, and therefore $y$ is in the closure of $Q$. But $Q$ is closed and disjoint from $P$, so this is impossible for $y\in P$. This completes the proof of the claim.</p> <p>So now we have that Bd$(A)$ is the union of Bd$(P)$ and Bd$(Q)$. These are disjoint (because $P\cap Q=\varnothing$), closed sets, yet Bd$(A)$ is connected. So one of Bd$(P)$ and Bd$(Q)$ has to be empty. But that means one of $P$ and $Q$ is clopen in $X$, contrary to the assumption that $X$ is connected.</p>
261,474
<p>Let <span class="math-container">$A$</span> be the matrix <span class="math-container">$$A = \begin{pmatrix} 1 &amp; \sqrt{2}\\ -\sqrt{2} &amp; 1\\ \end{pmatrix}$$</span></p> <br> <blockquote> <p>Compute the matrix <span class="math-container">$B = 3A -2A^2 - A^3 -5A^4 + A^6$</span>.</p> </blockquote> <p>Could any one give me any hint for this one? I have calculated the eigenvalues; they are <span class="math-container">$(1+\sqrt{2}i),(1-\sqrt{2}i)$</span></p>
P..
39,722
<p>In this example probably the best way is to use Raymond Manzoni's hint.</p> <p>In general if $A$ is similar to $C$ with $A=P^{-1}CP$ for some invertible $P$ then for any polynomial $\phi(x)$: $$\phi(A)=P^{-1}\phi(C)P.$$ In your case use that $A$ is similar to $\begin{pmatrix}1+\sqrt{2}\,i&amp;0\\0&amp;1-\sqrt{2}\,i\end{pmatrix}$.<br> Also see <a href="http://en.wikipedia.org/wiki/Diagonalizable_matrix#An_application" rel="nofollow">this</a>.</p>
261,474
<p>Let <span class="math-container">$A$</span> be the matrix <span class="math-container">$$A = \begin{pmatrix} 1 &amp; \sqrt{2}\\ -\sqrt{2} &amp; 1\\ \end{pmatrix}$$</span></p> <br> <blockquote> <p>Compute the matrix <span class="math-container">$B = 3A -2A^2 - A^3 -5A^4 + A^6$</span>.</p> </blockquote> <p>Could any one give me any hint for this one? I have calculated the eigenvalues; they are <span class="math-container">$(1+\sqrt{2}i),(1-\sqrt{2}i)$</span></p>
Martin Sleziak
8,297
<p>We have the characteristic polynomial $ch_A(x)=\begin{vmatrix}1-x&amp;\sqrt{2}\\-\sqrt{2}&amp;1-x\end{vmatrix}=(x-1)^2+2=x^2-2x+3$.</p> <p>Then by <a href="http://en.wikipedia.org/wiki/Cayley-Hamilton_theorem" rel="nofollow">Cayley-Hamilton theorem</a> we know that $A^2-2A+3I=0$.</p> <p>So if you divide your polynomial $p(x)$ by $(x^2-2x+3)$ and you get $p(x)=q(x)(x^2-2x+3)+r(x)$, you only need to calculate $r(A)$, i.e., to plug the matrix $A$ into the remainder. </p>
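As a numerical check, using the matrix whose characteristic polynomial is $x^2-2x+3$ as computed above: long division of $p(x)=x^6-5x^4-x^3-2x^2+3x$ by $x^2-2x+3$ leaves the remainder $r(x)=8x+60$ (worth verifying by hand), so $B=8A+60I$:

```python
import numpy as np

A = np.array([[1.0, np.sqrt(2)],
              [-np.sqrt(2), 1.0]])
I = np.eye(2)

ch_residual = A @ A - 2 * A + 3 * I      # Cayley-Hamilton: should vanish

P = [np.linalg.matrix_power(A, n) for n in range(7)]
B_direct = 3 * P[1] - 2 * P[2] - P[3] - 5 * P[4] + P[6]
B_reduced = 8 * A + 60 * I               # r(A), with remainder r(x) = 8x + 60
```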
3,471,585
<blockquote> <p>All roots of the polynomial <span class="math-container">$x^3+ax^2+17x+3b$</span>, <span class="math-container">$a,b\in \Bbb Z$</span>, are integers. Prove that this polynomial doesn't have repeated roots.</p> </blockquote> <p>My plan was to find all three solutions and then compare them.</p> <p>I don't know how to find the first root - I tried to use the cubic formula to find it but I got a large expression, and only a few numbers canceled. Any advice? Thanks.</p>
Bill Dubuque
242
<p><strong>Hint</strong> <span class="math-container">$ $</span> the roots are distinct <span class="math-container">$\bmod 3\!:\ f \equiv x(x^2\!+\!ax\!-1)\not \equiv x(x\!-\!r)^2\,$</span> by <span class="math-container">$\,r^2\not\equiv -1$</span></p>
1,251,334
<blockquote> <p>$||x-2|-3| &gt;1$, then $x$ belongs to:<BR> (a) $(-\infty, -2) \cup (0,4) \cup(6,\infty)$<br> (b) $(-1,1)$<br> (c) $(-\infty, 1)\cup (1, \infty)$ <br> (d) $(-2,2)$</p> </blockquote> <p><strong><em>Answer:(a)</em></strong></p> <p>My solution:</p> <p>let $|x-2| = p,$<br> $|p-3|&gt;1$ and finally I got four inequalities after more calculations:<br></p> <p>$x&gt;6, x&lt;-2,x&gt;0, x&lt;4$</p> <p>I think I am doing this question wrong but I don't know how to do this question either way. How do I do this?</p>
Simon S
21,495
<p>$||x−2|−3|&gt;1$ then either $|x-2| - 3 &gt; 1$ or $|x-2| - 3 &lt; -1$; respectively</p> <p>$$|x-2| &gt; 4 \ \text{ or } |x-2| &lt; 2$$</p> <p>The first of these is equivalent to $x - 2 &gt; 4$ or $x - 2 &lt; -4$; i.e., $$x &gt; 6 \text{ or } x &lt; -2$$</p> <p>The second is equivalent to $-2 &lt; x - 2 &lt; 2$; i.e., $$0 &lt; x &lt; 4$$</p> <p>Now put that all together.</p>
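Putting it together gives answer (a); a quick grid check that the combined solution set matches the inequality (the grid bounds and step are arbitrary):

```python
def satisfies(x):
    return abs(abs(x - 2) - 3) > 1

def in_answer_a(x):
    # (-inf, -2) union (0, 4) union (6, inf)
    return x < -2 or 0 < x < 4 or x > 6

samples = [i / 10 for i in range(-100, 101)]       # grid from -10.0 to 10.0
mismatches = [x for x in samples if satisfies(x) != in_answer_a(x)]
```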
2,933,577
<p>Eliminate the parameters to find a Cartesian equation of the curve and sketch the curve.</p> <p><span class="math-container">$x = e^t – 1, y = e^{2t}$</span>.</p> <p>My attempt:</p> <p><span class="math-container">$x = e^{t} - 1$</span></p> <p><span class="math-container">$x + 1 = e^{t}$</span></p> <p><span class="math-container">$\ln(x+1) = t$</span></p> <p>so</p> <p><span class="math-container">$y = e^{2t} = e^{2\ln(x+1)} = (x+1)^2$</span> [fixed mistake]</p> <p>I think I eliminated the parameters now how would I sketch this? </p> <p>I made a table</p> <p><span class="math-container">\begin{array}{|c|c|c|c|} \hline t&amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \\ \hline x &amp; 0&amp; e^{1}-1 &amp; e^{2} - 1 &amp; e^{3}-1 &amp; e^{4}-1\\ \hline y &amp; 1 &amp; e^{2} &amp; e^{4} &amp; e^{6} &amp; e^{8}\\ \hline \end{array}</span></p> <p>If I graph above with respect to x and y, then would this be correct?</p>
Mike
544,150
<p>How you would sketch this:</p> <p>Hint 1: Note that <span class="math-container">$x$</span> can take on each possible value in precisely the set <span class="math-container">$(-1,\infty)$</span> as <span class="math-container">$t$</span> varies over the reals.</p>
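A quick check that points on the parametric curve do lie on $y=(x+1)^2$ with $x>-1$ (sample parameter values chosen arbitrarily):

```python
import math

# points (x, y) = (e^t - 1, e^{2t}) on the parametric curve
pts = [(math.exp(t) - 1, math.exp(2 * t)) for t in (-3, -1, 0, 0.5, 2)]

on_parabola = all(abs(y - (x + 1) ** 2) < 1e-9 for x, y in pts)
x_range_ok = all(x > -1 for x, y in pts)
```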
1,879,149
<p>Let $\mu$ be a Radon measure on $\mathbb{R}$. (<em>i.e.</em>, a locally finite Borel measure)</p> <p>Let $I$ be an interval of $\mathbb{R}$.</p> <p>Let $\gamma : I \mapsto \mathbb{R}$ be a <strong>monotone</strong> function that is Lebesgue-Stieltjes integrable with respect to $\mu$: $$\int_I \gamma(x) \mathrm{d}\mu (x) &lt; \infty.$$</p> <p>I would like a simple argument to prove that there exists a sequence $(\gamma_n)_{n\in\mathbb{N}}$ of <strong>continuous and monotone</strong> functions $\gamma_n : I \mapsto \mathbb{R}$ such that $$\int_I \gamma_n(x) \mathrm{d}\mu (x) &lt; \infty$$ and $$\lim_{n\to\infty} \int_I | \gamma(x) - \gamma_n (x)| \mathrm{d}\mu (x) =0.$$</p> <p>I have thought I could use the <a href="https://mathoverflow.net/a/31380">Lusin theorem</a>, but it would require to adapt its statement and proof to show the approximating functions are monotone. Besides, my hypotheses are much simpler ($\mathbb{R}$, no compacity of the support of approximating functions). So maybe a weaker theorem could give the answer? I am interested by any reference to such theorem.</p> <p>N.B.: a proof of this is very common if $\mu$ is the Lebesgue measure, but here there is no absolute continuity assumption on $\mu$.</p>
PhilippeC
315,538
<p>@Luiz Cordeiro: thanks for your answer.</p> <p>I may have an alternative answer for the case $I = [a, b]$, which is reminiscent of the good old Riemann integration and does not use Lusin's theorem. Assume $\gamma(a) \neq \gamma (b)$ (else, there is nothing to do).</p> <p>Let $\epsilon &gt; 0$. Define $\tilde\epsilon := \epsilon / |\gamma(b)-\gamma(a)|$. Since $\mu$ is locally finite, $\mu$ must have a finite number of atoms whose mass is $\geq \tilde\epsilon$. Therefore, there exists a finite partition $(A_n)_{n=1}^N$ of $I$ (with $N \in \mathbb{N}^*$) such that</p> <ul> <li>for all $n \leq N$, $\mu (\mathring{A}_n) \leq \tilde\epsilon$,</li> <li>$A_n := [x_n, x_{n+1})$ for all $n &lt; N$, and $A_N := [x_N, x_{N+1}]$,</li> <li>$a = x_1 &lt; x_2 &lt; \ldots &lt; x_N &lt; x_{N+1} = b$.</li> </ul> <p>(the idea is to place some of the $(x_n)$ at those big atoms)</p> <p>Define $f$ piecewise with $$\forall x \in A_n, \qquad f(x) := \gamma(x_{n}) + (\gamma(x_{n+1}) - \gamma(x_n)) \frac{x-x_n}{x_{n+1}-x_n}.$$</p> <p>Since $\gamma$ is monotone,</p> <ul> <li>$\forall x \in A_n, \quad |f(x)-\gamma(x)| \leq |\gamma(x_{n+1})-\gamma(x_n)|,$</li> <li>$\sum_{n=1}^N |\gamma(x_{n+1})-\gamma(x_n)| = |\gamma(b)-\gamma(a)|,$</li> <li>$f$ is continuous, monotone and coincides with $\gamma$ on the boundaries of all $A_n$.</li> </ul> <p>In addition, $$\int_I |\gamma-f| \mathrm{d}\mu = \sum_{n=1}^N \int_{A_n} |\gamma-f| \mathrm{d}\mu = \sum_{n=1}^N \int_{\mathring{A}_n} |\gamma-f| \mathrm{d}\mu$$ $$\leq \sum_{n=1}^N \int_{\mathring{A}_n} |\gamma(x_{n+1})-\gamma(x_n)| \mathrm{d}\mu (t)$$ $$\leq \sum_{n=1}^N |\gamma(x_{n+1})-\gamma(x_n)| \int_{\mathring{A}_n} \mathrm{d}\mu \leq \frac{\epsilon}{|\gamma(b)-\gamma (a)|} \sum_{n=1}^N |\gamma(x_{n+1})-\gamma(x_n)|,$$ so $$\int_I |\gamma-f| \mathrm{d}\mu \leq \epsilon.$$</p> <p>For the case $I$ is not compact, Luiz Cordeiro's argument can be used.</p>
28,258
<p>The <a href="https://math.stackexchange.com/questions/tagged/set-theory" class="post-tag" title="show questions tagged &#39;set-theory&#39;" rel="tag">set-theory</a> tag is explicitly about mathematics (and metamathematics) in the context of $\mathsf{ZFC}$ and its subsystems and extensions.</p> <p>Every now and then we see questions about other set theories ($\mathsf{NF}$ is popular enough, but there are other versions, see <a href="https://math.stackexchange.com/q/2740723/462">here</a>, for instance). </p> <blockquote> <p>Wouldn't it be better to have a separate tag for them?</p> </blockquote> <p>I believe the chances of these questions being seen by their intended audience would increase significantly that way. The way things currently are, most of these questions end up being ignored or are not answered by the appropriate experts. </p>
Martin Sleziak
8,297
<p>It seems that there is a disagreement whether or not the new tag <a href="https://math.stackexchange.com/questions/tagged/alternative-set-theories" class="post-tag" title="show questions tagged &#39;alternative-set-theories&#39;" rel="tag">alternative-set-theories</a> should be used in conjunction with <a href="https://math.stackexchange.com/questions/tagged/set-theory" class="post-tag" title="show questions tagged &#39;set-theory&#39;" rel="tag">set-theory</a> or not. (This was explicitly mentioned in the original version of <a href="https://math.stackexchange.com/tags/alternative-set-theories/info">tag-excerpt and tag-wiki</a>, but then edited away.)</p> <p>It might be useful to post a separate answer on this issue, so that it can be discussed (commented, voted up/down) separately.</p> <p>Here are my arguments for using the new tag <em>together</em> with the <a href="https://math.stackexchange.com/questions/tagged/set-theory" class="post-tag" title="show questions tagged &#39;set-theory&#39;" rel="tag">set-theory</a> tag: </p> <ul> <li>Using also a bigger tag gives the question a better chance of getting noticed. Right now the difference is simply because the tag is very new, but it seems quite likely that (set-theory) will always have many more followers than (alternative-set-theories).</li> <li>As far as I can tell, the tag-info for (set-theory) never said that it is exclusively about ZF and ZFC. (In fact, even from comments by the OP it seems that he considers axiomatic theories such as NBG, MK, KP to belong under the (set-theory) tag rather than the (alternative-set-theories) tag.) The OP explicitly says that the intention behind the creation of the new tag is that questions from this area have a better chance of getting noticed. Excluding the most important tag related to the topic works against this goal.</li> <li>Maybe the tag (nfu) can be considered as some kind of predecessor of this new tag. Although it only ever existed on three questions, in all cases it was used together with (set-theory): <a href="https://math.stackexchange.com/q/87171">87171</a> (<a href="https://math.stackexchange.com/posts/87171/revisions">revision history</a>), <a href="https://math.stackexchange.com/q/193198">193198</a> (<a href="https://math.stackexchange.com/posts/193198/revisions">revision history</a>), <a href="https://math.stackexchange.com/q/208818">208818</a> (<a href="https://math.stackexchange.com/posts/208818/revisions">revision history</a>).</li> </ul>
6,825
<p>Consider four <span class="math-container">$n$</span>-sided polygons arranged so that their centers are the vertices of a square. The square is exactly as large as the diameter of the smallest circle enclosing each polygon. The polygons do not overlap. Polygons are oriented so that each one has either exactly two sides in common or exactly one side and one vertex in common with the other shapes.</p> <p>Trying this with small numbers gives me <span class="math-container">$f: 4 \to 0, \ 6 \to 4,\ 8 \to 4,\ 10 \to 8,\ 12 \to 8.$</span> This suggests that <span class="math-container">$$f(2n) = 4 \times \left( \left\lceil \frac{2n}{4} \right\rceil - 1 \right), 2n &gt; 4.$$</span></p> <p>Can this result be extended for <span class="math-container">$n \to \infty$</span>? Bonus: What about odd values of <span class="math-container">$n$</span>?</p>
Paul VanKoughnett
2,215
<p>There are two clear cases. When $2n$ is divisible by $4$, each polygon will have a horizontal side and a vertical side touching other polygons. So the number of sides each polygon contributes to the interior is just the number of sides between a horizontal and a vertical side. (Each side of the "interior" polygon belongs to exactly one of the exterior ones.) This is just $(2n-4)/4$, which is then multiplied by $4$ again to give $2n-4$ as the answer.</p> <p>In the other case, you have the points of tangency being a side and a vertex. We can reduce this to the other case by "stretching" the polygons so the vertices become edges. So the number of sides in this case is $(2n+2)-4=2n-2$. This is your result.</p>
95,616
<p>I have tried to adapt <a href="https://mathematica.stackexchange.com/a/16290/29734">this answer</a> to my problem of calculating some bosonic commutation relations, but there are still some issues.</p> <p>The way I'm implementing the commutator is straightforward:</p> <pre><code>commutator[x_, y_] :=x**y-y**x </code></pre> <p>Example: if I want to compute $[b^\dagger b,ab^\dagger]$ I write</p> <pre><code>commutator[BosonC["b"]**BosonA["b"], BosonA["a"]**BosonC["b"]] </code></pre> <p>and the output is $ab^\dagger$ as it should be. However this fails when I compute $[a^\dagger a,ab^\dagger]$ (which should be $-ab^\dagger)$:</p> <pre><code>commutator[BosonC["a"]**BosonA["a"], BosonA["a"]**BosonC["b"]] Out: a†** a^2 ** b†- a ** b†** a†** a </code></pre> <p>How can I modify the code <a href="https://mathematica.stackexchange.com/a/16290/29734">in this answer</a> to have it work properly?</p> <p><strong>EDIT</strong> Building on the answers of @QuantumDot and @evanb, I came up with this solution. 
First I implement the commutator, with <code>Distribute</code>.</p> <pre><code>NCM[x___] = NonCommutativeMultiply[x]; SetAttributes[commutator, HoldAll] NCM[] := 1 commutator[NCM[x___], NCM[y___]] := Distribute[NCM[x, y] - NCM[y, x]] commutator[x_, y_] := Distribute[NCM[x, y] - NCM[y, x]] </code></pre> <p>Then I implement two tools, one for teaching Mathematica how to swap creation and annihilation operators and one is for operator ordering:</p> <pre><code>dag[x_] := ToExpression[ToString[x] ~~ "†"] mode[x_] := With[{x†= dag[x]}, NCM[left___, x, x†, right___] := NCM[left, x†, x, right] + NCM[left, right]] leftright[L_, R_] := With[{R† = dag[R], L† = dag[L]}, NCM[left___, pr : R | R†, pl : L | L†, right___] := NCM[left, pl, pr, right]] </code></pre> <p>Now I can use it like this: after evaluating the definitions I input (for instance)</p> <pre><code>mode[a] mode[b] leftright[a,b] </code></pre> <p>And finally I can evaluate commutators, for instance</p> <pre><code>commutator[NCM[a†,a] + NCM[b†,b], NCM[a,b†]] (* 0 *) </code></pre>
QuantumDot
2,048
<p>The function <code>NonCommutativeMultiply</code> has too long of a name, so I make a short-hand version of it (<code>NCP</code> stands for <strong>n</strong>on-<strong>c</strong>ommutative <strong>p</strong>roduct):</p> <pre><code>NCP[x___] := NonCommutativeMultiply[x]; </code></pre> <p>Now, here's the code</p> <pre><code>NCP[] := 1 NCP[left___, a, a†, right___] := NCP[left, a†, a, right] + NCP[left, right] NCP[left___, b, b†, right___] := NCP[left, b†, b, right] + NCP[left, right] NCP[left___, pl : a | a†, pr : b | b†, right___] := NCP[left, pr, pl, right] </code></pre> <p>Now your function:</p> <pre><code>SetAttributes[commutator, HoldAll] commutator[NCP[x___], NCP[y___]] := NCP[x, y] - NCP[y, x] </code></pre> <p>Let's give it a try:</p> <pre><code>commutator[NCP[b†, b], NCP[a, b†]] </code></pre> <blockquote> <p><code>b† ** a</code></p> </blockquote>
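As an independent cross-check outside Mathematica, the commutators above can be verified on truncated Fock-space matrices, where the annihilation operator has $\sqrt{i+1}$ on the superdiagonal. The particular relations checked here hold exactly despite the truncation. This is a hedged, standard-library-only Python sketch of my own, not part of either answer:

```python
import math

N = 6  # Fock-space truncation; the identities checked below hold exactly for any N >= 2

def annihilator(n):
    # single-mode annihilation operator: a[i][i+1] = sqrt(i+1)
    a = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        a[i][i + 1] = math.sqrt(i + 1)
    return a

def dag(m):                      # Hermitian conjugate (real matrices: transpose)
    return [list(col) for col in zip(*m)]

def matmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def kron(x, y):                  # Kronecker (tensor) product
    ny = len(y)
    n = len(x) * ny
    return [[x[i // ny][j // ny] * y[i % ny][j % ny] for j in range(n)] for i in range(n)]

def lincomb(c1, x, c2, y):
    return [[c1 * a + c2 * b for a, b in zip(rx, ry)] for rx, ry in zip(x, y)]

def comm(x, y):                  # commutator [x, y] = xy - yx
    return lincomb(1.0, matmul(x, y), -1.0, matmul(y, x))

def maxabs(m):
    return max(abs(v) for row in m for v in row)

one = [[float(i == j) for j in range(N)] for i in range(N)]
a = kron(annihilator(N), one)    # mode a acts on the first tensor factor
b = kron(one, annihilator(N))    # mode b acts on the second tensor factor
ad, bd = dag(a), dag(b)

num_total = lincomb(1.0, matmul(ad, a), 1.0, matmul(bd, b))  # a†a + b†b
hop = matmul(a, bd)                                          # a b†

# [b†b, a b†] = a b†   and   [a†a + b†b, a b†] = 0 (a b† moves one quantum from a to b)
assert maxabs(lincomb(1.0, comm(matmul(bd, b), hop), -1.0, hop)) < 1e-9
assert maxabs(comm(num_total, hop)) < 1e-9
```

The second assertion mirrors the `commutator[NCM[a†,a] + NCM[b†,b], NCM[a,b†]]` computation returning `0` in the question's edit.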
171,449
<p><strong>Theorem</strong> Every sequence {$s_n$} has a monotonic subsequence whose limit is equal to $\limsup s_n$. I think showing that there exists a monotonic subsequence is fairly straightforward, but I could not show that there exists such a subsequence whose limit is $\limsup s_n$.</p>
vanguard2k
35,711
<p>You could plot the function $e^{rt}$ for different values of $t$, with $r$ being your annual exponential growth rate. Then you could evaluate this function at $t=10$ or $t=1000$.</p> <p>Is this what you were looking for?</p>
3,155,241
<p>Let <span class="math-container">$\mathbb{A}^n$</span> denote the set of n-tuples of elements from field <span class="math-container">$k$</span> and <span class="math-container">$I(X)$</span> the ideal of polynomials in <span class="math-container">$k[x_1,...,x_n]$</span> that vanish every point in <span class="math-container">$X$</span>. The note I’m reading, in showing that <span class="math-container">$\mathbb{A}^n$</span> is affine variety, says <span class="math-container">$I(\mathbb{A}^n)=(0)$</span>.</p> <p>Why is that? I know it’s probably extremely trivial but I have very little background so please bear with me... Suppose that <span class="math-container">$k$</span> is <span class="math-container">$\mathbb{Z}_p$</span> then something like <span class="math-container">$x^p-x$</span> also belongs to <span class="math-container">$I(\mathbb{Z}_p)$</span> so I guess k must be infinite too (I don’t see this mentioned in the note)? Or am I confusing something?</p>
dcolazin
654,562
<p>If <span class="math-container">$k$</span> is finite, you can build polynomials that vanish on all of <span class="math-container">$\mathbb{A}^n$</span>, thanks to <span class="math-container">$p(x) = \Pi_{z\in k}(x-z)$</span>.</p> <p>If <span class="math-container">$k$</span> is infinite, then <span class="math-container">$I(\mathbb{A}^1) = 0$</span> because a nonzero polynomial in one variable has only finitely many roots. </p> <p>Then by induction: suppose <span class="math-container">$I(\mathbb{A}^n) = 0$</span>. Consider <span class="math-container">$p\in I(\mathbb{A}^{n+1})$</span>, then <span class="math-container">$p(x_1,\ldots,x_n,y) = \sum_{i=0}^d a_i(x)y^i$</span>, where <span class="math-container">$a_i \in k[x]$</span> and <span class="math-container">$x = (x_1,\ldots,x_n)$</span>. </p> <p>If <span class="math-container">$d=0$</span>, then <span class="math-container">$p\in I(\mathbb{A}^{n})$</span> and you can apply the inductive hypothesis.</p> <p>Suppose, by contradiction, that <span class="math-container">$\exists i \in \{0,\ldots,d\}$</span> such that <span class="math-container">$a_i(x) \neq 0$</span>. Then by the inductive hypothesis <span class="math-container">$\exists \tilde{x} \in \mathbb{A^n}$</span> such that <span class="math-container">$a_i(\tilde{x}) \neq 0$</span>. Then <span class="math-container">$p(\tilde{x},y) \in I(\mathbb{A^1})$</span> and <span class="math-container">$p(\tilde{x},y) \neq 0$</span>, which is absurd.</p>
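The finite-field case can be checked directly; the choice $p=7$ below is arbitrary:

```python
p = 7   # any prime works here
for x in range(p):
    # the product over all z in F_p of (x - z) contains the factor (x - x), so it vanishes...
    prod = 1
    for z in range(p):
        prod = (prod * (x - z)) % p
    assert prod == 0
    # ...and x^p - x vanishes on every element too, by Fermat's little theorem
    assert (pow(x, p, p) - x) % p == 0
```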
2,243,900
<p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p> <blockquote> <p>What exactly is calculus? </p> </blockquote> <p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
LDBerriz
299,917
<p>The essence of calculus is the consideration of quantities that are very small, so small that they are almost zero but not quite. Doing so allows for the calculation of things that occur in an instant, whether an instant of time or an instant along any other dimension. These quantities are usually infinitely small; when they are infinitely large, they are used through their inverses, which makes them infinitely small again. That is why the calculus conceived by Newton and Leibniz is called infinitesimal calculus. Using infinitesimals explains why Achilles passes the turtle and why the border of a rugged coast can be assigned a finite length which otherwise would be infinite. Infinitesimals allow us to approximate the probability of an event that changes continuously and obtain an exact number which otherwise would be zero. Making a quantity infinitely small forces us to consider a sequence of quantities passing from large to small, so the concept of a rate of change is implicit in the conception of calculus. This is applied in derivatives, which calculate instantaneous rates of change through differential calculus. Infinitesimals are also the basis for calculating areas under continuously changing curves, which is the basis of integral calculus. These concepts can be extended to any discipline that deals with events that do not take discrete values. The number of apples in a box is discrete and can be counted using algebra. The exact temperature outside can take infinitely many values; when we say that it is 70 degrees, we are rounding a continuously varying quantity, and calculus is the tool built to handle such quantities. </p>
2,218,960
<blockquote> <p>Let $(W_t,\mathscr{F_t})$ be a Wiener process and let $$M_t=M_0e^{W_t-t/2}\qquad t\ge0$$ where $M_0$ is deterministic. Show that, for $\epsilon&gt;0$, $\tau=\inf\{t\ge0:M_t\le\epsilon\}$ is a stopping time.</p> </blockquote> <p>I have been able to show that $M_t$ is a martingale but I'm a bit stuck here. I tried to use the fact that $W_t$ is normally distributed to show that $P(M_t&lt;\epsilon)&gt;0$ but that got me nowhere.</p>
Tsemo Aristide
280,301
<p>Consider the series $\sum_{n\geq 1} (-1)^n/n$: it converges since it is an alternating series, but the sub-series of even-index terms $\sum (-1)^{2n}/(2n)=\sum_{n\geq 1}{1\over{2n}}$ does not converge.</p>
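A quick numerical illustration of the contrast (the partial-sum sizes below are my own choices):

```python
import math

def alt_partial(N):
    # partial sum of the alternating series (-1)^n / n, n = 1..N; limit is -ln 2
    return sum((-1) ** n / n for n in range(1, N + 1))

def even_partial(N):
    # partial sum of the even-index sub-series 1/(2n), half the harmonic series
    return sum(1 / (2 * n) for n in range(1, N + 1))

# the alternating series settles down near -ln 2 ...
assert abs(alt_partial(100000) + math.log(2)) < 1e-4
# ... while the sub-series keeps growing without bound
assert even_partial(10**6) > 7
```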
2,969,195
<p>I have a cube with rotations <span class="math-container">$\{r, r^2, s, t\}$</span>. <strong>I want to find the cardinality of the conjugacy classes for these elements.</strong> (I know they are 6, 3, 8 and 6 respectively) I couldn't find any formula or anything so I tried to do it by hand for <span class="math-container">$r$</span>, which seemed to work (see left side of my note picture), but for <span class="math-container">$r^2$</span> I ended up with the same elements in its conjugacy class as in <span class="math-container">$r$</span>. I haven't tried for <span class="math-container">$s$</span> or <span class="math-container">$t$</span>. Or is there some algebra like there is for dihedral groups (like <span class="math-container">$sr^b=r^{-b}s$</span>)</p> <p><strong>I also didn't know in which order I had to apply the elements</strong> (<span class="math-container">$s^2tr^2$</span> versus <span class="math-container">$r^2s^2t$</span> for example).</p> <p>I am using this to calculate the distinct orbits of a group (correct terminology?) using the Counting Theorem</p> <p><a href="https://i.stack.imgur.com/PT6oo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PT6oo.png" alt="Rotations"></a> <a href="https://i.stack.imgur.com/hl2Me.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hl2Me.jpg" alt="enter image description here"></a></p>
Burnsba
114,910
<p><a href="https://en.wikipedia.org/wiki/Octahedral_symmetry#The_isometries_of_the_cube" rel="nofollow noreferrer">Wikipedia</a> has a good explanation. This is basically the geometric explanation of what your elements are:</p> <p><span class="math-container">$r$</span>: rotation about an axis from the center of a face to the center of the opposite face by an angle of 90°: 3 axes, 2 per axis, together 6 </p> <p><span class="math-container">$r^2$</span>: ditto by an angle of 180°: 3 axes, 1 per axis, together 3 </p> <p><span class="math-container">$t$</span>: rotation about an axis from the center of an edge to the center of the opposite edge by an angle of 180°: 6 axes, 1 per axis, together 6 </p> <p><span class="math-container">$s$</span>: rotation about a body diagonal by an angle of 120°: 4 axes, 2 per axis, together 8 </p> <p>and of course, don't forget the identity. </p>
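These counts can also be confirmed computationally by letting the rotations act on the six faces of the cube; the face labelling and the two generators below are my own (hypothetical) choices, not taken from the question:

```python
# Faces labelled 0=Up, 1=Down, 2=Front, 3=Back, 4=Left, 5=Right;
# each rotation is recorded as the permutation it induces on the six faces.
r = (0, 1, 5, 4, 2, 3)   # 90 degrees about the Up-Down face axis: F -> R -> B -> L -> F
s = (2, 3, 5, 4, 1, 0)   # 120 degrees about the body diagonal through the U,F,R corner

def mul(p, q):
    # composition: apply q first, then p
    return tuple(p[q[i]] for i in range(6))

def inv(p):
    out = [0] * 6
    for i, image in enumerate(p):
        out[image] = i
    return tuple(out)

identity = (0, 1, 2, 3, 4, 5)
G, frontier = {identity}, [identity]
while frontier:                       # close {r, s} under composition
    g = frontier.pop()
    for h in (r, s):
        k = mul(h, g)
        if k not in G:
            G.add(k)
            frontier.append(k)

assert len(G) == 24                   # the rotation group of the cube

class_sizes, seen = [], set()
for g in G:
    if g in seen:
        continue
    cls = {mul(mul(h, g), inv(h)) for h in G}   # the conjugacy class of g
    seen |= cls
    class_sizes.append(len(cls))

# identity, 180-degree face, 90-degree face, 180-degree edge, 120-degree vertex
assert sorted(class_sizes) == [1, 3, 6, 6, 8]
```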
2,148,631
<p>$a_{n+1} = 1-\frac{1}{a_n + 1}$<br> $a_1 = \sqrt 2$</p> <p>I need to prove:<br> 1. $a_n$ is irrational for every $n$.<br> 2. $a_n$ converges</p> <p>My ideas: </p> <ol> <li>Induction - as I know that $a_1 $ is irrational, I'm assuming $a_n$ is also irrational, and then: $a_{n+1} = 1-\frac{1}{a_n + 1} \implies a_{n+1} = \frac{a_n}{a_n + 1}$. Now it can be explained that $\frac{a_n}{a_n + 1}$ must be irrational.</li> <li>No good ideas here. I can show that $a_n &gt; 0$ for every $n$ by induction. Then $\frac{a_{n+1}}{a_n} = \frac{1}{a_n + 1} \le 1$. No idea how to continue.</li> </ol>
Fred
380,717
<ol> <li>Suppose that $a_{n+1} = \frac{a_n}{a_n + 1}=\frac{p}{q}$ with integers $p,q$; it follows that $a_n= \frac{p}{q-p}$, which is rational.</li> </ol> <p>Taking the contrapositive: if $a_n \notin \mathbb Q$ then $a_{n+1} \notin \mathbb Q$.</p> <ol start="2"> <li>You have shown that $ \frac{a_{n+1}}{a_n} = \frac{1}{a_n + 1} \le 1$, hence $a_{n+1} \le a_n$.</li> </ol> <p>Therefore $(a_n)$ is monotonic. From $0&lt;a_n \le a_1$, we get that $(a_n)$ is bounded.</p> <p>Conclusion: $(a_n)$ is convergent.</p>
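A numerical sanity check; the closed form $a_n = 1/(1/\sqrt2 + n - 1)$, obtained by taking reciprocals in the recursion ($1/a_{n+1} = 1/a_n + 1$), is my own addition:

```python
import math

terms, a = [], math.sqrt(2)
for n in range(1, 51):
    terms.append(a)
    a = a / (a + 1)        # a_{n+1} = 1 - 1/(a_n + 1) = a_n / (a_n + 1)

# positive and strictly decreasing, hence convergent
assert all(x > y > 0 for x, y in zip(terms, terms[1:]))

# reciprocals satisfy 1/a_{n+1} = 1/a_n + 1, so a_n = 1/(1/sqrt(2) + n - 1) -> 0
for n, t in enumerate(terms, start=1):
    assert abs(t - 1 / (1 / math.sqrt(2) + n - 1)) < 1e-12
```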
162,147
<p>I have</p> <pre><code>J = Table[{x10, y10, x10*y10}, {x10, 0, 1, 0.5}, {y10, 0, 1, 0.5}] L = Table[{x10, y10, 2.0*x10*y10}, {x10, 0, 1, 0.5}, {y10, 0, 1, 0.5}] </code></pre> <p>I want the third elements of J and L to be added and the first and second elements are as they are (as they are the same in both cases) for an example.</p>
Akku14
34,287
<p>Another possibility</p> <pre><code>t3 = Map[(# /. {a_, b_, c_} -&gt; {0, 0, c}) &amp;, t1, {2}] + t2 </code></pre>
2,044,322
<blockquote> <p>How can you show that :</p> <p><span class="math-container">$$D_n=\sum_{k=1}^{n } \frac{1}{k}-\int_{1}^{n+1} \frac{1}{x} \ dx $$</span> is increasing and bounded (and hence convergent). I'm having trouble.</p> </blockquote>
kccu
255,727
<p>Assuming you meant $$D_n=\sum_{k=1}^n \frac{1}{k}-\int_1^{n+1}\frac{1}{x}\ dx$$ we have \begin{align*} D_{n+1}-D_n &amp;= \left(\sum_{k=1}^{n+1} \frac{1}{k}-\int_1^{n+2}\frac{1}{x}\ dx\right)-\left(\sum_{k=1}^n \frac{1}{k}-\int_1^{n+1}\frac{1}{x}\ dx\right)\\ &amp;= \frac{1}{n+1}-\int_1^{n+2}\frac{1}{x} \ dx+\int_1^{n+1}\frac{1}{x}\ dx\\ &amp;=\frac{1}{n+1}-\int_{n+1}^{n+2} \frac{1}{x} \ dx. \end{align*} Now on $[n+1,n+2]$ we have $x \geq n+1$ so $\frac{1}{x} \leq \frac{1}{n+1}$. Hence $\int_{n+1}^{n+2} \frac{1}{x} \ dx \leq \int_{n+1}^{n+2} \frac{1}{n+1} \ dx = \frac{1}{n+1}$. In fact the inequality is strict because we only have $\frac{1}{x}=\frac{1}{n+1}$ when $x=n+1$. This shows $D_{n+1}-D_n&gt;0$ so $D_n$ is increasing.</p> <p>To see that $D_n$ is bounded, note that $\sum_{k=1}^n \frac{1}{k}$ is a left Riemann sum for $\int_1^{n+1} \frac{1}{x} \ dx$. Hence $D_n$ is the difference between an integral and left Riemann sum, so $D_n$ is bounded by $|R_n-L_n|$ where $R_n$ is a right Riemann sum and $L_n$ is a left Riemann sum for $\int_1^{n+1} \frac{1}{x} \ dx$. One particular choice of left and right Riemann sums gives $$D_n \leq \left|\sum_{k=2}^{n+1} \frac{1}{k} - \sum_{k=1}^{n}\frac{1}{k}\right|=\left|\frac{1}{n+1}-1\right|\leq \frac{1}{n+1}+1 \leq 2.$$</p>
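A quick numerical check of both claims; the observation that the limit is the Euler-Mascheroni constant is an aside of mine, not needed for the proof:

```python
import math

H, vals = 0.0, []
for n in range(1, 2001):
    H += 1 / n
    vals.append(H - math.log(n + 1))   # D_n = H_n - integral of 1/x over [1, n+1]

assert all(x < y for x, y in zip(vals, vals[1:]))   # D_n is strictly increasing
assert all(v < 1 for v in vals)                     # and bounded above
# the limit is the Euler-Mascheroni constant, approximately 0.5772156649
assert abs(vals[-1] - 0.5772156649) < 1e-3
```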
177,091
<p>What would $\int\limits_{-\infty}^\infty e^{ikx}dx$ be equal to where $i$ refers to imaginary unit? What steps should I go over to solve this integral? </p> <p>I saw this in the Fourier transform, and am unsure how to solve this.</p>
Raymond Manzoni
21,783
<p>Draks is right on this (+1): it makes sense (up to a constant) as a representation of the <a href="http://en.wikipedia.org/wiki/Dirac_delta_function#Fourier_kernels" rel="noreferrer">Dirac delta distribution</a> (it is divergent from other points of view!).</p> <p>More precisely: $$2\pi \delta(k)=\int\limits_{-\infty}^\infty e^{ikx}dx$$</p>
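One way to make the distributional identity tangible numerically: smear the truncated integral $\int_{-A}^{A} e^{ikx}\,dx = 2\sin(Ak)/k$ against a test function $\varphi$ and watch $(1/2\pi)\int \varphi(k)\,2\sin(Ak)/k\,dk$ approach $\varphi(0)$, as $2\pi\delta(k)$ would give. The Gaussian test function and quadrature parameters below are my own choices:

```python
import math

def dirichlet(A, k):
    # the truncated integral of e^{ikx} over [-A, A]: 2 sin(Ak)/k, value 2A at k = 0
    return 2.0 * A if k == 0 else 2.0 * math.sin(A * k) / k

def smeared(phi, A, K=10.0, steps=20000):
    # trapezoidal quadrature of (1/2pi) * integral over [-K, K] of phi(k) * 2 sin(Ak)/k
    h = 2.0 * K / steps
    total = 0.0
    for i in range(steps + 1):
        k = -K + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * phi(k) * dirichlet(A, k)
    return total * h / (2.0 * math.pi)

phi = lambda k: math.exp(-k * k)        # smooth, rapidly decaying test function
val = smeared(phi, A=20.0)
assert abs(val - phi(0)) < 0.01         # the smeared kernel acts like delta at 0
```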
3,986,785
<p>On page 153 of <em>Linear Algebra Done Right</em> the second edition, it says:</p> <blockquote> <p>Define a linear map <span class="math-container">$S_1: \text{range}(\sqrt{T^*T} ) \to \text{range}(T)$</span> by:</p> </blockquote> <blockquote> <p><strong>7.43:</strong> <span class="math-container">$S_1 (\sqrt{T^* T}v)=Tv$</span></p> </blockquote> <blockquote> <p>First we must check that <span class="math-container">$S_1$</span> is <strong>well defined</strong>. To do this, suppose <span class="math-container">$v_1, v_2 \in V$</span> are such that <span class="math-container">$\sqrt {T^*T}v_1 = \sqrt{T^*T}v_2$</span>. For the definition given by 7.43 to make sense, we must show that <span class="math-container">$Tv_1=T v_2$</span>.</p> </blockquote> <p>It is not entirely clear to me what the term 'well-defined' means here. Can someone clarify?</p> <p>Thanks</p>
dperrin
818,901
<p>For a function to be well-defined, its output needs to be unambiguous. So if <span class="math-container">$a$</span> = <span class="math-container">$b$</span>, then we need <span class="math-container">$f(a) = f(b)$</span>. Let's look at something that's not well-defined. Let's try to define addition on the rational numbers as <span class="math-container">$$ \frac{a}{b} + \frac{c}{d} = \frac{a + c}{b + d}. $$</span></p> <p>Let's look at <span class="math-container">$\frac{1}{2} + \frac{1}{3}$</span>. Under this proposed definition of addition we have <span class="math-container">$\frac{1}{2} + \frac{1}{3} = \frac{2}{5}$</span>. We know that <span class="math-container">$\frac{1}{2} = \frac{2}{4}$</span>. So we have <span class="math-container">$\frac{2}{4} + \frac{1}{3} = \frac{3}{7} \neq \frac{2}{5}$</span>. This definition of addition isn't well-defined because our output is ambiguous.</p> <p>Edit: I hope this makes sense.</p>
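The fraction example can be demonstrated directly; `bad_add` below operates on raw (numerator, denominator) pairs precisely because `Fraction` itself normalizes its inputs:

```python
from fractions import Fraction

def bad_add(p, q):
    # "add" raw (numerator, denominator) pairs: depends on the representation,
    # not just on the rational number represented
    (a, b), (c, d) = p, q
    return (a + c, b + d)

assert Fraction(1, 2) == Fraction(2, 4)           # the same rational number
r1 = Fraction(*bad_add((1, 2), (1, 3)))           # gives 2/5
r2 = Fraction(*bad_add((2, 4), (1, 3)))           # gives 3/7
assert r1 == Fraction(2, 5) and r2 == Fraction(3, 7)
assert r1 != r2   # equal inputs, different outputs: not well-defined
```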
311,617
<p>If $S$ is a ring and $R \subset S$ is a subring it's common to write that $S/R$ is an extension of rings. I frequently find myself writing this and read it quite often in textbooks and lecture notes. But whenever I actually think about the notation, I find it to be one of the most confusing conventions in algebra. In almost any other context $S/R$ would mean taking a quotient of $S$ by $R$. It seems much more clear to me write let $R \subset S$ be an extension of rings, but I don't see this notation used very frequently. </p> <p>So I'm wondering if there's some high level reason we use this notation that I'm not seeing. I'm also curious in what context this notation first appeared. </p>
Pedro
23,350
<p>What is true is that $\rm Tor$ commutes with <em>filtered</em> colimits. We already know that $M\otimes -$ commutes with colimits, now take a right $R$-module $M$ and a system $(N_i,\psi_{ji})$ of left $R$-modules over some filtered set $I$. First, we can obtain a short exact sequence of filtered systems </p> <p>$$0\to (K_i,\rho_{ji})\to (P_i,\tilde\psi_{ji})\to (N_i,\psi_{ji})\to 0$$</p> <p>by covering $N_i$ by $P_i=R^{(N_i)}$ via $e_n\mapsto n$ and defining $\tilde \psi_{ji}(e_n)=e_{\psi_{ji}n}$, the condition $\psi_{kj}\psi_{ji}=\psi_{ki}$ is immediately inherited by the $\tilde \psi_{ji}$, similarly for the induced morphisms $\rho_{ji}:K_i\to K_j$. Since $I$ is filtered, $\rm colim$ is exact, giving an exact sequence </p> <p>$$0\to {\rm colim}\; K_i \to {\rm colim}\; P_i \to {\rm colim}\;N_i\to 0$$</p> <p>Call the terms $K,P,N$ for brevity. Since each $P_i$ is flat and $I$ is filtered, $P$ is flat. We may then use the long exact sequence for $\rm Tor$ and obtain a diagram $\require{AMScd}$ $$\begin{CD} \\ {\rm Tor_1}\;(M,P) @&gt;&gt;&gt; {\rm Tor_1}\;(M,N) @&gt;&gt;&gt; M\otimes K @&gt;&gt;&gt; M\otimes P \\ {}&amp;{}&amp; {} &amp;{}&amp; @VVV @VVV \\ {\rm colim}\;{\rm Tor_1}\;(M,P_i)@&gt;&gt;&gt; {\rm colim}\;{\rm Tor_1}\;(M,N_i)@&gt;&gt;&gt; {\rm colim}\; M\otimes K_i@&gt;&gt;&gt; {\rm colim}\; M\otimes P_i \\ \end{CD}$$</p> <p>The first two columns vanish since $P_i,P$ are flat, and the last two columns are connected by natural isomorphisms that give a commutative diagram. You can check that if you have an incomplete commutative diagram with exact rows $$\begin{CD} \\ 0@&gt;&gt;&gt; A @&gt;&gt;&gt; B @&gt;&gt;&gt; C \\ {}&amp;{}&amp; {} &amp;{}&amp; @VVV @VVV \\ 0 @&gt;&gt;&gt; A' @&gt;&gt;&gt; B' @&gt;&gt;&gt; C' \\ \end{CD}$$</p> <p>you may always complete it, and the morphism introduced is an isomorphism if both vertical arrows are. This gives the isomorphism for $n=1$. The case $n\geqslant 2$ is handled by dimension shifting. 
Indeed, we get a diagram $$\begin{CD} \\ 0@&gt;&gt;&gt; {\rm Tor}_2(M,{\rm colim}\;N_i) @&gt;\partial &gt;&gt; {\rm Tor}_1(M,{\rm colim}\;K_i) @&gt;&gt;&gt; 0 \\ {}&amp;{}&amp; {} &amp;{}&amp; @VVV \\ 0 @&gt;&gt;&gt; {\rm colim}\;{\rm Tor}_2(M,N_i) @&gt;\partial &gt;&gt; {\rm colim}\;{\rm Tor}_1(M,K_i) @&gt;&gt;&gt; 0 \\ \end{CD}$$</p> <p>and we get the desired isomorphism by conjugating the vertical isomorphism. One may show the isomorphism induced is natural, much like that of $M\otimes {\rm colim}$ and ${\rm colim}\; M\otimes $, but that is a bit more tortuous. Note this gives naturality of all the upper isomorphisms, since they are obtained by composing the natural connection morphisms and the natural isomorphism in the case $n=1$. </p>
4,139,012
<p>Unfortunately, I don't know much about this topic, that's why I apologize for my language. I have two points <code>P1</code> and <code>P2</code>, now I want &quot;to get from one to the other&quot;, but I can only go a distance <code>d</code>. And I want to get as close as possible from <code>P1</code> to <code>P2</code>.</p> <p>Here is an example:</p> <blockquote> <p>d=400, P1(0,0), P2(500,500)</p> </blockquote> <p>Now the formula should give me <code>P3(282.843, 282.843)</code>.</p> <p>This works with the formula <span class="math-container">$$d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$$</span> but this only works if <code>x</code> and <code>y</code> of P3 are equal, and I can't control where <code>P2</code> is.</p> <p>I hope you understand what I mean. This might be a very stupid question, but I don't know how to search for a solution. Maybe you can help me with a keyword for this problem, formatting the answer right or a solution.</p> <p>Thank you</p>
Siong Thye Goh
306,553
<p>We travel along the straight line <span class="math-container">$x=y$</span>, since this line passes through the two points. We are interested in the case <span class="math-container">$x&gt;0$</span>.</p> <p><span class="math-container">$$2x^2=d^2$$</span></p> <p><span class="math-container">$$x=\frac{d}{\sqrt2}=200\sqrt2$$</span></p> <p>Hence the solution is <span class="math-container">$(200\sqrt2, 200\sqrt2)$</span></p>
4,139,012
<p>Unfortunately, I don't know much about this topic, that's why I apologize for my language. I have two points <code>P1</code> and <code>P2</code>, now I want &quot;to get from one to the other&quot;, but I can only go a distance <code>d</code>. And I want to get as close as possible from <code>P1</code> to <code>P2</code>.</p> <p>Here is an example:</p> <blockquote> <p>d=400, P1(0,0), P2(500,500)</p> </blockquote> <p>Now the formula should give me <code>P3(282.843, 282.843)</code>.</p> <p>This works with the formula <span class="math-container">$$d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$$</span> but this only works if <code>x</code> and <code>y</code> of P3 are equal, and I can't control where <code>P2</code> is.</p> <p>I hope you understand what I mean. This might be a very stupid question, but I don't know how to search for a solution. Maybe you can help me with a keyword for this problem, formatting the answer right or a solution.</p> <p>Thank you</p>
Glärbo
918,887
<p>Although Siong Thye Goh already provided the <a href="https://math.stackexchange.com/a/4139525/918887">correct answer</a>, I wanted to show the answer explicitly in parametric form.</p> <p>Let <span class="math-container">$\vec{p}_0$</span> be the starting point, <span class="math-container">$\vec{p}_1$</span> the target, and <span class="math-container">$d$</span> the distance one can travel.</p> <p>If <span class="math-container">$d \ge \lVert \vec{p}_1 - \vec{p}_0 \rVert$</span>, one can travel from <span class="math-container">$\vec{p}_0$</span> to <span class="math-container">$\vec{p}_1$</span>. Otherwise, one can travel only to <span class="math-container">$\vec{p}$</span>, <span class="math-container">$$\vec{p} = \vec{p}_0 + \left( \vec{p}_1 - \vec{p}_0 \right) \frac{d}{\left\lVert \vec{p}_1 - \vec{p}_0 \right\rVert} \tag{1}\label{G1}$$</span> where <span class="math-container">$\lVert\vec{v}\rVert = \sqrt{\vec{v} \cdot \vec{v}} = \sqrt{\sum_i v_i^2}$</span> is the Euclidean norm (length) of vector <span class="math-container">$\vec{v}$</span>.</p> <p>In two dimensions, with <span class="math-container">$\vec{p} = (x, y)$</span>, <span class="math-container">$\vec{p}_0 = (x_0, y_0)$</span>, and <span class="math-container">$\vec{p}_1 = (x_1, y_1)$</span>, <span class="math-container">$$\lambda = \frac{d}{\sqrt{ (x_1 - x_0)^2 + (y_1 - y_0)^2 }}$$</span> and <span class="math-container">$$\left\lbrace \begin{aligned} x &amp;= (1 - \lambda) x_0 + \lambda x_1 \\ y &amp;= (1 - \lambda) y_0 + \lambda y_1 \\ \end{aligned} \right.$$</span> and in three dimensions, <span class="math-container">$$\lambda = \frac{d}{\sqrt{ (x_1 - x_0)^2 + (y_1 - y_0)^2 + (z_1 - z_0)^2 }} \\ $$</span>and<span class="math-container">$$\left\lbrace \begin{aligned} x &amp;= (1 - \lambda) x_0 + \lambda x_1 \\ y &amp;= (1 - \lambda) y_0 + \lambda y_1 \\ z &amp;= (1 - \lambda) z_0 + \lambda z_1 \\ \end{aligned} \right.$$</span> noting that <span class="math-container">$(1-\lambda) x_0 + 
\lambda x_1 = x_0 + \lambda ( x_1 - x_0 )$</span>; where <span class="math-container">$\lambda$</span> is just a temporary parameter (&quot;lambda&quot;, for &quot;length scale factor&quot;).</p> <p>In a computer program that uses floating-point numbers the first form is preferable, because the latter form can suffer from domain cancellation. When, say, <span class="math-container">$x_1$</span> is large in magnitude and <span class="math-container">$x_0$</span> relatively small in magnitude (so much closer to zero), <span class="math-container">$x_1 - x_0 = x_1$</span> using floating-point numbers due to the finite precision! Then, at very small <span class="math-container">$\lambda$</span> (<span class="math-container">$d$</span> very small compared to the distance between <span class="math-container">$\vec{p}_0$</span> and <span class="math-container">$\vec{p}_1$</span>), <span class="math-container">$x = 0$</span> and not <span class="math-container">$x_0$</span>. Similarly for the case when <span class="math-container">$x_0$</span> is close to zero, and <span class="math-container">$x_1$</span> is very large, and for the other Cartesian coordinates. Using <span class="math-container">$(1 - \lambda)$</span> and <span class="math-container">$\lambda$</span> avoids the domain cancellation (because it calculates the contribution of each point separately), so this is precise near both the start and end points, regardless of the numerical magnitudes.</p>
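A minimal implementation of formula (1), checked against the example from the question; the function name is my own:

```python
import math

def step_toward(p0, p1, d):
    # move from p0 toward p1 by at most distance d, using the (1-lambda), lambda form
    dist = math.dist(p0, p1)
    if d >= dist:
        return p1                      # we can reach the target outright
    lam = d / dist
    return tuple((1 - lam) * x0 + lam * x1 for x0, x1 in zip(p0, p1))

p = step_toward((0.0, 0.0), (500.0, 500.0), 400.0)
# matches the question's expected P3 of roughly (282.843, 282.843)
assert all(abs(c - 400 / math.sqrt(2)) < 1e-9 for c in p)
```

The same function works unchanged in three (or more) dimensions, since it only zips over coordinates.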
527,665
<p>I need to bound a sum of the form $\sum_{j=1}^m |z+j|^{-n}$ with $\Re z\ge1$ and $n \ge 2$. I am searching for a bound of the form $$\sum_{j=1}^m |z+j|^{-n} \le C |z|^{-n+1}.$$</p> <p>It is easy to see that</p> <p>$$\sum_{j=1}^m |z+j|^{-n} \le \int_{0}^{m+1} |z+x|^{-n}dx \le \int_{0}^{\infty} |z+x|^{-n}dx.$$</p> <p>My problem is that I cannot compute the integral. By setting $z=r+i s$, we have $$\int_0^\infty ((r+x)^2+s^2)^{-\frac n2} dx.$$</p> <p>My idea was to use integration by parts to transform it to an integrable form, but so far I cannot find how I am going to do that. Any help is welcome.</p>
Community
-1
<p>We have $(r+x)^2 + s^2 \geq (r+x)^2$. Hence, we have $$\int_0^{\infty} \dfrac{dx}{((r+x)^2 + s^2)^{n/2}} \leq \int_0^{\infty} \dfrac{dx}{(r+x)^n} = \dfrac{r^{-n+1}}{n-1} &lt; r^{-n+1}$$ We also have $(r+x)^2 + s^2 \geq x^2+s^2$. Hence, we have \begin{align} \int_0^{\infty} \dfrac{dx}{((r+x)^2 + s^2)^{n/2}} &amp; \leq \int_0^{\infty} \dfrac{dx}{(x^2+s^2)^{n/2}} = \int_0^{\pi/2} \dfrac{\vert s \vert \sec^2(t) dt}{\vert s \vert^n \sec^n(t)}\\ &amp; = \vert s \vert^{-n+1} \int_0^{\pi/2} \cos^{n-2}(t)dt &lt; \vert s \vert^{-n+1} \dfrac{\pi}2 \end{align} Hence, we have (make use of the fact that $\vert r \vert + \vert s \vert \leq 2 \vert z\vert$) $$\int_0^{\infty} \dfrac{dx}{((r+x)^2 + s^2)^{n/2}} \leq C \vert z \vert^{-n+1}$$</p>
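For $n=2$ the integral has a closed form, $\frac1s\left(\frac\pi2 - \arctan\frac rs\right)$, which makes both intermediate bounds easy to spot-check numerically; the sample values of $r$ and $s$ below are arbitrary:

```python
import math

def I2(r, s):
    # closed form of the integral over [0, inf) of dx / ((r+x)^2 + s^2), i.e. n = 2
    return (math.pi / 2 - math.atan(r / s)) / s

for r in (1.0, 2.5, 10.0):
    for s in (0.5, 1.0, 7.0):
        val = I2(r, s)
        assert val <= 1 / r + 1e-12               # bound via (r+x)^2 + s^2 >= (r+x)^2
        assert val <= (math.pi / 2) / s + 1e-12   # bound via (r+x)^2 + s^2 >= x^2 + s^2
        # combined bound C|z|^{-1} with C = pi/sqrt(2), since max(r, s) >= |z|/sqrt(2)
        assert val <= math.pi / (math.sqrt(2) * math.hypot(r, s)) + 1e-12
```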
133,109
<p>How can I integrate,</p> <p>$$ \int_n^{+\infty} x \exp\{-ax^2+bx+c\}dx $$</p> <p>and what's the result w.r.t the Gaussian function's p.d.f $p(x)$ and c.d.f $\phi(x)$?</p> <p>Thanks!</p>
Dilip Sarwate
15,941
<p>Hint: Since $\frac{\mathrm d}{\mathrm dx}\exp(-ax^2+bx+c) = (-2ax+b)\exp(-ax^2+bx+c)$, you can massage the given integrand to something of the form $$\frac{-1}{2a}\int (-2ax+b)\exp(-ax^2+bx+c)\ \mathrm dx + \int \frac{b}{2a}\exp(-ax^2+bx+c)\ \mathrm dx$$ where the first integrand is now an exact derivative, and the second, after further massaging, will give you something involving $\Phi(\cdot)$, the cdf of the standard normal (Gaussian) random variable. Note that @Wonder's answer does not get the second term.</p>
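<p>Carrying the hint to completion under the assumption <span class="math-container">$a&gt;0$</span> (and writing the Gaussian tail with <span class="math-container">$\operatorname{erfc}$</span> rather than <span class="math-container">$\Phi$</span>, since that is what Python's standard library exposes), the closed form can be checked against crude quadrature — the sample values below are arbitrary:</p>

```python
import math

a, b, c, n = 1.0, 0.5, 0.0, 1.0   # sample values, a > 0

def f(x):
    return x * math.exp(-a * x * x + b * x + c)

# closed form: integral_n^oo x e^{-ax^2+bx+c} dx
#   = e^{-an^2+bn+c}/(2a)
#     + (b/(2a)) e^{c + b^2/(4a)} (1/2) sqrt(pi/a) erfc(sqrt(a) (n - b/(2a)))
closed = (math.exp(-a * n * n + b * n + c) / (2 * a)
          + (b / (2 * a)) * math.exp(c + b * b / (4 * a))
            * 0.5 * math.sqrt(math.pi / a)
            * math.erfc(math.sqrt(a) * (n - b / (2 * a))))

# crude midpoint quadrature on [n, n + 20]; the Gaussian tail beyond is negligible
steps, width = 200_000, 20.0
h = width / steps
numeric = h * sum(f(n + (i + 0.5) * h) for i in range(steps))

print(closed, numeric)
```

<p>The <span class="math-container">$\operatorname{erfc}$</span> term is exactly the "second term" of the hint, rewritten as <span class="math-container">$\operatorname{erfc}(z) = 2(1-\Phi(z\sqrt2))$</span>.</p>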
1,723,299
<p>I can work out an heuristic argument for $n=2$: (homeomorphically) turning the disc $D^2$ to something like a funnel (no pipe of course), then gradually contracting the open "mouth" of the funnel to a smaller and smaller hole which eventually vanishes (or "heals", if you imagine the hole as a open wound), we have now got an $S^2$. </p> <p>This argument unfortunately doesn't carry over into higher dimensions. So I think I have to find another approach. Hopefully this simple-looking result $D^n/S^{n-1}\cong S^n$ doesn't entail a dreadfully hard proof. So is there any easy way out or is there any theorem that the result is just a few steps away from? Thanks in advance. </p>
Community
-1
<p>There are probably several ways to see this. One way is to realize that the $n$-sphere is homeomorphic to the one-point compactification $\mathbb{R}^n\cup\{\infty\}$ of $\mathbb{R}^n$ (you can prove this explicitly using the stereographic projection). Now recall that the open unit $n$-disc (the interior of $D^n$) is homeomorphic to $\mathbb{R}^n$. Name this homeomorphism $\phi$. Define the map $$f:D^n/S^{n-1}\to \mathbb{R}^{n}\cup\{\infty\}$$ by $\phi$ for all points not on the boundary of $D^n$ and send every point on the boundary $S^{n-1}$ to the point $\infty$. It's easy to show that this map defines a homeomorphism on the quotient.</p>
32,111
<p>Let $p : Y \to X$ be an $n$-sheeted covering map, where $X$ and $Y$ are topological spaces. If $X$ is compact, prove that $Y$ is compact.</p> <p>I realize that this seems like a very simple problem, but I want to stress the lack of assumptions on $X$ and $Y$. For example, this is very easy to prove if we can assume that $X$ and $Y$ are metrizable, for sequential compactness is then equivalent to compactness and it is easy to lift sequential compactness from $X$ to $Y$.</p> <p>I asked three people in person this question and all of them immediately made the assumption that $X$ and $Y$ are metrizable, so I feel like I should put in this warning here that they are not.</p>
Georges Elencwajg
450
<p>Dear Eric, here is a Bourbaki-style proof.</p> <p>Recall that a continuous map $f: Y\to X$ is called proper by Bourbaki if, for all spaces $Z$, the map $f\times 1_Z: Y \times Z\to X \times Z$ is closed. For example the trivial finite covering $X\times \{ 1,\ldots n \}\to X$ is proper.</p> <p>Now, your $X$ is covered by opens $X_\iota \subset X$ such that the restricted/corestricted maps $f_{X_\iota }:f^{-1} (X_\iota) \to X_\iota $ are trivial finite coverings, hence are proper by the example above. We deduce that the original covering $f:Y\to X$ is proper: this follows easily from the definition of "proper" and (if a reference is needed) is proved in Bourbaki's General Topology, Chapter 1, §10, Proposition 3.</p> <p>But a proper map has the property that the inverse image of a quasi-compact subset of the target (in our case all of $X$) is quasi-compact (ibid., Proposition 6). Hence $Y$ is quasi-compact if $X$ is.</p> <p><strong>NB</strong> I have used Bourbaki's definition "universally closed" for proper. As I said, this <em>implies</em> that inverse images of quasi-compact subsets are quasi-compact.This last property is often taken as the definition of proper. For locally compact spaces, both definitions coincide. </p>
619,607
<p>$\mathbf{(1)}$ Find $y^{\prime}$ of $y=8^{\sqrt x}$ </p> <p>My try: </p> <p>$\ln y=\ln(8)^{\sqrt x}$<br> $\dfrac{1}{y}y^{\prime}=\sqrt{x}\ln8$<br> I don't know how to proceed with right side. </p> <p>$\mathbf{(2)}$ Find $y^{\prime}$ of $y=(t+4)(t+6)(t+7).$</p> <p>This one I have no idea what to do so I don't have any work to show. My text says to use logarithmic differentiation, but still I don't how to solve this. </p> <p>Thank you. </p>
QED
91,884
<p>$\ln y=\sqrt{x}\ln8$, so $\frac{1}{y}\frac{dy}{dx}=\frac{1}{2\sqrt{x}}\ln8$. Therefore $$y'=\frac{y}{2\sqrt{x}}\ln8=\frac{8^{\sqrt{x}}}{2\sqrt{x}}\ln8$$</p> <p>For the second question, except for $t=-4,-6,-7$, you may take the logarithm of both sides and note that $\frac{d}{dt}\ln(t+a)=\frac{1}{t+a}$, and then carry on as above. However, you can also solve this problem by expanding and differentiating directly; then you need not even worry about whether $t$ takes the aforementioned values.</p>
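<p>Both results can be spot-checked with a central finite difference — a quick Python sketch, where the evaluation points are arbitrary:</p>

```python
import math

def diff(f, x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# (1)  y = 8^sqrt(x),  y' = 8^sqrt(x) * ln 8 / (2 sqrt(x))
f = lambda x: 8.0 ** math.sqrt(x)
x = 4.0
exact1 = f(x) * math.log(8) / (2 * math.sqrt(x))
print(exact1, diff(f, x))

# (2)  y = (t+4)(t+6)(t+7); logarithmic differentiation gives
#      y' = y * (1/(t+4) + 1/(t+6) + 1/(t+7))
g = lambda t: (t + 4) * (t + 6) * (t + 7)
t = 1.0
exact2 = g(t) * (1 / (t + 4) + 1 / (t + 6) + 1 / (t + 7))
print(exact2, diff(g, t))
```

<p>At $t=1$ the second value is $280\left(\frac15+\frac17+\frac18\right)=131$, agreeing with direct expansion ($y'=3t^2+34t+94$).</p>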
3,746,986
<p>Let <span class="math-container">$\phi:\mathbb (0,\infty) \to [0,\infty)$</span> be a continuous function, and let <span class="math-container">$c \in (0,\infty)$</span> be fixed.</p> <p>Suppose that &quot;<span class="math-container">$\phi$</span> is convex at <span class="math-container">$c$</span>&quot;. i.e. for any <span class="math-container">$x_1,x_2&gt;0, \alpha \in [0,1]$</span> satisfying <span class="math-container">$\alpha x_1 + (1- \alpha)x_2 =c$</span>, we have <span class="math-container">$$ \phi(c)=\phi\left(\alpha x_1 + (1- \alpha)x_2 \right) \leq \alpha \phi(x_1) + (1-\alpha)\phi(x_2) . $$</span></p> <p>Assume also that <span class="math-container">$\phi$</span> is strictly decreasing in a neighbourhood of <span class="math-container">$c$</span>.</p> <blockquote> <p>Do the one-sided derivatives <span class="math-container">$\phi'_{-}(c),\phi'_{+}(c)$</span> necessarily exist?</p> </blockquote> <p><strong>Edit:</strong></p> <p>As pointed by Aryaman Maithani if <span class="math-container">$c$</span> is a global minimum of <span class="math-container">$\phi$</span>, then clearly <span class="math-container">$\phi$</span> is convex at <span class="math-container">$c$</span>, but there should be no reason to expect for existence of one-sided derivatives. (e.g. <span class="math-container">$\phi(x)=\sqrt{|x|}, c=0$</span>).</p> <p><strong>Edit 2:</strong></p> <p>In the example described <a href="https://math.stackexchange.com/a/3747376/104576">here</a>, the left derivative does not exist. Can we create an example where the right derivative does not exist?</p>
Aryaman Maithani
427,810
<p>Define <span class="math-container">$\phi:(-1, \infty) \to [-1, \infty)$</span> as <span class="math-container">$$\phi(x) = \begin{cases} \sqrt{1 - (1+x)^2} &amp; x \le 0\\ -x &amp; 0 \le x \le 1 \\ -1 &amp; 1 \le x\end{cases}$$</span></p> <p>A graph is shown below. (Courtesy of <a href="https://www.desmos.com/calculator" rel="nofollow noreferrer">Desmos</a>.)<br /> <a href="https://i.stack.imgur.com/tofHV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tofHV.png" alt="The graph" /></a></p> <p>Clearly, <span class="math-container">$\phi$</span> is continuous and strictly decreasing in <span class="math-container">$(-1, 1)$</span>. Thus, choosing <span class="math-container">$c = 0$</span> satisfies the conditions. (It has to be shown that <span class="math-container">$\phi$</span> is convex at this point but that is simple.)<br /> However, the limit <span class="math-container">$\displaystyle\lim_{x\to0^-}\phi'(x)$</span> does not exist (as a real number).</p> <hr /> <p>To meet the conditions of your domain and codomain, consider <span class="math-container">$\tilde \phi := [x \mapsto \phi(x-1)+1].$</span></p>
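<p>One can watch the left difference quotient at <span class="math-container">$c = 0$</span> blow up numerically — a small Python sketch of the construction above:</p>

```python
import math

def phi(x):
    # the counterexample, defined piecewise on (-1, oo)
    if x <= 0:
        return math.sqrt(1 - (1 + x) ** 2)
    if x <= 1:
        return -x
    return -1.0

# left difference quotients (phi(h) - phi(0)) / h for h -> 0-;
# these behave like -sqrt(2/|h|), so they diverge to -infinity
for h in (-1e-2, -1e-4, -1e-6, -1e-8):
    print(h, (phi(h) - phi(0)) / h)
```

<p>Meanwhile the right quotient is constantly <span class="math-container">$-1$</span>, so <span class="math-container">$\phi'_{+}(0)$</span> exists but <span class="math-container">$\phi'_{-}(0)$</span> does not.</p>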
2,611,450
<p>I'm having trouble understanding the description of the event in italics (with P = 0.8) in the question below. </p> <p>"The probability that an American industry will locate in Shanghai is P(S) = 0.7.<br> The probability that it will locate in Beijing is P(B) = 0.4.<br> And the <em>probability that it will locate in either Shanghai or Beijing or both is 0.8</em>.<br> What is the probability that the industry will locate: (a) in both cities? (b) in neither city?"</p> <p><strong>Doubt:</strong> Why the answer to (a) isn't P = 0.8? </p> <p>The question itself says that "<em>the probability that it will locate in either Shanghai or Beijing or both is 0.8</em>". Isn't it saying P(A∪B) = P(A∩B) = 0.8?</p> <p>I've found many solutions available and the answers are (a) 0.3 and (b) 0.2 but I still cannot understand why it is not 0.8.</p> <p>Thanks.</p>
Doug M
317,162
<p>What does this relation actually mean?</p> <p>It really is all about the equivalence of fractions: $\frac {a}{b} = \frac cd \iff ad = bc$</p> <p>Now, instead of writing these fractions in the more familiar form, we show them as an ordered pair $(a,b)$.</p> <p>reflexive: </p> <p>$(a,b)R(a,b) \iff ab = ba$</p> <p>symmetric:</p> <p>$(a,b)R(c,d) \implies (c,d)R(a,b)$</p> <p>$ad = bc \implies cb = da$</p> <p>transitive:</p> <p>$(a,b)R(c,d)$ and $(c,d)R(p,q) \implies (a,b)R(p,q)$</p> <p>$ad = bc$ and $cq = dp$</p> <p>$adp = bcp\\ acq = adp = bcp\\ aq = bp$</p> <p>d)</p> <p>Is the function $f(a,b) = \frac ab$ bijective?</p> <p>It is not injective, as $(1,2)$ and $(2,4)$ both map onto the same element of $\mathbb Q$.</p>
1,778,601
<p>Let $G$, $H$ be finite groups. </p> <p>For a homomorphism $\varphi \colon$ $G \to H$ we learned that for $g \in G$, $ord(\varphi(g))$ must divide $|G|$ and $|H|$. </p> <p>So then, (in essence), what we did was look at the common factors of $|G|$ and $|H|$ and set $\varphi(x) = \gamma $ where $x$ is a generator of $G$ and $\gamma \in H$ s.t. $ord(\gamma)$ divides both $|G|$ and $|H|$.</p> <hr> <p>However, I came across an example where this seemed to break-down – I was hoping for some guidance in this case. </p> <p>For homomorphisms between $C_4$ and $C_2 \times C_2$, we have $|C_4| = |C_2 \times C_2| = 4$. </p> <p>This implies (with our strategy above) that if $C_4 = \lbrace 1, x, x^2, x^3 \rbrace$, $x^4 = 1$ then $\varphi(x)$ can have order 1, 2 or 4. </p> <p>However, the example goes on to state that we cannot set $\varphi(x)$ to have an order of 4, as it is unattainable.</p> <p>Does this mean that my condition of $\varphi(x)$ dividing both $|G|$ and $|H|$ is necessary but not sufficient? For such cases (as in my example) how would I go about correctly stating all the homomorphisms under a time pressure? (i.e. is there a way for me to check, without too much calculation, that setting $\varphi(x)$ to an element with order 4 would not produce a homomorphism?)</p> <p>I'm sure my mistake is somewhat elementary, any help on this would be much appreciated!</p> <hr> <p>Follow up question: I don't understand why the line $\varphi_i$ is a homomorphism iff ord($\varphi_i$) divides ord(g) is true in this image <a href="https://imgur.com/ICFuuQG" rel="nofollow noreferrer">Link</a> </p>
Marko Riedel
44,883
<p>Suppose we have $n$ items of $m$ types where $n=km.$ We draw $p$ items and ask about the expected value of the number of distinct items that appear.</p> <p><P>First compute the total count of possible configurations. The species here is </p> <p>$$\mathfrak{S}_{=m}(\mathfrak{P}_{\le k}(\mathcal{Z})).$$</p> <p>This gives the EGF $$G_0(z) = \left(\sum_{q=0}^k \frac{z^q}{q!}\right)^m.$$</p> <p>The count of configurations is then given by (compute this as a sanity check)</p> <p>$$p! [z^p] G_0(z).$$</p> <p>Note however that in order to account for the probabilities we are missing a multinomial coefficient to represent the configurations beyond $p.$ If a set of size $q$ was chosen for a certain type among the first $p$ elements that leaves $k-q$ elements to distribute. Therefore we introduce</p> <p>$$G_1(z) = \left(\sum_{q=0}^k \frac{z^q}{q! (k-q)!}\right)^m.$$</p> <p>The desired quantity is then given by</p> <p>$$(n-p)! p! [z^p] G_1(z) = (n-p)! p! \frac{1}{(k!)^m} [z^p] \left(\sum_{q=0}^k {k\choose q} z^q\right)^m \\ = (n-p)! p! [z^p] \frac{1}{(k!)^m} (1+z)^{km} = {n\choose p} \frac{1}{(k!)^m} (n-p)! p! \\ = \frac{n!}{(k!)^m}$$ </p> <p>The sanity check goes through. What we have done here in this introductory section is classify all ${n\choose k,k,\ldots k}$ combinations according to some fixed value of $p,$ extracting the information of the distribution of values among the first $p$ items. We should of course get all of them when we do this, and indeed we do.</p> <p><P>Now we need to mark zero size sets. The count of zero size sets counts the types that are not present. We get the EGF</p> <p>$$H(z,u) = \left(\frac{u}{k!} + \sum_{q=1}^k \frac{z^q}{q! (k-q)!}\right)^m.$$</p> <p>For the count of the number of types that are not present we thus obtain</p> <p>$$(n-p)! p! [z^p] \left.\frac{\partial}{\partial u} H(z, u)\right|_{u=1}.$$</p> <p>We have</p> <p>$$\frac{\partial}{\partial u} H(z, u) = m \left(\frac{u}{k!} + \sum_{q=1}^k \frac{z^q}{q! 
(k-q)!}\right)^{m-1} \frac{1}{k!}$$</p> <p>Evaluate this at $u=1$ to get</p> <p>$$\frac{m}{k!} \left(\sum_{q=0}^k \frac{z^q}{q! (k-q)!}\right)^{m-1}.$$</p> <p>Extracting coefficients yields</p> <p>$$\frac{m}{k!} (n-p)! p! [z^p] \frac{1}{(k!)^{m-1}} \left(\sum_{q=0}^k {k\choose q} z^q\right)^{m-1} \\ = \frac{m}{k!} (n-p)! p! [z^p] \frac{1}{(k!)^{m-1}} (1+z)^{km-k} \\ = {n-k\choose p} m (n-p)! p! \frac{1}{(k!)^{m}}.$$</p> <p>Therefore the expectation turns out to be</p> <p>$$m - m {n-k\choose p} {n\choose p}^{-1} = m \left(1 - {n-k\choose p} {n\choose p}^{-1}\right).$$</p> <p><strong>Remark.</strong> The simplicity of this answer is evident and an elegant and straightforward probabilistic argument is sure to appear.</p> <p><P> <strong>Remark II.</strong> For any remaining sceptics and those seeking to know more about the probability model used here I present the Maple code for this work, which includes total enumeration as well as the formula from above. Routines ordered according to efficiency and resource consumption.</p> <pre> with(combinat); Q := proc(m, k, p) option remember; local n, perm, items, dist, res; n := m*k; items := [seq(seq(r, q=1..k), r=1..m)]; res := 0; for perm in permute(items) do dist := convert([seq(perm[q], q=1..p)], &#96;set&#96;); res := res + nops(dist); od; res/(n!/(k!)^m); end; QQ := proc(m, k, p) option remember; local n, perm, items, dist, rest, res; n := m*k; items := [seq(seq(r, q=1..k), r=1..m)]; res := 0; for perm in choose(items, p) do dist := convert(perm, &#96;set&#96;); rest := p!* (n-p)! /mul(q[2]!*(k-q[2])!, q in convert(perm, &#96;multiset&#96;)); rest := rest/(k!)^(m-nops(dist)); res := res + rest*nops(dist); od; res/(n!/(k!)^m); end; X := proc(m, k, p) local n; n := m*k; m*(1-binomial(n-k,p)/binomial(n,p)); end; </pre>
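<p>The closed form <span class="math-container">$m\left(1 - {n-k\choose p}{n\choose p}^{-1}\right)$</span> can also be confirmed by exact brute force over all equally likely draws, in the spirit of the Maple routine QQ above — a Python sketch for one small case, <span class="math-container">$m=3$</span>, <span class="math-container">$k=2$</span>, <span class="math-container">$p=3$</span>:</p>

```python
from fractions import Fraction
from itertools import combinations
from math import comb

m, k, p = 3, 2, 3
n = m * k
items = [t for t in range(m) for _ in range(k)]  # k copies of each of m types

# exact expectation of the number of distinct types among p drawn items,
# averaging over all equally likely p-subsets of positions
total = sum(len(set(items[i] for i in idx))
            for idx in combinations(range(n), p))
brute = Fraction(total, comb(n, p))

formula = m * (1 - Fraction(comb(n - k, p), comb(n, p)))
print(brute, formula)
```

<p>Both come out to <span class="math-container">$12/5$</span> here, matching the formula.</p>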
4,531,998
<p>As you increase the value of n, you will generate all pythagorean triples whose first square is even. Is there any visual proof of the following explicit formula and where does it come from or how to derive it?</p> <p><span class="math-container">$(2n)^2 + (n^2 - 1)^2 = (n^2 + 1)^2$</span></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> </tr> </thead> <tbody> <tr> <td><span class="math-container">$(2*0)^2+(0^2-1)^2=(0^2+1)^2$</span></td> <td><span class="math-container">$(2*1)^2+(1^2-1)^2=(1^2+1)^2$</span></td> <td><span class="math-container">$(2*2)^2+(2^2-1)^2=(2^2+1)^2$</span></td> </tr> <tr> <td><span class="math-container">$(2*0)^2+(0-1)^2=(0+1)^2$</span></td> <td><span class="math-container">$(2*1)^2+(1-1)^2=(1+1)^2$</span></td> <td><span class="math-container">$(2*2)^2+(4-1)^2=(4+1)^2$</span></td> </tr> <tr> <td><span class="math-container">$0^2+1^2=1^2$</span></td> <td><span class="math-container">$2^2+0^2=2^2$</span></td> <td><span class="math-container">$4^2+3^2=5^2$</span></td> </tr> <tr> <td><span class="math-container">$0+1=1$</span></td> <td><span class="math-container">$4+0=4$</span></td> <td><span class="math-container">$16+9=25$</span></td> </tr> <tr> <td><span class="math-container">$1=1$</span></td> <td><span class="math-container">$4=4$</span></td> <td><span class="math-container">$25=25$</span></td> </tr> </tbody> </table> </div>
Community
-1
<p>We have the identity <span class="math-container">$a^{2}- b^{2}= \left ( a- b \right )\left ( a+ b \right )\!,$</span> so <span class="math-container">$$\left ( m^{2}+ 1 \right )^{2}- \left ( m^{2}- 1 \right )^{2}\!=\!2m^{2}\cdot 2= \left ( 2m \right )^{2}$$</span></p>
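<p>Since the identity is polynomial, it holds for every integer <span class="math-container">$n$</span>; a one-line Python check over a range of values:</p>

```python
# verify (2n)^2 + (n^2 - 1)^2 == (n^2 + 1)^2 for many n
for n in range(1000):
    assert (2 * n) ** 2 + (n * n - 1) ** 2 == (n * n + 1) ** 2

# e.g. n = 2 gives the familiar triple (4, 3, 5)
n = 2
print(2 * n, n * n - 1, n * n + 1)
```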
4,531,998
<p>As you increase the value of n, you will generate all pythagorean triples whose first square is even. Is there any visual proof of the following explicit formula and where does it come from or how to derive it?</p> <p><span class="math-container">$(2n)^2 + (n^2 - 1)^2 = (n^2 + 1)^2$</span></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> </tr> </thead> <tbody> <tr> <td><span class="math-container">$(2*0)^2+(0^2-1)^2=(0^2+1)^2$</span></td> <td><span class="math-container">$(2*1)^2+(1^2-1)^2=(1^2+1)^2$</span></td> <td><span class="math-container">$(2*2)^2+(2^2-1)^2=(2^2+1)^2$</span></td> </tr> <tr> <td><span class="math-container">$(2*0)^2+(0-1)^2=(0+1)^2$</span></td> <td><span class="math-container">$(2*1)^2+(1-1)^2=(1+1)^2$</span></td> <td><span class="math-container">$(2*2)^2+(4-1)^2=(4+1)^2$</span></td> </tr> <tr> <td><span class="math-container">$0^2+1^2=1^2$</span></td> <td><span class="math-container">$2^2+0^2=2^2$</span></td> <td><span class="math-container">$4^2+3^2=5^2$</span></td> </tr> <tr> <td><span class="math-container">$0+1=1$</span></td> <td><span class="math-container">$4+0=4$</span></td> <td><span class="math-container">$16+9=25$</span></td> </tr> <tr> <td><span class="math-container">$1=1$</span></td> <td><span class="math-container">$4=4$</span></td> <td><span class="math-container">$25=25$</span></td> </tr> </tbody> </table> </div>
Blue
409
<p>Here's a visual proof:</p> <p><a href="https://i.stack.imgur.com/H7XFV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H7XFV.png" alt="enter image description here" /></a></p> <p>(This space intentionally left blank.)</p>
907,729
<p>I am sorry if the numbers are not formatted, I have searched but found nothing on how. I am trying to multiply $$(2x^3 - x)\left(\sqrt{x} + \frac {2}{x}\right)$$ together and I arrive at a different answer that does not match the one given. The approach I take is to assign the numbers powers then multiply them by the lattice method. So $$(2x^3) (x^{\frac {1}{2}}) + (2x^3) (2x^{-1}) + (x^{\frac {1}{2}}) (-x) + (2x^{-1}) (-x)$$. I am not asking for a solution but rather an explanation on how to go about multiplying the terms with negative and fractional exponents.</p>
Hypergeometricx
168,053
<p>Try this:</p> <p>Put $u^2=x$, (i.e $u=x^{1/2})$:</p> <p>$$\begin{align} (2x^3-x)(\sqrt{x}+\frac 2x)&amp;=(2u^6-u^2)(u+\frac2{u^2})\\ &amp;=2u^7+4u^4-u^3-2\\ &amp;=2x^{7/2}+4x^2-x^{3/2}-2\qquad \blacksquare\end{align}$$</p>
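<p>A quick numerical spot-check of the expansion at a few sample points (a Python sketch; the points are arbitrary positive values):</p>

```python
original = lambda x: (2 * x**3 - x) * (x**0.5 + 2 / x)
expanded = lambda x: 2 * x**3.5 + 4 * x**2 - x**1.5 - 2

# the two expressions should agree up to floating-point rounding
for x in (0.5, 1.0, 2.0, 9.0):
    assert abs(original(x) - expanded(x)) < 1e-9 * max(1.0, abs(expanded(x)))
print("expansion agrees at sample points")
```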
168,224
<p>I am doing a project on Hamiltonian group actions on symplectic manifolds, and my supervisor was able to list several good books on Riemannian geometry to start me off, but he didn't know of any single place to learn about Hamiltonian groups. </p> <p>I have found some books (even available online from the author!) that come highly recommended, specifically: <em>Introduction to Symplectic and Hamiltonian Geometry, A. C. da Silva.</em></p> <p>As the title suggests however, this seems to come from more of a geometric standpoint. Which books are recommended that might focus on the group-theory and topology end of this subject? The project description specifically mentions cohomological obstructions, something that I think is related to group cohomology? (At this point I'm getting all of this from Wikipedia...) </p> <p>I have had basic, introductory courses in Differential Geometry (in $\mathbb{R}^n$) and in topology (up to calculating the first fundamental group of a topological space). </p> <p>Thank you in advance for any input!</p>
user109527
51,275
<p>The project is now finished, and for anybody else looking to do something similar, I would like to add the following book as an excellent source for an introduction to the material:</p> <p><a href="http://books.google.co.uk/books?id=5HbJJNYXoN0C&amp;dq=berndt%20symplectic&amp;source=gbs_navlinks_s" rel="nofollow noreferrer">An Introduction to Symplectic Geometry, Berndt</a></p> <p>This was found more helpful than any of the others, (save perhaps da Silva's lectures) as a short-term introduction to the subject.</p>
3,464,383
<p>I couldn't find any substantial list of 'strange infinite convergent series' so I wanted to ask the MSE community for some. By <em>strange</em>, I mean infinite series/limits that <strong>converge when you would not expect them to and/or converge to something you would not expect</strong>.</p> <p>My favorite converges to Khinchin's (sometimes Khintchine's) constant, <span class="math-container">$K$</span>. For almost all <span class="math-container">$x \in \mathbb{R}$</span> (those for which this does not hold making up a measure zero subset) with infinite c.f. representation: <span class="math-container">$$x = a_0 + \frac{1}{a_1+\frac{1}{a_2+\frac1{\ddots}}}$$</span> We have: <span class="math-container">$$\lim_{n \to \infty}\root n \of{\prod_{i=1}^na_i} = \lim_{n \to \infty}\root n \of {a_1a_2\dots a_n} = K$$</span> Which is...wow! That it converges independent of <span class="math-container">$x$</span> really gets me.</p>
Descartes Before the Horse
592,365
<p>Another one I like for how simply it is written is as follows: <span class="math-container">$$\sum_{n=1}^{\infty}z^nH_n = \frac{-\log(1-z)}{1-z}$$</span> Which holds for <span class="math-container">$|z|&lt;1$</span>, <span class="math-container">$H_n$</span> being the <span class="math-container">$n$</span>-th harmonic number <span class="math-container">$= 1 + \frac12+\frac13 \dots \frac1n$</span>. I can't quite remember where I learned this one from.</p>
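<p>Numerically, the partial sums converge quickly inside the unit disc — a short Python check at the arbitrary test point <span class="math-container">$z = 1/2$</span>:</p>

```python
import math

z, N = 0.5, 200
H = 0.0        # running harmonic number H_n
partial = 0.0  # partial sum of sum_{n>=1} z^n H_n
for n in range(1, N + 1):
    H += 1.0 / n
    partial += z**n * H

closed = -math.log(1 - z) / (1 - z)
print(partial, closed)
```

<p>At <span class="math-container">$z=\tfrac12$</span> both sides equal <span class="math-container">$2\ln 2$</span>; the geometric factor makes the truncation error negligible.</p>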
4,293,960
<p>I assume discrete time. Suppose I start with some amount of money <span class="math-container">$P_{0}$</span>. Then, using a simple rate of interest <span class="math-container">$r$</span> for a given period of time, I would have</p> <p><span class="math-container">$$P_{t}= P_{0} (1+r)^{t}$$</span></p> <p>Suppose <span class="math-container">$t$</span> represents years. Now, suppose I want the payments to happen more frequently, say <span class="math-container">$n$</span> times a year, but I want <span class="math-container">$t$</span> to still represent years. Then I can model the amount I get paid as</p> <p><span class="math-container">$$P_{t}= P_{0} (1+r)^{nt}$$</span></p> <p>This model can't be right however because, as one user pointed out, as <span class="math-container">$n\rightarrow \infty$</span>, then <span class="math-container">$P_{t} \rightarrow \infty$</span>. We want <span class="math-container">\begin{align*} t\rightarrow \infty &amp;\Rightarrow P_{t} \rightarrow \infty\\ n\rightarrow \infty &amp;\Rightarrow P_{t} \text{ converges to some limit} \end{align*}</span> Ok, so we need a way to make it such that the limit exists as <span class="math-container">$n\rightarrow \infty$</span> and that necessitates adding a term <span class="math-container">$f(n)$</span> such that</p> <p><span class="math-container">$$P_{t}= P_{0} \left(1+\frac{r}{f(n)}\right)^{nt}$$</span></p> <p>Why does compound interest typically involve a <span class="math-container">$f(n)=n$</span>? I know the reason why we need <span class="math-container">$f(n)$</span> to have certain properties like <span class="math-container">$f(n)=n$</span> is because the purpose of this is to reduce the interest rate to prevent the infinity stated earlier. 
But I don't get why that needs to take the form of <span class="math-container">$f(n)=n$</span> specifically as opposed to some general class of functions <span class="math-container">$f(n)$</span>, where they could have certain restrictions to give the derivatives properties similar to the usual formula.</p> <p>I also recognize that by choosing to use <span class="math-container">$f(n)=n$</span>, then in the limit, we have the continuous case of <span class="math-container">$P_t = P_0 \cdot e^{rt}$</span>. But that doesn't change the fact there could be many compound interest formulas for the discrete case, even if most can't be made into a continuous version.</p> <p>Is the choice of <span class="math-container">$f(n)=n$</span> just by convention or is there a reason for this choice?</p>
user58697
58,697
<blockquote> <p>Now, suppose I want the payments to happen more frequently, say <span class="math-container">$n$</span> times a year, but I want <span class="math-container">$t$</span> to still represent years. Then I can model the amount I get paid as <span class="math-container">$P_{t}= P_{0} (1+r)^{nt}$</span></p> </blockquote> <p>No, you cannot. As <span class="math-container">$n \rightarrow \infty$</span>, the amount <span class="math-container">$P_t$</span> also tends to infinity. This means that it is not the right model. If you are paid <span class="math-container">$n$</span> times a year, <span class="math-container">$r$</span> must also depend on <span class="math-container">$n$</span>.</p> <p>If you want to accumulate the yearly interest <span class="math-container">$r$</span> in <span class="math-container">$n$</span> installments, each installment shall yield <span class="math-container">$\frac{r}{n}$</span> interest.</p>
4,293,960
<p>I assume discrete time. Suppose I start with some amount of money <span class="math-container">$P_{0}$</span>. Then, using a simple rate of interest <span class="math-container">$r$</span> for a given period of time, I would have</p> <p><span class="math-container">$$P_{t}= P_{0} (1+r)^{t}$$</span></p> <p>Suppose <span class="math-container">$t$</span> represents years. Now, suppose I want the payments to happen more frequently, say <span class="math-container">$n$</span> times a year, but I want <span class="math-container">$t$</span> to still represent years. Then I can model the amount I get paid as</p> <p><span class="math-container">$$P_{t}= P_{0} (1+r)^{nt}$$</span></p> <p>This model can't be right however because, as one user pointed out, as <span class="math-container">$n\rightarrow \infty$</span>, then <span class="math-container">$P_{t} \rightarrow \infty$</span>. We want <span class="math-container">\begin{align*} t\rightarrow \infty &amp;\Rightarrow P_{t} \rightarrow \infty\\ n\rightarrow \infty &amp;\Rightarrow P_{t} \text{ converges to some limit} \end{align*}</span> Ok, so we need a way to make it such that the limit exists as <span class="math-container">$n\rightarrow \infty$</span> and that necessitates adding a term <span class="math-container">$f(n)$</span> such that</p> <p><span class="math-container">$$P_{t}= P_{0} \left(1+\frac{r}{f(n)}\right)^{nt}$$</span></p> <p>Why does compound interest typically involve a <span class="math-container">$f(n)=n$</span>? I know the reason why we need <span class="math-container">$f(n)$</span> to have certain properties like <span class="math-container">$f(n)=n$</span> is because the purpose of this is to reduce the interest rate to prevent the infinity stated earlier. 
But I don't get why that needs to take the form of <span class="math-container">$f(n)=n$</span> specifically as opposed to some general class of functions <span class="math-container">$f(n)$</span>, where they could have certain restrictions to give the derivatives properties similar to the usual formula.</p> <p>I also recognize that by choosing to use <span class="math-container">$f(n)=n$</span>, then in the limit, we have the continuous case of <span class="math-container">$P_t = P_0 \cdot e^{rt}$</span>. But that doesn't change the fact there could be many compound interest formulas for the discrete case, even if most can't be made into a continuous version.</p> <p>Is the choice of <span class="math-container">$f(n)=n$</span> just by convention or is there a reason for this choice?</p>
Misha Lavrov
383,078
<p>In principle you could put any function <span class="math-container">$f(n)$</span> anywhere you like. The bank is allowed to pay interest according to any rule that it's willing to explain to confused clients. But using <span class="math-container">$\frac rn$</span> isn't arbitrary: it comes out of some generalizations of simple ideas about interest payments.</p> <hr /> <p>The idea is that - at least in principle - the number of payments per year doesn't have to match the number of times the interest is compounded (that is, the number of times the payment is adjusted).</p> <p>Let's suppose we're looking at <span class="math-container">$t=1$</span> year (for simplicity) with <span class="math-container">$r$</span> annual interest. You go from <span class="math-container">$P_0$</span> to <span class="math-container">$P_1$</span> by receiving a payment of <span class="math-container">$r P_0$</span> at the end of the year, for a total of <span class="math-container">$P_0 (1+r)$</span>.</p> <p>You could say that you want to be paid more often than that. &quot;Okay!&quot; says the bank. &quot;We will give you <span class="math-container">$rP_0$</span> in monthly installments.&quot; So you instead go from <span class="math-container">$P_0$</span> to <span class="math-container">$P_1$</span> by receiving <span class="math-container">$12$</span> payments of <span class="math-container">$\frac r{12} P_0$</span>.</p> <p>It's likely that you take the first month's interest payment and put it in the bank along with your original balance. You might then complain that on the second month, you're getting a payment of only <span class="math-container">$\frac r{12}P_0$</span>, based only on the initial amount <span class="math-container">$P_0$</span> that you put in - but at that point, you've already had <span class="math-container">$(1 + \frac r{12})P_0$</span> sitting in your account for a whole month! 
Shouldn't that be taken into consideration, too?</p> <p>Of course, the bank doesn't have to listen to you. But if they decide to be nice to you (maybe because the other banks are also being nice, and they don't want to lose you as a client), they'll say &quot;Okay! Each time we make a monthly payment, we'll also adjust the next payment to take your new balance into account.&quot; At the end of the second month, they pay you <span class="math-container">$\frac{r}{12}$</span> of <span class="math-container">$(1 + \frac r{12})P_0$</span>, bringing your total balance to <span class="math-container">$(1 + \frac r{12})^2 P_0$</span>.</p> <p>And if this keeps going for the whole year, then you'll end the year with <span class="math-container">$P_1 = (1 + \frac r{12})^{12} P_0$</span> in your bank account.</p> <p>The result, of course, is that the whole thing is a fancy and confusing way of getting a yearly interest of <span class="math-container">$r' = (1 + \frac r{12})^{12} - 1$</span>. But this is the way that originally made sense to people as the process of paying interest was being developed.</p> <p>The formula generalizes to <span class="math-container">$P_1 = (1 + \frac rn)^n P_0$</span> if we divide the year up into <span class="math-container">$n$</span> time periods, and it generalizes to <span class="math-container">$P_t = (1 + \frac rn)^{nt} P_0$</span> if we keep going for <span class="math-container">$t$</span> years.</p>
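<p>The effect of compounding more often, and the convergence to <span class="math-container">$e^{rt}$</span>, can be seen directly — a Python sketch with an arbitrary rate <span class="math-container">$r = 0.05$</span>:</p>

```python
import math

P0, r, t = 100.0, 0.05, 1.0

def balance(n):
    # compound n times per year, each period paying interest r/n
    return P0 * (1 + r / n) ** (n * t)

for n in (1, 12, 365, 10**6):
    print(n, balance(n))

print("continuous limit:", P0 * math.exp(r * t))
```

<p>The balance increases with <span class="math-container">$n$</span> but converges to the finite continuous-compounding limit <span class="math-container">$P_0 e^{rt}$</span>.</p>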
294,972
<p>I am seeking a particular integral of the differential equation:</p> <p>$u^{(4)}(t) - 5u''(t) + 4u(t) - t^3 = 0$</p> <p>I am simply interested in the technique, not just an answer (Mathematica suffices for that). How would one apply undetermined coefficients in this case?</p>
vonbrand
43,946
<p>The standard <em>general</em> technique is <a href="http://www.cliffsnotes.com/study_guide/Variation-of-Parameters.topicArticleId-19736,articleId-19722.html" rel="nofollow">variation of parameters</a>. In this case, as the ODE is linear with constant coefficients and the RHS is a polynomial, just try a polynomial ansatz. Since $0$ is not a root of the characteristic polynomial $r^4 - 5r^2 + 4 = (r^2-1)(r^2-4)$, a polynomial of the same degree as the right-hand side, i.e. degree $3$, suffices.</p>
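Not part of the original answer: carrying the suggestion out concretely in exact rational arithmetic. The cubic ansatz works because $0$ is not a characteristic root, so matching powers of $t$ gives a triangular linear system:

```python
from fractions import Fraction

# Ansatz u(t) = a t^3 + b t^2 + c t + d.  Degree 3 is enough here because
# 0 is not a root of the characteristic polynomial r^4 - 5 r^2 + 4.
# Substituting into u'''' - 5 u'' + 4 u = t^3 and matching powers of t:
#   t^3 :   4a = 1
#   t^2 :   4b = 0
#   t^1 : -30a + 4c = 0
#   t^0 : -10b + 4d = 0
a = Fraction(1, 4)
b = Fraction(0)
c = Fraction(30, 4) * a
d = Fraction(10, 4) * b

def residual(t):
    # For a cubic, u'''' = 0 and u'' = 6 a t + 2 b.
    u = a * t**3 + b * t**2 + c * t + d
    u2 = 6 * a * t + 2 * b
    return -5 * u2 + 4 * u - t**3

# The particular integral is u_p = t^3/4 + 15 t/8.
assert (a, b, c, d) == (Fraction(1, 4), 0, Fraction(15, 8), 0)
assert all(residual(Fraction(t)) == 0 for t in range(-10, 11))
```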
3,848,676
<p>Let <span class="math-container">$n$</span> and <span class="math-container">$k$</span> be nonnegative integers. The binomial coefficient <span class="math-container">$\binom{n}{k}$</span> is equal to the number of ways to choose <span class="math-container">$k$</span> items from a set with <span class="math-container">$n$</span> elements. How can this interpretation of <span class="math-container">$\binom{n}{k}$</span> be used to understand or prove the binomial theorem <span class="math-container">$$ (x+y)^n = \sum_{k=0}^n \binom{n}{k} x^k y^{n-k} \qquad \text{for all numbers } x, y. $$</span></p> <hr /> <p><strong>Note</strong>: The original text of this question is shown below. I’m editing the question because it’s fundamental and deserves to be salvaged.</p> <p>Original text:</p> <p>Sorry for the childish question. I am in 7th grade and need some help. Thank You!</p>
Vinyl_cape_jawa
151,763
<p>HINT:</p> <p>The Binomial Theorem says: <span class="math-container">$$ (a+b)^n=\sum_{k=0}^{n}\binom{n}{k} a^{n-k}b^k, $$</span> where the left hand side is <span class="math-container">$$ \underbrace{(a+b)\cdot(a+b)\cdot(a+b)\cdot\ldots\cdot(a+b)}_{n \ times}. $$</span> If you expand this product, you pick one element from each bracket. This last sentence could already ring some bells, since combinatorics usually deals with questions of the form <em>"In how many different ways can I pick..."</em>.</p> <p>For a more detailed explanation you could look at the <a href="https://en.wikipedia.org/wiki/Binomial_theorem" rel="nofollow noreferrer">wikipedia page</a>.</p> <p>Hope this helped a bit, and you are always welcome to add further questions and ideas as a comment on my answer.</p>
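Not part of the original answer: a small Python check that the coefficients from the theorem (a row of Pascal's triangle) match a direct expansion of the product:

```python
from math import comb

n = 3
# Coefficients of (x+y)^3 read off from the theorem: C(3,0), ..., C(3,3).
coeffs = [comb(n, k) for k in range(n + 1)]   # [1, 3, 3, 1], a row of Pascal's triangle

# Verify (x+y)^n == sum_k C(n,k) x^k y^(n-k) at a few sample points.
for x, y in [(2, 5), (-1, 4), (7, 3)]:
    lhs = (x + y) ** n
    rhs = sum(comb(n, k) * x**k * y**(n - k) for k in range(n + 1))
    assert lhs == rhs
```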
2,010,768
<p>I have the question </p> <p>Solve the simultaneous equations </p> <p>$$\begin{cases} 3^{x-1} = 9^{2y} \\ 8^{x-2} = 4^{1+y} \end{cases}$$</p> <p><a href="https://i.stack.imgur.com/xN9Ny.jpg" rel="nofollow noreferrer">source image</a></p> <p>I know that $x-1=4y$ and $3x-6=2+2y$.</p> <p>However, when I checked the solutions, this should become $6x-16=4y$.</p> <p>How does this follow? </p>
Peter
82,961
<p>Look at the second equation: $3x-6=2+2y$. Subtract $2$ from both sides to get $3x-8=2y$, then multiply by $2$ to get $6x-16=4y$.</p>
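Not part of the original answer: completing the solution numerically gives $y=\tfrac12$ and $x=3$, which can be checked against both original equations:

```python
# From 3^(x-1) = 9^(2y) = 3^(4y)          =>  x - 1 = 4y
# From 8^(x-2) = 4^(1+y), i.e. 2^(3x-6) = 2^(2+2y)  =>  3x - 6 = 2 + 2y
# Substituting x = 4y + 1 into 3x - 6 = 2 + 2y gives 10y = 5.
y = 0.5
x = 4 * y + 1          # x = 3

# Check against the original equations.
assert abs(3 ** (x - 1) - 9 ** (2 * y)) < 1e-9
assert abs(8 ** (x - 2) - 4 ** (1 + y)) < 1e-9
# And the rearranged form from the solutions: 6x - 16 = 4y.
assert 6 * x - 16 == 4 * y
```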
818,850
<p>I want to generate numbers $1$ to $10$ with uniform probability distribution. So I write the numbers $1$ to $10$ in the natural order. I keep writing the next $10$-number block by permuting the first $10$ numbers. Can this long string be considered random with uniform distribution? Is there a problem?</p>
Anon
155,030
<p>Here's one way to visualize an infinite series adding up to a finite value:</p> <p>A flea is 1 metre away from a wall. The flea jumps half way to the wall (1/2). Then it jumps half way again (1/4). And again (1/8). It keeps going forever.</p> <p>After a large number of jumps, it gets really close to the wall. Pick any distance, no matter how small, and the flea gets closer to the wall than that!</p> <p>This example shows us that: 1/2 + 1/4 + 1/8 + 1/16 + ... = 1</p> <p>This infinite series adds up to a finite value. The same is true for the decimal expansion of the square root of 2. It's an infinite series that represents a finite value.</p>
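Not part of the original answer: the flea's partial sums can be tabulated in a few lines, showing that the remaining distance shrinks below any threshold:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... : the flea's total distance travelled.
total, jump = 0.0, 1.0
for _ in range(50):
    jump /= 2          # each jump covers half the remaining distance
    total += jump

remaining = 1.0 - total
assert remaining < 1e-12   # the flea gets arbitrarily close to the wall
```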
4,004,457
<p>I searched but couldn't find the proof.</p> <p><strong>Isosceles hyperbola equation</strong>: <span class="math-container">$${H:x^{2}-y^{2} = a^{2}}$$</span></p> <p>And let's take <strong>any point <span class="math-container">$P(x, y)$</span></strong> on this hyperbola. Now, the product of the distances of this point <strong><span class="math-container">$P(x, y)$</span></strong> to the foci of the <strong>isosceles hyperbola</strong> is <strong>equal to</strong> the square of the distance from point <span class="math-container">$P$</span> to the center of the hyperbola.</p> <p><strong>Proof?</strong></p> <p>I took this question from my analytical geometry project assignment. I tried various ways (I found the foci <span class="math-container">$F(x,y)$</span> and <span class="math-container">$F^{'}(x,y)$</span> terms of <span class="math-container">$x$</span>, <span class="math-container">$y$</span> and chose any point on the hyperbola...) but I couldn't prove. I request your help.</p>
Z Ahmed
671,540
<p>The missing no is 6. Sum of numbers in each horizontal layer is 36.</p>
4,004,457
<p>I searched but couldn't find the proof.</p> <p><strong>Isosceles hyperbola equation</strong>: <span class="math-container">$${H:x^{2}-y^{2} = a^{2}}$$</span></p> <p>And let's take <strong>any point <span class="math-container">$P(x, y)$</span></strong> on this hyperbola. Now, the product of the distances of this point <strong><span class="math-container">$P(x, y)$</span></strong> to the foci of the <strong>isosceles hyperbola</strong> is <strong>equal to</strong> the square of the distance from point <span class="math-container">$P$</span> to the center of the hyperbola.</p> <p><strong>Proof?</strong></p> <p>I took this question from my analytical geometry project assignment. I tried various ways (I found the foci <span class="math-container">$F(x,y)$</span> and <span class="math-container">$F^{'}(x,y)$</span> terms of <span class="math-container">$x$</span>, <span class="math-container">$y$</span> and chose any point on the hyperbola...) but I couldn't prove. I request your help.</p>
Dietrich Burde
83,966
<p>The number is <span class="math-container">$n=66$</span>. The sum of all alternating sums of the rows should equal zero. So we need <span class="math-container">$$ 36+(30-n)-8+8=0, $$</span> which says that <span class="math-container">$n=66$</span>. For example, the last row has alternating sum <span class="math-container">$$4-7+5-4+5-3+8=8$$</span></p>
2,857,508
<p>I'm trying to prove this: $$0.99^n \le 1/2,\text{ for }n=100$$</p> <p>I tried Bernoulli's inequality $(1-0.01)^n \geq 1-n\cdot0.01$ and it gave me LHS $\geq0$. </p> <p>I also tried to do this: $((1-0.01)^n)^n \leq(1-1/2)^n$ and it gave me LHS $\geq-99$ and RHS $\geq49$. </p> <p>Now I'm stuck.</p>
Batominovski
72,152
<p>Note that $$\left(1+\frac{1}{99}\right)^n \geq 1 +\frac{n}{99}$$ for every $n\geq 1$. In particular, $$\frac{1}{0.99^{100}}=\left(1+\frac{1}{99}\right)^{100}\geq 1+\frac{100}{99}&gt;2\,.$$</p>
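Numerically (my addition), both the Bernoulli bound and the original claim are easy to confirm:

```python
# Bernoulli: (1 + 1/99)^n >= 1 + n/99, so (1/0.99)^100 >= 1 + 100/99 > 2,
# which is equivalent to 0.99^100 <= 1/2.
lhs = (1 + 1 / 99) ** 100
assert lhs >= 1 + 100 / 99 > 2
assert 0.99 ** 100 < 0.5     # 0.99^100 is about 0.366
```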
1,056,397
<p>I saw the following problem: $$\lim_{x\to \infty} \sqrt{9x^2+x}-3x$$ My first thought was to say that the $x$ term is overpowered when $x$ becomes large enough, so the square root becomes just $\sqrt{9x^2} = 3x$, and the value of the limit is zero.</p> <p>However, the solution given is $1\over6$, and starts with: $$\lim_{x\to \infty} \sqrt{9x^2+x}-3x=\lim_{x\to \infty} {(\sqrt{9x^2+x}-3x)(\sqrt{9x^2+x}+3x)\over \sqrt{9x^2+x}+3x}$$</p> <p>I'm assuming the continuation is:</p> <p>$$=\lim_{x\to \infty} {x\over \sqrt{9x^2+x}+3x} = \lim_{x\to \infty} {x\over \sqrt{9x^2}+3x} = \lim_{x\to \infty} {x \over 6x} = {1 \over 6}$$</p> <p>The question is, why is it okay to ignore the $x$ term in $\sqrt{9x^2+x}+3x$ but not in $\sqrt{9x^2+x}-3x$?</p>
Vincenzo Oliva
170,489
<p>Because the original limit is the indeterminate form $\infty-\infty$, where dropping that $x$ changes the value. At any rate, formally you're not allowed to simply drop a term. Initially, as you can see, the <a href="http://demonstrations.wolfram.com/FundamentalLawOfFractions/" rel="nofollow">fundamental law of fractions</a> was used to rationalize the numerator. In the latter case, factorizing the denominator leads to the answer: $$\lim_{x\to \infty} {x\over \sqrt{9x^2+x}+3x}=\lim_{x\to \infty}\frac{x}{x\sqrt{9+1/x}+3x}=\lim_{x\to \infty}\frac{x}{x(\sqrt{9+1/x}+3)}=\\ \lim_{x\to \infty}\frac{1}{\sqrt{9+1/x}+3}=\frac{1}{6}. $$</p>
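A quick numerical check (my addition) confirms the value $1/6$: the expression settles near $0.1667$, not $0$, and the error shrinks roughly like $1/x$:

```python
import math

def g(x):
    return math.sqrt(9 * x**2 + x) - 3 * x

# The expression approaches 1/6, not 0, as x grows.
for x in (1e3, 1e6):
    assert abs(g(x) - 1 / 6) < 1 / x   # error shrinks roughly like 1/x
```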
248,167
<p>I want to know why, when I look at the Julia sets of the quadratic family, I see only a finite number of repeating patterns, rather than a countable infinity of them.</p> <p>My question is specifically about the interaction of these three theorems:</p> <p><strong>Theorem 1</strong>: Let $z_0\in\mathbb{C}$ be a repelling periodic point of the function $f_c:z\mapsto z^2+c$. Tan Lei proved in the 90s that the filled-in Julia set $K_c$ is asymptotically $\lambda$-self-similar about $z_0$, where $\lambda$ denotes the multiplier of the orbit.</p> <p><strong>Theorem 2</strong>: (Iterated preimages are dense) Let $z\in J_c$; then the set of iterated preimages $\cup_{n\in\mathbb{N}} ~ f_c^{-n}(z)$ is dense in $J_c$.</p> <p><strong>Theorem 3</strong>: $J_c$ is the closure of the repelling periodic points.</p> <p>Let's expand on Theorem 1:<br> Technically it means that the sets $(\lambda^n \tau_{-z_0} K_c)\cap\mathbb{D}_r$ approach (in the Hausdorff metric of compact subsets of $\mathbb{C}$) a set $X \cap \mathbb{D}_r$, where the limit model $X \subset \mathbb{C}$ is $\lambda$-self-similar: $X = \lambda X$.<br> Practically this means that, when one zooms into a computer-generated $K_c$ about $z_0$, the image becomes, to all practical purposes, self-similar. No new information is gained by zooming again about $z_0$.</p> <p>Lei also proved that $K_c$ is asymptotically $\lambda$-self-similar about the preimages of $z_0$, with the same limit model $X$, up to rotation and rescaling. This means that zooming in at each point in the repelling cycle of $z_0$ provides basically the same spectacle, perhaps rotated, as zooming into $z_0$. Not only that, but the preimages of $z_0$ are dense in $J_{c}$ (Theorem 2), meaning that this $X$ pattern can be seen throughout the Julia set.</p> <p>Now, let's consider a different repelling periodic point $z_1$.
Lei tells us that $K_c$ will be asymptotically self-similar about $z_1$ and all <em>its</em> pre-images, with an <em>a priori different</em> limit set $Y$. Since the pre-images of $z_1$ are also dense in $J_c$, we may observe the limit model $Y$ all over $J_c$.</p> <p>So, <strong><em>a priori</em></strong>, to each repelling periodic orbit there should be an associated limit model, and each of these limit models could be distinct. <em>However</em>, when I look at a computer-generated Julia set, the parts of it that are asymptotically self-similar seem to approach one of a <strong><em>finite</em></strong> set of limit models (up to rotation).</p> <p>Why is it so? Maybe my eye cannot see the difference? Or the computer cannot generate all of the detail?</p> <p>Or is it the case that the limit models are finite?</p> <p><a href="https://i.stack.imgur.com/JKaA9.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/JKaA9.jpg" alt="Simple Julia zoom"></a> In this image (read like a comic strip), I zoom into the neighbourhood of a point four times, then purposely "miss the center", and zoom onto a detail four more times. The patterns that emerge are very similar. Are they the same?<br> This is perhaps one of the simplest Julia sets, but the experience is similar for more complicated ones.</p>
Lasse Rempe
3,651
<p>As @GNiklasch points out, you seem to be zooming into two places which are both preimages of the same repelling periodic point. So the images of the Julia set are locally related by a conformal map, and hence indeed asymptotically the same.</p> <p>If you zoom in at different repelling periodic points, then generally these will have different multipliers. For example, if you look at periodic points away from the real axis in your example, you would expect complex multipliers, and hence spiralling behaviours at small scales. </p> <p>Look at <a href="https://commons.wikimedia.org/wiki/File:Julia_set_of_the_quadratic_polynomial_f(z)_%3D_z%5E2_-_1.12_%2B_0.222i.png" rel="nofollow noreferrer">this picture</a>:<a href="https://i.stack.imgur.com/DWvnH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DWvnH.png" alt="quadratic Julia set"></a></p> <p>There is a periodic point within the "rabbit" parts, where there is lots of spiralling. There is also a fixed point connecting the big rabbit in the middle with the one to its left. (For those who know what this means, the latter is the $\alpha$ fixed point of the polynomial, while the former is the $\alpha$ fixed point of its renormalisation.) Finally, there is another fixed point to the very right of the picture. </p> <p>The Julia set looks different near each of these.</p> <p><strong>EDIT.</strong> You can get an even clearer example by considering <strong>infinitely renormalisable</strong> quadratic polynomials. Consider the following procedure to select a parameter. Start at c=0, the centre of the main cardioid. Then move to the centre of the period 2 bulb at the left of the cardioid (c=-1, the "basilica"). This creates a periodic point at which two dynamic rays land, and which hence separates the Julia set into (exactly) two components. </p> <p>Now move through a period 3 bifurcation from this component, creating a periodic point of period 6 having three rays landing at it. 
(This is the component containing the "dancing rabbits" shown above.) Continue, with a period 4 bifurcation, period 5, etc.</p> <p>In the limit, you obtain a quadratic polynomial having infinitely many periodic points at which the Julia set is even <strong>topologically</strong> very different, in that they separate the Julia set into different components.</p> <p>(For more details, on this kind of construction, refer to Milnor's <a href="http://arxiv.org/abs/math/9207220" rel="nofollow noreferrer">Local connectivity of Julia sets: expository lectures</a>, Section 3.)</p> <p>I remark that, for a quadratic polynomial, the only points that can have more than two rays landing, and hence separate the Julia set into more than two pieces, are preimages of repelling periodic points. Each of these is associated to some small copy of the Mandelbrot set. Hence you can only get the above type of example by having an infinite number of renormalisations.</p> <p><strong>EDIT 2.</strong> As my original point does not seem to have come across to some, here are some pictures. For $$ c = 0.340095913765605+0.076587412582221i,$$ in the main cardioid, we obtain the following Julia set. <a href="https://i.stack.imgur.com/eAJ36.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eAJ36.png" alt="Julia set of $z^2+c$ in the main cardioid"></a></p> <p>Here is the scaling limit near the $\beta$-fixedpoint, $$ z_0 = 0.618645316268697-0.322757842411465i:$$ <a href="https://i.stack.imgur.com/gQJOe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gQJOe.png" alt="Scaling limit near beta fixed point"></a></p> <p>Here is the scaling limit near a period 9 periodic point, $$ z_1 = 0.177144137748545 + 0.032520156063447i.$$ <a href="https://i.stack.imgur.com/QfnkZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QfnkZ.png" alt="Scaling limit near period 9 point"></a></p> <p>You can see that the scaling limits are very different. 
(Images produced using the "Winfeed" fractal program by Richard Parris, 2012 version.)</p>
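Not part of the original answer, but the two fixed points and their multipliers for the parameter quoted above can be checked directly (a Python sketch; the constant is the one given in the answer):

```python
import cmath

# Parameter from the answer's last example (in the main cardioid).
c = 0.340095913765605 + 0.076587412582221j

# Fixed points of f(z) = z^2 + c solve z^2 - z + c = 0.
s = cmath.sqrt(1 - 4 * c)
beta = (1 + s) / 2      # about 0.6186 - 0.3228i, the value quoted above
alpha = (1 - s) / 2

for z in (alpha, beta):
    assert abs(z * z + c - z) < 1e-12      # genuinely fixed points

# The multiplier at a fixed point of z^2 + c is f'(z) = 2z.
# |2z| > 1 : repelling (Tan Lei's scaling limits apply there);
# |2z| < 1 : attracting (consistent with c lying in the main cardioid).
assert abs(2 * beta) > 1
assert abs(2 * alpha) < 1
```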
3,366,981
<p>I have been looking for a concrete answer on this matter since leaving secondary school and thought I would now ask online, since YouTube and Wikipedia seem too convoluted. </p> <p>I remember watching <a href="https://www.youtube.com/watch?v=Rzac520CESc" rel="nofollow noreferrer">this</a> video on the proof behind the nCr factorials and I understood the logical proof clearly. </p> <p>However, I remember when asked</p> <blockquote> <p>Expand <span class="math-container">$(x+y)^3$</span></p> </blockquote> <p>I would, automatically, draw out the following line:</p> <blockquote> <p><span class="math-container">${3 \choose 0}x^0y^3 + {3 \choose 1}x^1y^2 + {3 \choose 2}x^2y^1 + {3 \choose 3}x^3y^0$</span></p> </blockquote> <p>and then use my calculator to get</p> <blockquote> <p><span class="math-container">$y^3 + 3x^1y^2 + 3x^2y^1 + x^3$</span></p> </blockquote> <p>and further to my disappointment, my teacher would speak about Pascal's triangle and say that the coefficients (the numbers in front of my <span class="math-container">$x$</span> and <span class="math-container">$y$</span> terms) came from this so-called triangle. There was no explanation given for the purpose of factorials or where this Pascal's triangle came from. </p> <p>After looking at the video above and looking at Pascal's triangle, I'm just confused as to why I would need <span class="math-container">${n \choose r}$</span> for expanding the expression above. 
</p> <p><em>What is the link between using this factorial formula:</em></p> <p><span class="math-container">$$\frac{n!}{r!(n-r)!}$$</span></p> <p><em>and applying it to expanding algebraic expressions as above?</em></p> <p>I understand if it seems trivial to most people here, but after my first year as a chemistry student, I'm still interested in knowing the link between the two!</p> <p><em>EDIT - I should add that I understand the video where he describes how many words can be made, but I can't apply this understanding from an algebraic perspective.</em></p>
Batominovski
72,152
<p>Without doing the integration, notice first that <span class="math-container">$I_x+I_y=I_z$</span>. Also, <span class="math-container">$I_x=I_y$</span> due to symmetry, so <span class="math-container">$I_x=I_y=\frac{1}{2}I_z$</span>. But for <span class="math-container">$I_z$</span>, every part of the ring is at the same distance <span class="math-container">$R$</span> from the axis of rotation, so <span class="math-container">$I_z=mR^2$</span>. This gives <span class="math-container">$I_x=I_y=\frac12mR^2$</span>.</p>
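Not part of the original answer: a quick numerical check of this symmetry argument, approximating the ring by equally spaced point masses (the mass and radius values are my arbitrary choices):

```python
import math

m, R = 2.0, 1.5      # example mass and radius (arbitrary values)
N = 100_000          # number of point masses approximating the ring

Ix = Iy = Iz = 0.0
for k in range(N):
    phi = 2 * math.pi * k / N
    x, y = R * math.cos(phi), R * math.sin(phi)
    dm = m / N
    Ix += dm * y**2              # distance to the x-axis is |y|
    Iy += dm * x**2
    Iz += dm * (x**2 + y**2)     # every piece sits at distance R from the z-axis

assert abs(Iz - m * R**2) < 1e-9
assert abs(Ix - 0.5 * m * R**2) < 1e-6     # I_x = I_y = (1/2) m R^2
assert abs(Ix + Iy - Iz) < 1e-9            # perpendicular-axis theorem
```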
190,497
<p>Q: Is there a reference for a detailed proof of <a href="http://en.wikipedia.org/wiki/Explicit_formula" rel="nofollow">Riemann's explicit formula</a> ?</p> <p>I am reading the <a href="http://www.claymath.org/millennium/Riemann_Hypothesis/1859_manuscript/EZeta.pdf" rel="nofollow">TeXified version</a> of Riemann's manuscript, but sometimes I can't follow (I think the author has kept typos from the <a href="http://www.claymath.org/millennium/Riemann_Hypothesis/1859_manuscript/riemann1859.pdf" rel="nofollow">original paper</a>).</p> <p>Here are some points I have trouble with (but there are others) :</p> <ul> <li><p>How does he calculate $$\int_{a+i\mathbb{R}} \frac{d}{ds} \left( \frac{1}{s} \log(1-s/\beta)\right) x^s ds$$ on page 4 ?</p></li> <li><p>What do I need to know about <a href="http://en.wikipedia.org/wiki/Logarithmic_integral_function" rel="nofollow">$Li(x)$</a> to see how the terms $Li(x^{1/2+i\alpha})$ appear ?</p></li> </ul>
Gerry Myerson
8,269
<p>Answering the question in the title, not the body, Riemann's explicit formula is stated on page 244 of Stopple, A Primer of Analytic Number Theory, and discussed over the next several pages. By the way, it's considered that Riemann only gave a "heuristic proof," the first rigorous proof being given by von Mangoldt in 1895. </p>
2,982,723
<p>I need help with this exercise. I need to prove that the function <span class="math-container">$f$</span> given below is not continuous at the point <span class="math-container">$(0,0)$</span>:</p> <p><span class="math-container">$$ f(x,y) = \begin {cases} \frac {x^3 y} {x^6+y^2} &amp; (x,y) \neq (0,0)\\ 0 &amp; (x,y) = (0,0) \end {cases} $$</span></p> <p>So what I've done so far is to calculate the limit of the function in the two variables:</p> <blockquote> <p><span class="math-container">$$ \lim_{(x,y)\to (0,0)} \frac {x^3 y} {x^6+y^2} $$</span> I substitute <span class="math-container">$y=mx$</span>, where <span class="math-container">$m$</span> is the slope: <span class="math-container">$$ \lim_{x\to 0} \frac {x^3 (mx)} {x^6+(mx)^2} =\lim_{x\to 0} \frac {m x^4} {x^6+m^2x^2} =\lim_{x\to 0} \frac {m x^4} {x^2(x^4+m^2)} =\lim_{x\to 0} \frac {m x^2} {x^4+m^2} =\frac {0} {m^2} = 0 $$</span></p> </blockquote> <p>So my result says that it is continuous. What have I done wrong? What do I need to do to prove that it is not, if I already calculated that it is? Thank you so much. If something isn't very clear, please let me know.</p>
user
505,767
<p><strong>HINT</strong></p> <p>We have that by <span class="math-container">$x=u$</span> and <span class="math-container">$y=v^3$</span></p> <p><span class="math-container">$$\frac {x^3 y} {x^6+y^2}=\frac {u^3 v^3} {u^6+v^6}$$</span></p> <p>which is a simpler case to handle.</p>
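To see the hint in action (my addition): along every straight line through the origin the limit is $0$, but along the curve $y=x^3$ (which corresponds to $u=v$ in the substituted form) the function is constantly $1/2$, so it cannot be continuous at the origin:

```python
def f(x, y):
    return (x**3 * y) / (x**6 + y**2) if (x, y) != (0, 0) else 0.0

# Along every straight line y = m x the values tend to 0 near the origin...
for m in (1, 2, -3):
    assert abs(f(1e-4, m * 1e-4)) < 1e-3

# ...but along the curve y = x^3 the value is identically 1/2,
# so f is not continuous at (0, 0).
for x in (0.1, 0.01, 0.001):
    assert abs(f(x, x**3) - 0.5) < 1e-12
```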
2,251,112
<p>A small college has 1095 students. What is the approximate probability that more than five students were born on Christmas day? Assume that the birthrates are constant throughout the year and that each year has 365 days.</p> <p>I tried doing<br> $X \sim Pn(3) $ and calculating $P(X\gt5)$. My calculation turned out to be 0.22....., which was wrong. (What was wrong with my approximation?). The solution given used a Normal approximation to get an answer of 0.0735. When I tried using a Normal approximation, I was still unable to get that answer. Here is how I attempted it. </p> <p>$ N\sim(3,1092/365)$.<br> $ P(X\gt5)=P(N\gt4.5) $ #continuity correction<br> $= P\left(Z\gt\frac{4.5-3}{\sqrt{\frac{1092}{365}}}\right)$<br> =$ P(Z&gt;0.86721)$.<br> =$0.193$. </p> <p>Any help is much appreciated.</p>
WhiteDeer
502,721
<p>$N$ should be compared at $5.5$, not $4.5$: since $X$ is integer-valued, $P(X\gt5)=P(X\ge 6)$, so the continuity correction gives</p> <p>$$P(X\gt5) = P(N\gt5.5).$$</p> <p>Your mean and variance were fine: $$\mu = np = 1095\cdot\frac{1}{365} = 3, \qquad \sigma^2 = np(1-p) = 3\cdot\frac{364}{365} = \frac{1092}{365}\approx 2.99.$$</p> <p>Then, with $Z=\frac{N-\mu}{\sigma}$, $$P(N\gt5.5) = P\left(Z\gt\frac{5.5-3}{\sqrt{2.99}}\right) = P(Z\gt1.45) \approx 0.0735,$$ using the standard normal probabilities table.</p>
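For reference (my addition), the exact binomial tail and the normal approximation with the $5.5$ continuity correction can be compared directly:

```python
import math

n, p = 1095, 1 / 365
mu = n * p                        # 3.0
var = n * p * (1 - p)             # 1092/365, about 2.99

# Exact binomial tail P(X > 5) = 1 - P(X <= 5).
exact = 1 - sum(math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(6))

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Normal approximation with continuity correction at 5.5 (not 4.5).
approx = 1 - Phi((5.5 - mu) / math.sqrt(var))

assert abs(approx - 0.0735) < 2e-3
assert 0.08 < exact < 0.087       # the normal approximation runs a bit low here
```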
76,747
<pre><code>Plot[D[Abs[x], x], {x, -10, 10}, Exclusions -&gt; {0}] </code></pre> <p>gives out several lines of error messages and an empty plot.</p> <pre><code>Plot[Derivative[1][Abs[#1] &amp; ][x], {x, -10, 10}, Exclusions -&gt; {0}] </code></pre> <p>just gives out an empty plot.</p> <p>How do I plot $\left(\left|x\right|\right)^\prime$?</p>
Basheer Algohi
13,548
<p>you can also use <code>UnitStep</code>:</p> <pre><code>f[a_] := (x - a) (UnitStep[x - a] - UnitStep[-(x - a)]) Plot[Evaluate@D[f[5], x], {x, -10, 10}, Exclusions -&gt; 0] </code></pre> <p><img src="https://i.stack.imgur.com/0aNZJ.jpg" alt="enter image description here"></p>
3,851,399
<p>I am attempting to prove the following statement</p> <blockquote> <p>Prove that if n is any integer then 4 either divides n^2 or n^2 − 1</p> </blockquote> <p>I have started with the case of n = 2k</p> <pre><code>Consider the case n = 2k n = 2k n^2 = 4k^2 ⇒ n = 4k ∴ 4 divides n^2 as there is some integer k in which n = k </code></pre> <p>Would this be considered a correct proof for the first case? Are there any additions I should make?</p> <p>I also attempted case 2, where n = 2k+1; however, I am less sure of the direction I have taken, and it is incomplete, so some advice on this would also be appreciated.</p> <pre><code>Consider the case n = 2k+1 n = 2k+1 n^2 = (2k+1)^2 n^2 = 4k^2 + 1 n^2 - 1 = 4k^2 </code></pre>
Servaes
30,382
<p>Your argument for the first case is fine, except for the very last few words:</p> <p><code>∴ 4 divides n^2 as there is some integer k in which n = k </code></p> <p>You have already declared before that <code>n=2k</code>. To now refer to <code>k</code> again and claim that <code>n=k</code> is confusing at best, and irrelevant. What you want to say is:</p> <p><code>∴ 4 divides n^2 as there is some integer m for which n^2 = 4m. </code></p> <p>And in fact your argument shows that this integer is <code>m=k^2</code>.</p> <p>For the second case, you are on the right track. But you make a mistake in your algebra:</p> <pre><code>n^2 = (2k+1)^2 n^2 = 4k^2 + 1 </code></pre> <p>This step is wrong. The distributive rule gives:</p> <p><code>(2k+1)^2 = 4k^2 + 4k + 1</code></p> <p>You should verify this. It now follows that</p> <p><code>n^2 - 1 = 4k^2 + 4k</code></p> <p>Now you would like to conclude, as before, by saying that</p> <p><code>∴ 4 divides n^2-1 as there is some integer m for which n^2-1 = 4m. </code></p> <p>Do you see why? What is this integer <code>m</code>?</p>
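Not part of the original answer: a brute-force check of the claim over a range of integers, with the two cases spelled out in comments:

```python
# Every integer is 2k or 2k+1; in either case 4 divides n^2 or n^2 - 1:
#   (2k)^2     = 4 k^2
#   (2k+1)^2   = 4k^2 + 4k + 1, so (2k+1)^2 - 1 = 4 (k^2 + k)
for n in range(-1000, 1001):
    assert n**2 % 4 == 0 or (n**2 - 1) % 4 == 0
```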
1,226,176
<p>The following series is obviously convergent, but I cannot figure out how to calculate its sum:</p> <p>$$\sum_{n\ge 1} \frac{6^n}{(3^{n+1}-2^{n+1})(3^n - 2^n)}$$</p>
imranfat
64,546
<p>Allright. Time to get some Excel in here. After playing with it, it shows that the partial sums approach 2. Now let's prove that.</p> <p>Let's set up partial fractions, as first suggested by Steven.</p> <p>I did: $\frac{A}{3^{n+1}-2^{n+1}}+\frac{B}{3^n-2^n}$. Adding these terms together by making a common denominator gives the numerator $A3^n-A2^n+B3^{n+1}-B2^{n+1}$, or $(A+3B)3^n-(A+2B)2^n$, and this should equal the given numerator $6^n$. Since $6^n=2^n\cdot 3^n$, we can take $A+3B=2^n$ and $A+2B=0$, so that the $2^n$ part vanishes and the $3^n$ part contributes exactly $2^n\cdot3^n$. Solving for $A$ and $B$ gives $A=-2^{n+1}$ and $B=2^n$, so the general term is $$\frac{2^n}{3^n-2^n}-\frac{2^{n+1}}{3^{n+1}-2^{n+1}}.$$ Plugging in $n=1,2,3,\dots,N$ shows a telescoping sum where everything cancels except the first term in the "B" column, which is $\frac{2^1}{3^1-2^1}=2$, and the "last" surviving term in the "A" column, $-\frac{2^{N+1}}{3^{N+1}-2^{N+1}}$. If you divide the numerator and denominator of that last term by $3^{N+1}$ and let $N$ go to infinity, it tends to $0$. Therefore the sum is 2. I hope this sincerely helps you, Sebastien</p>
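Not in the original answer: an exact check of the telescoping with Python's fractions module, comparing a direct partial sum with the telescoped closed form:

```python
from fractions import Fraction

# Partial sums of sum_{n>=1} 6^n / ((3^(n+1) - 2^(n+1)) (3^n - 2^n)),
# computed two ways: directly, and via the telescoping form
#   2^n/(3^n - 2^n)  -  2^(n+1)/(3^(n+1) - 2^(n+1)).
def term(n):
    return Fraction(6**n, (3**(n + 1) - 2**(n + 1)) * (3**n - 2**n))

N = 40
direct = sum(term(n) for n in range(1, N + 1))
telescoped = Fraction(2) - Fraction(2**(N + 1), 3**(N + 1) - 2**(N + 1))

assert direct == telescoped               # exact agreement, term by term
assert abs(float(direct) - 2) < 1e-6      # the partial sums approach 2
```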
1,879,346
<p>Ok, so I have this question from my math book.</p> <blockquote> <p>The vertices A,B,C of a triangle are (3,5),(2,6) and (-4,-2) respectively. Find the coordinates of the circum-centre and also the radius of the circum-circle of the triangle.</p> </blockquote> <p>How can we solve this? Can we use the distance formula? Answer: The circum-radius was found to be <em>R=5</em>.The coordinates of circum-centre were found to be <em>(-1,2)</em>. A diagram would be appreciated. Thank you!</p>
Emilio Novati
187,568
<p>Hint:</p> <p>You want a circle that passes through the three given points. </p> <p>The general equation of a circle is $x^2+y^2+ax+by+c=0$.</p> <p>Substitute the coordinates of the three points and you have a linear system in the three unknowns $a,b,c$. </p> <p>Solve this system and you have the equation of the circle, from which you can find the center and the radius.</p>
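Not part of the original answer: carrying the hint out with exact arithmetic (a Python sketch; the elimination is hand-rolled for this 3x3 system):

```python
from fractions import Fraction as F

# The circle x^2 + y^2 + a x + b y + c = 0 must pass through the three
# vertices; each vertex (x, y) gives one linear equation in a, b, c:
#     x*a + y*b + c = -(x^2 + y^2)
A, B, C = (3, 5), (2, 6), (-4, -2)

def eq(p):
    x, y = p
    return (F(x), F(y), F(1), F(-(x * x + y * y)))

e1, e2, e3 = eq(A), eq(B), eq(C)

# Subtract equations pairwise to eliminate c, then use Cramer's rule.
d1 = [u - v for u, v in zip(e1, e2)]
d2 = [u - v for u, v in zip(e2, e3)]
det = d1[0] * d2[1] - d1[1] * d2[0]
a = (d1[3] * d2[1] - d1[1] * d2[3]) / det
b = (d1[0] * d2[3] - d1[3] * d2[0]) / det
c = e1[3] - e1[0] * a - e1[1] * b

center = (-a / 2, -b / 2)
radius_sq = a * a / 4 + b * b / 4 - c
assert center == (F(-1), F(2))       # circum-centre (-1, 2)
assert radius_sq == 25               # circum-radius R = 5
```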
1,789,414
<blockquote> <p><strong>Question :</strong> Prove that the equation $x + \cos(x) + e^{x} = 0$ has <em>exactly</em> one root</p> </blockquote> <hr> <p><em>This is what I thought of doing:</em></p> <p>$$\text{Let} \ \ \ f(x) = x + \cos(x) + e^{x}$$ By using the Intermediate Value Theorem on the open interval $(-\infty, \infty)$, and then by showing that </p> <p>$$\left(\lim_{x \to -\infty}f(x) &lt; 0 &lt; \lim_{x \to +\infty}f(x)\right) \lor \left(\lim_{x \to +\infty}f(x) &lt; 0 &lt; \lim_{x \to -\infty}f(x)\right)$$</p> <p>I could show that $\exists\ x \in \mathbb{R} \ s.t.\ f(x) = 0$.</p> <p>However, although this method does show the existence of an $x$ such that $f(x)=0$, it doesn't show that there is <b><em>only</em> one $x$</b> that satisfies the statement $f(x)=0$.</p> <hr> <p>The original question suggests using either <em>Rolle's Theorem</em> or the <em>Mean Value Theorem</em>; however, we face the same problem with both theorems, as both prove the existence of <b><em>at least</em> a single $x$</b> (or any arbitrary number) satisfying their given statements; they don't prove the existence of <b><em>only one $x$</em></b> satisfying their statements.</p> <p>All three theorems I've mentioned here:</p> <ol> <li><em>Intermediate Value Theorem</em></li> <li><em>Rolle's Theorem</em></li> <li><em>Mean Value Theorem</em></li> </ol> <p>Can all be used to show that the equation $x + \cos(x) + e^{x} = 0$ has <b><em>at least</em></b> one root, but they can't be used to show $x + \cos(x) + e^{x} = 0$ has <b><em>only</em></b> one root. (Or can they?)</p> <p>How can this problem be solved then, using either Rolle's Theorem or the Mean Value Theorem (or even the Intermediate Value Theorem)?</p>
openspace
243,510
<p>Consider the first derivative: $f'(x)=1-\sin(x) + e^{x} \ge e^{x} \gt 0$ for all $x \in \mathbb{R}$, since $1-\sin(x)\ge 0$. So the function is strictly increasing and therefore has at most one root. To phrase uniqueness via Rolle's theorem: if $f$ had two roots $a \lt b$, Rolle's theorem would give some $\xi\in(a,b)$ with $f'(\xi)=0$, contradicting $f'(x)\ge e^{x}\gt0$.</p>
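To complement the hint (my addition), here is a numeric sketch: the derivative bound rules out a second root, and bisection exhibits the single root guaranteed by the IVT. The starting interval $[-2, 0]$ is my own choice:

```python
import math

def f(x):
    return x + math.cos(x) + math.exp(x)

def fprime(x):
    return 1 - math.sin(x) + math.exp(x)

# f'(x) >= e^x > 0 everywhere, so f is strictly increasing: at most one root.
assert all(fprime(x) > 0 for x in [i / 10 for i in range(-100, 101)])

# The IVT gives existence on [-2, 0]; bisection then locates the root.
lo, hi = -2.0, 0.0
assert f(lo) < 0 < f(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)

root = (lo + hi) / 2
assert abs(f(root)) < 1e-12    # the unique root, near -0.96
```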
419,730
<p>$EX = \int xf(x)dx$, where $f(x)$ is the density function of some variable $X$. This is pretty understandable, but what if I had, for example, a function $Y = -X + 1$ and had to calculate $EY$? How should I do this?</p>
Adriano
76,987
<p>By the linearity of expectation, we have: $$ E(Y) = E(-X + 1) = -E(X) + 1 $$</p> <p>Alternatively, you could use the definition directly. Let $Y=g(X)=-X+1$. Then: $$ \begin{align*} E(Y)&amp;=E(g(X))\\ &amp;=\int_{-\infty}^\infty g(x)f(x)dx\\ &amp;=\int_{-\infty}^\infty(-x+1)f(x)dx\\ &amp;=-\int_{-\infty}^\infty xf(x)dx + \int_{-\infty}^\infty f(x)dx\\ &amp;=-E(X) + 1 \end{align*}$$</p>
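Not in the original answer: an exact sanity check of both computations for a concrete discrete $X$ (a fair die, my choice of example):

```python
from fractions import Fraction as F

# X uniform on a fair die: exact pmf, exact expectations.
pmf = {x: F(1, 6) for x in range(1, 7)}

EX = sum(x * p for x, p in pmf.items())              # 7/2
EY = sum((-x + 1) * p for x, p in pmf.items())       # E(-X + 1) computed directly

assert EX == F(7, 2)
assert EY == -EX + 1 == F(-5, 2)    # matches the linearity formula
```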
382,295
<p>Please help. I haven't found any text on how to prove this sort of statement by induction: </p> <p>$$ \lim_{n\to +\infty}\left(1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n}\right) = \frac{4}{3} $$</p> <p>I can't quite get how one can prove such a thing. I can prove basic divisibility statements by induction, but not this. Thanks.</p>
TonyK
1,508
<p>If you want to use proof by induction, you have to prove the stronger statement that</p> <p>$$ 1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} - \frac{1}{3}\frac{1}{4^n} $$</p>
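One can also verify the stronger identity exactly (my addition, using Python's fractions module), and see why the correction term makes the limit $4/3$:

```python
from fractions import Fraction as F

# The closed form proved by induction:
#   1 + 1/4 + ... + 1/4^n  =  4/3 - (1/3) (1/4)^n
for n in range(30):
    lhs = sum(F(1, 4**k) for k in range(n + 1))
    rhs = F(4, 3) - F(1, 3) * F(1, 4**n)
    assert lhs == rhs

# The correction term (1/3)(1/4)^n vanishes, so the partial sums tend to 4/3.
assert abs(float(lhs) - 4 / 3) < 1e-15
```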
2,325,650
<p>This question is based on the fourth question in the 2003 edition of the <a href="https://www.vwo.be/vwo/files/finale03.pdf" rel="nofollow noreferrer">Flemish Mathematics Olympiad</a>.</p> <blockquote> <p>Consider a grid of points with integer coordinates. If one chooses the number $R$ appropriately, the circle with center $(0, 0)$ and radius $R$ crosses a number of grid points. A circle with radius $1$ crosses 4 grid points, a circle with radius $2\sqrt{2}$ crosses 4 grid points and a circle with radius 5 crosses 12 grid points. Prove that for any $n \in \mathbb{N}$, a number $R$ exists for which the circle with center $(0, 0)$ and radius $R$ crosses at least $n$ grid points.</p> </blockquote> <p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://i.stack.imgur.com/8ZIqS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8ZIqS.png" alt="enter image description here"></a></p> <p>I have tried to solve this question by induction, considering a given point $(i, j)$, $i \gt j$, on the circle with radius $R$ and attempting to extract multiple points from this on a larger circle. In this case, the coordinates $(i+j,i-j)$ and $(i-j, i+j)$ are both on a circle with radius $\sqrt{2}R$. However, since $(j, i)$ is also a point on the circle, the number of crossed grid points remains the same. What is a correct way to prove the above statement?</p>
jgsmath
455,126
<p>The following arguments are not mathematically rigorous, but I think they will explain the main idea.</p> <p>The solution hinges on the fact that there are infinitely many values of $\phi \in [0, 2\pi)$ such that $\cos{\phi}$ and $\sin{\phi}$ are both rational. (This in turn depends on the fact that there are infinitely many <strong><em>Primitive Pythagorean Triples</em></strong>. A Pythagorean triple $(a,b,c)$ is primitive if all three numbers are <em>pairwise coprime</em>.)</p> <p>For every Pythagorean triple $(a,b,c)$, the point $(\frac{a}{c}, \frac{b}{c})$ lies on the unit circle and both co-ordinates are rational.</p> <p>Given any $n \in \mathbb{N}$, choose at least $n$ primitive Pythagorean triples $(a_1, b_1, c_1), (a_2, b_2, c_2), \dots, (a_n, b_n, c_n)$ such that the hypotenuse lengths $c_1, c_2, \dots c_n$ are all pairwise coprime. (You will need to prove that such a choice is possible.) Then let $R = \operatorname{lcm}(c_1, c_2, \dots c_n)$. This circle will contain at least $n$ grid points.</p> <p>(In fact this circle will contain at least $4n$ grid points, so this is a huge overestimate. But the question asks for <em>at least $n$ grid points</em>.)</p> <p><strong>Edit 1</strong> It's not necessary to choose $n$ primitive triples such that $\gcd(c_i,c_j) = 1$. Any $n$ primitive triples will suffice. </p>
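To make the construction concrete (my addition): the triples $(3,4,5)$ and $(5,12,13)$ have coprime hypotenuses, and already for $R=\operatorname{lcm}(5,13)=65$ a brute-force count finds many grid points on the circle:

```python
import math

# Two primitive Pythagorean triples with coprime hypotenuses 5 and 13:
# (3, 4, 5) and (5, 12, 13).  Take R = lcm(5, 13) = 65 and count the
# grid points on x^2 + y^2 = R^2 by brute force.
R = 65
count = 0
for x in range(-R, R + 1):
    y2 = R * R - x * x
    y = math.isqrt(y2)
    if y * y == y2:
        count += 2 if y > 0 else 1    # (x, y) and (x, -y), or just (x, 0)

assert count == 36    # already far more than the 4n guaranteed for n = 2
```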
3,043,319
<p>I am just beginning with functions of two variables, and I don't really understand how to start this problem. Consider the function <span class="math-container">$$f(x,y)= (x^2+4y^2-5)(x-1).$$</span> Find where <span class="math-container">$$f(x,y)=0,\qquad f(x,y)&gt;0,\qquad f(x,y)&lt;0.$$</span></p> <p>To find where <span class="math-container">$$f(x,y)=0$$</span> I already have the ellipse <span class="math-container">$$\frac{x^2}{5} +\frac{y^2}{5/4}=1$$</span> and the straight line <span class="math-container">$$x=1,$$</span> but I don't know how to solve the other two parts. </p> <p>Thank you very much.</p>
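Not part of the original question, but a useful observation: $f$ can only change sign across the ellipse $x^2 + 4y^2 = 5$ and the line $x = 1$, so its sign is constant on each of the four regions those curves cut out. Sampling one point per region (the sample points below are my own arbitrary choices) reads off the sign everywhere:

```python
def f(x, y):
    return (x**2 + 4*y**2 - 5) * (x - 1)

# One sample point in each region cut out by the ellipse and the line x = 1.
samples = {
    "inside ellipse, x < 1":  (0.0, 0.0),
    "inside ellipse, x > 1":  (1.5, 0.0),
    "outside ellipse, x < 1": (-3.0, 0.0),
    "outside ellipse, x > 1": (3.0, 0.0),
}
for label, (x, y) in samples.items():
    sign = "+" if f(x, y) > 0 else "-"
    print(f"{label}: f = {f(x, y)} ({sign})")
```

So $f &gt; 0$ inside the ellipse with $x &lt; 1$ and outside it with $x &gt; 1$, and $f &lt; 0$ on the other two regions.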
Community
-1
<p>We only need to show three properties: </p> <p><strong>Reflexivity:</strong></p> <p>Does there exist, for ANY set <span class="math-container">$A$</span>, a bijective function between the set and itself? If so, we can say that <span class="math-container">$A \sim A$</span>.</p> <p>Hint: what if there were a function that assigns <span class="math-container">$f(a)=a$</span> for all <span class="math-container">$a \in A$</span>? What do we call this function?</p> <p><strong>Symmetry:</strong> If <span class="math-container">$A \sim B$</span>, there exists a bijection between these two sets. Any bijective function is also invertible, so does there exist a bijection between <span class="math-container">$B$</span> and <span class="math-container">$A$</span>? If so, we can say that <span class="math-container">$B \sim A$</span>. </p> <p><strong>Transitivity:</strong></p> <p>Finally, if <span class="math-container">$A\sim B$</span> and <span class="math-container">$B \sim C$</span>, we know there are bijective functions <span class="math-container">$f$</span> and <span class="math-container">$g$</span>. What do we know about the composition of two bijective functions?</p> <p>Good luck!</p>
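The three hints can be made concrete on small finite sets (the sets and dictionaries below are my own illustrative choices, not part of the answer):

```python
def is_bijection(f, A, B):
    """f is a dict; check it maps A onto B one-to-one."""
    return (set(f) == A
            and set(f.values()) == B
            and len(set(f.values())) == len(f))

A, B, C = {1, 2, 3}, {"a", "b", "c"}, {10, 20, 30}
f = {1: "a", 2: "b", 3: "c"}      # witnesses A ~ B
g = {"a": 10, "b": 20, "c": 30}   # witnesses B ~ C

identity = {a: a for a in A}              # reflexivity: the identity map, A ~ A
inverse = {v: k for k, v in f.items()}    # symmetry: the inverse of f, B ~ A
composed = {a: g[f[a]] for a in A}        # transitivity: g after f, A ~ C

assert is_bijection(identity, A, A)
assert is_bijection(inverse, B, A)
assert is_bijection(composed, A, C)
```

The identity map, the inverse, and the composition are exactly the witnesses the three properties ask for.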
7,725
<p>Basically the textbooks in my country are awful, so I searched the web for a precalculus book and found this one: <a href="http://www.stitz-zeager.com/szprecalculus07042013.pdf">http://www.stitz-zeager.com/szprecalculus07042013.pdf</a></p> <p>However, it does not cover convergence, limits, etc., and those topics were only briefly mentioned in my old textbooks. So what I am asking is: are these topics a prerequisite for calculus, or are they part of the subject itself?</p>
schremmer
4,815
<p>Strictly speaking, the following might be slightly off topic, but <em>I</em> think that it is directly relevant. </p> <p>From the preface of Calculus with Analytic Geometry by John H. Staib [1966]:</p> <blockquote> <p>The development of the book is committed to the following thesis: The theory of limits is most easily grasped in the case of sequences. Therefore the theory of limits is presented first for sequences and then that theory is exploited in the introduction of all other limit concepts including integration. Thus, although most of the usual topics appear here, they are in a rather different order. Moreover, the emphasis on sequences not only provides the course with a unifying theme, but also gives it a distinctive flavor which is reflected in many of the proofs and exercises. Indeed, it is the special allurement of sequences that I have attempted to exploit. For instance, few students are moved by the announcement that $\lim_{x\to 2}x^2=4$, but "all" students feel the challenge implicit in the assertion that the sequence $$\sqrt{2}, \sqrt{2+\sqrt{2}}, \sqrt{2+\sqrt{2+\sqrt{2}}}, \dots$$ has 2 as a limit.</p> </blockquote>
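The nested-radical sequence from the quoted preface is easy to watch converge numerically (a quick sketch of my own, not from the book):

```python
from math import sqrt

def nested_radical(k):
    """a_1 = sqrt(2), a_{n+1} = sqrt(2 + a_n); return a_k."""
    a = sqrt(2)
    for _ in range(k - 1):
        a = sqrt(2 + a)
    return a

for k in (1, 2, 5, 10, 20):
    print(k, nested_radical(k))
```

Each step roughly quarters the distance to 2 (since $a_{n+1} - 2 = (a_n^2 - 4)/(a_{n+1} + 2) \approx (a_n - 2)/4$ near the limit), so the convergence is very fast.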
616,454
<p>How can I calculate this integral?</p> <p>$$ \int {\exp(2x)}{\sin(3x)}\, \mathrm{d}x$$ </p> <p>I tried using integration by parts, but it didn't seem to lead anywhere. So I made an attempt through the substitution $$ \cos(3x) = t,$$ under which the integral becomes $$-\frac{1}{3}\int \exp\left(\frac{2\arccos(t)}{3}\right)\, \mathrm{d}t,$$ but I still cannot calculate this new integral. Any ideas?</p> <p><strong>SOLUTION</strong> (integrating by parts twice, then solving for the integral):</p> <p>$$\int {\exp(2x)}{\sin(3x)}\, \mathrm{d}x = \int {\sin(3x)}\, \mathrm{d}\left(\frac{\exp(2x)}{2}\right)$$</p> <p>$$=\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{1}{2}\int {\exp(2x)}\, \mathrm{d}(\sin(3x))$$</p> <p>$$=\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{2}\int {\exp(2x)}{\cos(3x)}\,\mathrm{d}x$$</p> <p>$$=\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{2}\int {\cos(3x)}\,\mathrm{d}\left(\frac{\exp(2x)}{2}\right)$$</p> <p>$$=\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{4}{\cos(3x)}{\exp(2x)}+\frac{3}{4}\int {\exp(2x)}\,\mathrm{d}({\cos(3x)})$$</p> <p>$$=\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{4}{\cos(3x)}{\exp(2x)}-\frac{9}{4}\int {\sin(3x)}{\exp(2x)}\,\mathrm{d}x.$$</p> <p>Hence $$\left(1+\frac{9}{4}\right)\int {\exp(2x)}{\sin(3x)}\, \mathrm{d}x= \frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{4}{\cos(3x)}{\exp(2x)}+c,$$ and dividing by $\frac{13}{4}$ gives $$\int {\exp(2x)}{\sin(3x)}\, \mathrm{d}x=\frac{1}{13}\exp(2x)(2\sin(3x)-3\cos(3x))+c.$$</p>
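The closed form $F(x) = \frac{1}{13}e^{2x}(2\sin 3x - 3\cos 3x)$ can be sanity-checked numerically by comparing a central-difference approximation of $F'$ against the integrand (the helper names here are mine):

```python
from math import exp, sin, cos

def f(x):
    """The integrand exp(2x) * sin(3x)."""
    return exp(2*x) * sin(3*x)

def F(x):
    """The antiderivative found above, up to a constant."""
    return exp(2*x) * (2*sin(3*x) - 3*cos(3*x)) / 13

h = 1e-6
for x in (-1.0, 0.0, 0.5, 2.0):
    deriv = (F(x + h) - F(x - h)) / (2*h)   # central difference for F'(x)
    assert abs(deriv - f(x)) < 1e-4
print("F'(x) matches exp(2x) sin(3x) at the sample points")
```

A symbolic check with a CAS would be more conclusive, but a finite-difference test like this catches sign and coefficient mistakes immediately.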
ncmathsadist
4,154
<p>You need to integrate by parts twice, either differentiating the exponential each time or differentiating the trig function each time. You will be able to solve for the integral.</p>
2,610,560
<blockquote> <p>Given that $x, y, z$ are positive integers and that $$3x=4y=7z$$ find the minimum value of $x+y+z$. The options are:</p> <p>A) 33</p> <p>B) 40</p> <p>C) 49</p> <p>D) 61</p> <p>E) 84</p> </blockquote> <p>My attempt:</p> <p>$y=\frac{3}{4}x, z=\frac{3}{7}x$. </p> <p>Substituting these values into $x+y+z$, I get $\left(1+\frac{3}{4}+\frac{3}{7}\right)x=\frac{61}{28}x$. Since $y$ and $z$ must be integers, $x$ would have to be a multiple of both $4$ and $7$, so the smallest choice is $x=28$, giving a sum of $61$, which is option D, but I am not sure how to justify that this is really the minimum.</p>
Mark Neuhaus
21,706
<p>This is a classical problem from <em>integer programming</em>. There are exact algorithmic solutions available (although they are slow, as this is known to be an NP-hard problem in general). You can look up "branch and bound", "branch and cut", or possibly total unimodularity. These methods basically work by a clever extension of the classical simplex algorithm from ordinary linear programming.</p>
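For this particular problem no general ILP machinery is actually needed: writing the common value as $3x = 4y = 7z = t$, $t$ must be a common multiple of 3, 4 and 7, and the sum $t(\frac13 + \frac14 + \frac17)$ grows with $t$, so the minimum occurs at $t = \operatorname{lcm}(3,4,7) = 84$. A quick sketch of my own (not from the answer), with a brute-force cross-check:

```python
from math import lcm

t = lcm(3, 4, 7)             # smallest admissible common value: 84
x, y, z = t // 3, t // 4, t // 7
print(x, y, z, x + y + z)    # 28 21 12 61

# Cross-check by brute force over small positive integers.
best = min(a + b + c
           for a in range(1, 60)
           for b in range(1, 45)
           for c in range(1, 25)
           if 3*a == 4*b == 7*c)
assert best == x + y + z
```

So the minimum is $28 + 21 + 12 = 61$, option D.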
2,314,369
<p>My objective is to represent a unit square using a single matrix.<br>The four corners of the unit square are represented by column vectors as follows:</p> <p>$\left[\begin{matrix} 0 \\ 0 \end{matrix}\right],\left[\begin{matrix} 1 \\ 0 \end{matrix}\right],\left[\begin{matrix} 1 \\ 1 \end{matrix}\right],\left[\begin{matrix} 0 \\ 1 \end{matrix}\right]$</p> <p>When we write these points in a single matrix, it becomes a $2 \times 4$ matrix, which we cannot multiply on the right by the standard $2 \times 2$ transformation matrix; but when written as $\left[\begin{matrix} 0&amp;0 \\ 1&amp;0\\ 1&amp;1\\0&amp;1 \end{matrix}\right]$, it becomes a $4 \times 2$ matrix, which we can multiply on the right by the standard $2 \times 2$ matrix to produce the desired result. </p> <p>I find this counter-intuitive, as points are usually represented by column vectors.</p> <p><strong><em>Question:</em></strong> Is it valid to say that by writing a $4 \times 2$ matrix I am representing a square by listing its four corners?</p>
Jean Marie
305,862
<p>You should transpose this matrix:</p> <p>$$C=\left[\begin{matrix} 0&amp;1&amp;1&amp;0 \\ 0&amp;0&amp;1&amp;1 \end{matrix}\right]$$</p> <p>With this representation, any linear operation $L$ sends $C$ to $C'$ given by $L \times C=:C'$; for example: </p> <p>1) The orthogonal reflection with respect to the $x$ axis (leaving the $x$ axis invariant) is given by:</p> <p>$$\underbrace{\left[\begin{matrix} 1&amp; \ \ 0 \\ 0&amp;-1\end{matrix}\right]}_L\underbrace{\left[\begin{matrix} 0&amp;1&amp;1&amp;0 \\ 0&amp;0&amp;1&amp;1 \end{matrix}\right]}_C=\underbrace{\left[\begin{matrix} 0&amp;1&amp; \ \ 1&amp; \ \ 0 \\ 0&amp;0&amp;-1&amp;-1 \end{matrix}\right]}_{C'}.$$</p> <p>2) The $\pi/2$ rotation around $O$ is given by:</p> <p>$$\left[\begin{matrix} 0&amp;-1 \\ 1&amp; \ \ 0\end{matrix}\right]\left[\begin{matrix} 0&amp;1&amp;1&amp;0 \\ 0&amp;0&amp;1&amp;1 \end{matrix}\right]=\left[\begin{matrix} 0&amp;0&amp; -1&amp; -1 \\ 0&amp;1&amp; \ \ 1&amp; \ \ 0 \end{matrix}\right].$$</p>
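Both examples are easy to verify with a plain-Python matrix product (the `matmul` helper is my own; a library like NumPy would do the same with `L @ C`):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# Unit-square corners as the COLUMNS of a 2x4 matrix.
C = [[0, 1, 1, 0],
     [0, 0, 1, 1]]

reflect_x = [[1, 0], [0, -1]]   # reflection across the x axis
rot90 = [[0, -1], [1, 0]]       # rotation by pi/2 about the origin

print(matmul(reflect_x, C))   # [[0, 1, 1, 0], [0, 0, -1, -1]]
print(matmul(rot90, C))       # [[0, 0, -1, -1], [0, 1, 1, 0]]
```

Each column of the output is the image of the corresponding corner, matching the two worked examples above.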