| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,517,502 | <p>Point P$(x, y)$ moves in such a way that its distance from the point $(3, 5)$ is proportional to its distance from the point $(-2, 4)$. Find the locus of P if the origin is a point on the locus.</p>
<p><strong>Answer</strong>:</p>
<p>$$(x-3)^2 + (y-5)^2 = (x+2)^2 + (y-4)^2$$
or, $$10x+2y-14=0$$
or, $$5x+y-7=0$$</p>
<p>but answer given is $$7x^2+7y^2+128x-36y=0$$</p>
| Community | -1 | <p>$$(x-3)^2 + (y-5)^2 = \lambda((x+2)^2 + (y-4)^2).$$</p>
<p>We express that the curve passes through the origin:</p>
<p>$$(-3)^2 + (-5)^2 = \lambda((+2)^2 + (-4)^2),$$</p>
<p>hence
$$\lambda=\frac{17}{10}.$$</p>
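<p>A quick sanity check in Python (a sketch, not part of the original answer) confirms that $\lambda=\frac{17}{10}$ reproduces the book's answer $7x^2+7y^2+128x-36y=0$:</p>

```python
# Sanity check: with lambda = 17/10, the locus 10*d1^2 - 17*d2^2 = 0
# should match the book's 7x^2 + 7y^2 + 128x - 36y = 0 (up to an overall
# sign) at arbitrary sample points.
def locus_lhs(x, y):
    return 10 * ((x - 3)**2 + (y - 5)**2) - 17 * ((x + 2)**2 + (y - 4)**2)

def book_form(x, y):
    return 7 * x**2 + 7 * y**2 + 128 * x - 36 * y

samples = [(0, 0), (1, 2), (-3, 5), (10, -7)]
assert all(locus_lhs(x, y) == -book_form(x, y) for x, y in samples)
```

<p>Two quadratic polynomials agreeing at several generic points like this is strong evidence; expanding by hand confirms the identity exactly.</p>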
|
2,249,841 | <p>Let $a_n$ denote the number of those permutations $\sigma$ on $\{1,2,3....,n\}$ such that $\sigma$ is a product of exactly two disjoint cycles. Then </p>
<ol>
<li><p>$a_5 = 50$</p></li>
<li><p>$a_4 = 14$</p></li>
<li><p>$a_5 = 40$</p></li>
<li><p>$a_4 = 11$</p></li>
</ol>
<p>I tried computing $a_5$ and $a_4$ specifically with a bit of calculation, but I want to know a formula for $a_n$ that requires fewer calculations.</p>
| Marko Riedel | 44,883 | <p>I would like to present the connection to Stirling numbers since it
has not been pointed out. For the first interpretation where the
cycles may be singletons we get the species
$\mathfrak{P}_{=2}(\mathfrak{C}(\mathcal{Z}))$ which yields per
generating function</p>
<p>$$n! [z^n] \frac{1}{2!}\left(\log\frac{1}{1-z}\right)^2
= \left[n\atop 2\right]$$</p>
<p>the sequence</p>
<p>$$0, 1, 3, 11, 50, 274, 1764, 13068, 109584, 1026576,\ldots $$</p>
<p>which is <a href="https://oeis.org/A000254" rel="nofollow noreferrer">OEIS A000254</a> which looks to be a
match. The second interpretation is when we do not admit singletons as
cycles and we get the species $\mathfrak{P}_{=2}(\mathfrak{C}_{\ge
2}(\mathcal{Z}))$ which yields per generating function</p>
<p>$$n! [z^n] \frac{1}{2!}\left(-z + \log\frac{1}{1-z}\right)^2$$</p>
<p>the sequence</p>
<p>$$0, 0, 0, 3, 20, 130, 924, 7308, 64224, 623376,\ldots$$</p>
<p>which is <a href="https://oeis.org/A000276" rel="nofollow noreferrer">OEIS A000276</a>. For $n\ge 2$ this
simplifies to</p>
<p>$$\frac{1}{2} n! [z^n]
\left(z^2 - 2z \log\frac{1}{1-z} +
\left(\log\frac{1}{1-z}\right)^2\right)
\\ = [[n=2]] - n! [z^{n-1}] \log\frac{1}{1-z} + \left[n\atop 2\right]
\\ = [[n=2]] - n \times (n-2)! + \left[n\atop 2\right].$$</p>
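<p>A brute-force check in Python (a hedged sketch, not part of the original answer) confirms both interpretations against the values quoted above for small $n$:</p>

```python
# Brute-force count of permutations of {1,...,n} that decompose into
# exactly two disjoint cycles, with and without singleton cycles allowed.
from itertools import permutations

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a tuple of images of 1..n."""
    n, seen, lengths = len(perm), set(), []
    for start in range(1, n + 1):
        if start not in seen:
            length, cur = 0, start
            while cur not in seen:
                seen.add(cur)
                cur = perm[cur - 1]
                length += 1
            lengths.append(length)
    return lengths

def count(n, allow_singletons):
    total = 0
    for p in permutations(range(1, n + 1)):
        ls = cycle_lengths(p)
        if len(ls) == 2 and (allow_singletons or min(ls) >= 2):
            total += 1
    return total

# First interpretation (Stirling numbers): a_4 = 11, a_5 = 50.
assert count(4, True) == 11 and count(5, True) == 50
# Second interpretation (no singleton cycles): a_4 = 3, a_5 = 20.
assert count(4, False) == 3 and count(5, False) == 20
```

<p>For larger $n$ the brute force is infeasible, which is exactly where the generating-function formulas pay off.</p>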
|
51,096 | <p>Is it possible to have a countable infinite number of countable infinite sets such that no two sets share an element and their union is the positive integers?</p>
| Qiaochu Yuan | 232 | <p>Sure. For example, let $A_n$ be the natural numbers with exactly $n$ ones in their binary expansion.</p>
<p>Alternately, pick your favorite way of decomposing $\mathbb{N}$ into two disjoint infinite subsets $A, B$, and pick a bijection $f : B \to \mathbb{N}$. Then $f(B)$ can be decomposed into two disjoint subsets $A, B$, hence $B$ can be decomposed into two disjoint subsets $f^{-1}(A), f^{-1}(B)$. Rinse and repeat. This argument is fairly general and works for any infinite set which admits a decomposition into two disjoint subsets of the same cardinality as it (which under the Axiom of Choice is all of them). </p>
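<p>A quick Python illustration (a hedged sketch, truncated to the first $N$ integers) of the binary-expansion construction:</p>

```python
# Classify each positive integer up to N by the number of ones in its
# binary expansion. Every integer lands in exactly one class, and each
# class grows without bound as N does.
N = 1 << 10
classes = {}
for m in range(1, N + 1):
    classes.setdefault(bin(m).count("1"), []).append(m)

# The classes are pairwise disjoint and their union is 1..N.
all_members = [m for c in classes.values() for m in c]
assert sorted(all_members) == list(range(1, N + 1))
assert classes[1][:4] == [1, 2, 4, 8]   # powers of two have one binary 1
```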
|
184,361 | <p>I'm doing some exercises on Apostol's calculus, on the floor function. Now, he doesn't give an explicit definition of $[x]$, so I'm going with this one:</p>
<blockquote>
<p><strong>DEFINITION</strong> Given $x\in \Bbb R$, the integer part of $x$ is the unique $z\in \Bbb Z$ such that $$z\leq x < z+1$$ and we denote it by $[x]$.</p>
</blockquote>
<p>Now he asks to prove some basic things about it, such as: if $n\in \Bbb Z$, then $[x+n]=[x]+n$</p>
<p>So I proved it like this: Let $z=[x+n]$ and $z'=[x]$. Then we have that</p>
<p>$$z\leq x+n<z+1$$</p>
<p>$$z'\leq x<z'+1$$</p>
<p>Then $$z'+n\leq x+n<z'+n+1$$</p>
<p>But since $z'$ is an integer, so is $z'+n$. Since $z$ is unique, it must be that $z'+n=z$.</p>
<p>However, this doesn't seem to get me anywhere to prove that
$$\left[ {2x} \right] = \left[ x \right] + \left[ {x + \frac{1}{2}} \right]$$</p>
<p>and in general that </p>
<p>$$\left[ {nx} \right] = \sum\limits_{k = 0}^{n - 1} {\left[ {x + \frac{k}{n}} \right]} $$</p>
<p>Obviously one could do an informal proof thinking about "the carries", but that's not the idea, let alone how tedious it would be. Maybe there is some easier or clearer characterization of $[x]$ in terms of $x$ to work this out.</p>
<p>Another property is
$$[-x]=\begin{cases}-[x]\text{ ; if }x\in \Bbb Z \cr-[x]-1 \text{ ; otherwise}\end{cases}$$</p>
<p>I argue: if $x\in\Bbb Z$, it is clear $[x]=x$. Then $-[x]=-x$, and $-[x]\in \Bbb Z$ so $[-[x]]=-[x]=[-x]$. For the other, I guess one could say:</p>
<p>$$n \leqslant x < n + 1 \Rightarrow - n - 1 < -x \leqslant -n$$</p>
<p>and since $x$ is not an integer, this should be the same as $$ - n - 1 \leqslant -x < -n$$</p>
<p>$$ - n - 1 \leqslant -x < (-n-1)+1$$</p>
<p>So $[-x]=-[x]-1$</p>
| Community | -1 | <p>Hint: Set $\{ x \} = x - [x]$. To prove $[nx] = \displaystyle\sum_{k = 0}^{n - 1} [x + \frac{k}{n}]$, consider the cases $\frac{k - 1}{n} \leq \{ x\} < \frac{k}{n}$ for $k = 1,2,\ldots,n$ separately.</p>
<p>The idea is that we want to see when exactly $[x + \frac{k}{n}] = [x] + 1$ starts to hold as $k$ grows.</p>
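<p>A numeric check of the identity in question, known as Hermite's identity (a hedged Python sketch with exact rationals, not part of the original post):</p>

```python
# Check [nx] = sum_{k=0}^{n-1} [x + k/n] over a grid of rational sample
# points, using Fraction so that floor() is exact at breakpoints.
from fractions import Fraction
from math import floor

def hermite_holds(x, n):
    return floor(n * x) == sum(floor(x + Fraction(k, n)) for k in range(n))

samples = [Fraction(i, 7) - 3 for i in range(50)]   # negatives included
assert all(hermite_holds(x, n) for x in samples for n in range(1, 6))
```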
|
1,116,445 | <p>I am trying to understand <a href="http://en.wikipedia.org/wiki/Diophantine_equation" rel="nofollow">Diophantine equation</a> article in wiki. They say that in the given equation:</p>
<p>$$ax + by = c$$</p>
<p>There will be such integers $x,y$ <strong>if and only if</strong> $c$ is a multiple of the greatest common divisor of $a$ and $b$.</p>
<p>So how does this example lay out with that rule?</p>
<p>$3*3 + 2*4 = 17$</p>
| GPerez | 118,574 | <p>It does in fact verify the statement, because $$\mathrm{gcd}(3,2) = \mathrm{gcd}(3,4) = 1$$</p>
|
1,116,445 | <p>I am trying to understand <a href="http://en.wikipedia.org/wiki/Diophantine_equation" rel="nofollow">Diophantine equation</a> article in wiki. They say that in the given equation:</p>
<p>$$ax + by = c$$</p>
<p>There will be such integers $x,y$ <strong>if and only if</strong> $c$ is a multiple of the greatest common divisor of $a$ and $b$.</p>
<p>So how does this example lay out with that rule?</p>
<p>$3*3 + 2*4 = 17$</p>
| MJD | 25,554 | <p>Here you have $a=3, b=2$. The greatest common divisor of $3$ and $2$ is $1$. $17$ is a multiple of $1$, so there is a solution to the equation, namely that $x=3, y=4$.</p>
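<p>A short Python sketch (not part of the original answer) makes this constructive via the extended Euclidean algorithm:</p>

```python
# Extended Euclid returns (g, x, y) with a*x + b*y = g = gcd(a, b);
# scaling by c // g then solves a*x + b*y = c whenever g divides c.
def extended_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b, c = 3, 2, 17
g, x0, y0 = extended_gcd(a, b)
assert g == 1 and c % g == 0
x, y = x0 * (c // g), y0 * (c // g)
assert a * x + b * y == c       # one solution; (3, 4) is another
```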
|
1,906,146 | <p>Can the following expression be further simplified: $$a^{(\log_ab)^2}?$$</p>
<p>I know for example that $$a^{\log_ab^2}=b^2.$$</p>
| Jan Eerland | 226,665 | <p>Use:</p>
<ul>
<li>$$\log_x(y)=\frac{\ln(y)}{\ln(x)}$$</li>
<li>$$\exp\left[\ln\left(x\right)\right]=e^{\ln(x)}=x$$</li>
</ul>
<hr>
<p>So, we get:</p>
<p>$$a^{\left(\log_a(b)\right)^2}=a^{\left(\frac{\ln(b)}{\ln(a)}\right)^2}=a^{\frac{\ln^2(b)}{\ln^2(a)}}=\exp\left[\frac{\ln^2(b)}{\ln(a)}\right]$$</p>
|
1,906,146 | <p>Can the following expression be further simplified: $$a^{(\log_ab)^2}?$$</p>
<p>I know for example that $$a^{\log_ab^2}=b^2.$$</p>
| mfl | 148,513 | <p>$$a^{(\log_ab)^2}=a^{\log_a b\cdot \log_a b}=(a^{\log_a b})^{\log_a b}=b^{\log_a b}=b^{\frac{1}{\log_b a}}.$$</p>
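<p>A quick floating-point spot-check (a hedged sketch, not part of the original answer) confirms that this closed form agrees with the $\exp$ form from the other answer:</p>

```python
# Spot-check a^((log_a b)^2) == b^(log_a b) == exp(ln(b)^2 / ln(a))
# for a few sample bases and arguments, with a tolerance for rounding.
from math import log, exp, isclose

for a, b in [(2.0, 8.0), (3.0, 5.0), (10.0, 2.5)]:
    lhs = a ** (log(b, a) ** 2)
    assert isclose(lhs, b ** log(b, a), rel_tol=1e-9)
    assert isclose(lhs, exp(log(b) ** 2 / log(a)), rel_tol=1e-9)
```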
|
1,636,207 | <p>I understand the basics of Cartesian products, but I'm not sure how to handle a set inside of a set like $C = \{\{1,2\}\}$. Do I simply include the set as an element, or do I break it down?</p>
<p>If I use it as an element I think it would be something like this:</p>
<p>$$\{(1,\{1,2\}), (2,\{1,2\})\}$$</p>
<p>If I were to break $C = \{\{1,2\}\}$ further, I'm not sure how I would implement that, so I'm guessing what I did above is correct, but I want to make sure.</p>
| mm8511 | 180,904 | <p>You are correct. Even if a set has sets as elements, you still treat "each element" separately.</p>
<p><a href="https://i.stack.imgur.com/FAsbG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FAsbG.png" alt="mathematica output"></a></p>
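<p>A small Python illustration of the same point (hedged: Python sets cannot contain plain sets, so <code>frozenset</code> stands in for the inner set $\{1,2\}$):</p>

```python
# The inner set is treated as a single, opaque element of C when forming
# the Cartesian product A x C.
from itertools import product

A = {1, 2}
C = {frozenset({1, 2})}
pairs = set(product(A, C))
assert pairs == {(1, frozenset({1, 2})), (2, frozenset({1, 2}))}
assert len(pairs) == len(A) * len(C) == 2
```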
|
1,333,994 | <p>We have a function $f: \mathbb{R} \to \mathbb{R}$ defined as</p>
<p>$$\begin{cases} x; \ \ x \notin \mathbb{Q} \\ \frac{m}{2n+1}; \ \ x=\frac{m}{n}, m\in \mathbb{Z}, n \in \mathbb{N} \ \ \ \text{$m$ and $n$ are coprimes} \end{cases}.$$</p>
<p>Find where $f$ is continuous</p>
| user84413 | 84,413 | <p>Let $ a=\log_{3}\frac{1}{2}$, so $\;a<0$ and $\;\displaystyle\log_{\frac{1}{2}}3=\frac{\log_{3}3}{\log_3\frac{1}{2}}=\frac{1}{a}$.</p>
<p>Then $(a+1)^2>0\implies a^2+1>-2a\implies a+\frac{1}{a}<-2\;\;$ (since $a<0$)</p>
|
4,636,101 | <p>Given a curve <span class="math-container">$y = x^3-x^4$</span>, how can I find the equation of the line in the form <span class="math-container">$y=mx+b$</span> that is tangent to only two distinct points on the curve?</p>
<p>The problem given is part of the Madas Special Paper Set. This paper set seems not to have any answers, as Madas himself only released the answers on request. Sadly, Madas passed away this year and the contact information has been removed, so I am here to ask how to solve this question. I have tried to create some simultaneous equations, but I cannot get to a definite answer. I have also tried to differentiate and find some equation for the line's gradient, but without success. Please help! <a href="https://i.stack.imgur.com/2CsfY.png" rel="nofollow noreferrer">Question Here</a></p>
| Jan-Magnus Økland | 28,956 | <p><span class="math-container">$(x,y)=(t,t^3-t^4),$</span> has <a href="https://en.wikipedia.org/wiki/Dual_curve" rel="nofollow noreferrer">dual</a> <span class="math-container">$(X,Y)=(-\frac{y'}{xy'-x'y},\frac{x'}{xy'-x'y})=(-\frac{3t^2-4t^3}{t(3t^2-4t^3)-(t^3-t^4)},\frac1{t(3t^2-4t^3)-(t^3-t^4)})$</span> and implicitizes to <span class="math-container">$27X^4+4X^3Y-6X^2Y+192XY+27Y^2+256Y=0$</span> which has a node at <span class="math-container">$(8,-64)$</span> (and a cusp at <span class="math-container">$(-4,16)$</span>) corresponding to the bitangent <span class="math-container">$8x-64y+1=0$</span> (and flex <span class="math-container">$-4x+16y+1=0$</span>).</p>
<p><em>Edit</em></p>
<p>The dual curve lives in the dual projective plane of lines in the plane. It encodes all the tangent lines of the curve. A double point of this dual curve naturally encodes a bitangent for the original curve.</p>
<p>As for finding the dual, I've used the formula for a parametrized curve. You could plot it and see the double point. I've elected to find a formula for it, and the system of the partial derivatives has solutions <span class="math-container">$(-4,16)$</span> (more precisely <span class="math-container">$\langle 8x+y+16,(y-16)^2\rangle$</span> ) and <span class="math-container">$(8,-64).$</span></p>
<p>Then the correspondence to get the lines goes through the equation for the universal line <span class="math-container">$xX+yY+1=0.$</span></p>
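<p>One can also confirm the bitangent directly with exact rational arithmetic (a hedged Python sketch, not part of the original answer): along $8x-64y+1=0$, the curve minus the line equals $-(x^2-\tfrac{x}{2}-\tfrac18)^2$, a nonpositive perfect square, so the line touches the curve exactly at the two roots of $x^2-\tfrac{x}{2}-\tfrac18$:</p>

```python
# Verify (x^3 - x^4) - (x/8 + 1/64) == -(x^2 - x/2 - 1/8)^2 exactly.
# Two quartics agreeing at 21 distinct points must be identical.
from fractions import Fraction as F

def curve_minus_line(x):
    return (x**3 - x**4) - (F(1, 8) * x + F(1, 64))

def negative_square(x):
    return -(x**2 - F(1, 2) * x - F(1, 8)) ** 2

xs = [F(i, 5) for i in range(-10, 11)]
assert all(curve_minus_line(x) == negative_square(x) for x in xs)
```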
|
2,188,965 | <p>Can someone explain to me how this step is done? I got a different answer from what the solution says.</p>
<p>Simplify $x(y+z)(\bar{x} + y)(\bar{y} + x + z)$</p>
<p>what the solution got </p>
<p>$x(y+z)(\bar{x} + y)(\bar{y} + x + z)$ = $x(y + z\bar{x})(\bar{y} + x + z)$ (Using distrubitive)</p>
<p>What I got</p>
<p>$x(y+z)(\bar{x} + y)(\bar{y} + x + z)$ = $x(y\bar{x} + y + z\bar{x} + zy)(\bar{y} + x + z)$ (Using distrubitive)</p>
| Bram28 | 256,001 | <p>Put the following equivalence into your boolean algebra toolkit:</p>
<p><strong>Absorption</strong></p>
<p>$x +xy = x$</p>
<p>Using Absorption twice in one step we get:</p>
<p>$y\bar{x}+y+z\bar{x}+zy = y+z\bar{x}$</p>
<p>Done!</p>
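<p>A truth-table check in Python (a hedged sketch, not part of the original answer) confirms the absorption step over all boolean assignments:</p>

```python
# Check y*xbar + y + z*xbar + z*y == y + z*xbar for every assignment of
# the three boolean variables (xbar plays the role of NOT x).
from itertools import product

def absorbed(y, z, xbar):
    return y or (z and xbar)

def expanded(y, z, xbar):
    return (y and xbar) or y or (z and xbar) or (z and y)

ok = all(absorbed(y, z, xb) == expanded(y, z, xb)
         for y, z, xb in product([False, True], repeat=3))
assert ok
```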
|
1,431,289 | <p>Find the average rate of change of $2x^3 - 5x$ on the interval $[1,3]$.</p>
<p>I'm really confused about this problem. I keep ending up with the answer $12$, but the answer key says otherwise. Someone please help! Thanks!</p>
| DirkGently | 88,378 | <p>The total change is $h(3)-h(1)=52$. The length of the interval is $2$. So the average rate of change is $52/2=26$.</p>
<p><strong>Update:</strong> You have apparently changed the function in the question from $h(x)=2x^3-5$ to $h(x)=2x^3-5x$. The answer for the new function is $\frac {h(3)-h(1)}2=21$.</p>
|
3,209,237 | <p>The proof of the CRT goes as follows:<br>
Given the number <span class="math-container">$x \in \mathbb{Z}_m$</span>, <span class="math-container">$m=m_1m_2\cdots m_k$</span>,
<span class="math-container">$$M_k = m/m_k,$$</span>
construct:
<span class="math-container">$$ x = a_1M_1y_1+a_2M_2y_2+\cdots+a_kM_ky_k$$</span>
where <span class="math-container">$y_k$</span> is the particular inverse of <span class="math-container">$M_k \bmod m_k$</span>
<span class="math-container">$$\Rightarrow x\equiv a_kM_ky_k\equiv a_k \pmod{m_k}$$</span></p>
<p>What I don't understand is:<br>
how is <span class="math-container">$x\equiv a_1M_1y_1+a_2M_2y_2+\cdots+a_kM_ky_k$</span>, and why does this lie in <span class="math-container">$\bmod\ m$</span>? Is this because there is some rule in modular arithmetic for adding two numbers in two different mod worlds, like <span class="math-container">$(c \bmod d) + (e \bmod f) \equiv (c+e) \pmod{df}$</span>? As far as I know, there isn't one like that. And how does the addition of these items, all in a different mod world, provide the solution for <span class="math-container">$x$</span>?</p>
| Bernard | 202,857 | <p>I think <span class="math-container">$x$</span> is calculated in <span class="math-container">$\mathbf Z$</span>, using representatives of <span class="math-container">$M_k^{-1}\bmod m_k$</span>. Note that the congruence class of <span class="math-container">$x$</span> <span class="math-container">$\bmod m$</span> is independent of the choice of the representatives.</p>
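<p>A concrete Python illustration of this point (a hedged sketch; <code>pow(M, -1, m)</code> needs Python 3.8+): $x$ is built as an ordinary integer, and only afterwards reduced mod each $m_k$:</p>

```python
# Classic Sun Tzu instance: x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7).
moduli = [3, 5, 7]
residues = [2, 3, 2]
m = 1
for mk in moduli:
    m *= mk                    # m = 105

x = 0
for ak, mk in zip(residues, moduli):
    Mk = m // mk
    yk = pow(Mk, -1, mk)       # inverse of M_k mod m_k
    x += ak * Mk * yk          # ordinary integer arithmetic

assert all(x % mk == ak for ak, mk in zip(residues, moduli))
assert x % m == 23             # the classic answer mod 105
```

<p>Note that $x=233$ here; any other choice of representatives for the inverses would change $x$ only by a multiple of $m$.</p>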
|
3,953,674 | <p>Here is a common argument used to prove that the sum of an infinite geometric series is <span class="math-container">$\frac{a}{1-r}$</span> (where <span class="math-container">$a$</span> is the first term and <span class="math-container">$r$</span> is the common ratio):
<span class="math-container">\begin{align}
S &= a+ar+ar^2+ar^3+\cdots \\
rS &= ar+ar^2+ar^3+ar^4+\cdots \\
S-rS &= a \\
S(1-r) &= a \\
S &= \frac{a}{1-r} \, .
\end{align}</span>
I am sceptical about the validity of this argument. It feels like there is something amiss about a proof involving infinite series that makes no mention of the fact that they are typically defined as the limit of their partial sums. The third line involves 'cancelling' all of the terms other than <span class="math-container">$a$</span>. This makes it seem like an infinite series is actually an infinite string of symbols, rather than a limiting expression. If <span class="math-container">$a+ar+ar^2+ar^3+\cdots$</span> is simply a shorthand for
<span class="math-container">$$
\lim_{n \to \infty}\sum_{k=0}^{n}ar^k \, ,
$$</span>
and <span class="math-container">$ar+ar^2+ar^3+ar^4+\cdots$</span> a shorthand for
<span class="math-container">$$
\lim_{n \to \infty}\sum_{k=1}^{n}ar^k \, ,
$$</span>
then I am struggling to see how the terms actually cancel. Perhaps the third line can be written more formally as
<span class="math-container">\begin{align}
S-rS &= (a+ar+ar^2+ar^3+\cdots)-(ar+ar^2+ar^3+ar^4+\cdots) \\
&= \lim_{n \to \infty}\sum_{k=0}^{n}ar^k - \lim_{n \to \infty}\sum_{k=1}^{n}ar^k \\
&= \lim_{n \to \infty}\left(\sum_{k=0}^{n}ar^k - \sum_{k=1}^{n}ar^k\right) \\
&= \lim_{n \to \infty}\left(a +\sum_{k=1}^nar^k - \sum_{k=1}^{n}ar^k\right) \\
&= \lim_{n \to \infty}a \\
&= a \, .
\end{align}</span>
There is another concern I have. Geometric series only converge when <span class="math-container">$|r|<1$</span>. However, the argument used above seems to apply regardless of whether <span class="math-container">$|r|<1$</span>, which would yield nonsensical results such as
<span class="math-container">$$
1+2+4+8+\cdots = -1 \, .
$$</span>
So is the argument rigorous, and if so, why are my fears misplaced?</p>
| Gauge_name | 813,708 | <p>Your argument is rigorous if you know that your series converges. Namely, you first need to know that the series converges; only then may you use that method to compute its limit. Before you know that the series converges, the method is invalid since, if the series diverges, the first step subtracts two infinities (<span class="math-container">$S-rS = \infty - \infty$</span>). Your fears are indeed justified.</p>
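<p>A small numeric illustration (a sketch, not part of the original answer) of why $|r|<1$ matters:</p>

```python
# For |r| < 1 the partial sums approach a/(1-r); for r = 2 they blow up
# instead of approaching the "formal" value -1.
def partial_sum(a, r, n):
    return sum(a * r**k for k in range(n))

a = 1.0
assert abs(partial_sum(a, 0.5, 60) - a / (1 - 0.5)) < 1e-12
assert partial_sum(a, 2.0, 60) > 1e15       # nowhere near -1
```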
|
127,412 | <p>How can I take 3 random given names?</p>
<p>I tried:</p>
<p><a href="https://i.stack.imgur.com/BUYIT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BUYIT.png" alt="enter image description here"></a></p>
| Greg Hurst | 4,346 | <p>IMO the best way is <a href="http://reference.wolfram.com/language/ref/RandomEntity.html" rel="nofollow noreferrer"><code>RandomEntity</code></a> as <a href="https://mathematica.stackexchange.com/users/731/c-e">C.E.</a> points out in the comments:</p>
<pre><code>RandomEntity["GivenName", 3]
</code></pre>
<p><a href="https://i.stack.imgur.com/AMokP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AMokP.png" alt="enter image description here"></a></p>
<p>But another (deterministic) way is with the property <code>"SampleEntities"</code>:</p>
<pre><code>Take[EntityValue["GivenName", "SampleEntities"], UpTo[3]]
</code></pre>
<p><a href="https://i.stack.imgur.com/lKUBy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lKUBy.png" alt="enter image description here"></a></p>
|
127,412 | <p>How can I take 3 random given names?</p>
<p>I tried:</p>
<p><a href="https://i.stack.imgur.com/BUYIT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BUYIT.png" alt="enter image description here"></a></p>
| Anton Antonov | 34,008 | <p>Another approach is to use the Wolfram Function Repository function <a href="https://resources.wolframcloud.com/FunctionRepository/resources/RandomPetName/" rel="nofollow noreferrer"><code>RandomPetName</code></a>:</p>
<pre class="lang-mathematica prettyprint-override"><code>SeedRandom[33];
ResourceFunction["RandomPetName"][6]
(* {"Alphie", "Clef", "Dawg", "Xio Xin", "Cheyenne", "Audie"} *)
</code></pre>
<hr />
<p>A few of the names from the other answers <em>might be</em> generated by <code>RandomPetName</code>:</p>
<pre class="lang-mathematica prettyprint-override"><code>Cases[Normal[Normal /@ ResourceFunction["RandomPetName"][All]],
"Elizabeth" | "Michael" | "Melady" | "Jacob" | "Sue" | "Sauna" |
"Teko", Infinity] // Union
(* {"Elizabeth", "Jacob", "Michael", "Sue"} *)
</code></pre>
|
514,517 | <p>So this is what my book states:</p>
<p>Random variables $X$, $Y$, and $Z$ are said to form a Markov chain in that order, denoted $X\rightarrow Y \rightarrow Z$, if and only if:</p>
<p>$p(x,y,z)=p(x)p(y|x)p(z|y) $</p>
<p>That's great and all but that doesn't give any intuition as to what a Markov chain is or what it implies.</p>
<p>Can someone please give me more intuition as to how I should think about Markov chains?</p>
<p>Thanks a lot!!</p>
| Balbichi | 24,690 | <p>Hint: $f:[a,b]\to \mathbb{R}$ be continuous, such that $f(a)f(b)<0$. Then there exists $c\in(a,b)$ such that $f(c)=0$.</p>
|
1,480,511 | <p>I have included a screenshot of the problem I am working on for context. $T$ is a transformation. What is meant by $Tf$? Is it equivalent to $T(f)$ or does it mean $T$ times $f$?</p>
<p><a href="https://i.stack.imgur.com/LtRS1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LtRS1.png" alt="enter image description here"></a></p>
| vonbrand | 43,946 | <p>Use the <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow">rational root theorem</a>: If the polynomial $a_n x^n + \dotsb + a_0$ with integer coefficients, such that $a_n \ne 0$ and $a_0 \ne 0$ has a rational zero $u / v$, then $u$ divides $a_0$ and $v$ divides $a_n$.</p>
<p><strong>Proof:</strong> Substitute the zero, and multiply through by $v^n$ to get:</p>
<p>$\begin{align}
a_n u^n + a_{n - 1} u^{n - 1} v + \dotsb + a_1 u v^{n - 1} + a_0 v^n = 0
\end{align}$</p>
<p>The right hand side is divisible by $u$ and $v$, and so is the left. But on the left all terms are divisible by $u$, except possibly the last one, so $u$ must divide $a_0$. In the same way, $v$ divides $a_n$.</p>
<p>Now consider $x^2 - 15 = 0$. By the above, any rational root must be an integer ($u$ divides 15, while $v$ divides $1$). But $\sqrt{15}$ isn't an integer, so it is irrational.</p>
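<p>The finite check at the end is easy to carry out in Python (a hedged sketch, not part of the original answer):</p>

```python
# Any rational root of x^2 - 15 would be an integer dividing 15
# (u divides 15, v divides 1); none of the candidates works.
candidates = [d for d in range(-15, 16) if d != 0 and 15 % abs(d) == 0]
assert candidates == [-15, -5, -3, -1, 1, 3, 5, 15]
assert all(d * d - 15 != 0 for d in candidates)
```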
|
3,660,101 | <p>I want to determine if the series <span class="math-container">$ \sum_{n=2}^{\infty}\frac{\left(-1\right)^{n}}{\left(-1\right)^{n}+n} $</span> converges or diverges. The sequence in the denominator is not monotonic, so I can't use Dirichlet's or Abel's tests. My intuition is that this series converges, because it looks close to <span class="math-container">$ \sum_{n=2}^{\infty}\frac{\left(-1\right)^{n}}{n} $</span>, but I'm not sure how to prove it. Any ideas will help, thanks.</p>
| marty cohen | 13,079 | <p>Let
<span class="math-container">$s(m)
=\sum_{n=2}^{m}\dfrac{\left(-1\right)^{n}}{\left(-1\right)^{n}+n}
$</span>.
The terms go to zero,
so it is enough to show that
<span class="math-container">$s(2m+1)$</span>
converges.</p>
<p><span class="math-container">$\begin{array}\\
s(2m+1)
&=\sum_{n=2}^{2m+1}\dfrac{\left(-1\right)^{n}}{\left(-1\right)^{n}+n}\\
&=\sum_{n=1}^{m}(\dfrac{\left(-1\right)^{2n}}{\left(-1\right)^{2n}+2n}+\dfrac{\left(-1\right)^{2n+1}}{\left(-1\right)^{2n+1}+2n+1})\\
&=\sum_{n=1}^{m}(\dfrac{1}{1+2n}+\dfrac{-1}{-1+2n+1})\\
&=\sum_{n=1}^{m}(\dfrac{1}{1+2n}-\dfrac{1}{2n})\\
&=\sum_{n=1}^{m}\dfrac{-1}{2n(2n+1)}\\
\end{array}
$</span></p>
<p>and this sum converges
by comparison with
<span class="math-container">$\sum \dfrac1{4n^2}
$</span>.</p>
<p>To get an explicit bound,</p>
<p><span class="math-container">$\begin{array}\\
-s(2m+1)
&=\sum_{n=1}^{m}\dfrac{1}{2n(2n+1)}\\
&=\dfrac16+\sum_{n=2}^{m}\dfrac{1}{2n(2n+1)}\\
&<\dfrac16+\sum_{n=2}^{m}\dfrac{1}{2n(2n-2)}\\
&=\dfrac16+\dfrac14\sum_{n=2}^{m}\dfrac{1}{n(n-1)}\\
&=\dfrac16+\dfrac14\sum_{n=2}^{m}(\dfrac{1}{n-1}-\dfrac{1}{n})\\
&=\dfrac16+\dfrac14(1-\dfrac1{m})\\
&< \dfrac{7}{12}\\
\text{and}\\
-s(2m+1)
&=\sum_{n=1}^{m}\dfrac{1}{2n(2n+1)}\\
&=\dfrac16+\sum_{n=2}^{m}\dfrac{1}{2n(2n+1)}\\
&>\dfrac16+\sum_{n=2}^{m}\dfrac{1}{2n(2n+2)}\\
&=\dfrac16+\dfrac14\sum_{n=2}^{m}\dfrac{1}{n(n+1)}\\
&=\dfrac16+\dfrac14\sum_{n=2}^{m}(\dfrac1{n}-\dfrac1{n+1})\\
&=\dfrac16+\dfrac14(\frac12-\dfrac1{m+1})\\
&=\dfrac16+\dfrac18-\dfrac1{4(m+1)}\\
&=\dfrac{7}{24}-\dfrac1{4(m+1)}\\
\end{array}
$</span></p>
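<p>Since $-s(2m+1)=\sum_{n=1}^m\bigl(\frac1{2n}-\frac1{2n+1}\bigr)$ telescopes against the alternating harmonic series, the limit works out to $\ln 2 - 1 \approx -0.3069$ (this closed form is an editorial addition, not claimed above); a numeric check in Python agrees, and the value sits inside the bounds just derived:</p>

```python
# Partial sums of the original series approach ln(2) - 1, consistent
# with -7/12 < s(2m+1) < -7/24 + 1/(4(m+1)).
from math import log

def s(upper):
    return sum((-1)**n / ((-1)**n + n) for n in range(2, upper + 1))

m = 200_000
assert abs(s(2 * m + 1) - (log(2) - 1)) < 1e-5
```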
|
3,660,101 | <p>I want to determine if the series <span class="math-container">$ \sum_{n=2}^{\infty}\frac{\left(-1\right)^{n}}{\left(-1\right)^{n}+n} $</span> converges or diverges. The sequence in the denominator is not monotonic, so I can't use Dirichlet's or Abel's tests. My intuition is that this series converges, because it looks close to <span class="math-container">$ \sum_{n=2}^{\infty}\frac{\left(-1\right)^{n}}{n} $</span>, but I'm not sure how to prove it. Any ideas will help, thanks.</p>
| P. Lawrence | 545,558 | <p>The series <span class="math-container">$\sum_{n=2}^{\infty}\frac{(-1)^n}{n}$</span> converges by the alternating series test. <span class="math-container">$$\text{Your given series }-\sum_{n=2}^{\infty}\frac{(-1)^n}{n}=-\sum_{n=2}^{\infty}\frac{1}{n((-1)^n+n)}$$</span>The series <span class="math-container">$$\sum_{n=2}^{\infty}\frac{1}{n((-1)^n+n)}$$</span> converges by the limit comparison test with the convergent <span class="math-container">$p-$</span>series
<span class="math-container">$$\sum_{n=2}^{\infty}\frac{1}{n^2}.$$</span> Thus your given series is the difference of two convergent series and hence your given series also converges.</p>
|
2,451,469 | <p>I have a system of differential equations:</p>
<p>$$x(t)'' + a \cdot x(t)' = j(t)$$
$$j(t)' = -b \cdot j(t) - x(t)' + u(t)$$</p>
<p>The task is: Substitute $v(t) = x(t)'$ into the system and rewrite the system as 3 coupled linear
differential equations of the same form (with
$y(t) = x(t)$ the solution sought), with the time-dependent vector function $x(t)=(x(t), v(t), j(t))$.</p>
<p>Write down the system matrix $A$, and the vectors $b$ and $d$ explicitly.</p>
<p>Can anyone guide me on how to write the 3 equations?</p>
| Koto | 355,087 | <p>You just need to reduce the order of the first equation. If $v=x'$, then $v'=x''$ and the first equation can be written as the system $$v'=j-av$$ $$x'=v$$ so adding $j'=-bj-v+u$, you get a system in the form $X'=AX+b$, where $X=(x,v,j)^T$ and $$A=\begin{bmatrix}0&1&0\\0 &-a&1\\0 & -1&-b\end{bmatrix}$$ and $b=(0,0,u)^T$. </p>
<p>I omitted the time variable, because every function here depends on it, so there's no confusion and I couldn't figure out what $d$ was supposed to be.</p>
|
3,460 | <p>I asked the question "<a href="https://mathoverflow.net/questions/284824/averaging-2-omegan-over-a-region">Averaging $2^{\omega(n)}$ over a region</a>" because this is a necessary step in a research paper I am writing. The answer is detailed and does exactly what I need, and it would be convenient to directly cite the result. However, the author of the answer is anonymous... how would one deal with such a situation? I could of course very easily just reproduce the argument in my paper, but that would be academically dishonest.</p>
| Andy Putman | 317 | <p>I think you should just reproduce the argument in your paper while attributing it to the user in question (with a link to the question). As long as you give the correct attribution, there is nothing academically dishonest about this.</p>
<p>I had to do this in one of my papers; see the top of page 20 of <a href="https://www3.nd.edu/~andyp/papers/HighLevel.pdf" rel="nofollow noreferrer">this</a>.</p>
<p>I don't think it is any different from including an argument that a non-anonymous person told you. People do this all the time, and there is nothing wrong with it as long as you indicate who told you the argument (assuming it isn't <em>really</em> standard, in which case you can just thank them in the acknowledgements). For example, I did this in the "Proof of Theorem B" on page 3 of the paper I linked to above (which was explained to me by Eduard Looijenga).</p>
|
3,002,874 | <p>I found this limit in a book, without any explanation:</p>
<p><span class="math-container">$$\lim_{n\to\infty}\left(\sum_{k=0}^{n-1}(\zeta(2)-H_{k,2})-H_n\right)=1$$</span></p>
<p>where <span class="math-container">$H_{k,2}:=\sum_{j=1}^k\frac1{j^2}$</span>. However, I'm unable to find the value of this limit by myself. After some work I get the equivalent expression</p>
<p><span class="math-container">$$\lim_{n\to\infty}\sum_{k=0}^{n-1}\sum_{j=k}^\infty\frac1{(j+1)^2(j+2)}$$</span></p>
<p>but anyway I'm stuck here. Can someone show me a way to compute this limit? Thank you.</p>
<p>UPDATE: Wolfram Mathematica computed its value perfectly, so I guess there is some integral or algebraic identity from which to calculate it.</p>
| Jack D'Aurizio | 44,121 | <p>Let's see:</p>
<p><span class="math-container">$$\begin{eqnarray*} \sum_{k=0}^{n-1}\left(\zeta(2)-H_k^{(2)}\right) &=& \zeta(2)+\sum_{k=1}^{n-1}\left(\zeta(2)-H_k^{(2)}\right)\\&\stackrel{\text{SBP}}{=}&\zeta(2)+(n-1)(\zeta(2)-H_{n-1}^{(2)})+\sum_{k=1}^{n-2}\frac{k}{(k+1)^2}\\&=&\zeta(2)+(n-1)(\zeta(2)-H_{n-1}^{(2)})+(H_{n-1}-1)-\sum_{k=1}^{n-2}\frac{1}{(k+1)^2}\\&=&n( \zeta(2)-H_{n-1}^{(2)})+H_{n-1}\end{eqnarray*}$$</span>
hence the claim is equivalent to</p>
<p><span class="math-container">$$ \lim_{n\to +\infty} n(\zeta(2)-H_{n-1}^{(2)}) = \lim_{n\to +\infty}n\sum_{m\geq n}\frac{1}{m^2} = 1 $$</span>
which is pretty clear since <span class="math-container">$\sum_{m\geq n}\frac{1}{m^2} = O\left(\frac{1}{n^2}\right)+\int_{n}^{+\infty}\frac{dx}{x^2}=\frac{1}{n}+O\left(\frac{1}{n^2}\right)$</span>.<br>
<span class="math-container">$\text{SBP}$</span> stands for Summation By Parts, of course.</p>
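<p>A numeric check in Python (a hedged sketch, not part of the original answer) of the final limit:</p>

```python
# The original expression sum_{k=0}^{n-1} (zeta(2) - H_k^{(2)}) - H_n
# should approach 1 as n grows.
from math import pi

def expr(n):
    zeta2 = pi**2 / 6
    H2 = 0.0       # running value of H_k^{(2)}, starting at H_0^{(2)} = 0
    total = 0.0
    for k in range(n):
        total += zeta2 - H2
        H2 += 1.0 / (k + 1)**2
    Hn = sum(1.0 / j for j in range(1, n + 1))
    return total - Hn

assert abs(expr(4000) - 1.0) < 1e-3
```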
|
1,960,169 | <p><a href="http://puu.sh/rCwCy/c78a9ef78a.png" rel="nofollow noreferrer">Asymptote http://puu.sh/rCwCy/c78a9ef78a.png</a></p>
<p>Well my thinking was if the asymptote is at x = 4, it will reach as close to 4 as possible but will never reach 4, meaning it's not defined at 4. </p>
| Frank | 332,250 | <p>The vertex can be found by substituting $x = -\frac {b}{2a}$, given the form $ax^2+bx+c=0$.</p>
<p>So with your example $x^2+4x-5=0$, we have $$a=1\\b=4\\c=-5\tag{1}$$
So $-\frac {b}{2a}=-\frac {4}{2}=-2$. Plugging that into the quadratic gives $$f(-2)=4-8-5=-9\tag{2}$$
Therefore, the vertex is $(-2,-9)$.</p>
|
1,904,553 | <blockquote>
<p>$$\displaystyle\lim_{x\to0}\left(\frac{1}{x^5}\int_0^xe^{-t^2}\,dt-\frac{1}{x^4}+\frac{1}{3x^2}\right)$$</p>
</blockquote>
<p>I have to calculate this limit. Since the first term takes the form $\frac 00$, I apply L'Hospital's rule. But after that all the terms take the form $\frac 10$, so according to me the limit is $\infty$. But in my book it is given as $\frac{1}{10}$. How should I solve it?</p>
| zhw. | 228,045 | <p>Hint: $e^{-u} = 1-u+u^2/2 + O(u^3).$</p>
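<p>A numeric check (a hedged sketch, not part of the original answer) using the Taylor series of the integrand, which is the hint's expansion carried a bit further:</p>

```python
# Termwise integration gives
#   integral_0^x e^{-t^2} dt = sum_k (-1)^k x^(2k+1) / (k! (2k+1)),
# so the expression tends to the coefficient 1/10 as x -> 0.
from math import factorial

def integral(x, terms=30):
    return sum((-1)**k * x**(2 * k + 1) / (factorial(k) * (2 * k + 1))
               for k in range(terms))

def expr(x):
    return integral(x) / x**5 - 1 / x**4 + 1 / (3 * x**2)

assert abs(expr(0.1) - 0.1) < 1e-3
```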
|
4,600,131 | <blockquote>
<p>If <span class="math-container">$$f(x)=\binom{n}{1}(x-1)^2-\binom{n}{2}(x-2)^2+\cdots+(-1)^{n-1}\binom{n}{n}(x-n)^2$$</span>
Find the value of <span class="math-container">$$\int_0^1f(x)dx$$</span></p>
</blockquote>
<p>I rewrote this into a compact form.
<span class="math-container">$$\sum_{k=1}^n\binom{n}{k}(x-k)^2(-1)^{k-1}$$</span>
Now,
<span class="math-container">$$\int_0^1\sum_{k=1}^n\binom{n}{k}(x-k)^2(-1)^{k-1}dx$$</span>
<span class="math-container">$$=\sum_{k=1}^n\binom{n}{k}\frac{(1-k)^3}{3}(-1)^{k-1}-\sum_{k=1}^n\binom{n}{k}\frac{(-k)^3}{3}(-1)^{k-1}$$</span>
<span class="math-container">$$=\sum_{k=1}^n\binom{n}{k}\frac{(1-k)^3}{3}(-1)^{k-1}+\sum_{k=1}^n\binom{n}{k}\frac{k^3}{3}(-1)^{k-1}$$</span>
After this, I took <span class="math-container">$\dfrac13$</span> common and did some simplifications but nothing useful came out.</p>
<p>Any help is greatly appreciated.</p>
| Sangchul Lee | 9,340 | <p><strong>Solution 1.</strong> Here is another approach. Consider the shift operator <span class="math-container">$\Delta$</span> defined for functions on <span class="math-container">$\mathbb{R}$</span> by</p>
<p><span class="math-container">$$ \Delta f(x) = f(x-1). $$</span></p>
<p>Then</p>
<p><span class="math-container">\begin{align*}
\sum_{k=1}^{n} (-1)^{k-1} \binom{n}{k} (x-k)^2
&= \sum_{k=1}^{n} (-1)^{k-1} \binom{n}{k} \Delta^k x^2 \\
&= [\operatorname{id} - (\operatorname{id} - \Delta)^n] x^2.
\end{align*}</span></p>
<p>Here, <span class="math-container">$\operatorname{id}$</span> is the identity operator on functions, i.e., <span class="math-container">$\operatorname{id} f(x) = f(x)$</span>. Now the crucial observation is as follows:</p>
<blockquote>
<p>The backward difference operator <span class="math-container">$D = \operatorname{id} - \Delta$</span>, when applied to a polynomial, results in another polynomial with degree decreased by at least one.</p>
</blockquote>
<p>Intuitively, this is because <span class="math-container">$D$</span> behaves similar to the differential operator <span class="math-container">$\frac{\mathrm{d}}{\mathrm{d}x}$</span>. In particular, when <span class="math-container">$n \geq 3$</span>, it follows that <span class="math-container">$D^n x^2 = 0$</span>. Hence it follows that</p>
<p><span class="math-container">$$ \sum_{k=1}^{n} (-1)^{k-1} \binom{n}{k} (x-k)^2 = x^2 \qquad\text{for}\quad n \geq 3. $$</span></p>
<p>Now the rest computation is straightforward.</p>
<hr />
<p><strong>Solution 2.</strong> Here is yet another approach. Define the coefficient extraction operator <span class="math-container">$[x^n]$</span> by</p>
<p><span class="math-container">$$ [x^n]\sum_{k=0}^{\infty} a_k x^k = a_n. $$</span></p>
<p>Note that <span class="math-container">$[x^n]$</span> is linear. Furthermore, we may rewrite the integral using this operator as:</p>
<p><span class="math-container">\begin{align*}
\int_{0}^{1} f(x) \, \mathrm{d}x
&= \int_{0}^{1} \left( \sum_{k=1}^{n} (-1)^{k-1} \binom{n}{k} (x-k)^2 \right) \, \mathrm{d}x \\
&= \int_{0}^{1} \left( \sum_{k=1}^{n} (-1)^{k-1} \binom{n}{k} 2 [s^2] e^{(x-k)s} \right) \, \mathrm{d}x \\
&= 2 [s^2] \int_{0}^{1} \left( \sum_{k=1}^{n} (-1)^{k-1} \binom{n}{k} e^{(x-k)s} \right) \, \mathrm{d}x \\
&= 2 [s^2] \int_{0}^{1} e^{xs} \left( 1 - (1 - e^{-s})^n \right) \, \mathrm{d}x \\
&= 2 [s^2] \left( \frac{e^s - 1}{s} \left( 1 - (1 - e^{-s})^n \right) \right).
\end{align*}</span></p>
<p>Then, using the fact that</p>
<p><span class="math-container">$$ \frac{e^s - 1}{s} = 1 + \frac{s}{2} + \frac{s^2}{6} + \mathcal{O}(s^3) $$</span></p>
<p>and</p>
<p><span class="math-container">$$ 1-(1-e^{-s})^n = \begin{cases}
1 - s + \frac{s^2}{2} + \mathcal{O}(s^3), & n = 1, \\
1 - s^2 + \mathcal{O}(s^3), & n =2, \\
1 + \mathcal{O}(s^3), & n \geq 3,
\end{cases} $$</span></p>
<p>we can easily conclude that</p>
<p><span class="math-container">$$ \int_{0}^{1} f(x) \, \mathrm{d}x = \begin{cases}
\frac{1}{3}, & n = 1 \text{ or } n \geq 3, \\
-\frac{5}{3}, & n = 2.
\end{cases} $$</span></p>
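<p>The case distinction can be double-checked by integrating the sum termwise, using only <span class="math-container">$\int_0^1 (x-k)^2 \, \mathrm{d}x = ((1-k)^3 + k^3)/3$</span>; a small Python sketch:</p>

```python
from math import comb

def integral(n):
    # ∫_0^1 f(x) dx, integrating each (x - k)^2 term exactly:
    # ∫_0^1 (x - k)^2 dx = ((1 - k)^3 - (-k)^3) / 3 = ((1 - k)^3 + k^3) / 3
    return sum((-1) ** (k - 1) * comb(n, k) * ((1 - k) ** 3 + k ** 3) / 3
               for k in range(1, n + 1))

assert abs(integral(1) - 1 / 3) < 1e-6   # n = 1
assert abs(integral(2) + 5 / 3) < 1e-6   # n = 2: the exceptional value -5/3
for n in range(3, 10):
    assert abs(integral(n) - 1 / 3) < 1e-6
```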
|
4,600,131 | <blockquote>
<p>If <span class="math-container">$$f(x)=\binom{n}{1}(x-1)^2-\binom{n}{2}(x-2)^2+\cdots+(-1)^{n-1}\binom{n}{n}(x-n)^2$$</span>
Find the value of <span class="math-container">$$\int_0^1f(x)dx$$</span></p>
</blockquote>
<p>I rewrote this into a compact form.
<span class="math-container">$$\sum_{k=1}^n\binom{n}{k}(x-k)^2(-1)^{k-1}$$</span>
Now,
<span class="math-container">$$\int_0^1\sum_{k=1}^n\binom{n}{k}(x-k)^2(-1)^{k-1}dx$$</span>
<span class="math-container">$$=\sum_{k=1}^n\binom{n}{k}\frac{(1-k)^3}{3}(-1)^{k-1}-\sum_{k=1}^n\binom{n}{k}\frac{(-k)^3}{3}(-1)^{k-1}$$</span>
<span class="math-container">$$=\sum_{k=1}^n\binom{n}{k}\frac{(1-k)^3}{3}(-1)^{k-1}+\sum_{k=1}^n\binom{n}{k}\frac{k^3}{3}(-1)^{k-1}$$</span>
After this, I factored out <span class="math-container">$\dfrac13$</span> and did some simplifications, but nothing useful came out.</p>
<p>Any help is greatly appreciated.</p>
| Alexander Burstein | 499,816 | <p>We wish to evaluate the sum <span class="math-container">$$\sum_{k=0}^{n}(-1)^k\binom{n}{k}(x-k)^2=x^2-\sum_{k=1}^{n}(-1)^{k-1}\binom{n}{k}(x-k)^2.$$</span></p>
<p>Consider an <span class="math-container">$x\times x$</span> board, where <span class="math-container">$x\ge n$</span> is an integer, in which we want to choose a single square, and the list of properties <span class="math-container">$\Omega=\{P_i\mid i=1,\dots,n\}$</span>, where the property <span class="math-container">$P_i$</span> is "the <span class="math-container">$i$</span>th row and the <span class="math-container">$i$</span>th column are empty", i.e., the chosen square lies in neither row <span class="math-container">$i$</span> nor column <span class="math-container">$i$</span>.</p>
<p>Let <span class="math-container">$S\subseteq[n]=\{1,\dots,n\}$</span>, then the number of ways to choose a single square on the board that satisfies the list of properties in <span class="math-container">$S$</span> is <span class="math-container">$$N(\supseteq S)=(x-|S|)^2.$$</span></p>
<p>There are <span class="math-container">$\binom{n}{k}$</span> sets <span class="math-container">$S\subseteq[n]$</span> with <span class="math-container">$|S|=k$</span>, so <span class="math-container">$$N_k=\sum_{S\,:\,|S|=k}N(\supseteq S)=\binom{n}{k}(x-k)^2.$$</span></p>
<p>Therefore, by the Inclusion-Exclusion Principle, the number of ways to choose a single square satisfying none of the properties in <span class="math-container">$\Omega$</span> is
<span class="math-container">$$
e_0(x,n)=\sum_{k=0}^{n}(-1)^k N_k=\sum_{k=0}^{n}(-1)^k\binom{n}{k}(x-k)^2,
$$</span>
exactly the sum we want to evaluate. But any choice of a single square is a choice of 1 row and 1 column, so at most 2 such properties can be violated at the same time. Thus, we have the following.</p>
<p>If <span class="math-container">$n\ge 3$</span>, then <span class="math-container">$e_0(x,n)=0$</span>.</p>
<p>If <span class="math-container">$n=2$</span>, there are <span class="math-container">$2$</span> such choices: (row 1, column 2) and (row 2, column 1), so <span class="math-container">$e_0(x,2)=2$</span>.</p>
<p>If <span class="math-container">$n=1$</span>, there are <span class="math-container">$2x-1$</span> such choices, since the chosen square must be in row 1 or column 1, so <span class="math-container">$e_0(x,1)=2x-1$</span>.</p>
<p>If <span class="math-container">$n=0$</span>, there are no properties at all, so every one of the <span class="math-container">$x^2$</span> squares qualifies, and <span class="math-container">$e_0(x,0)=x^2$</span> (which is just the <span class="math-container">$k=0$</span> term of the sum).</p>
<p>Finally, these formulas apply for any integer <span class="math-container">$x\ge n$</span>, and since <span class="math-container">$e_0(x,n)$</span> is a polynomial in <span class="math-container">$x$</span>, they must hold identically.</p>
<p>This can easily be generalized in various ways. For example,
<span class="math-container">$$
\sum_{k=0}^{n}(-1)^k\binom{n}{k}(x-k)^m=
\begin{cases}
m!\,, & \text{ if } n=m,\\
0\,, & \text{ if } n>m.
\end{cases}
$$</span></p>
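<p>The generalized identity at the end can be verified numerically as well; a short Python sketch over integer inputs (so the arithmetic is exact):</p>

```python
from math import comb, factorial

def e0(n, m, x):
    # sum_{k=0}^{n} (-1)^k * C(n, k) * (x - k)^m
    return sum((-1) ** k * comb(n, k) * (x - k) ** m for k in range(n + 1))

for m in range(6):
    for x in (0, 3, 11):
        assert e0(m, m, x) == factorial(m)   # n = m gives m!
        for n in range(m + 1, m + 4):
            assert e0(n, m, x) == 0          # n > m gives 0
```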
|
178,473 | <p>The dependent choice principle ${\rm DC}_\kappa$ states that if $S$ is a nonempty set and $R$ is a binary relation such that for every $s\in S^{\lt\kappa}$, there is $x\in S$ with $sRx$, then there is a function $f:\kappa\to S$ such that for every $\alpha<\kappa$, $f\upharpoonright\alpha R f(\alpha)$. The axiom of choice fragment ${\rm AC}_\kappa$ states that every family of size $\kappa$ has a choice function. There are several classical theorems (see Jech's "Axiom of Choice", chapter 8) concerning the relationship between the dependent choice principles and fragments of the axiom of choice.</p>
<p><strong>Theorem 1</strong>: Over ${\rm ZF}$, ${\rm AC}$ is equivalent to $\forall\kappa\,{\rm DC}_\kappa$.</p>
<p><strong>Theorem 2</strong>: Over ${\rm ZF}$, $\forall \kappa\,{\rm AC}_\kappa$ implies ${\rm DC}_\omega$.</p>
<p><strong>Theorem 3</strong>: It is consistent with ${\rm ZF}$ that $\forall \kappa\,{\rm AC_\kappa}$ holds but ${\rm DC_{\omega_1}}$ fails (theorem 8.9). </p>
<p><strong>Theorem 4</strong>: It is consistent with ${\rm ZF}$ that ${\rm AC}_\kappa$ holds for some cardinal $\kappa\gg\omega$ but ${\rm DC}_\omega$ fails (theorem 8.12).</p>
<p>Jech proves theorems 3 and 4 using permutation models (and then discusses how to obtain ${\rm ZF}$-models with the same properties). But I am wondering whether there are direct symmetric model constructions for these two results. Either a reference for the arguments or the arguments themselves would be appreciated.</p>
| Asaf Karagila | 7,206 | <p>The idea is to mimic the permutation models as given in Jech. One can then ask, "Well, in Jech he chooses some set of objects in the full universe, and shows it has a support. But in forcing we don't have a simple access to names like that, since they might not be "sufficiently determined" for us to collect them into a symmetric name!"</p>
<p>To counter the effects of this problem here is a generalized formulation of The Continuity Lemma, as Felgner called it (for the basic Cohen model). <span class="math-container">$\newcommand{\PP}{\Bbb{P}}\newcommand{\dom}{\operatorname{dom}}\newcommand{\fix}{\operatorname{fix}}\newcommand{\sym}{\operatorname{sym}}\newcommand{\forces}{\Vdash}$</span></p>
<blockquote>
<p>Suppose that <span class="math-container">$\PP$</span> is a Cohen type forcing, with conditions <span class="math-container">$p\colon A\times\kappa\to2$</span> whose domain has size <span class="math-container">$<\kappa$</span>, ordered by reverse inclusion. We write <span class="math-container">$s(p)$</span> for the projection of <span class="math-container">$\dom p$</span> onto <span class="math-container">$A$</span>.</p>
<p>Let <span class="math-container">$\scr G$</span> be a group of permutations of <span class="math-container">$A$</span> acting on <span class="math-container">$\PP$</span> naturally: <span class="math-container">$\pi p(\pi a,\alpha)=p(a,\alpha)$</span>. And let <span class="math-container">$I$</span> be an ideal on <span class="math-container">$A$</span> which is closed under <span class="math-container">$\scr G$</span>, and <span class="math-container">$s(p)\in I$</span> for all <span class="math-container">$p\in\PP$</span>. Moreover, assume that whenever <span class="math-container">$X,Y\in I$</span> there is a permutation in <span class="math-container">$\scr G$</span> such that <span class="math-container">$\pi\upharpoonright(X\cap Y)=\operatorname{id}$</span> and <span class="math-container">$\pi''(X\setminus Y)$</span> is disjoint from <span class="math-container">$Y$</span>.</p>
<p><strong>Then</strong> whenever <span class="math-container">$\dot x_1,\ldots,\dot x_n$</span> are symmetric names with respect to the filter generated by <span class="math-container">$\{\fix(E)\mid E\in I\}$</span> and <span class="math-container">$E\in I$</span> such that <span class="math-container">$\fix(E)$</span> is a subgroup of <span class="math-container">$\sym(\dot x_i)$</span> for each <span class="math-container">$i$</span>, and <span class="math-container">$p\forces^{\sf HS}\varphi(\dot x_1,\ldots,\dot x_n)$</span> then <span class="math-container">$p\upharpoonright(E\times\kappa)\forces^{\sf HS}\varphi(\dot x_1,\ldots,\dot x_n)$</span>.</p>
</blockquote>
<p>(If you are uncomfortable with the notion of <span class="math-container">$\forces^{\sf HS}$</span> you can instead require <span class="math-container">$\varphi$</span> to be <span class="math-container">$\Delta_0$</span>, and replace the relativized quantifiers by names one at a time.)</p>
<p>Okay let me explain this for a second, since there are plenty of conditions and plenty of more conditions in the consequences. The idea is that if <span class="math-container">$p$</span> forces that something happens in the symmetric model, about concrete symmetric names, then we can restrict <span class="math-container">$p$</span> to something whose support is in the ideal, and already this decides the same value for the same statement with the same names. In our two examples, all the conditions will be easy to verify.</p>
<hr />
<h2>Theorem I:</h2>
<blockquote>
<p>Let <span class="math-container">$\kappa$</span> be a regular cardinal. Then it is consistent that <span class="math-container">$\sf DC_\kappa$</span> holds, <span class="math-container">$\sf W_{\kappa^+}$</span> fails, and <span class="math-container">$\sf AC_\lambda$</span> holds for every ordinal <span class="math-container">$\lambda$</span>.</p>
</blockquote>
<h3>Proof.</h3>
<p>We take <span class="math-container">$\PP$</span> to be functions from <span class="math-container">$\kappa^+\times\kappa\to2$</span> with domain smaller than <span class="math-container">$\kappa$</span>. <span class="math-container">$\scr G$</span> here is the group of all permutations of <span class="math-container">$\kappa^+$</span> and <span class="math-container">$I$</span> is <span class="math-container">$[\kappa^+]^{\leq\kappa}$</span>. So the conditions easily hold, and just to remind you here our filter of subgroups is the one generated by <span class="math-container">$\{\fix(E)\mid E\in I\}$</span>, and it is normal since <span class="math-container">$I$</span> is closed under the operation of <span class="math-container">$\scr G$</span>, and <span class="math-container">$\cal F$</span> is <span class="math-container">$\kappa^+$</span>-complete since <span class="math-container">$I$</span> is <span class="math-container">$\kappa^+$</span>-complete.</p>
<p>If <span class="math-container">$G$</span> is a generic filter, we let <span class="math-container">$a_\alpha=\{\beta\mid\exists p\in G: p(\alpha,\beta)=1\}$</span>, and <span class="math-container">$\dot a_\alpha$</span> is going to be the canonical name for this set. Additionally, <span class="math-container">$A$</span> is the set of all these <span class="math-container">$a_\alpha$</span> and <span class="math-container">$\dot A$</span> will be its canonical name. Let <span class="math-container">$N$</span> be a symmetric model defined by <span class="math-container">$\cal F$</span> given above, then by standard arguments <span class="math-container">$A$</span> is in <span class="math-container">$N$</span>.</p>
<p>First off, if the forcing is <span class="math-container">$\kappa^+$</span>-c.c. and the filter is <span class="math-container">$\kappa^+$</span>-complete, then <span class="math-container">$\sf DC_\kappa$</span> holds in the symmetric model, and this is the case here assuming suitable <span class="math-container">$\sf GCH$</span>. This much is easy to verify (see my paper <a href="https://dx.doi.org/10.4064/ba8169-12-2018" rel="nofollow noreferrer">"Preserving Dependent Choice"</a> for that). So we have this for almost free.</p>
<p>Secondly, <span class="math-container">$\sf W_{\kappa^+}$</span> fails since <span class="math-container">$|A|$</span> and <span class="math-container">$\kappa^+$</span> are incomparable. This is a standard proof, like the one in Cohen's first model with the Dedekind-finite set of real numbers.</p>
<p>The big trick is to show that given a function <span class="math-container">$X\colon\lambda\to N\setminus\{\varnothing\}$</span> then we want to find a function <span class="math-container">$g$</span> with domain <span class="math-container">$\lambda$</span> such that <span class="math-container">$g(i)\in X(i)$</span>. Suppose that <span class="math-container">$p\in G$</span> and <span class="math-container">$\dot X$</span> is a hereditarily symmetric name for <span class="math-container">$X$</span> such that <span class="math-container">$p$</span> forces <span class="math-container">$\dot X$</span> has the above properties.</p>
<p>Let <span class="math-container">$E\in I$</span> be a support for <span class="math-container">$\dot X$</span>, namely if <span class="math-container">$\pi\in\fix(E)$</span> then <span class="math-container">$\pi\dot X=\dot X$</span>. Without loss of generality <span class="math-container">$s(p)\subseteq E$</span> and <span class="math-container">$|E|=\kappa$</span>. Pick some <span class="math-container">$E'$</span> disjoint to <span class="math-container">$E$</span> and <span class="math-container">$|E'|=|E|$</span>. We will find a choice function with support <span class="math-container">$E\cup E'$</span>.</p>
<p>For each <span class="math-container">$\alpha<\lambda$</span>, find a maximal antichain below <span class="math-container">$p$</span>, <span class="math-container">$D=\{q_\gamma\mid\gamma<\kappa\}$</span>, such that there is a hereditarily symmetric name <span class="math-container">$\dot y_\gamma$</span> for which <span class="math-container">$q_\gamma\forces\dot y_\gamma\in\dot X(\check\alpha)$</span>. Let <span class="math-container">$E_\gamma$</span> be such that <span class="math-container">$s(q_\gamma)\subseteq E_\gamma$</span> and <span class="math-container">$\fix(E_\gamma)\subseteq\sym(\dot y_\gamma)$</span>.</p>
<p>Now, find <span class="math-container">$\pi\in\fix(E)$</span> such that <span class="math-container">$\pi\colon\bigcup_{\gamma<\kappa}E_\gamma\to E\cup E'$</span> (it need not be surjective between the two sets, just a permutation of <span class="math-container">$\kappa^+$</span> mapping the points outside of <span class="math-container">$E$</span> into <span class="math-container">$E'$</span>). Note that <span class="math-container">$\{\pi q_\gamma\mid\gamma<\kappa\}$</span> remain a maximal antichain below <span class="math-container">$p$</span>. But now, <span class="math-container">$\sym(\pi\dot y_\gamma)$</span> contains <span class="math-container">$\fix(E\cup E')$</span>.</p>
<p>Finally, let <span class="math-container">$\dot x_\alpha$</span> denote the name mixed over the <span class="math-container">$\pi q_\gamma$</span> from the <span class="math-container">$\pi\dot y_\gamma$</span>. Namely, <span class="math-container">$\pi q_\gamma\forces\dot x_\alpha=\pi\dot y_\gamma$</span>. This can be done in a way that ensures that <span class="math-container">$\dot x_\alpha$</span> is hereditarily symmetric, since all the <span class="math-container">$\pi\dot y_\gamma$</span> and the <span class="math-container">$\pi q_\gamma$</span> have a common support, namely <span class="math-container">$E\cup E'$</span>.</p>
<p>Now define <span class="math-container">$\dot g=\{(p,(\check\alpha,\dot x_\alpha)^\bullet)\mid\alpha<\lambda\}$</span>, and it is easy to see that <span class="math-container">$p\forces\dot g(\check\alpha)\in\dot X(\check\alpha)$</span> and that <span class="math-container">$\dot g$</span> is hereditarily symmetric as wanted. <span class="math-container">$\square$</span></p>
<hr />
<h2>Theorem II:</h2>
<blockquote>
<p>If <span class="math-container">$\kappa$</span> is uncountable, then it is consistent that <span class="math-container">$\sf DC$</span> fails, while <span class="math-container">$\sf W_{<\kappa}$</span> and <span class="math-container">$\sf AC_{<\kappa}$</span> both hold.</p>
</blockquote>
<h3>Proof.</h3>
<p>Let me skimp out on most of the details. We take <span class="math-container">$A$</span> in this case to be <span class="math-container">$\kappa^{<\omega}$</span> (you can replace <span class="math-container">$\omega$</span> here by the least cardinal for which you want <span class="math-container">$\sf DC$</span> to fail). Our automorphism group is going to be the automorphisms of the tree <span class="math-container">$\kappa^{<\omega}$</span> and the ideal of supports is the ideal of subtrees of cardinality less than <span class="math-container">$\kappa$</span> with no branches (in the case of <span class="math-container">$\sf DC$</span> these are really the subtrees which are well-founded).</p>
<p>For <span class="math-container">$t\in\kappa^{<\omega}$</span> define <span class="math-container">$a_t$</span> as the Cohen set defined when fixing <span class="math-container">$t$</span> and <span class="math-container">$A$</span> as the set of all <span class="math-container">$a_t$</span>'s. Then the structure of <span class="math-container">$\kappa^{<\omega}$</span> is fixed trivially by the automorphisms, so <span class="math-container">$A$</span> has a tree structure but no branches (since a branch would require a support with an unbounded tree). Therefore <span class="math-container">$\sf DC$</span> fails.</p>
<p>To show that <span class="math-container">$\sf W_\lambda$</span> or <span class="math-container">$\sf AC_\lambda$</span> hold, for <span class="math-container">$\lambda<\kappa$</span>, we perform a trick similar to the previous proof. However here we need to be slightly more careful. But we can also notice that the union of trees whose intersection is without branches is also without branches. Therefore the union of any less than <span class="math-container">$\kappa$</span> "almost disjoint" supports is a support.</p>
<p>So here we take a name and by induction we construct a sequence of conditions and names which witness <span class="math-container">$\sf W_\lambda$</span> or <span class="math-container">$\sf AC_\lambda$</span>, simply by ensuring that the next name we take has a support which extends the previously chosen names "sideways" and not "up". This guarantees that the union of the symmetric names for the functions at limit steps is a function. And again the generalized continuity lemma ensures we can always restrict back to smaller conditions as we progress, to ensure that their support is in the ideal.</p>
|
178,473 | <p>The dependent choice principle ${\rm DC}_\kappa$ states that if $S$ is a nonempty set and $R$ is a binary relation such that for every $s\in S^{\lt\kappa}$, there is $x\in S$ with $sRx$, then there is a function $f:\kappa\to S$ such that for every $\alpha<\kappa$, $f\upharpoonright\alpha R f(\alpha)$. The axiom of choice fragment ${\rm AC}_\kappa$ states that every family of size $\kappa$ has a choice function. There are several classical theorems (see Jech's "Axiom of Choice", chapter 8) concerning the relationship between the dependent choice principles and fragments of the axiom of choice.</p>
<p><strong>Theorem 1</strong>: Over ${\rm ZF}$, ${\rm AC}$ is equivalent to $\forall\kappa\,{\rm DC}_\kappa$.</p>
<p><strong>Theorem 2</strong>: Over ${\rm ZF}$, $\forall \kappa\,{\rm AC}_\kappa$ implies ${\rm DC}_\omega$.</p>
<p><strong>Theorem 3</strong>: It is consistent with ${\rm ZF}$ that $\forall \kappa\,{\rm AC_\kappa}$ holds but ${\rm DC_{\omega_1}}$ fails (theorem 8.9). </p>
<p><strong>Theorem 4</strong>: It is consistent with ${\rm ZF}$ that ${\rm AC}_\kappa$ holds for some cardinal $\kappa\gg\omega$ but ${\rm DC}_\omega$ fails (theorem 8.12).</p>
<p>Jech proves theorems 3 and 4 using permutation models (and then discusses how to obtain ${\rm ZF}$-models with the same properties). But I am wondering whether there are direct symmetric model constructions for these two results. Either a reference for the arguments or the arguments themselves would be appreciated.</p>
| Lorenzo | 141,146 | <p>In the comments of Asaf's answer I explain why there is a problem in his proof of Theorem 1. In this answer I try to correct it by slightly modifying his argument. I'll keep the same notation as his answer.</p>
<hr />
<h2>Theorem I</h2>
<blockquote>
<p>Let <span class="math-container">$\kappa$</span> be a successor cardinal. Then it is consistent that <span class="math-container">$\text{DC}_{<\kappa}$</span> holds, <span class="math-container">$\text{W}_\kappa$</span> fails, and <span class="math-container">$(\forall \lambda \in \text{Ord})\ \text{AC}_\lambda$</span> holds.</p>
</blockquote>
<h2>Proof.</h2>
<p>Let <span class="math-container">$\mathbb{P}$</span>, as before, to be the functions from <span class="math-container">$\kappa\times\kappa \rightarrow 2$</span> with domain smaller than <span class="math-container">$\kappa$</span>. If <span class="math-container">$G$</span> is a generic filter, <span class="math-container">$\alpha \in \kappa$</span> and <span class="math-container">$s\subset\kappa$</span> has cardinality less than <span class="math-container">$\kappa$</span>, we let <span class="math-container">$$a_{\alpha, s} = \{\beta \mid \exists p \in G : (p(\alpha, \beta) = 1 \text{ and } \beta \not\in s) \text{ or } (p(\alpha, \beta) = 0 \text{ and } \beta \in s)\}$$</span> with <span class="math-container">$\dot{a}_{\alpha, s}$</span> being its canonical name, and for each <span class="math-container">$\alpha \in \kappa$</span> we let <span class="math-container">$$\begin{align*}a_\alpha &= \{a_{\alpha, s} \mid s \subset \kappa \text{ with } |s|<\kappa\}\\A &=\{a_\alpha \mid \alpha\in\kappa\} \end{align*}$$</span> with <span class="math-container">$\dot{a}_\alpha, \dot{A}$</span> being their canonical names.<br />
At this point, for every <span class="math-container">$X$</span> subset of <span class="math-container">$\kappa\times\kappa$</span>, <span class="math-container">$|X|< \kappa$</span> we let <span class="math-container">$\sigma_X$</span> be the automorphism of <span class="math-container">$\mathbb{P}$</span> that interchanges 0's and 1's at every point of <span class="math-container">$X$</span>, i.e. if <span class="math-container">$p \in \mathbb{P}$</span> then <span class="math-container">$(\sigma_X p) (\alpha, \beta) = 1-p(\alpha, \beta)$</span> if <span class="math-container">$(\alpha,\beta) \in X$</span> and <span class="math-container">$(\sigma_X p) (\alpha, \beta) = p(\alpha, \beta)$</span> otherwise.</p>
<p>We let our automorphism group <span class="math-container">$\cal G$</span> be generated by <span class="math-container">$\{\sigma_X \mid X\subset \kappa\times\kappa, |X|<\kappa\}$</span> and by the group of all permutations of <span class="math-container">$\kappa$</span> (the ones in Asaf' answer).<br />
Then we let <span class="math-container">$\cal F$</span> be the filter on <span class="math-container">$\cal G$</span> generated by <span class="math-container">$\{\text{fix}(E) \mid E\subset \kappa, |E|<\kappa\}$</span> with <span class="math-container">$$\text{fix}(E)=\{\pi \in \mathcal{G} \mid \pi (\dot{a}_{\alpha, s}) = \dot{a}_{\alpha, s} \text{ for all } s \text{ and }\alpha \in E\}$$</span></p>
<p>Let <span class="math-container">$N$</span> be the symmetric model defined by <span class="math-container">$\cal F$</span> above, then by standard arguments all <span class="math-container">$a_{\alpha, s}$</span>'s, all <span class="math-container">$a_\alpha$</span>'s and <span class="math-container">$A$</span> are in <span class="math-container">$N$</span>.</p>
<p>As before, since both our forcing and our filter are <span class="math-container">$\kappa$</span>-closed then <span class="math-container">$\text{DC}_{<\kappa}$</span> holds in <span class="math-container">$N$</span>. Moreover <span class="math-container">$W_\kappa$</span> fails since <span class="math-container">$|A|$</span> and <span class="math-container">$\kappa$</span> are incomparable.</p>
<p>Now, regarding <span class="math-container">$(\forall \lambda \in \text{Ord}) \text{AC}_\lambda$</span>, suppose that we have <span class="math-container">$X:\lambda \rightarrow N\setminus \{\emptyset\}$</span> in <span class="math-container">$N$</span>. Let <span class="math-container">$\dot{X}$</span> be a symmetric name for <span class="math-container">$X$</span> and let <span class="math-container">$p \in G$</span> such that <span class="math-container">$p$</span> forces that <span class="math-container">$\dot{X}$</span> has the above properties.</p>
<p>Let <span class="math-container">$E \in [\kappa]^{<\kappa}$</span> be a support for <span class="math-container">$\dot{X}$</span>. Pick some <span class="math-container">$E'$</span> disjoint from <span class="math-container">$E$</span> and such that <span class="math-container">$|E'|^+ = \kappa$</span>. We will find a choice function with support <span class="math-container">$E \cup E'$</span>.</p>
<p>First assume that <span class="math-container">$s(p)\subseteq E\cup E'$</span>. For each <span class="math-container">$\gamma<\lambda$</span> let <span class="math-container">$q\leq p$</span> such that for some symmetric <span class="math-container">$\dot y_\gamma$</span> we have that <span class="math-container">$q\Vdash\dot y_\gamma\in\dot X(\check\gamma)$</span>. Let <span class="math-container">$F$</span> be a support for <span class="math-container">$\dot y_\gamma$</span> and assume <span class="math-container">$s(q) \subseteq F$</span>, now we can find some <span class="math-container">$\pi\in\text{fix}(E)$</span> such that <span class="math-container">$\pi''F\subseteq E\cup E'$</span> and such that <span class="math-container">$\pi q$</span> is compatible with <span class="math-container">$p$</span> (first we use a permutation on <span class="math-container">$\kappa$</span> that sends <span class="math-container">$F$</span> in <span class="math-container">$E\cup E'$</span> and then we flip all the eventual bits that would make <span class="math-container">$p$</span> and <span class="math-container">$\pi q$</span> incompatible).<br />
Define <span class="math-container">$\dot x_\gamma=\pi\dot y_\gamma$</span> and let <span class="math-container">$q'$</span> extend both <span class="math-container">$p$</span> and <span class="math-container">$\pi q$</span> with <span class="math-container">$s(q')\subseteq E\cup E'$</span>, then <span class="math-container">$q' \le p$</span> and <span class="math-container">$q' \Vdash \dot x_\gamma\in\dot X(\check\gamma)$</span>.</p>
<p>So, wrapping up, starting with a condition <span class="math-container">$p$</span> with <span class="math-container">$s(p)\subseteq E \cup E'$</span> forcing that <span class="math-container">$\dot{X}$</span> has the wanted properties and some <span class="math-container">$\gamma < \lambda$</span>, we have found a symmetric name <span class="math-container">$\dot x_\gamma$</span> with support <span class="math-container">$E\cup E'$</span> and a condition <span class="math-container">$q' \le p$</span> with <span class="math-container">$s(q')\subseteq E \cup E'$</span> forcing that <span class="math-container">$\dot x_\gamma$</span> is an element of <span class="math-container">$\dot X(\check \gamma)$</span>.</p>
<p>For each <span class="math-container">$\gamma<\lambda$</span> now we can pick a maximal antichain (maximal wrt the functions <span class="math-container">$q$</span> with domain in <span class="math-container">$E\cup E'$</span>) below <span class="math-container">$p$</span> of conditions as above, <span class="math-container">$D_\gamma$</span> and names <span class="math-container">$\dot x_\gamma(q)$</span> as above. Then <span class="math-container">$\{(q,(\check\gamma,\dot x_\gamma(q))^\bullet)\mid q\in D_\gamma,\gamma<\lambda\}$</span> is a choice function and <span class="math-container">$E\cup E'$</span> is clearly a support for it. <span class="math-container">$\square$</span></p>
|
4,600,992 | <p>I have two sequences of random variables <span class="math-container">$\{ X_n\}$</span> and <span class="math-container">$\{Y_n \}$</span>. I know that
<span class="math-container">$X_n \to^d D, Y_n \to^d D$</span>. Can I conclude that <span class="math-container">$X_n - Y_n \to^p 0$</span>?</p>
<p>If I cannot, what other conditions do I need for the conclusion to hold? Thanks.</p>
| donaastor | 251,847 | <p>This is a way you could formally express your intuition:
<span class="math-container">$$f(x)=x\int_1^x\Big(\frac{1}{t}+\sum_{n=1}^\infty\frac{t^{n-1}}{n!}\Big)dt-e^x=x\int_1^x\frac{1}{t}dt+x\int_1^x\sum_{n=1}^\infty\frac{t^{n-1}}{n!}dt-e^x=$$</span>
<span class="math-container">$$=x\ln x+x\int_1^x\sum_{n=1}^\infty\frac{t^{n-1}}{n!}dt-e^x=x\ln x+x\sum_{n=1}^\infty\int_1^x\frac{t^{n-1}}{n!}dt-e^x=$$</span>
<span class="math-container">$$=x\ln x+x\sum_{n=1}^\infty\frac{x^n-1}{n\cdot n!}-e^x.$$</span>
The interchange of the integral and the sum is permitted by Fubini's theorem since all the summands are always positive. Now you just need to show that this final expression reaches infinity. You didn't include that in your intuitive part, but I will include it here too:
<span class="math-container">$$f(x)=x\ln x+x\sum_{n=1}^\infty\frac{x^n-1}{n\cdot n!}-e^x=x\ln x+x\sum_{n=1}^\infty\frac{x^n}{n\cdot n!}-x\sum_{n=1}^\infty\frac{1}{n\cdot n!}-e^x>$$</span>
<span class="math-container">$$>x\ln x+x\sum_{n=1}^\infty\frac{x^n}{(n+1)!}-x\sum_{n=1}^\infty\frac{1}{n!}-e^x=x\ln x+(e^x-x-1)-x\cdot e-e^x=$$</span>
<span class="math-container">$$=x(\ln x-1-e)-1>x\rightarrow\infty.$$</span>
The second to the last inequality holds whenever <span class="math-container">$\ln x>2+e+\frac{1}{x}$</span>, which happens for all <span class="math-container">$x$</span> sufficiently large (at about <span class="math-container">$113$</span>).</p>
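Since <span class="math-container">$\frac{1}{t}+\sum_{n\ge1}\frac{t^{n-1}}{n!}=\frac{e^t}{t}$</span>, the closed form derived above can be cross-checked against direct numerical integration of the original definition; a rough Python sketch (the truncation length and step count are ad hoc choices):

```python
import math

def f_series(x, terms=120):
    # closed form derived above: x*ln(x) + x*sum_{n>=1} (x^n - 1)/(n*n!) - e^x
    s = sum((x ** n - 1) / (n * math.factorial(n)) for n in range(1, terms))
    return x * math.log(x) + x * s - math.exp(x)

def f_direct(x, steps=20000):
    # original definition x * ∫_1^x e^t/t dt - e^x, via the midpoint rule,
    # using 1/t + sum_{n>=1} t^{n-1}/n! = e^t/t
    h = (x - 1.0) / steps
    total = sum(math.exp(1.0 + (i + 0.5) * h) / (1.0 + (i + 0.5) * h)
                for i in range(steps))
    return x * h * total - math.exp(x)

assert abs(f_series(1.0) + math.e) < 1e-9      # f(1) = -e
for x in (1.5, 2.0, 3.0):
    assert abs(f_series(x) - f_direct(x)) < 1e-3
```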
|
78,243 | <p>A positive integer $n$ is said to be <em>happy</em> if the sequence
$$n, s(n), s(s(n)), s(s(s(n))), \ldots$$
eventually reaches 1, where $s(n)$ denotes the sum of the squared digits of $n$.</p>
<p>For example, 7 is happy because the orbit of 7 under this mapping reaches 1.
$$7 \to 49 \to 97 \to 130 \to 10 \to 1$$
But 4 is not happy, because the orbit of 4 is an infinite loop that does not contain 1.
$$4 \to 16 \to 37 \to 58 \to 89 \to 145 \to 42 \to 20 \to 4 \to \ldots$$</p>
<p>I have tabulated the happy numbers up to $10^{10000}$, and it appears that they have a limiting density, although the rate of convergence is slow. Is it known if the happy numbers do in fact have a limiting density? In other words, does $\lim_{n\to\infty} h(n)/n$ exist, where $h(n)$ denotes the number of happy numbers less than $n$?</p>
<p><img src="https://dl.dropbox.com/u/39561574/happiness.jpg" alt="Relative frequency of happy numbers up to 1e10000"></p>
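<p>For reference, the tabulation described above needs only the map <span class="math-container">$s$</span> and cycle detection; a minimal Python sketch (the cutoff <code>N</code> is purely illustrative, far below the <span class="math-container">$10^{10000}$</span> used for the plot):</p>

```python
def s(n):
    # sum of the squared decimal digits of n
    return sum(int(d) ** 2 for d in str(n))

def is_happy(n):
    seen = set()
    # every unhappy orbit falls into the eight-element cycle shown above
    while n != 1 and n not in seen:
        seen.add(n)
        n = s(n)
    return n == 1

assert is_happy(7) and not is_happy(4)

# empirical density h(N)/N over the first N integers
N = 10000
h = sum(is_happy(n) for n in range(1, N + 1))
print(h / N)
```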
| David Moews | 17,657 | <p>I started working on this question after it was posted to MathOverflow and found bounds similar to those found by Justin Gilmer: upper asymptotic density of the happy numbers 0.1962 or greater, lower asymptotic density no more than 0.1217. However, I was also able to prove that the upper asymptotic density of the happy numbers was no more than 0.38; Gilmer mentioned in his paper that the question of whether the upper asymptotic density was less than 1 was still open.</p>
<p>A writeup of the result is at <a href="http://djm.cc/dmoews/happy.zip">http://djm.cc/dmoews/happy.zip</a>. The method used to find an upper bound on the upper asymptotic density was to start with a random number with decimal expansion $??\dots{}??\hbox{\#}\hbox{\#}\dots{}\hbox{\#}\hbox{\#}$, where the digits # are independent and uniformly distributed, and the digits ? are arbitrarily distributed and may depend on each other, but are independent of the #s. Then if there are $n$ #s, asymptotic normality implies that after applying $s$, we get a mixture of translates of a distribution which is approximately
normal, with mean $28.5n$ and standard deviation proportional
to $\sqrt{n}$. If $10^{n'}/\sqrt{n}$ is sufficiently small, each translate
of this normal distribution will have its last $n'$ digits approximately
uniformly distributed, so we get a random number which can be approximated by the same form of decimal expansion we started with, $??\dots{}??\hbox{\#}\hbox{\#}\dots{}\hbox{\#}\hbox{\#}$, where now there are $n'$ digits #. Repeating this eventually brings us to numbers small enough to fit on a computer.</p>
<p>The method used to find the bounds similar to Gilmer's was to start with a random number of the form $dd\dots{}dd??\dots{}??\hbox{\#}\hbox{\#}\dots{}\hbox{\#}\hbox{\#}$, where the ?s and #s are as before, the $d$s are fixed digits, and there are the same number of $d$s and #s, but very few ?s. Then if the parameters are appropriately chosen, we can show that after applying $s$, we again get a random number which can be approximated by the same form of decimal expansion, $dd\dots{}dd??\dots{}??\hbox{\#}\hbox{\#}\dots{}\hbox{\#}\hbox{\#}$, and repeat this step until the number is small.</p>
|
118,545 | <p>I gather that the question whether the Bruck-Chowla-Ryser condition was sufficient used to top the list, but now that that's settled - what is considered the most interesting open question?</p>
| Chris Godsil | 1,266 | <p>Fix $\lambda>1$. Are there infinitely many symmetric $(v,k,\lambda)$ designs?
(The Hadamard conjectures would be at the top of my list though.)</p>
|
2,072,666 | <p>I have a set as : <b> {∀x ∃y P(x, y), ∀x¬P(x, x)}. </b>. In order to satisfy this set I know that there should exist an interpretation <b> I </b> such that it should satisfy all the elements in the set. For instance my interpretation for x is 3 and for y is 4. Should I apply the same numbers (3,4) to ∀x¬P(x, x) as well ? Moreover, there are two x's in the function argument so should I apply 3 and 3 which is the value of the x that I assigned ? Thanks.</p>
| Clayton | 43,239 | <p><strong>Hint:</strong> The function $f(x)=1/x$ is continuous, so $\lim_{n\to\infty}f(x_n)=f(\lim_{n\to\infty} x_n)$ as long as the sequence does not tend to zero.</p>
|
607,862 | <p>Let $f$ be a continuous function. What is the maximum of $\int_0^1 fg$ among all continuous functions $g$ with $\int_0^1 |g| = 1$?</p>
| ncmathsadist | 4,154 | <p>Put $M = \|f\|_\infty$. Note that
$$\int_0^1 f(x) g(x)\, dx \le \|f\|_\infty \|g\|_1 = M.$$
Let $\epsilon > 0$. Then choose a point $x$ so $|f(x)| = M$. Wlog, we may assume
$f(x) = M$. Choose an interval $I$ so that $f\ge M - \epsilon$ on $I$. </p>
<p>Define $g$ to be $1/|I|$ on $I$ and $0$ off $[f > 0]$; then extend this function continuously onto all of $[0,1]$ so it has values in $[0, 1/|I|]$. Then
$$\int_0^1 f(x) g(x)\, dx \ge \int_I f(x)g(x)\, dx = {1\over |I|}\int_I f(x)\, dx \ge M - \epsilon.$$
The supremum must be $M$.</p>
<p>If you take $f(x) = x(1-x)$, $x\in [0,1]$, you will see this supremum is not attained.</p>
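<p>Numerically, for $f(x)=x(1-x)$ (so $M=1/4$), a narrow continuous tent of unit $L^1$ norm illustrates both claims: the integral approaches $1/4$ but never reaches it (a hedged Python sketch; the tent half-width $h$ is an arbitrary choice):</p>

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200_001)
dx = x[1] - x[0]
f = x * (1.0 - x)                 # M = sup f = 1/4, attained only at x = 1/2

h = 0.01                          # tent half-width (smaller h -> closer to M)
g = np.maximum(0.0, 1.0 - np.abs(x - 0.5) / h) / h   # continuous tent, L^1 norm 1

l1_norm = np.sum(np.abs(g)) * dx  # ~ 1
value = np.sum(f * g) * dx        # ~ 1/4 - h^2/6, strictly below 1/4
```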
|
332,993 | <p>How do I approach the problem?</p>
<blockquote>
<p>Q: Let $ \displaystyle z_{n+1} = \frac{1}{2} \left( z_n + \frac{1}{z_n} \right)$ where $ n = 0, 1, 2, \ldots $ and $\frac{-\pi}{2} < \arg (z_0) < \frac{\pi}{2} $. Prove that $\lim_{n\to \infty} z_n = 1$.</p>
</blockquote>
| Michael Hardy | 11,667 | <p>Let $f(z) = \frac12\left(z+\frac1z\right)$. Clearly $f(1)=1$ and $f'(1)=0$. So suppose $z=1+\Delta z$. Then
$$
f(z) = 1 +f'(1)\,\Delta z + \text{higher-degree terms in $\Delta z$},
$$
so $f(z)$ is closer to $1$ than $z$ is. You have an attractive fixed point.</p>
<p><b>Later edit:</b></p>
<p>Or put it this way: $f'(1)=0$ and $f'(z)$ is close to $0$ when $z$ is close to $1$. In particular, certainly $f'(z)$ is between $\pm1/2$ when $z$ is close enough to $1$. Therefore $f(z)$ is changing less than half as fast as $z$ is changing when $z$ is close to $1$. That means $f(z)$ is less than half as far from $1$ as $z$ is. If you keep cutting the distance in half, the distance approaches $0$ as a limit.</p>
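<p>Numerically (a hedged sketch; the starting point is an arbitrary choice with $\arg z_0 \in (-\pi/2, \pi/2)$):</p>

```python
z = 0.5 + 0.5j                 # arg(z0) = pi/4, inside (-pi/2, pi/2)
for _ in range(30):
    z = 0.5 * (z + 1.0 / z)    # the map f(z) = (z + 1/z)/2
# z is now 1 to machine precision: the fixed point attracts the right half-plane
```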
|
613,961 | <p>I got the following problem:</p>
<p>Let $V$ be a real vector space and let $q: V \to \mathbb R$ be a real quadratic form,<br/> Prove that if the set $L = \{v \in V | q(v) \ge 0\}$ forms a subspace of $V$
then q is definite (meaning $q$ is positive definite, positive semidefinite, negative definite or negative semidefinite)</p>
<p>I don't know where to begin</p>
| Jeremy Daniel | 115,164 | <p>Suppose, for contradiction, that $q$ is not definite, so there exist $x$ and $y$ such that $q(x) < 0 < q(y)$. Then for any real $ \lambda$, $q(x + \lambda y) = q(x) + \lambda^2 q(y) + 2\lambda B(x,y)$ where $B$ is the bilinear form associated to $q$. So when $\lambda$ is large, $q(x + \lambda y)$ is positive, so $x + \lambda y \in L$. Also $-\lambda y \in L$, since $q(-\lambda y) = \lambda^2 q(y) > 0$. Since $L$ is a subspace, $x = (x + \lambda y) - \lambda y$ is in $L$, which contradicts $q(x) < 0$.</p>
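<p>The contrapositive is easy to see on a concrete indefinite form, e.g. $q(x_1,x_2)=x_1^2-x_2^2$ on $\mathbb{R}^2$ (a hedged sketch; the form and the vectors are arbitrary choices):</p>

```python
def q(v):
    # an indefinite quadratic form on R^2
    return v[0] ** 2 - v[1] ** 2

u = (2.0, 1.0)                    # q(u) = 3 >= 0, so u is in L
w = (-2.0, 1.0)                   # q(w) = 3 >= 0, so w is in L
uw = (u[0] + w[0], u[1] + w[1])   # u + w = (0, 2)
# q(u + w) = -4 < 0: L fails to be closed under addition, so it is not a subspace
```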
|
666,503 | <p>How to isolate $x$ in this equation: $px+(\frac{b}{a})px=m$</p>
<p>And get $\frac{a}{a+b}\cdot\frac{m}{p}$</p>
| Eleven-Eleven | 61,030 | <p>$$px+\left(\frac{b}{a}\right)px=m$$
$$\left(1+\frac{b}{a}\right)px=m$$
$$px=\frac{m}{1+\frac{b}{a}}=\frac{am}{a+b}$$
$$x=\frac{am}{p(a+b)}$$</p>
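<p>A quick spot-check of the final formula with exact arithmetic (a hedged sketch; the sample values are arbitrary):</p>

```python
from fractions import Fraction

a, b, m, p = Fraction(3), Fraction(5), Fraction(7), Fraction(2)
x = a * m / (p * (a + b))        # the claimed solution x = am / (p(a + b))
lhs = p * x + (b / a) * p * x    # left-hand side of the original equation
```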
|
405,449 | <blockquote>
<p>Is the polynomial $x^{105} - 9$ reducible over $\mathbb{Z}$?</p>
</blockquote>
<p>This exercise I received on a test, and I didn't resolve it. I would be curious in any demonstration with explanations. Thanks!</p>
| Zev Chonoles | 264 | <p><strong>Hint:</strong> Make a <a href="https://en.wikipedia.org/wiki/Newton_polygon" rel="nofollow noreferrer">Newton polygon</a> for the prime $p=3$. Use the corollary at the top of page 2 in <a href="http://www.math.umn.edu/~garrett/m/number_theory/newton_polygon.pdf" rel="nofollow noreferrer">these notes by Paul Garrett</a> (alternatively, here are screenshots: <a href="https://i.stack.imgur.com/gJ7Av.png" rel="nofollow noreferrer">page 1</a>, <a href="https://i.stack.imgur.com/gQ8Di.png" rel="nofollow noreferrer">page 2</a>).</p>
|
228,437 | <p>The ODE in question: <code>y'' + 3y' + 2y = 8t + 8</code></p>
<p>But I get something like this for my solution:</p>
<p><a href="https://i.stack.imgur.com/6QFoV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6QFoV.png" alt="enter image description here" /></a></p>
<p>I also tried getting the solution of <code>y'=y^2-y^3</code> but again the solution did not make sense to me.</p>
| Steffen Jaeschke | 61,643 | <p>"The ODE in question: y'' + 3y' + 2y = 8t + 8"</p>
<p>is a linear inhomogeneous ordinary differential equation with real constant coefficients. The inhomogeneity is a linear polynomial with constant real coefficients.</p>
<p>Solution:</p>
<pre><code>DSolve[y''[t] + 3 y'[t] + 2 y[t] == 8 t + 8, y, t]
(*{{y -> Function[{t}, 2 (-1 + 2 t) + E^(-2 t) C[1] + E^-t C[2]]}}*)
</code></pre>
<p>using DSolve.</p>
<p>The corresponding mathematical understanding is:
a) solve the homogeneous equation.</p>
<pre><code>DSolve[y''[t] + 3 y'[t] + 2 y[t] == 0, y, t]
</code></pre>
<hr />
<pre><code>{{y -> Function[{t}, E^(-2 t) C[1] + E^-t C[2]]}}
</code></pre>
<p>b) solve the corresponding inhomogeneous equation by variation of constants.
That is already done using DSolve. In proper mathematical work, this would mean making the coefficient functions depend on $t$, differentiating, and then matching coefficients.</p>
<p>The number of free constants in the general solution equals the order of the equation, which is two for this second-order linear inhomogeneous ordinary differential equation with real constant coefficients.</p>
<p>The varied coefficient functions are completely defined by the inhomogeneity.</p>
<p>There are different formalisms available to solve that in mathematics.</p>
<p>Probe:</p>
<pre><code>D[2 (-1 + 2 t) + E^(-2 t) C[1] + E^-t C[2], t, t] +
3 D[2 (-1 + 2 t) + E^(-2 t) C[1] + E^-t C[2], t] +
2 (2 (-1 + 2 t) + E^(-2 t) C[1] + E^-t C[2])
</code></pre>
<hr />
<p>4 E^(-2 t) C[1] + E^-t C[2] + 3 (4 - 2 E^(-2 t) C[1] - E^-t C[2]) +
2 (2 (-1 + 2 t) + E^(-2 t) C[1] + E^-t C[2])</p>
<hr />
<p>% // FullSimplify</p>
<hr />
<p>8 (1 + t)</p>
<p>So the solution from DSolve really solves the linear inhomogeneous ordinary differential equation with real constant coefficients.</p>
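<p>The same probe can be repeated outside Mathematica with the derivatives written out by hand (a hedged Python sketch; the constants are arbitrary):</p>

```python
import math

C1, C2 = 1.3, -0.7   # arbitrary integration constants

def y(t):   return 2 * (-1 + 2 * t) + C1 * math.exp(-2 * t) + C2 * math.exp(-t)
def yp(t):  return 4 - 2 * C1 * math.exp(-2 * t) - C2 * math.exp(-t)
def ypp(t): return 4 * C1 * math.exp(-2 * t) + C2 * math.exp(-t)

# y'' + 3 y' + 2 y - (8 t + 8) should vanish identically
residuals = [ypp(t) + 3 * yp(t) + 2 * y(t) - (8 * t + 8) for t in (0.0, 0.5, 2.0)]
```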
<ol start="2">
<li><p>part of the question</p>
<pre><code>DSolve[y'[t] == y[t]^2 - y[t]^3, y, t]
</code></pre>
</li>
</ol>
<hr />
<pre><code>{{y -> Function[{t},
InverseFunction[Log[1 - #1] - Log[#1] + 1/#1 &][-t + C[1]]]}}
</code></pre>
<p>This is a homogeneous nonlinear ordinary differential equation polynomial in the differentiated function and of order one. The ODE is separable and has therefore an exact solution.</p>
<p>y'==dy/dt==y^2-y^3</p>
<p>separates into</p>
<p>dt=dy/(y^2-y^3)</p>
<p>t-t0==Integrate[1/(y^2-y^3),y]</p>
<p>t-t0==-(1/y) - Log[1 - y] + Log[y]</p>
<p>As shown by DSolve, there is no closed-form inverse function.</p>
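<p>The implicit solution can also be cross-checked against a direct numerical integration (a hedged Python sketch; classic RK4 with the arbitrary initial value $y(0)=1/2$, for which the integration constant is $-2$):</p>

```python
import math

def rhs(y):
    return y * y - y ** 3        # y' = y^2 - y^3

y, t, dt = 0.5, 0.0, 1e-3        # initial value y(0) = 1/2
for _ in range(1000):            # integrate with classic RK4 up to t = 1
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    t += dt

# separation gives t - t0 = -1/y - Log[1-y] + Log[y]; with y(0) = 1/2
# the right-hand side at t = 0 equals -2, so G(y(t)) should equal t - 2
G = -1.0 / y - math.log(1.0 - y) + math.log(y)
```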
|
315,235 | <p>I am learning about vector-valued differential forms, including forms taking values in a Lie algebra. <a href="http://en.wikipedia.org/wiki/Vector-valued_differential_form#Lie_algebra-valued_forms">On Wikipedia</a> there is some explanation about these Lie algebra-valued forms, including the definition of the operation $[-\wedge -]$ and the claim that "with this operation the set of all Lie algebra-valued forms on a manifold M becomes a graded Lie superalgebra".
The explanation on Wikipedia is a little short, so I'm looking for more information about Lie algebra-valued forms. Unfortunately, the Wikipedia page does not cite any sources, and a Google search does not give very helpful results.</p>
<blockquote>
<p>Where can I learn about Lie algebra valued differential forms?</p>
</blockquote>
<p>In particular, I'm looking for a proof that $[-\wedge -]$ turns the set of Lie algebra-valued forms into a graded Lie superalgebra. I would also appreciate some information about how the exterior derivative $d$ and the operation $[-\wedge -]$ interact.</p>
| Olivier Bégassat | 11,258 | <p>A $\frak g$-valued differential form is , as far as I know, just a section $\alpha$ of the tensor product of the exterior power of the cotangent bundle $\Lambda^{\bullet}T^*M$ of some manifold $M$ with the trivial vector bundle $M\times\frak{g}$. As such, locally over some chart domain $U$, $\alpha$ can be cast in the follwing form
$$\alpha\equiv\alpha_1\otimes x_1+\cdots+\alpha_n\otimes x_n$$
where $\alpha_1,\dots,\alpha_n$ are local differential forms on $M$ defined over the chart domain $U$, and $x_1,\dots,x_n$ is a basis of $\frak g$. The differential is then calculated by ignoring the Lie algebra terms:
$$d\alpha\equiv (d\alpha_1)\otimes x_1+\cdots+(d\alpha_n)\otimes x_n$$
Similarly, the product is defined by treating the differential forms and the Lie algebra elements as separate entities:
$$[\alpha\wedge\beta]=\sum_{1\leq i,j\leq n}\alpha_i\wedge\beta_j\otimes[x_i,x_j]$$
For instance, for a pure form $\alpha$ of degree $p$, what you know about the exterior differential immediately implies that
$$d[\alpha\wedge\beta]=[(d\alpha)\wedge\beta]+(-1)^p[\alpha\wedge(d\beta)]$$
Also, if $\alpha$ has degree $p$, and $\beta$ has degree $q$, then
$$[\beta\wedge\alpha]=(-1)^{pq+1}[\alpha\wedge\beta]$$</p>
<hr>
<p>I think the algebraic questions that arise are easy enough that I'm sure you can find all the relations you want on your own. However, you can always take a look at Peter W. Michor's <em>Topics in Differential Geometry</em>, in particular his chapter IV, §19, or Morita's <em>Geometry of Characteristic Classes</em>.</p>
|
9,416 | <p>Say I pass 512 samples into my FFT</p>
<p>My microphone spits out data at 10KHz, so this represents 1/20s.</p>
<p>(So the lowest frequency FFT would pick up would be 40Hz).</p>
<p>The FFT will return an array of 512 frequency bins
- bin 0: [0 - 40Hz)
- bin 1: [40 - 80Hz)
etc</p>
<p>So if my original sound contained energy at say 115Hz, how can I accurately retrieve this frequency?</p>
<p>That is going to lie in bin #2, but very close to bin #3. so I would expect both bins to contain something nonzero.</p>
<p>Question: how about the bins either side of this? Would they be guaranteed to be zero if there are no other frequencies close in the original signal?</p>
<p>Main question: is there some algorithm for deciphering the original frequency given the relative bin strengths?</p>
| Ben Voigt | 4,923 | <p>Remember that the FFT is circular. Inputs which contain an integer number of cycles will come out clean as a single point, in the corresponding bin. Those which do not, act as if they are multiplied by a rectangular pulse in the time domain, which creates convolution by a sinc function in the frequency domain. Since sinc has unlimited support, your supposition that all bins except the closest two would be zero is incorrect.</p>
<p>Finding a closed-form analytic solution may be impossible, in which case your best bet would be to start with the center frequency for the two strongest bins and use binary search to find the frequency in-between that most closely corresponds to your actual spectrum.</p>
|
9,416 | <p>Say I pass 512 samples into my FFT</p>
<p>My microphone spits out data at 10KHz, so this represents 1/20s.</p>
<p>(So the lowest frequency FFT would pick up would be 40Hz).</p>
<p>The FFT will return an array of 512 frequency bins
- bin 0: [0 - 40Hz)
- bin 1: [40 - 80Hz)
etc</p>
<p>So if my original sound contained energy at say 115Hz, how can I accurately retrieve this frequency?</p>
<p>That is going to lie in bin #2, but very close to bin #3. so I would expect both bins to contain something nonzero.</p>
<p>Question: how about the bins either side of this? Would they be guaranteed to be zero if there are no other frequencies close in the original signal?</p>
<p>Main question: is there some algorithm for deciphering the original frequency given the relative bin strengths?</p>
| Sebastian Reichelt | 1,386 | <p>I just noticed this old question and thought I'd expand on J.M.'s comment (that is, what I think he/she was hinting at).</p>
<p>First of all, a small remark on the "bins" you talk about: The frequencies associated with the coefficients should be thought of as lying in the center of their "bin," and you're also off by a factor of 2 unless I'm missing something. So the first bin would be 0-10Hz, the second 10-29Hz, the third 29-49Hz, and so on. However, as others have pointed out, anything but a sine/cosine wave with an exactly matching frequency will appear in more than one "bin" anyway, so it's better to drop the notion of a "bin" and just think of frequencies.</p>
<p>This is because one way to interpret the FFT is that it decomposes the signal into a sum of cosine waves with certain amplitudes (given by the absolute values of the coefficients) and time offsets (given by the phase). For example, if your input is a 39Hz sine wave with an amplitude of 1, the third coefficient will have an absolute value of 1 (or 512, depending on the algorithm) and a phase of $-\pi/2$ (or $\pi/2$). This translates to a cosine wave that is shifted to the right by $\pi/2$ (a quarter period), giving a sine wave, and then stretched appropriately in both directions.</p>
<p>If you slowly shift your FFT window to the right, the phase will decrease, wrap around (because a phase of $2\pi$ is the same as a phase of $0$), and return to $-\pi/2$ after 1/39s or 256 samples.</p>
<p>The same also works for frequencies that are not exact multiples of 19.5Hz. So you can determine how many samples it takes for the phase to wrap around, and this will give you a frequency that is as accurate as you want: To increase accuracy, just let it wrap around once more. By the way, you don't actually need to compute the phase; you can just check when the real or imaginary part (whichever you like) crosses zero.</p>
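<p>This phase-tracking idea can be sketched numerically (a hedged sketch; the 115 Hz tone, 10 kHz rate and 512-point window come from the question, while the Hann window is an added assumption of mine to suppress leakage):</p>

```python
import numpy as np

fs, N, f_true = 10_000.0, 512, 115.0
n = np.arange(N + 1)
x = np.sin(2 * np.pi * f_true * n / fs)

w = np.hanning(N)                      # window to tame spectral leakage
X0 = np.fft.rfft(w * x[0:N])           # window starting at sample 0
X1 = np.fft.rfft(w * x[1:N + 1])       # same window, shifted one sample

k = int(np.argmax(np.abs(X0)))         # strongest bin: coarse estimate
coarse = k * fs / N                    # ~117 Hz, off by up to half a bin

# the bin's phase advances by 2*pi*f/fs per sample of shift
dphi = np.angle(X1[k]) - np.angle(X0[k])
dphi = (dphi + np.pi) % (2 * np.pi) - np.pi    # wrap to (-pi, pi]
f_est = dphi * fs / (2 * np.pi)        # fine estimate, close to 115 Hz
```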
|
55,232 | <p>I'm looking for a concise way to show this:
$$\sum_{n=1}^{\infty}\frac{n}{10^n} = \sum_{n=1}^{\infty}\left(\left(\sum_{m=0}^{\infty}{10^{-m}}\right){10^{-n}}\right)$$
With this goal in mind:
$$\sum_{n=1}^{\infty}\left(\left(\sum_{m=0}^{\infty}{10^{-m}}\right){10^{-n}}\right) =
\sum_{n=1}^{\infty}\left(\left(\frac{10}{9}\right){10^{-n}}\right) = \frac{10}{81}$$</p>
<p>So far I've been looking at it by replacing $n$ in the LHS with $(\sum_{m=1}^{n}1)$ like this:
$$\sum_{n=1}^{\infty}\left(\left(\sum_{m=1}^{n}{1}\right){10^{-n}}\right) = \sum_{n=1}^{\infty}\left(\left(\sum_{m=0}^{\infty}{10^{-m}}\right){10^{-n}}\right)$$</p>
<p>And here I hit a particularly uncreative brick wall. This equation is obvious to me in a common sense way - I could easily demonstrate it by writing out the RHS as a huge addition problem and showing that the LHS just has the digit columns added ahead of time - but I don't know what to do in between for a proof.</p>
| anon | 11,763 | <p>Personally, I'd go the following route:
$$\sum_{n=1}^\infty\frac{n}{10^n}=\left(\sum_{n=1}^\infty\frac{1}{10^n}\right)+\left(\sum_{n=2}^\infty\frac{1}{10^n}\right)+\left(\sum_{n=3}^\infty\frac{1}{10^n}\right)+\cdots$$
$$=\left(\sum_{n=1}^\infty\frac{1}{10^n}\right)+\frac{1}{10}\left(\sum_{n=1}^\infty\frac{1}{10^n}\right)+\frac{1}{10^2}\left(\sum_{n=1}^\infty\frac{1}{10^n}\right)+\cdots $$
$$=\left(1+\frac{1}{10}+\frac{1}{10^2}+\cdots \right)\left(\sum_{n=1}^\infty\frac{1}{10^n}\right) $$
$$=\left(\sum_{m=0}^\infty\frac{1}{10^m}\right)\left(\sum_{n=1}^\infty\frac{1}{10^n}\right)=\frac{10}{9}\cdot\frac{1}{9}=\frac{10}{81}.$$</p>
<p>But that's just me.</p>
<p>EDIT: I added a fun little schematic to justify the first step to the mind's eye (or whatever):
<img src="https://i.stack.imgur.com/8ACWU.png" alt="schematic"></p>
|
1,532,202 | <p>I want to find out $$\mathcal{L^{-1}}\{\frac{e^{-\sqrt{s+2}}}{s}\}$$
How do you find the inverse Laplace? </p>
<p>thanks</p>
| hbp | 131,476 | <p>Thank you for the interesting question. Here is a rather brute-force solution, which may add a few steps to <a href="https://math.stackexchange.com/users/226665/jan-eerland">Jan Eerland</a>'s solution.</p>
<p>First, let us recall that, if
$$
\mathcal L(f) = \int_0^\infty f(t) \, e^{-st} \, dt = F(s)
$$
Then
$$
\mathcal L(f') = \int_0^\infty f'(t) \, e^{-st} \, dt = s \, F(s) - f(0).
$$
In our case, $F(s) = e^{-\sqrt{s+2}}/s$, so, we shall seek a function $f'(t)$ whose Laplace transform is $sF(s) = e^{-\sqrt{s+2}}$. Then $f(t)$ can be found from integrating $f'(t)$ from $t = 0$, where $f(0) = 0$.</p>
<p>Using the <a href="https://en.wikipedia.org/wiki/Inverse_Laplace_transform" rel="nofollow noreferrer">inverse formula</a>, we have
$$
f'(t)
= \frac{1}{2\, \pi \, i}\int_{\gamma-i\infty}^{\gamma+i\infty} e^{-\sqrt{s+2} + st} \, ds,
$$
where $\gamma$ is a large positive number, such that all poles, if any, lie on the left side of the line $y = \gamma$.</p>
<p>For our case, there is no pole, but only a branch cut at $s = -2$. We shall let the branch cut extend to $-\infty$ from $s = -2$, and wrap the contour around the branch cut from $-\infty + 0^- \, i$ to $-2 + 0^- \, i$ (lower half) and then from $-2 + 0^+ \, i$ to $-\infty + 0^+ \, i$ (higher half).</p>
<p>Let $s = -2 + A\,e^{-i\pi}$ with $A \ge 0$, then
$$
\begin{aligned}
f'(t)
&= \frac{e^{-2t}}{\pi}
\int_0^{+\infty} \sin(\sqrt{A}) \, e^{- A t} dA \\
&= \frac{e^{-2t}}{\pi}
\int_{-\infty}^{+\infty} u \, \sin(u) \, e^{-u^2 t} du \\
&= \frac{e^{-2t}}{\pi}
\mathrm{Im} \left(\frac{\partial}{\partial v}
\int_{-\infty}^{+\infty} e^{-u^2 t + v u} d u \right)_{v = i} \\
&= \frac{e^{-2t}}{\pi}
\mathrm{Im} \left[\frac{\partial}{\partial v}\left(
\sqrt{\frac{\pi}{t}} \, \exp{\frac{v^2}{4t}} \right)\right]_{v = i} \\
&= \frac{1}{2 \, \sqrt{\pi \, t^3}} \, \exp\left(-2 \, t-\frac{1}{4\,t}\right).
\end{aligned}
$$
Here, we have used the fact that $\mathrm{Im} \, e^{iu} = \sin(u)$,</p>
<p>Finally, let us integrate $f'(t)$.
$$
\begin{aligned}
f(t)
&= \int_0^t f'(\tau) \, d\tau \\
&=\frac{1}{\sqrt\pi}
\int_{1/\sqrt{t}}^\infty
\exp\left(-\frac{x^2}{4}-\frac{2}{x^2}\right) \, dx \\
&=\frac{1}{\sqrt\pi}
\int_{1/\sqrt{t}}^\infty
\exp\left(-\frac{x^2}{4}-\frac{2}{x^2}\right) \,
d\left( \frac{x}{2}+\frac{\sqrt{2}}{x} \right) \\
&\quad+
\frac{1}{\sqrt\pi}
\int_{1/\sqrt{t}}^\infty
\exp\left(-\frac{x^2}{4}-\frac{2}{x^2}\right) \,
d\left( \frac{x}{2}-\frac{\sqrt{2}}{x} \right)
\\
&=\frac{1}{\sqrt\pi}
\left[
\int_{\frac{1}{2\sqrt{t}}+\sqrt{2t}}^\infty
\exp\left(\sqrt{2} -y^2\right) \, dy
+
\int_{\frac{1}{2\sqrt{t}}-\sqrt{2t}}^\infty
\exp\left(-\sqrt{2} -z^2\right) \, dz
\right] \\
&=
\frac{e^{\sqrt 2}}{2} \, \mathrm{erfc}\left(
\frac{1}{2\sqrt{t}}+\sqrt{2\, t} \right)
+\frac{e^{-\sqrt 2}}{2} \, \mathrm{erfc}\left(
\frac{1}{2\sqrt{t}}-\sqrt{2\, t} \right).
\end{aligned}
$$
Here, we have changed variables
$$
\begin{aligned}
x &\equiv \frac{1}{\sqrt{\tau}}, \\
y &\equiv \frac{x}{2} + \frac{\sqrt{2}}{x}, \\
z &\equiv \frac{x}{2} - \frac{\sqrt{2}}{x}.
\end{aligned}
$$
Our result agrees with <a href="https://math.stackexchange.com/q/1532213">Jan Eerland's</a>.</p>
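<p>The derived $f'(t)$ can be sanity-checked by evaluating its Laplace transform numerically at a sample point, say $s=1$, where it should equal $e^{-\sqrt{3}}$ (a hedged sketch; the grid and cutoff are arbitrary choices):</p>

```python
import numpy as np

def fprime(t):
    # f'(t) = exp(-2t - 1/(4t)) / (2 sqrt(pi t^3)) as derived above
    return np.exp(-2.0 * t - 0.25 / t) / (2.0 * np.sqrt(np.pi * t ** 3))

s = 1.0
t = np.linspace(1e-6, 40.0, 400_001)   # the integrand vanishes fast at both ends
dt = t[1] - t[0]
transform = np.sum(fprime(t) * np.exp(-s * t)) * dt   # ~ exp(-sqrt(s + 2))
target = np.exp(-np.sqrt(s + 2.0))
```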
|
587,198 | <p>I am having problems with this question, it would be wonderful if someone can help.</p>
<p>Given that $f(x)= x^2 + x - 3$</p>
<p>1) Find $f(x + h)$</p>
<p>2) Then express $f(x+h)-f(x)$ in its simplest form</p>
<p>3) Deduce $\lim\limits_{h->0}\dfrac{f(x+h)-f(x)}{h}$</p>
<p>Thanks for the help, I was stuck on the second part.</p>
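<p>For reference, $f(x+h)-f(x) = 2xh + h^2 + h$, so the quotient is $2x+h+1$ and the limit is $2x+1$; this can be spot-checked numerically (a hedged sketch; the sample point is arbitrary):</p>

```python
def f(x):
    return x ** 2 + x - 3

x0, h = 2.0, 1e-6
quotient = (f(x0 + h) - f(x0)) / h   # algebraically equals 2*x0 + h + 1
```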
| Mufasa | 49,003 | <p>if $2a^2=b^2$ it means $b$ must be even (because only an even number squared leads to an even number). So let $b=2m$ - this leads to:$$2a^2=(2m)^2=4m^2$$$$\therefore a^2=2m^2$$and hence $a$ must be even (for same reasons as above).</p>
<p>Thus both $a$ and $b$ must be even.</p>
|
906,332 | <blockquote>
<p>Prove $\ell_{ki}\ell_{kj}=\delta_{ij}$</p>
</blockquote>
<p>where $\{\hat{\mathbf{e}}_i\}$ and $\{\hat{\mathbf{e}}_i'\}$ are sets of orthonormal basis vectors for $i\in\{1,2,3\}$, $\ell$'s are the direction cosines such that $\ell_{ij}=\cos{(\hat{\mathbf{e}}_i',\hat{\mathbf{e}}_j)}=\hat{\mathbf{e}}_i'\cdot\hat{\mathbf{e}}_j$, and $\delta$ is the Kronecker delta function such that
$$\delta_{ij}=\begin{cases}
1, i=j\\
0, i\ne j\end{cases}$$
This uses indicial notation (free indices and dummy indices and associated summation conventions). I'm not sure how universal the notation is but I'm using the notation from "The Linearized Theory of Elasticity" by William S. Slaughter. I think I've provided as much as I need, but please ask if you feel I'm missing information.</p>
<p>I know in the end I'm going to have to reduce the problem to $\ell_{ki}\ell_{kj}=\hat{\mathbf{e}}_i\cdot\hat{\mathbf{e}}_j=\delta_{ij}$ or $\ell_{ki}\ell_{kj}=\hat{\mathbf{e}}_i'\cdot\hat{\mathbf{e}}_j'=\delta_{ij}$. And I understand how to prove $\ell_{ik}\ell_{jk}=\delta_{ij}$:
$$\begin{align*}
\ell_{ik}\ell_{jk}&=(\hat{\mathbf{e}}_i'\cdot\hat{\mathbf{e}}_k)\ell_{jk}\\
&=\hat{\mathbf{e}}_i'\cdot(\ell_{jk}\hat{\mathbf{e}}_k)\\
&=\hat{\mathbf{e}}_i'\cdot\hat{\mathbf{e}}_j'\quad(\text{because in general }\hat{\mathbf{e}}_i'=\ell_{ij}\hat{\mathbf{e}}_j)\\
&=\delta_{ij}
\end{align*}$$</p>
<p>However, when I try to follow the same type of substitutions I seem to hit a dead end. Can someone provide me a hint, not a full solution? I would like to solve it on my own.</p>
<p>EDIT: I think I have to use $\hat{\mathbf{e}}_i=\ell_{ji}\hat{\mathbf{e}}_j'$, still working on that route, though.</p>
| BeaumontTaz | 147,480 | <p>Yeah, I needed to rewrite everything and start on a new piece of paper before I realized the EDITed reverse transformation was relevant.</p>
<p>The proof is as follows:
$$\begin{align*}
\ell_{ki}\ell_{kj}&=(\hat{\mathbf{e}}_k'\cdot\hat{\mathbf{e}}_i)\ell_{kj}\\
&=(\ell_{kj}\hat{\mathbf{e}}_k')\cdot\hat{\mathbf{e}}_i\\
&=\hat{\mathbf{e}}_j\cdot\hat{\mathbf{e}}_i\quad(\text{because of my edited relationship in the question})\\
&=\delta_{ji}\\
&=\delta_{ij}
\end{align*}$$</p>
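<p>Both orthogonality relations can be confirmed numerically for a randomly generated pair of orthonormal bases (a hedged sketch; the primed basis is built via a QR factorization):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # columns form an orthonormal basis

e = np.eye(3)            # unprimed basis vectors e_j as rows
eprime = Q.T             # primed basis vectors e'_i as rows

l = eprime @ e.T         # l[i, j] = e'_i . e_j, the direction cosines
check1 = l @ l.T         # l_ik l_jk -> identity
check2 = l.T @ l         # l_ki l_kj -> identity
```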
|
906,332 | <blockquote>
<p>Prove $\ell_{ki}\ell_{kj}=\delta_{ij}$</p>
</blockquote>
<p>where $\{\hat{\mathbf{e}}_i\}$ and $\{\hat{\mathbf{e}}_i'\}$ are sets of orthonormal basis vectors for $i\in\{1,2,3\}$, $\ell$'s are the direction cosines such that $\ell_{ij}=\cos{(\hat{\mathbf{e}}_i',\hat{\mathbf{e}}_j)}=\hat{\mathbf{e}}_i'\cdot\hat{\mathbf{e}}_j$, and $\delta$ is the Kronecker delta function such that
$$\delta_{ij}=\begin{cases}
1, i=j\\
0, i\ne j\end{cases}$$
This uses indicial notation (free indices and dummy indices and associated summation conventions). I'm not sure how universal the notation is but I'm using the notation from "The Linearized Theory of Elasticity" by William S. Slaughter. I think I've provided as much as I need, but please ask if you feel I'm missing information.</p>
<p>I know in the end I'm going to have to reduce the problem to $\ell_{ki}\ell_{kj}=\hat{\mathbf{e}}_i\cdot\hat{\mathbf{e}}_j=\delta_{ij}$ or $\ell_{ki}\ell_{kj}=\hat{\mathbf{e}}_i'\cdot\hat{\mathbf{e}}_j'=\delta_{ij}$. And I understand how to prove $\ell_{ik}\ell_{jk}=\delta_{ij}$:
$$\begin{align*}
\ell_{ik}\ell_{jk}&=(\hat{\mathbf{e}}_i'\cdot\hat{\mathbf{e}}_k)\ell_{jk}\\
&=\hat{\mathbf{e}}_i'\cdot(\ell_{jk}\hat{\mathbf{e}}_k)\\
&=\hat{\mathbf{e}}_i'\cdot\hat{\mathbf{e}}_j'\quad(\text{because in general }\hat{\mathbf{e}}_i'=\ell_{ij}\hat{\mathbf{e}}_j)\\
&=\delta_{ij}
\end{align*}$$</p>
<p>However, when I try to follow the same type of substitutions I seem to hit a dead end. Can someone provide me a hint, not a full solution? I would like to solve it on my own.</p>
<p>EDIT: I think I have to use $\hat{\mathbf{e}}_i=\ell_{ji}\hat{\mathbf{e}}_j'$, still working on that route, though.</p>
| user_of_math | 161,022 | <p>There is a simpler proof. Since you are transforming from one set of orthonormal Cartesian coordinates to another, your change of basis matrix $[l]$ is orthogonal (its transpose is also its inverse).</p>
<p>Thus, $[l][l]^T = [l]^T[l]=[I]$.</p>
<p>Clearly then, $l_{ki}l_{kj} = l^T_{ik}l_{kj} = \delta_{ij}$, since this is just the matrix product $[l]^T [l]$ in components. </p>
|
104,170 | <p>I am trying to solve a fundamental problem in analytical convective heat transfer: laminar free convection flow and heat transfer from a flat plate parallel to the direction of the generating body force.</p>
<p><strong>Brief History of the problem</strong></p>
<p>Effectively: a flat plate is vertical and parallel to the direction of gravity vector. The plate is hot and the ambient is not. Heat transfer occurs from the plate to the ambient through natural convection due to density stratification. </p>
<p><a href="https://en.wikipedia.org/wiki/Simon_Ostrach" rel="nofollow noreferrer">Simon Ostrach</a>, a distinguished scientist in the field of microgravity science <a href="https://dl.dropboxusercontent.com/u/13223318/Pohlhausen1952a.pdf" rel="nofollow noreferrer">solved this problem through a coupled set of equations</a>. <strong>In Ostrach's work, these equations were solved by an <a href="https://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV2198.html" rel="nofollow noreferrer">IBM Card Programmed Electronic Calculator</a></strong></p>
<p>$$ F''' + 3 FF'' - 2 (F')^2 + H = 0 $$
$$ H'' + 2 \text{Pr} F H' = 0 $$</p>
<p>The Boundary conditions are:
$$ F'(0) = F(0) = 0 $$
$$ H(0) = 1 $$
$$ F'(\infty) = H(\infty) = 0 $$</p>
<p>Here, $F$ provides the hydrodynamic solution while $H$ provides the thermal solution with Pr being the Prandtl number which is a property of the fluid that the plate is "immersed" in.</p>
<p><strong>My Mathematica code ... it runs selectively</strong></p>
<pre><code>Clear[max, Pr, T, f, η, p];
max = 50;
Pr = 0.72;
pohl = NDSolve[{f'''[η] + 3 f[η] f''[η] -
2 (f'[η])^2 + T[η] == 0,
T''[η] + 2 Pr f[η] T'[η] == 0, f[0] == f'[0] == 0,
f'[max] == 0, T[0] == 1, T[max] == 0}, {f, T}, {η, max}]
p4 = Plot[{Evaluate[f'[η] /. pohl]}, {η, 0, max},
PlotRange -> All,
PlotLabel ->
Style[Framed["Hydrodynamic development is depicted on this plot"],
10, Blue, Background -> Lighter[Yellow]], ImageSize -> Large,
BaseStyle -> {FontWeight -> "Bold", FontSize -> 18},
AxesLabel -> {"η", "f'[η]"}, PlotLegends -> "Expressions"]
</code></pre>
<p>For a Prandtl number of 0.72 (Air) I get a velocity profile ($F'$) as suggested by Ostrach in his pivotal report. However, for many Prandtl numbers, the following warning message is sometimes flashed and I get incorrect velocity profiles (negative velocities) relative to the publication. For instance, try Pr=0.6.</p>
<blockquote>
<p>FindRoot::cvmit: Failed to converge to the requested accuracy or
precision within 100 iterations. >></p>
<p>NDSolve::berr: The scaled boundary value residual error of
2.9035865095898766`*^7 indicates that the boundary values are not satisfied to specified tolerances. Returning the best solution found.</p>
</blockquote>
<p>I have experimented with the <code>LSODA</code> <a href="https://mathematica.stackexchange.com/q/11630/204">method because this system of diff eqs is stiff</a> and LSODA has proven to be a 'magic wand' in the past. What gives? How do I select a method for this problem? I wonder if this is a problem with the method of choice (or default method with no options) or my definition of the "free stream limit" $\infty$.</p>
<p><strong>Pr=0.01</strong></p>
<p><a href="https://i.stack.imgur.com/FfJZM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FfJZM.png" alt="Pr=0.01"></a></p>
<p><strong>Pr=0.72</strong></p>
<p><a href="https://i.stack.imgur.com/ShFmq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ShFmq.png" alt="Pr=0.72"></a></p>
<p><strong>Pr=0.6 (what went wrong? Warning message was displayed too...)</strong>
<a href="https://i.stack.imgur.com/2MxSH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2MxSH.png" alt="Pr=0.6"></a></p>
| Michael E2 | 4,999 | <p>The problem is with the default starting initial conditions used by the shooting method in <code>NDSolve</code>. The shooting method is where <code>FindRoot</code> is being used internally, so the OP's error message is a strong hint that this is the problem. Getting convergence in a nonlinear system can depend greatly on the starting conditions. </p>
<p>Having luckily solved the system for <code>Pr = 0.72</code>, we can use its initial conditions as starting values for <code>Pr = 0.6</code>. We hope that it will be suitably close. (If not, we could have tried solving for, say, <code>Pr = 0.66</code> and edged our way bit by bit to <code>0.6</code>, hoping that the dependence on <code>Pr</code> is continuous.)</p>
<pre><code>Pr = 0.72;
pohl72 =
NDSolve[{f'''[η] + 3 f[η] f''[η] - 2 (f'[η])^2 +
T[η] == 0, T''[η] + 2 Pr f[η] T'[η] == 0,
f[0] == f'[0] == 0, f'[max] == 0, T[0] == 1, T[max] == 0},
{f, T}, {η, max}];
Pr = 0.6;
pohl = NDSolve[{f'''[η] + 3 f[η] f''[η] - 2 (f'[η])^2 + T[η] == 0,
T''[η] + 2 Pr f[η] T'[η] == 0, f[0] == f'[0] == 0,
f'[max] == 0, T[0] == 1, T[max] == 0},
{f, T}, {η, max},
Method -> {"Shooting",
"StartingInitialConditions" ->
Thread[{f[0], f'[0], f''[0], T[0], T'[0]} ==
({f[0], f'[0], f''[0], T[0], T'[0]} /. First@pohl72)]}]
</code></pre>
<p>Plot:</p>
<pre><code>Plot[{Evaluate[f'[η] /. pohl]}, {η, 0, max},
PlotRange -> All,
PlotLabel ->
Style[Framed["Hydrodynamic development is depicted on this plot"],
10, Blue, Background -> Lighter[Yellow]], ImageSize -> Large,
BaseStyle -> {FontWeight -> "Bold", FontSize -> 18},
AxesLabel -> {"η", "f'[η]"}, PlotLegends -> "Expressions"]
</code></pre>
<p><img src="https://i.stack.imgur.com/qDhNT.png" alt="Mathematica graphics"></p>
|
4,546,415 | <p>Say <span class="math-container">$I = \mathbb{N} \setminus \{0, 1\}$</span> and</p>
<p><span class="math-container">$$A(n) = \left\{x \in \mathbb{R}\,\middle|\, −1−
\frac{1}{n}
< x \leq
\frac{1}{n}
\text{ or } 1−
\frac{1}{n} \leq x < 2−
\frac{1}{n}
\right\}$$</span>
with <span class="math-container">$n \in I$</span>.</p>
<p>What is <span class="math-container">$\bigcap_{n\in I}A(n)$</span> equivalent to? And proof?</p>
<hr />
<p><span class="math-container">$A(2) = (-1.5<x\leq0.5)$</span> or <span class="math-container">$(0.5\leq x<1.5)$</span></p>
<p><span class="math-container">$A(3) = (-1.333\ldots<x\leq0.333\ldots)$</span> or <span class="math-container">$(0.666\ldots\leq x<1.666\ldots)$</span></p>
<p><span class="math-container">$\dots$</span></p>
<p><span class="math-container">$A(1000000)=(-1<x <0)$</span> or <span class="math-container">$(1<x<2)$</span></p>
<p>We see that if <span class="math-container">$n$</span> is arbitrarily large enough: <span class="math-container">$A(n)= (-1,0) \cup (1,2)$</span>
As distribution laws of indexed families apply:</p>
<p><span class="math-container">$$\bigcap_{i\in I}(A_i ∪ B_i) \supseteq \left(\bigcap_{i\in I} A_i\right) \cup \left(\bigcap_{i\in I} B_i\right)$$</span></p>
<p>We have</p>
<p><span class="math-container">$$ \begin{align*} \bigcap_{n\in I}A(n) &= \left[\bigcap_{n\in I}A(n)\left(−1− \frac{1}{n} < x \leq \frac{1}{n}\right)\right] \cup \left[\bigcap_{n\in I}A(n)\left(1− \frac{1}{n} \leq x < 2− \frac{1}{n}\right)\right] \\
&= (-1,0) \cup (1,2) \end{align*} $$</span></p>
<p>Which direction can I go to prove it?</p>
| Gino | 1,102,186 | <p>Actually, you can express your derivative in a more compact form.</p>
<p>Since</p>
<p><span class="math-container">$v(x)=A(x)(A(x)^{-1}v(x))$</span> (Eq. 1)</p>
<p>differentiating (Eq.1) w.r.t. <span class="math-container">$x$</span>, gives:</p>
<p><span class="math-container">$\frac{d}{dx}(v(x))=(\frac{d}{dx}A(x))A(x)^{-1}v(x)+A(x)\frac{d}{dx}(A(x)^{-1}v(x))$</span> (Eq. 2)</p>
<p>Thus, solving (Eq. 2) for <span class="math-container">$d(A(x)^{-1}v(x))/dx$</span> yields:</p>
<p><span class="math-container">$\frac{d}{dx}(A(x)^{-1}v(x))=A(x)^{-1}[\frac{d}{dx}v(x)-(\frac{d}{dx}A(x))(A(x)^{-1}v(x))]
$</span> (Eq.3)</p>
<p>Notice that Eq. (3) expresses the derivative of <span class="math-container">$A^{-1}v(x)$</span> as only function of the derivatives of <span class="math-container">$A(x),v(x)$</span> that are known in your problem. Also, <span class="math-container">$dv(x)/dx$</span> is an <span class="math-container">$n\times n$</span> matrix and <span class="math-container">$dA(x)/dx$</span> is an <span class="math-container">$n\times n\times n$</span> tensor. If you are not familiar about matrix derivatives with respect to a vector, have a look on some Matrix Calculus source, e.g. <a href="https://math.stackexchange.com/questions/822068/derivative-of-a-matrix-with-respect-to-a-vector">Click here</a></p>
<p>Example:</p>
<p>Let <span class="math-container">$v(x)=\begin{bmatrix}
v_1(x)\\
v_2(x)
\end{bmatrix}$</span> and <span class="math-container">$x=[x_1\,x_2\,x_3]$</span>. Then:</p>
<p><span class="math-container">$\frac{dv(x)}{dx}=\begin{bmatrix}
\frac{\partial v_1(x)}{\partial x_1}&\frac{\partial v_1(x)}{\partial x_2}&\frac{\partial v_1(x)}{\partial x_3}\\
\frac{\partial v_2(x)}{\partial x_1}&\frac{\partial v_2(x)}{\partial x_2}&\frac{\partial v_2(x)}{\partial x_3}
\end{bmatrix}$</span></p>
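<p>As a quick sanity check, (Eq. 3) can be verified against a central finite difference. The particular $A(x)$ and $v(x)$ below are made up purely for illustration:</p>

```python
import math

def inv2(M):
    # inverse of a 2x2 matrix given as nested lists
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# hypothetical smooth test case (any invertible A(x) would do)
A  = lambda x: [[1 + x, x], [0.0, 2 + x * x]]
dA = lambda x: [[1.0, 1.0], [0.0, 2 * x]]        # dA/dx
v  = lambda x: [math.sin(x), x * x]
dv = lambda x: [math.cos(x), 2 * x]              # dv/dx

def eq3(x):
    # right-hand side of (Eq. 3)
    Ai = inv2(A(x))
    w = matvec(Ai, v(x))                          # A(x)^{-1} v(x)
    rhs = [dv(x)[i] - matvec(dA(x), w)[i] for i in range(2)]
    return matvec(Ai, rhs)

def numeric(x, h=1e-6):
    # central finite difference of A(x)^{-1} v(x)
    g = lambda t: matvec(inv2(A(t)), v(t))
    p, m = g(x + h), g(x - h)
    return [(p[i] - m[i]) / (2 * h) for i in range(2)]

print(eq3(0.7))
print(numeric(0.7))   # should agree to many decimal places
```

<p>The two printed vectors coincide up to the discretization error of the finite difference.</p>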
|
2,158,369 | <p>prove that :
$$a,b>0,\quad 0<x<\pi/2$$
$$a\sqrt{\sin x}+b\sqrt{\cos x}≤(a^{4/3}+b^{4/3})^{3/4}$$
my try :</p>
<p>$$a\sqrt{\sin x}+b\sqrt{\cos x}=a(\sqrt{\sin x}+\frac{a}{b}\sqrt{\cos x})$$</p>
<p>$$\frac{a}{b}=\tan y$$</p>
<p>$$a\sqrt{\sin x}+b\sqrt{\cos x}=a(\sqrt{\sin x}+\tan y\sqrt{\cos x})$$</p>
<p>$$\frac{\sin y}{\cos y}=\tan y$$</p>
<p>$$a\sqrt{\sin x}+b\sqrt{\cos x}=a(\sqrt{\sin x}+\frac{\sin y}{\cos y}\sqrt{\cos x})$$</p>
<p>now?</p>
| Léreau | 351,999 | <p>Let <span class="math-container">$n := \vert V \vert$</span> and suppose first that <span class="math-container">$G$</span> is connected with <span class="math-container">$\vert E \vert \geq n$</span>.
A spanning tree of <span class="math-container">$G$</span> uses exactly <span class="math-container">$(n-1)$</span> edges, so there is some edge <span class="math-container">$e \in E$</span> not in the tree (since <span class="math-container">$\vert E \vert \geq n > n-1$</span>).
The tree contains a path between the endpoints of <span class="math-container">$e$</span>, and combining that path with <span class="math-container">$e$</span> gives a cycle.</p>
<hr />
<p>You can now use this to show that any graph with <span class="math-container">$\vert V\vert \leq \vert E \vert $</span> contains a cycle.</p>
<p>Let <span class="math-container">$G_k = (V_k, E_k)$</span> denote the connected components of <span class="math-container">$G$</span> and assume that <span class="math-container">$\forall k: \, \vert V_k \vert > \vert E_k \vert $</span>. Then we have
<span class="math-container">$$
\vert V \vert = \sum_k \vert V_k \vert > \sum_k \vert E_k \vert = \vert E \vert
$$</span>
which contradicts <span class="math-container">$\vert V\vert \leq \vert E \vert $</span>. So there must be a connected component <span class="math-container">$G_k$</span> with <span class="math-container">$\vert V_k \vert \leq \vert E_k \vert $</span>, which then contains a cycle by the first part.</p>
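<p>The counting argument can also be checked mechanically. Below is a small union-find sketch in Python (written just for this answer, not taken from any library) that reports whether an edge set closes a cycle; note it treats self-loops and parallel edges as cycles as well:</p>

```python
def has_cycle(n, edges):
    # vertices 0..n-1; an edge whose endpoints already lie in the same
    # component closes a cycle
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True
        parent[ru] = rv
    return False

print(has_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True:  |E| = |V| = 4
print(has_cycle(4, [(0, 1), (1, 2), (2, 3)]))          # False: a tree
```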
|
928,826 | <p>I have a function </p>
<p>$$f(x)=\frac{2x^2 - x - 1}{x^2 + 3x + 2}$$</p>
<p>from the interval $[0,\infty)$</p>
<p>The limit of this function is $2$. Is the range then simply from $f(0)$ to $2$, and if yes, would I write it as $[f(0],2]$ or $[f(0),2)$, i.e open brackets or closed? </p>
<p>Also, would i first need to argue that it's a monotonous - growing - function, thus $f(0)$ has to be the lower end of the range? THX</p>
| amWhy | 9,003 | <p>You need the open bracket at $2$. And since $f(0) = -\frac 12$, the range of $f$ is given by $$\left[-\frac{1}{2}, 2\right)$$</p>
<p>Yes, since $f$ is monotonically increasing, $f(0) = -\frac 12$ is its greatest lower bound. So all you need here is to prove (establish) that it is monotonically increasing. Furthermore, $\lim_{x\to +\infty} f(x) = 2$, although $f(x) \neq 2$ for any $x$, hence the half-open interval.</p>
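<p>For a quick numerical check of both claims (monotonicity on a sample grid, and the bound $f(x)<2$), assuming nothing beyond the formula for $f$:</p>

```python
f = lambda x: (2 * x * x - x - 1) / (x * x + 3 * x + 2)

xs = [i * 0.01 for i in range(100001)]   # sample grid on [0, 1000]
ys = [f(x) for x in xs]

print(ys[0])                                               # -0.5
print(all(ys[i] < ys[i + 1] for i in range(len(ys) - 1)))  # True: increasing
print(max(ys) < 2)                                         # True: below the limit 2
```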
|
2,189,818 | <p>I am having trouble finding the natural parameterization of these curves:</p>
<blockquote>
<p>$$\alpha(t)=\left(\sin^2\left(\frac{t}{\sqrt{2}}\right),\frac{1}{2}\sin \left(t\sqrt{2}\right), \left(\frac{t}{\sqrt{2}}\right)\right)$$</p>
</blockquote>
<p>The thing is when finding $$\|\alpha'(t)\|=\sqrt{\frac{3}{2}\sin^2\left({\sqrt{2}t}\right)+1}$$
I do not know how to integrate this.
The second one I have is </p>
<blockquote>
<p>$$\beta(t)=\left(\frac{4}{5}\cos t,1-\sin{t},-\frac{3}{5}\cos t\right)$$</p>
</blockquote>
<p>I get $s=4\left(1-\cos\left(\frac{t}{2}\right)\right)$ or $t=2\arccos({4-s})$</p>
<p>I am to find the tangent, normal, binormal, torsion and curvature of the curves, but I am at a block, because when I try to naturally parameterize them I come to problems: </p>
<ol>
<li>For the first one, I cannot figure out the integral of $\|\alpha'(t)\|$</li>
<li>For the second one I think I have made a mistake because finding the derivative when putting $t$ in dependence of $s$ in $\beta(t)$ would be very messy business to find the derivative for example. </li>
</ol>
| Old Peter | 340,536 | <p>$$106202791239577=9996044^2+2506371^2$$</p>
<p>The target number is, as I expect you know, a Pythagorean prime <a href="https://en.wikipedia.org/wiki/Pythagorean_prime" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Pythagorean_prime</a>
As such, it has only one way to be expressed as the sum of two squares, </p>
<p>This document <a href="http://eulerarchive.maa.org/docs/translations/E228en.pdf" rel="nofollow noreferrer">http://eulerarchive.maa.org/docs/translations/E228en.pdf</a> in section 44 starting on page 17, as far as I can understand, gives a manual method to test a number of the form $4n+1$ to be prime, by finding then number of ways it can be expressed as the sum of two squares. I’ve not tried this method.</p>
<p>Consider the equation $n=x^2+y^2$, where $n$ does not have to be prime.
We could take the brute force methodology, and look at $y=(n-x^2)^{0.5}$ for all values of $x$ from $1$ to $n^{0.5}$, but this is not only wasteful, but gives two solutions for each “real” solution (in this case $9996044^2+2506371^2$ and $2506371^2+9996044^2$).</p>
<p>If we define $x\geq y$, it’s sufficient to constrain $x$ to the range
$$\left(\sqrt{n/2},\;\sqrt{n}\,\right).$$
In this case it’s $x\in(7287069,\,10305473)$.</p>
<p>Perhaps surprisingly, I used Excel 2016 to find the solution, copying thousands of rows, each calculating the new $x$ from the line above, then the value of $y$. Next I searched for $y$ values containing $.000000$, and copied those values.
Clearly, there’s too much data for one sheet, so I copied the bottom $x$ value to the top.
This sounds like a lot of trouble, but it took less than ten minutes, easily found the solution and two near misses. Please do let me know if you need more details.</p>
<p>There is a product which I’ve used, years ago, to solve this type of problem: Excel Solver Add In.</p>
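<p>The spreadsheet search described above takes only a few lines of Python; the loop bounds are exactly the range derived for $x$ (with $x\geq y$):</p>

```python
from math import isqrt

def two_squares(n):
    # scan x from ceil(sqrt(n/2)) to floor(sqrt(n)), testing whether
    # n - x^2 is a perfect square
    for x in range(isqrt((n - 1) // 2) + 1, isqrt(n) + 1):
        r = n - x * x
        y = isqrt(r)
        if y * y == r:
            return x, y
    return None          # no representation as a sum of two squares

print(two_squares(13))               # (3, 2)
print(two_squares(106202791239577))  # (9996044, 2506371)
```

<p>Because the target is a Pythagorean prime, the scan finds exactly the one representation quoted at the top.</p>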
|
337,252 | <p>I'm guessing that the free group on an empty set is either the trivial group or isn't defined.
Some clarification would be appreciated.</p>
| Michael Hardy | 11,667 | <p>My guess is that would be a group with one element. There should be results about what direct products of groups have to do with sets of generators. What happens when you take the <s>direct</s> free product of a group with no generators (and hence no relations) with a group with some other specified set of generators and relations?</p>
<p><b>PS:</b> For now, I've struck out "direct" but left it visible, while substituting "free". But could this work just as well either way?</p>
<p><b>PPS:</b> 15 minutes ago I was going to post that by hindsight I should definitely have said "free product". I've spent the past 15 minutes attempting to log in, without success. Now I'm out of time for a couple of hours, so maybe in two or three hours I'll try again.</p>
|
337,252 | <p>I'm guessing that the free group on an empty set is either the trivial group or isn't defined.
Some clarification would be appreciated.</p>
| Alexander Gruber | 12,952 | <p><em>Short version:</em> It's the trivial group. The only element is the empty word.</p>
<p><em>Long version:</em> To elaborate, we write the set of words in $A$ as $\mathfrak{W}_A$ (considered as the monoid generated by $A\cup A^{-1}$ under concatenation) and the free group on $A$ as $\mathfrak{F}_A$. $$\newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex}
\newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex}
\newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}}
\begin{array}{ccc}
\mathfrak{W}_\emptyset & \ra{\delta} & \mathfrak{F}_\emptyset \\
& \searrow & \da{\mu}\\
& & 1 &
\end{array}$$
While $\delta:\mathfrak{W}_A\rightarrow \mathfrak{F}_A$ is not usually injective (since $\mathfrak{F}_A$ is the set of equivalence classes of words, with <em>reduced</em> words as canonical representatives), in this case, the only word with letters in $\emptyset$ is the empty word, so actually $\delta$ is an isomorphism (of groups!), as is $\mu:\mathfrak{F}_\emptyset\rightarrow 1$ for the same reasons.</p>
|
337,252 | <p>I'm guessing that the free group on an empty set is either the trivial group or isn't defined.
Some clarification would be appreciated.</p>
| Martin Brandenburg | 1,650 | <p>Left adjoints preserve colimits, in particular the empty colimits, i.e. initial objects. In particular, the free group functor takes the initial set, i.e. $\emptyset$, to the initial group, i.e. the trivial group. Another example: The polynomial ring without variables is the base ring.</p>
<p>A must read: <a href="https://mathoverflow.net/questions/45951/sexy-vacuity">sexy vacuity</a></p>
|
3,362,916 | <p>I'm trying to graph <span class="math-container">$|x+y|+|x-y|=4$</span>. I rewrote the expression as follows to get a function that resembles the direction of unit vectors at <span class="math-container">$\pi/4$</span> to the horizontal axis (take it to be <span class="math-container">$x$</span>)<span class="math-container">$$\biggl|\dfrac{x+y}{\sqrt{2}}\biggr|+\biggl|\dfrac{x-y}{\sqrt{2}}\biggr|=2\sqrt{2}$$</span></p>
<p>However, I'm not able to proceed further. Any hints are appreciated. Notice this is an exam problem, so time-efficient methods are key. Please provide any hints accordingly.</p>
| David K | 139,123 | <p>I think you dismiss the solution by cases too quickly.
Actually you only need to do one case, and the rest is developed by symmetry.</p>
<p>The easiest case is when <span class="math-container">$x+y\geq 0$</span> and <span class="math-container">$x - y \geq 0,$</span>
equivalently <span class="math-container">$x\geq -y$</span> and <span class="math-container">$x \geq y,$</span>
or in other words when <span class="math-container">$(x,y)$</span> is to the right of both of the lines
<span class="math-container">$x = y$</span> and <span class="math-container">$x= -y.$</span></p>
<p>In that case the formula simplifies to
<span class="math-container">$$ (x+y) + (x - y) = 4, $$</span>
that is, <span class="math-container">$2x = 4,$</span> so <span class="math-container">$x = 2.$</span> We have a vertical line segment from <span class="math-container">$(2,-2)$</span> to <span class="math-container">$(2,2).$</span></p>
<p>Now observe that the result of <span class="math-container">$\lvert x+y\rvert +\lvert x-y\rvert$</span> does not change if we swap <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.
So we also have the mirror image through the line <span class="math-container">$y=x,$</span> which says we also have the horizontal line segment from <span class="math-container">$(-2,2)$</span> to <span class="math-container">$(2,2).$</span></p>
<p>Notice how this is two adjacent sides of the square with vertices
<span class="math-container">$(-2,2),$</span> <span class="math-container">$(2,2),$</span> <span class="math-container">$(2,-2),$</span> and <span class="math-container">$(-2,-2).$</span></p>
<p>Also observe that the result of <span class="math-container">$\lvert x+y\rvert +\lvert x-y\rvert$</span> does not change if we replace <span class="math-container">$x$</span> and <span class="math-container">$y$</span> with <span class="math-container">$-x$</span> and <span class="math-container">$-y$</span>.
So we also have symmetry via a reflection through the origin, which is also a <span class="math-container">$180$</span>-degree rotation around the origin.
That gives us the other two sides of the square.</p>
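<p>The two symmetries together reflect the identity $\lvert x+y\rvert+\lvert x-y\rvert = 2\max(\lvert x\rvert,\lvert y\rvert)$, so the locus is exactly the square $\max(\lvert x\rvert,\lvert y\rvert)=2$. A quick random test of the identity, plus spot checks of points on and off the square:</p>

```python
import random

f = lambda x, y: abs(x + y) + abs(x - y)

# |x+y| + |x-y| == 2*max(|x|, |y|) at (numerically) every sampled point
random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert abs(f(x, y) - 2 * max(abs(x), abs(y))) < 1e-12

print(f(2, 0.5), f(-1.25, 2))   # 4.0 4.0  (both on the square)
print(f(1, 1))                  # 2.0      (inside, not on the locus)
```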
|
17,975 | <p>How to systematically classify Mathematica expressions? I can think of using <code>Head[]</code>, <code>Depth[]</code>, <code>Length[]</code>, and some special pattern based on the problems at hand. What other key words, or functions should I consider?</p>
<h2>Update</h2>
<p>I mostly want to group symbols by how nested its list are, and what kinds of elements the lists have. For example</p>
<pre><code>{_String, _Symbol}
{{_Integer}, _String}
_String
</code></pre>
<p>would be considered <em>three</em> distinct types.</p>
| s.s.o | 840 | <p>Did you consider looking at <a href="http://www.wolframscience.com/nksonline/toc.html" rel="nofollow">the book from S. Wolfram</a>, <em>A New Kind of Science</em>? There he discusses some main <strong>principles and rules</strong> applied to Mathematica, in particular Chapter 11: The Notion of Computation, and further chapters.</p>
|
17,975 | <p>How to systematically classify Mathematica expressions? I can think of using <code>Head[]</code>, <code>Depth[]</code>, <code>Length[]</code>, and some special pattern based on the problems at hand. What other key words, or functions should I consider?</p>
<h2>Update</h2>
<p>I mostly want to group symbols by how nested its list are, and what kinds of elements the lists have. For example</p>
<pre><code>{_String, _Symbol}
{{_Integer}, _String}
_String
</code></pre>
<p>would be considered <em>three</em> distinct types.</p>
| jVincent | 1,194 | <p>In your updated example, you would find the "classification" using your scheme by replacing the lowest level elements with a pattern based on their heads:</p>
<pre><code>classify[expression_] := Map[Blank[Head[#]] &, expression, {-1}]
</code></pre>
<p>Then you can apply this on template examples of the patterns you listed:</p>
<pre><code>a = {"string", symbol};
b = {{42}, "string"};
c= "string";
classify[a]
classify[b]
classify[c]
(* {_String, _Symbol} *)
(* {{_Integer}, _String} *)
(* _String *)
</code></pre>
|
1,386,307 | <p>If you consider that you have a coin, heads or tails, and let's say tails equals winning the lottery. If I participate in one such event, I may not get tails. It's roughly 50%. But if a hundred people are standing with a coin and I or they get to flip it, my chances of having gotten a tail after these hundred attempts are higher, are they not? Way higher than 50%, though I'm not sure how to calculate it.</p>
<p>So why is it different for lotteries? Or is it? I was once told that in a certain lottery, I had a one in 12 million chance of winning. And like the coin toss, each lottery is different with different odds, but would the accumulated odds be way higher if I participate, be it in this same lottery over a thousand times, or this lottery and thousand other lotteries around country, thereby increasing my chances of getting a win, a tail? </p>
<p>I appreciate a response, especially at level of high school or first year university (did not do math past first year university). Thank you. </p>
| ignoramus | 155,096 | <p>Your chances of winning the lottery <strong>do</strong> increase if you participate in more lotteries. Say you participate 1000 times in a lottery where you have a 1 in 12 million chance of winning. Then the probability that you don't win a single time is $$\biggl(1-\frac{1}{12000000}\biggr)^{1000} \approx 99.992\%$$
So you would still be very unlikely to win a single lottery, but your chances have definitely improved.</p>
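<p>In code, with the same numbers:</p>

```python
p = 1 / 12_000_000            # single-ticket win probability
no_win = (1 - p) ** 1000      # probability of zero wins in 1000 tries
print(f"{no_win:.5%}")        # 99.99167%
print(f"chance of at least one win: {1 - no_win:.3e}")   # ~8.3e-05
```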
|
1,176,938 | <p>How do you show that $-1+(x-4)(x-3)(x-2)(x-1)$ is irreducible in $\mathbb{Q}$?</p>
<p>I don't think you can use the eisenstein criterion here</p>
| TorsionSquid | 202,777 | <p>Using the Gauss lemma as suggested, it suffices to rule out factorizations over $\mathbb{Z}$. Note that $p(k)=-1$ for $k=1,2,3,4$, so $1,2,3,4$ are clearly not roots of $p(x)$. Also, when $x\leq 0$ or $x\geq 5$ we have $p(x)\geq -1+24>0$. So there are no integer roots, which rules out linear factors. If $p=gh$ with $g,h$ monic quadratics in $\mathbb{Z}[x]$, then $g(k)h(k)=p(k)=-1$ for $k=1,2,3,4$, forcing $g(k)=-h(k)=\pm 1$ at each of these four points. But then $g+h$ is a nonzero polynomial of degree $2$ (its leading coefficient is $2$) vanishing at four points, which is impossible. So $p$ is irreducible over $\mathbb{Z}$ and hence over $\mathbb{Q}$.</p>
|
2,869,442 | <blockquote>
<p>Check whether the series
$$\sum_{n=1}^{\infty}\int_0^{\frac{1}{n}}\frac{\sqrt{x}}{1+x^2}\ dx$$
is convergent.</p>
</blockquote>
<p>I tried to sandwich the function by $\dfrac{1}{1+x^2}$ and $\dfrac{x}{1+x^2}$ , but this did not help at all.
Any other way of approaching?</p>
| Doug M | 317,162 | <p>When $x\in [0,1], \frac {\sqrt x}{2} \le\frac {\sqrt x}{1+x^2} \le \sqrt x$</p>
<p>Which means that $\frac 12\int_0^\frac1n \sqrt x\ dx \le \int_0^\frac1n \frac {\sqrt x}{1+x^2} \ dx \le \int_0^\frac1n \sqrt x \ dx$</p>
<p>if $\sum\limits_{n=1}^{\infty} \int_0^\frac1n \sqrt x \ dx $ converges then $\sum\limits_{n=1}^{\infty} \int_0^\frac1n \frac{\sqrt x}{1+x^2} \ dx $ converges.</p>
<p>and if $\sum\limits_{n=1}^{\infty} \int_0^\frac1n \sqrt x \ dx $ diverges then $\sum\limits_{n=1}^{\infty} \int_0^\frac1n \frac 12\sqrt x \ dx $ diverges and $\sum\limits_{n=1}^{\infty} \int_0^\frac1n \frac{\sqrt x}{1+x^2} \ dx $ diverges.</p>
<p>Here $\int_0^\frac1n \sqrt x \ dx = \frac23 n^{-3/2}$, and $\sum\limits_{n=1}^{\infty} n^{-3/2}$ is a convergent $p$-series ($p=\frac32>1$), so the given series converges.</p>
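<p>A crude numerical check (midpoint rule with an arbitrary 200 subintervals per integral) is consistent with convergence: the partial sums increase but stay well below $\frac{2}{3}\zeta(3/2)\approx 1.74$:</p>

```python
import math

def term(n, m=200):
    # midpoint-rule estimate of the n-th integral over [0, 1/n]
    h = 1.0 / (n * m)
    return sum(math.sqrt((k + 0.5) * h) / (1 + ((k + 0.5) * h) ** 2) * h
               for k in range(m))

partials, total, N = [], 0.0, 0
for stop in (10, 100, 1000):
    while N < stop:
        N += 1
        total += term(N)
    partials.append(total)

print(partials)   # increasing, but bounded
```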
|
2,225,650 | <p>Given that $\vec{a}$ and $\vec{b}$ are two non-zero vector. The two vectors form 4 resultant vectors such that $\vec{a} + 3\vec{b}$ and $2\vec{a} - 3\vec{b}$ are perpendicular, $\vec{a} - 4\vec{b}$ and $\vec{a} + 2\vec{b}$ are perpendicular. How can I find the angle between $\vec{a}$ and $\vec{b}$?</p>
<p>The answer given here is 114.09. Any help is much appreciated.</p>
| user26872 | 26,872 | <p>Consider the function $\rho(\phi) = 1-d + d\tanh^2 c\phi$.
This has a dent of depth $d$ at $\phi=0$.
(There are many other possible functions $\rho(\phi)$.
For example, something of the form
$\rho(\phi) = (1-d+c^2\phi^2)/(1+c^2\phi^2)$
should also work well.)
The parameter $c$ is roughly the inverse angular width of the dent.
Consider
$${\bf r} =
\left(\begin{array}{ccc}x&y&z\end{array}\right)^T =
\left(\begin{array}{ccc}
\rho(\phi)\cos\theta\sin\phi &
\rho(\phi)\sin\theta\sin\phi &
\rho(\phi)\cos\phi\end{array}\right)^T.$$
We wish to rotate the dent into the first octant.
This can be achieved by multiplying on the left by the matrix $R$, where
$$R = R_z(\pi/4)R_x(\arccos1/\sqrt3)
= \left(
\begin{array}{ccc}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} &
\frac{1}{\sqrt{3}} \\
-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} &
\frac{1}{\sqrt{3}} \\
0 & -\sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} \\
\end{array}
\right).$$
A parametric plot for $c=5$ and $d$ varying between $0$ and $1-\sqrt3/2$ gives the following:
<a href="https://i.stack.imgur.com/W8zrG.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W8zrG.gif" alt="enter image description here"></a></p>
|
2,225,650 | <p>Given that $\vec{a}$ and $\vec{b}$ are two non-zero vector. The two vectors form 4 resultant vectors such that $\vec{a} + 3\vec{b}$ and $2\vec{a} - 3\vec{b}$ are perpendicular, $\vec{a} - 4\vec{b}$ and $\vec{a} + 2\vec{b}$ are perpendicular. How can I find the angle between $\vec{a}$ and $\vec{b}$?</p>
<p>The answer given here is 114.09. Any help is much appreciated.</p>
| anderstood | 36,578 | <p>You have not been very precise on what you were exactly looking for, but this should serve as a good basis.</p>
<p>The idea is to use a <a href="https://upload.wikimedia.org/wikipedia/commons/thumb/c/ce/Gaussian_2d.png/300px-Gaussian_2d.png" rel="nofollow noreferrer">Gaussian</a> to make the dent. The number of dents in $\theta$ is denoted by $n_\theta$, and $n_\phi$ in $\phi$.</p>
<p>Then, introducing some parameters controlling depth ($d$), "sharpness" ($s$), you can use something like:</p>
<p>$$r(\theta,\phi)=1-d\exp\big(s(\sin(n_\theta \theta)^2+\sin(n_\phi \phi)^2)\big)$$</p>
<p>in the parametrization of the sphere $r(\cos(\theta)\sin(\phi),\sin(\theta)\sin(\phi),\cos(\theta))$.</p>
<p>The following <em>Mathematica code</em> provides an example and can be adapted to produce the following animations. The result is better when $n_\theta$ and $n_\phi$ are close.</p>
<pre><code>d = 0.03; s = 1.2; np = 8; nt = 5.;
r[theta_,
phi_] := (1 - d*Exp[-s*(-Sin[np*phi]^2 - Sin[nt*theta]^2)]);
ParametricPlot3D[
r[theta, phi]*{Cos[theta] Sin[phi], Sin[theta] Sin[phi],
Cos[phi]}, {theta, 0, 2 Pi}, {phi, 0, Pi}, Boxed -> False,
Axes -> False, Mesh -> None, PlotPoints -> 25,
PlotLegends -> {"np=" <> ToString[np] <> ", nt=" <> ToString[2 nt]}]
</code></pre>
<p>Influence of depth $d$:</p>
<p><a href="https://i.stack.imgur.com/SG4gU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SG4gU.jpg" alt="enter image description here"></a></p>
|
2,291,310 | <p>I'm seeking an alternative proof of this result:</p>
<blockquote>
<p>Given $\triangle ABC$ with right angle at $A$. Point $I$ is the intersection of the three angle lines. (That is, $I$ is the incenter of $\triangle ABC$.) Prove that
$$|CI|^2=\frac12\left(\left(\;|BC|-|AB|\;\right)^2+|AC|^2\right)$$</p>
</blockquote>
<p><strong>My Proof.</strong> Draw $ID \perp AB$, $IE\perp BC$, and $IF\perp AC$.</p>
<p>We have $|ID|=|IE|=|IF|=x$. Since $\triangle ADI$ is a right isosceles triangle (as $AI$ bisects the right angle at $A$), we also have $|AD|=|ID|=x$, and likewise $|AF|=|IF|=x$. Altogether: $$|ID|=|IF|=|IE|=|AD|=|AF|=x$$ </p>
<p>$\triangle BDI\cong\triangle BEI \Rightarrow |BD|=|BE|=y$. And $|CE|=|CF|=z$.</p>
<p>We have:<br>
$$|CI|^2=|CE|^2+|IE|^2=x^2+z^2 \tag{1}$$</p>
<p>And
$$\begin{align}\frac12\left(\left(|BC|-|AB|\right)^2+|AC|^2\right) &=\frac12\left(\left(\;\left(y+z\right)-\left(x+y\right)\;\right)^2+\left(x+z\right)^2\right) \\[4pt]
&=\frac12\left(\left(x-z\right)^2+\left(x+z\right)^2\right) \\[4pt]
&=\frac22\left(x^2+z^2\right) \\[4pt]
&=x^2+z^2
\tag{2}\end{align}$$</p>
<p>From $(1);(2)$ we are done. $\square$</p>
| Blue | 409 | <p><a href="https://i.stack.imgur.com/1gMfOm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1gMfOm.png" alt="enter image description here"></a></p>
<p>Let the circle about $B$ through $C$ meet the extension of $\overline{AB}$ at the point $C^\prime$. By symmetry and a little angle chasing in isosceles $\triangle CBC^\prime$, we find that $\triangle CIC^\prime$ is an isosceles right triangle. Consequently,
$$|IC|^2 + |IC|^2 = |IC|^2 + |IC^\prime|^2 = |CC^\prime|^2 = |AC|^2 + |AC^\prime|^2 = |AC|^2 + ( |BC| - |AB| )^2$$</p>
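<p>The identity is easy to test numerically, e.g. on the 3-4-5 right triangle using the standard incenter formula $I=\dfrac{aA+bB+cC}{a+b+c}$:</p>

```python
# right angle at A; the 3-4-5 triangle as a concrete instance
A, B, C = (0.0, 0.0), (3.0, 0.0), (0.0, 4.0)
a, b, c = 5.0, 4.0, 3.0      # a = |BC|, b = |CA|, c = |AB|

I = tuple((a * A[i] + b * B[i] + c * C[i]) / (a + b + c) for i in range(2))
CI2 = (C[0] - I[0]) ** 2 + (C[1] - I[1]) ** 2
rhs = ((a - c) ** 2 + b ** 2) / 2   # (|BC| - |AB|)^2 + |AC|^2, halved

print(I)          # (1.0, 1.0)
print(CI2, rhs)   # 10.0 10.0
```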
|
2,919,841 | <blockquote>
<p><span class="math-container">$$\Large\bigcup\limits_{k\in\bigcup\limits_{i\in I}J_i}A_k=\bigcup\limits_{i\in I}\bigg(\bigcup\limits_{k\in J_i}A_k\bigg)$$</span></p>
</blockquote>
<hr />
<p><strong>My attempt:</strong></p>
<p><span class="math-container">$\large x\in\bigcup\limits_{k\in\bigcup\limits_{i\in I}J_i}A_k\iff (\exists k\in\bigcup\limits_{i\in I}J_i)(x\in A_k)\iff [(\exists i\in I) (k\in J_i)](x\in A_k)$</span></p>
<p><span class="math-container">$\large x\in \bigcup\limits_{i\in I}\bigg(\bigcup\limits_{k\in J_i}A_k\bigg) \iff (\exists i\in I)\bigg(x\in \bigcup\limits_{k\in J_i}A_k\bigg) \iff (\exists i\in I)[(\exists k\in J_i)(x\in A_k)]$</span></p>
<p>I think the equality holds if and only if we show that <span class="math-container">$$[(\exists i\in I) (k\in J_i)](x\in A_k) \iff (\exists i\in I)[(\exists k\in J_i)(x\in A_k)]$$</span></p>
<hr />
<blockquote>
<p>My questions:</p>
<ol>
<li><p>Are my above transformations fine?</p>
</li>
<li><p>How do I proceed to prove the last statement?</p>
</li>
</ol>
<p>Many thanks for your help!</p>
</blockquote>
| Peter Szilas | 408,605 | <p>$c>1$. $c^n= \exp (n\log c)$, where $\log c >0.$</p>
<p>$\dfrac{n^a}{\exp (n\log c)}$;</p>
<p>Set $b:=\dfrac{a}{\log c} >0$.</p>
<p>$(\dfrac{n^b}{\exp n})^{\log c}.$</p>
<p>Now take the limit $n \rightarrow \infty$: since $\dfrac{n^b}{\exp n}\to 0$ and $\log c>0$, the whole expression $\left(\dfrac{n^b}{\exp n}\right)^{\log c}\to 0$.</p>
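<p>Numerically (working with logarithms to avoid overflow; $a=5$ and $c=1.01$ are arbitrary choices):</p>

```python
import math

a, c = 5.0, 1.01
f = lambda n: math.exp(a * math.log(n) - n * math.log(c))   # n^a / c^n

for n in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5):
    print(n, f(n))
# the values rise at first, then collapse toward 0 (underflowing at n = 10^5)
```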
|
2,231,487 | <p>In [Mathematical Logic] by Chiswell and Hodges, within the context of natural deduction and the language of propositions LP (basically like <a href="http://www.cs.cornell.edu/courses/cs3110/2011sp/lectures/lec13-logic/logic.htm" rel="nofollow noreferrer">here</a>) it is asked to show, by counter-example that a certain 'sequent rule' is 'unacceptable'.</p>
<p>I suppose the proof should follow an example a few pages earlier that shows that the sequent rule $$(p_0 \to p_1 \vdash p_1)$$ is unacceptable due to the following counter-example: let both $p_0$ and $p_1$ mean $(2=3)$. The book argues that indeed if $(2=3)$ then $(2=3)$, so the left side is true, but the right side: $(2=3)$ is false. The conclusion is that we found a counter-examples and the sequent rule is unacceptable.</p>
<p>It is now asked to prove that this sequent rule is unacceptable: </p>
<p>If $$(\Gamma \vdash (\phi \lor \psi))$$ is correct (i.e has a derivation), then at least one of $$(\Gamma \vdash (\phi))$$ and $$(\Gamma \vdash (\psi))$$ is also correct. </p>
<p>The book hints that one should first try to give a counter-example 'for both sequents $(\vdash p_0)$ and $(\vdash \lnot p_0)$'. What could such a $p_0$ be? Does the book mean using as $p_0$ something like 'red is square'? Or some liar kind of sentence 'this sentence is false'? Or some known undecidable statement (which I doubt due to the level of this book)? Or am I totally off and misunderstand something?</p>
| jadn | 70,766 | <p>$p_0$ could be: 'it is raining'. Then, while we can claim without any assumptions that '(it is raining) or (it is not raining)' is correct, it is clear that we cannot from that claim derive a claim, using no assumptions, that '(it is raining)' is correct, similarly for '(it is not raining)'.</p>
|
1,750,104 | <p>I've had this question in my exam, which most of my batch mates couldn't solve it.The question by the way is the Laplace Transform inverse of </p>
<p>$$\frac{\ln s}{(s+1)^2}$$</p>
<p>A Hint was also given, which includes the Laplace Transform of ln t.</p>
| Ashok Saini | 539,858 | <p>$$x(t) \rightleftharpoons X(s)$$</p>
<p>$$tx(t) \rightleftharpoons -\frac{dX(s)}{ds}$$</p>
<p>$$x_1(t) \rightleftharpoons \ln(s)$$</p>
<p>$$tx_1(t) \rightleftharpoons -\frac{1}{s}$$</p>
<p>$$tx_1(t) = -u(t)$$</p>
<p>$$x_1(t) = -\frac{u(t)}{t}$$</p>
<p>$$x_2(t) \rightleftharpoons \frac{1}{(s+1)^2}$$</p>
<p>$$e^{-t} u(t) \rightleftharpoons \frac{1}{s+1}$$</p>
<p>$$te^{-t} u(t) \rightleftharpoons \frac{1}{(s+1)^2}$$</p>
<p>$$x_2(t) = te^{-t}u(t)$$</p>
<p>$$x_1(t)*x_2(t) \rightleftharpoons X_1(s) X_2(s)$$</p>
<p>$$-\frac{u(t)}{t} * te^{-t}u(t) \rightleftharpoons \frac{\ln(s)}{(s+1)^2}$$</p>
|
19,285 | <p>Is anyone aware of Mathematica use/implementation of <a href="http://en.wikipedia.org/wiki/Random_forest">Random Forest</a> algorithm?</p>
| Seth Chandler | 5,775 | <p>I very much enjoy Dan's approach in part because it is so simple both in concept and implementation. I'm taking the liberty here of suggesting a few arguable improvements to his terrific code. For makeForest (a) the data is in the same format as is used in functions such as LinearModelFit (a simple array instead of a list of rules of features onto class); (b) standard machine learning vocabulary in naming variables; (c) direct use of RandomSample on the data.</p>
<pre><code>makeForest[data_, ntrees_, subdim_, sizes_] :=
Module[{dims = Dimensions[data], numberOfInstances,
numberOfAttributes, leaves, slots, subleaves, nf},
numberOfInstances = First[dims];
numberOfAttributes = Last[dims] - 1;
Table[(leaves = RandomSample[data, sizes];
slots = Sort[RandomSample[Range[numberOfAttributes], subdim]];
subleaves = Map[#[[slots]] -> #[[-1]] &, leaves];
nf = Nearest[subleaves];
{slots, nf}), {ntrees}]
]
</code></pre>
<p>For classify,(a) clarify with named variables how one extracts the slots and the NearestFunction from the forest; (b) use the built in Commonest instead of the Reverse SortBy Tally composition. This method does lose the number of votes each prediction received; (c) create an optional argument k that effectively implements kNN on the data instead of Dan's required 1NN.</p>
<pre><code>classify[instance_, forest_, n_, k_: 1] := Module[{predictions},
predictions = Flatten[Table[Module[{tree, nf, slots},
tree = forest[[j]];
slots = tree[[1]];
nf = tree[[2]];
nf[instance[[slots]], k]], {j, Length[forest]}]];
Commonest[predictions, n]]
</code></pre>
<p>Also, a wacky idea that follows up on Andy Ross's comment: could one not pretty easily create an ensemble of forests and then let the forests that perform well on the training data have breeding rights? Breeding might consist of some sort of reshuffling of the slots. To do this, we might take more liberties with Dan's code by creating a polymorphic makeForest that permits the slots to be input directly.</p>
<pre><code>makeForest[data_, slotList_, sizes_] :=
Module[{dims = Dimensions[data], numberOfInstances,
numberOfAttributes, leaves, slots, subleaves, nf},
numberOfInstances = First[dims];
numberOfAttributes = Last[dims] - 1;
Table[(leaves = RandomSample[data, sizes];
slots = Sort[slotList[[tree]]];
subleaves =
Map[leaf \[Function] leaf[[slots]] -> leaf[[-1]], leaves];
nf = Nearest[subleaves];
{slots, nf}), {tree, Length[slotList]}]
]
makeForest[data_, ntrees_, subdim_, sizes_] :=
Module[{dims = Dimensions[data], numberOfAttributes},
numberOfAttributes = Last[dims] - 1;
makeForest[
data,
Table[Sort[
RandomSample[Range[numberOfAttributes], subdim]], {ntrees}
],
sizes
]
]
</code></pre>
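<p>For readers outside Mathematica, the same random-subspace 1-NN ensemble can be sketched in plain Python. Everything below (names, the toy two-cluster data) is invented for illustration rather than a port of the code above:</p>

```python
import random
from collections import Counter

def make_forest(data, ntrees, subdim, size):
    # data: rows of [feature, ..., label]; each "tree" is a random attribute
    # subset plus a random subsample of the rows
    nattr = len(data[0]) - 1
    forest = []
    for _ in range(ntrees):
        leaves = random.sample(data, size)
        slots = sorted(random.sample(range(nattr), subdim))
        forest.append((slots,
                       [([row[s] for s in slots], row[-1]) for row in leaves]))
    return forest

def classify(instance, forest):
    votes = []
    for slots, leaves in forest:
        x = [instance[s] for s in slots]
        # 1-NN vote within this tree's attribute subspace
        _, label = min(leaves,
                       key=lambda p: sum((u - w) ** 2 for u, w in zip(p[0], x)))
        votes.append(label)
    return Counter(votes).most_common(1)[0][0]

random.seed(0)
data = ([[random.gauss(0, 1) for _ in range(3)] + ["A"] for _ in range(40)] +
        [[random.gauss(5, 1) for _ in range(3)] + ["B"] for _ in range(40)])
forest = make_forest(data, ntrees=25, subdim=2, size=30)
print(classify([0, 0, 0], forest), classify([5, 5, 5], forest))   # A B
```

<p>On such well-separated toy data the vote is essentially unanimous; real use would of course call for held-out evaluation.</p>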
|
3,053,386 | <p>This might be a very basic question for some of you. Indeed in <span class="math-container">$\textbf Z$</span>, it's very easy. For example, <span class="math-container">$\textbf Z / \langle 2 \rangle$</span> consists of <span class="math-container">$\langle 2 \rangle$</span> and <span class="math-container">$\langle 2 \rangle + 1$</span>. Obviously just two elements. In general, if <span class="math-container">$p$</span> is a positive prime in <span class="math-container">$\textbf Z$</span>, then <span class="math-container">$\textbf Z / \langle p \rangle$</span> consists of one principal ideal and <span class="math-container">$p - 1$</span> cosets.</p>
<p>I guess it's also easy in imaginary quadratic integer rings, since we can visualize them in the complex plane, e.g., <span class="math-container">$\textbf Z[i] / \langle 1 + i \rangle$</span> consists of <span class="math-container">$\langle 1 + i \rangle$</span>, <span class="math-container">$\langle 1 + i \rangle + 1$</span> and <span class="math-container">$\langle 1 + i \rangle + i$</span>... wait a minute, three elements? I'm not sure that's quite right.</p>
<p>And I really have no idea how to go about, say, <span class="math-container">$\textbf Z[\sqrt{14}] / \langle 4 + \sqrt{14} \rangle$</span>. To say nothing of something like <span class="math-container">$\textbf Z[\sqrt{10}] / \langle 2, \sqrt{10} \rangle$</span>.</p>
<p>Given a ring <span class="math-container">$R$</span> of algebraic integers of degree <span class="math-container">$2$</span>, and a prime ideal <span class="math-container">$\mathfrak P$</span>, how do you determine how many elements there are in <span class="math-container">$R / \mathfrak P$</span>?</p>
| nguyen quang do | 300,700 | <p>I'm afraid there is no general approach if not going through the ring of integers of the number field. Recall that for a number field <span class="math-container">$K$</span> of degree <span class="math-container">$n$</span> over <span class="math-container">$\mathbf Q$</span>, the norm <span class="math-container">$N(x)$</span> of an element <span class="math-container">$x\in K^*$</span> is defined as the product <span class="math-container">$s_1(x)...s_n(x)$</span>, where the <span class="math-container">$s_i$</span> are the <span class="math-container">$n$</span> distinct embeddings of <span class="math-container">$K$</span> into <span class="math-container">$\mathbf C$</span>. If <span class="math-container">$x$</span> is a non zero element of the ring of integers <span class="math-container">$R$</span> of <span class="math-container">$K$</span>, it is known that <span class="math-container">$\mid N(x)\mid $</span> is finite, equal to card <span class="math-container">$(R/Rx)$</span>. This is easily shown by using the fact that <span class="math-container">$R$</span> is a free <span class="math-container">$\mathbf Z$</span>-module of rank <span class="math-container">$n$</span> and, because <span class="math-container">$\mathbf Z$</span> is a PID, there exists a <span class="math-container">$\mathbf Z$</span>-basis <span class="math-container">$(e_1 ,..., e_n)$</span> of <span class="math-container">$R$</span> and elements <span class="math-container">$c_j$</span> of <span class="math-container">$\mathbf Z$</span> s.t. <span class="math-container">$(c_1 e_1,..., c_n e_n)$</span> is a <span class="math-container">$\mathbf Z$</span>-basis of <span class="math-container">$Rx$</span>. 
Thus, to get card <span class="math-container">$(R/Rx)$</span>, it suffices to compute <span class="math-container">$N(x)$</span>, which is, up to a sign, the constant term of the minimal polynomial of <span class="math-container">$x$</span> over <span class="math-container">$\mathbf Q$</span>. <em>Warning</em> : if <span class="math-container">$K=\mathbf Q (x)$</span>, then <span class="math-container">$\mathbf Z [x]$</span> is contained in, but not necessarily equal to <span class="math-container">$R$</span>. However the discrepancy between these two rings can be dealt with when knowing enough parameters attached to <span class="math-container">$K$</span> such as the discriminant, etc. See e.g. the examples of quadratic and cubic fields given in Marcus' book "Number Fields", chap. 2. But there is <em>no general systematic</em> approach. <strong>Your example</strong>: <span class="math-container">$K=\mathbf Q(\sqrt 14), x= 4+\sqrt 14, N(x)=2$</span>. Here <span class="math-container">$R=\mathbf Z[\sqrt 14]$</span> because <span class="math-container">$14\equiv 2$</span> mod <span class="math-container">$4$</span>, so card <span class="math-container">$\mathbf Z[\sqrt 14]/(4+\sqrt 14)=2$</span>.</p>
<p>More generally, for a non zero ideal <span class="math-container">$I$</span> of <span class="math-container">$R$</span>, the norm <span class="math-container">$N(I)$</span> can be <em>defined</em> as card <span class="math-container">$(R/I)$</span>, but then, we must give an independent way to compute this cardinal. This is done by using the fact that <span class="math-container">$R$</span> is a Dedekind ring, in which any non zero <span class="math-container">$I$</span> can be written uniquely as a product <span class="math-container">$I={P_1}^{m_1}...{P_r}^{m_r}$</span> of powers of prime ideals of <span class="math-container">$R$</span> (or maximal ideals, it's the same thing here). By the CRT, we are reduced to compute card <span class="math-container">$(R/P^m)$</span>. For this, consider the chain of ideals <span class="math-container">$P^m<P^{m-1}<...<P<R$</span>, in which two consecutive terms are of the form <span class="math-container">$PJ<J$</span>, hence without any ideal strictly squeezed between them. It follows that, denoting by <span class="math-container">$p$</span> the prime of <span class="math-container">$\mathbf Z$</span> under <span class="math-container">$P$</span> and by <span class="math-container">$f_P$</span> the inertia index of <span class="math-container">$P$</span>, the quotient <span class="math-container">$J/PJ$</span> can be viewed as a <span class="math-container">$1$</span>-dimensional vector space over <span class="math-container">$R/P$</span>, hence a vector space of dimension <span class="math-container">$f_P$</span> over <span class="math-container">$\mathbf F_p$</span>. In other words, <span class="math-container">$\dim_{\mathbf F_p}(R/P^m)=mf_P$</span>, i.e. card <span class="math-container">$(R/P^m)=p^{mf_P}$</span>. <strong>Your example</strong>: <span class="math-container">$K=\mathbf Q(\sqrt 10), I=(2,\sqrt 10)$</span>.</p>
Here <span class="math-container">$R=\mathbf Z[\sqrt 10]$</span>, again because <span class="math-container">$10\equiv 2$</span> mod <span class="math-container">$4$</span>. You could factorize <span class="math-container">$I$</span> and apply the above result, but it's quicker to consider the isomorphism of <em>additive groups</em> <span class="math-container">$R/I\cong (R/\mathbf Z\sqrt 10)/ (I/\mathbf Z \sqrt 10)\cong\mathbf Z/2\mathbf Z$</span> (check).</p>
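<p>As a quick numerical illustration (an editor's addition, not part of the original answer; the function names are mine), the identity card <span class="math-container">$(R/Rx)=|N(x)|$</span> can be checked by taking the determinant of the matrix of multiplication by <span class="math-container">$x$</span> on the <span class="math-container">$\mathbf Z$</span>-basis <span class="math-container">$(1,\sqrt d)$</span> of <span class="math-container">$\mathbf Z[\sqrt d]$</span>:</p>

```python
def mult_matrix(a, b, d):
    """Matrix of multiplication by x = a + b*sqrt(d) on the Z-basis (1, sqrt(d))."""
    # x * 1       = a   * 1 + b * sqrt(d)
    # x * sqrt(d) = b*d * 1 + a * sqrt(d)
    return [[a, b * d], [b, a]]

def quotient_size(a, b, d):
    """card(Z[sqrt(d)] / (a + b*sqrt(d))) = |det| of the matrix = |N(a + b*sqrt(d))|."""
    m = mult_matrix(a, b, d)
    return abs(m[0][0] * m[1][1] - m[0][1] * m[1][0])
```

<p>For <span class="math-container">$x=4+\sqrt{14}$</span> this gives <span class="math-container">$2$</span>, and with <span class="math-container">$d=-1$</span>, <span class="math-container">$x=1+i$</span> it also gives <span class="math-container">$2$</span>, which settles the doubt in the question about <span class="math-container">$\mathbf Z[i]/\langle 1+i\rangle$</span>.</p>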
|
496,011 | <p>Suppose that $ p_n(t) $ is the probability of finding n particle at a time t. And the dynamics of the particle is described by this equation : </p>
<p>$$ \frac{d}{dt} p_n(t) = \lambda \Delta p_n(t) $$</p>
<p>Defining one - dimensional lattice translation operator $ E_m = e^{mk} $ with $ km - mk = 1 $ and $\Delta = E_1 + E_{-1} - 2 $ is a lattice laplacian.</p>
<p>So, here are my questions:</p>
<ol>
<li>What is the one-dimensional lattice translation operator ? I think this one is a term from statistical physics, can you give me a simple explanation or a reference ?</li>
<li>What is the lattice laplacian ? Is it similar to discrete laplacian ? Could you give me a reference for this one ?</li>
<li>Is it possible to solve this equation analytically ?</li>
</ol>
<p>Oh, all of this equation is about random walk with a boundary (random walk of a particle)</p>
<p>Thanks</p>
| Felix Marin | 85,343 | <p>\begin{align}
&\int x^{3}\,\sqrt{x^{2} + 1\,}\,{\rm d}x
=
\int x^{2}\,{\rm d}\left[{1 \over 3}\,\left(x^{2} + 1\right)^{3/2}\right]
\\[3mm]&=
x^{2}\,{1 \over 3}\left(x^{2} + 1\right)^{3/2}
-
\int{1 \over 3}\left(x^{2} + 1\right)^{3/2}
\,{\rm d}\left(x^{2} + 1\right)
\\[3mm]&=
{1 \over 3}\,x^{2}\left(x^{2} + 1\right)^{3/2}
-
{1 \over 3}\,{\left(x^{2} + 1\right)^{5/2} \over 5/2}
=
\left(x^{2} + 1\right)^{3/2}\left[%
{1 \over 3}\,x^{2}
-
{2 \over 15}\left(x^{2} + 1\right)
\right]
\\[3mm]&=
\left(x^{2} + 1\right)^{3/2}\,{3x^{2} - 2 \over 15}
\end{align}</p>
<p>$$
\begin{array}{|c|}\hline\\
\color{#ff0000}{\large\quad%
\int x^{3}\,\sqrt{x^{2} + 1\,}\,{\rm d}x
\color{#000000}{\ =\ }
{1 \over 15}\left(3x^{2} - 2\right)\left(x^{2} + 1\right)^{3/2}\
+\
\color{#000000}{\mbox{constant}}
\quad}
\\ \\ \hline
\end{array}
$$</p>
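<p>A quick sanity check (an editor's addition, not part of the original derivation; names are illustrative): the central-difference derivative of the boxed antiderivative should reproduce the integrand <span class="math-container">$x^{3}\sqrt{x^{2}+1}$</span> up to finite-difference error.</p>

```python
import math

def F(x):
    """The boxed antiderivative: (3x^2 - 2)(x^2 + 1)^(3/2) / 15."""
    return (3 * x**2 - 2) * (x**2 + 1) ** 1.5 / 15

def integrand(x):
    return x**3 * math.sqrt(x**2 + 1)

def mismatch(x, h=1e-6):
    """|F'(x) - integrand(x)|, with F' estimated by a central difference."""
    return abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
```

<p>The mismatch stays at the level of the finite-difference error at every sample point, consistent with the boxed result.</p>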
|
37,804 | <p>I'm trying to gain some intuition for the usefullness of the spectral theory for bounded self adjoint operators. I work in PDE and any interesting applications/examples I've ever encountered are concerning <em>compact operators</em> and <em>unbounded operators</em>. Here I have the examples of $-\Delta$, the laplacian and $(-\Delta)^{-1}$, the latter being compact.</p>
<p>The most common example I see of a bounded non-compact operator is the shift map on $l_2$ given by $T(u_1,u_2,\cdots) = (u_2,u_3,\cdots)$. While this nicely illustrates the different kind of spectra, I don't see why this is useful or where this may come up in practice.<em>Why does knowing things about the spectrum of the shift operator help you in any practical way?</em></p>
<p>Secondly, concerning the spectral theorem for bounded, <em>self adjoint</em> operators. All useful applications I have encountered concern <em>compact or unbounded operators</em>. Is there an example arising in PDE (preferably) or some other applied field where knowing the spectral representation for a bounded, non-compact operator is useful? I have yet to encounter one that didn't just reduce to the compact case. Any insight/suggestions are appreciated.</p>
<p>Best,
dorian</p>
| user36539 | 36,539 | <p>Pseudodifferential operators are bounded (Theorem of Calderon-Vaillancourt), in contrast to differential operators, which are unbounded (they are defined on a dense subspace of $L^2$). We can try to use the spectral theorem to obtain explicitly the spectral function $E(\lambda)$. </p>
<p>Another example is given by the paper of Safarov <a href="http://www2.imperial.ac.uk/~alaptev/Papers/Berez.pdf" rel="nofollow">http://www2.imperial.ac.uk/~alaptev/Papers/Berez.pdf</a> which gives another application of the spectral theorem to obtain the Berezin inequality which is important in the study of spectral properties of differential and pseudodifferential operators. </p>
|
2,201,085 | <p>Let
$$x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\ge 0$$ such that
$$x_{1}+x_{2}+x_{3}+x_{4}+x_{5}+x_{6}=1$$
Find the maximum of the value of
$$\sum_{i=1}^{6}x_{i}\;x_{i+1}\;x_{i+2}\;x_{i+3}$$
where
$$x_{7}=x_{1},\quad x_{8}=x_{2},\quad x_{9}=x_{3}\,.$$</p>
| Michael Rozenberg | 190,319 | <p>For $x_i=\frac{1}{6}$ we get $\frac{1}{216}$.</p>
<p>We'll prove that it's a maximal value.</p>
<p>Indeed, let $x_1=\min\{x_i\}$, $x_2=x_1+a$, $x_3=x_1+b$, $x_4=x_1+c$, $x_5=x_1+d$ and $x_6=x_1+e$.</p>
<p>Hence, $a$, $b$, $c$, $d$ and $e$ are non-negatives and we need to prove that:
$$216\sum_{i=1}^6x_ix_{i+1}x_{i+2}x_{i+3}\leq\left(\sum_{i=1}^6x_i\right)^4$$ or
$$216(a^2+b^2+c^2+d^2+e^2-ab-bc-cd-de)x_1^2+$$
$$24((a+b+c+d+e)^3-9(2abc+abd+abe+acd+ade+2bcd+bce+bde+2cde))x_1+$$
$$+(a+b+c+d+e)^4-216(abcd+bcde)\geq0,$$
which is true because
$$a^2+b^2+c^2+d^2+e^2-ab-bc-cd-de\geq$$
$$\geq a^2+b^2+c^2+d^2+e^2-ab-bc-cd-de-ea=\frac{1}{2}\sum_{cyc}(a-b)^2\geq0,$$
$$216(abcd+bcde)=216bcd(a+e)\leq216\left(\frac{a+b+c+d+e}{4}\right)^4=$$
$$=\frac{216}{256}(a+b+c+d+e)^4\leq(a+b+c+d+e)^4$$ and
$$(a+b+c+d+e)^3\geq9(2abc+abd+abe+acd+ade+2bcd+bce+bde+2cde),$$
but my proof of this statement is very ugly.</p>
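<p>A Monte Carlo sanity check of the claimed maximum (an editor's addition; the sampling scheme is only illustrative):</p>

```python
import random

def cyclic_sum(x):
    """sum_{i=1}^{6} x_i x_{i+1} x_{i+2} x_{i+3}, indices taken mod 6."""
    return sum(x[i] * x[(i + 1) % 6] * x[(i + 2) % 6] * x[(i + 3) % 6]
               for i in range(6))

def random_simplex_point(rng):
    """A random nonnegative 6-tuple summing to 1."""
    w = [rng.random() for _ in range(6)]
    s = sum(w)
    return [v / s for v in w]

rng = random.Random(0)
best = max(cyclic_sum(random_simplex_point(rng)) for _ in range(50000))
# best stays below the proved maximum 1/216, attained at x_i = 1/6
```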
|
203,114 | <p>If we have a pair of coordinates <span class="math-container">$(x,y)$</span>, let's say</p>
<pre><code>pt = {1,2}
</code></pre>
<p>then we can easily rotate the coordinates, by an angle <span class="math-container">$\theta$</span>, by using the rotation matrix</p>
<pre><code>R = {{Cos[\[Theta]], -Sin[\[Theta]]}, {Sin[\[Theta]], Cos[\[Theta]]}};
</code></pre>
<p>as</p>
<pre><code>pt2 = pt.R;
</code></pre>
<p>Now let's assume that we have a collection of points in the form</p>
<pre><code>data = {{1}, {-0.3, 1}, {2, -0.2}, {2}, {-2, 1}, {4,-2}, {3}, {1, 1}, {-0.2, -0.3}}
</code></pre>
<p>where the integers 1, 2 and 3 count the subsets of the list <code>data</code>. </p>
<p>My question: how can we rotate the <span class="math-container">$(x,y)$</span> coordinates of the list <code>data</code> by and angle, let's say <span class="math-container">$2\pi/3$</span> and create a new list, <code>data2</code> of the form</p>
<pre><code>data2 = {{1}, {rotated x, rotated y}, {rotated x, rotated y}, {2}, {rotated x, rotated y}, {roatetd x, rotated y}, ...}
</code></pre>
<p>Any suggestions?</p>
| Henrik Schumacher | 38,178 | <p>Maybe this way?</p>
<pre><code>data = {{1}, {-0.3, 1}, {2, -0.2}, {2}, {-2, 1}, {4, -2}, {3}, {1, 1}, {-0.2, -0.3}};
R = {{Cos[\[Theta]], -Sin[\[Theta]]}, {Sin[\[Theta]], Cos[\[Theta]]}};
data2 = data;
data2[[2 ;; ;; 3]] = data[[2 ;; ;; 3]].Transpose[R];
data2[[3 ;; ;; 3]] = data[[3 ;; ;; 3]].Transpose[R];
</code></pre>
<p>However, I advice not to store your data this way because, as a ragged list, it cannot be <a href="https://mathematica.stackexchange.com/q/3496">packed</a>.</p>
|
1,348,127 | <p>I'm struggling with this problem, because I'm not sure how to integrate $1/\ln(x)$</p>
<blockquote>
<p>Suppose that you have the following information about a function
$F(x)$:</p>
<p>$$F(0)=1, F(1)=2, F(2)=5$$ $$F'(x)=\frac1{\ln(x)}$$</p>
<p>Using the Fundamental Theorem of Calculus, evaluate $$\int_0^2
\frac{2}{\ln(x)}\,\mathrm{d}x$$</p>
</blockquote>
| Daniel Fischer | 83,702 | <p>Since $\cos$ is an even function, you have in fact a telescoping series:</p>
<p>\begin{align}
\sum_{n = 1}^N \sin x\sin (nx) &= \frac{1}{2}\sum_{n = 1}^N \bigl(\cos\bigl((n-1)x\bigr) - \cos \bigl((n+1)x\bigr)\bigr)\\
&= \frac{1}{2}\bigl( 1 + \cos x - \cos (Nx) - \cos \bigl((N+1)x\bigr)\bigr).
\end{align}</p>
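<p>A numerical check of the telescoped identity (an editor's addition, not part of the original answer):</p>

```python
import math

def lhs(x, N):
    """Partial sum sum_{n=1}^{N} sin(x) sin(nx)."""
    return sum(math.sin(x) * math.sin(n * x) for n in range(1, N + 1))

def rhs(x, N):
    """Telescoped closed form (1/2)(1 + cos x - cos(Nx) - cos((N+1)x))."""
    return 0.5 * (1 + math.cos(x) - math.cos(N * x) - math.cos((N + 1) * x))
```

<p>For $N=1$ the closed form reduces to $\frac{1}{2}(1-\cos 2x)=\sin^2 x$, as it should.</p>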
|
3,794,507 | <p>On <span class="math-container">$(5),$</span> <span class="math-container">$(6),$</span> and <span class="math-container">$(7),$</span> what's the difference between <span class="math-container">$S^2$</span> and <span class="math-container">$\sigma_x^2$</span>?</p>
<p>Also, why does:</p>
<p><span class="math-container">$$\sigma_X^2 = \sum \limits_{i=1}^{n} \frac{1}{n^2} \sigma^2 = \frac{\sigma^2}{n}$$</span></p>
<p>?</p>
<p>I'm assuming <span class="math-container">$\sigma^2$</span> is the population variance.</p>
<p>It seems like S is a random variable since I can take the expectation of it, but, <span class="math-container">$\sigma_x$</span> is the same thing except not a random variable?</p>
<hr />
<p>Let <span class="math-container">$(X_1, \cdots, X_n)$</span> be a random sample of <span class="math-container">$X$</span> having unknown mean <span class="math-container">$\mu$</span>, and variance <span class="math-container">$\sigma_x^2$</span></p>
<p><span class="math-container">\begin{align}
S^2 &= \frac{1}{n} \sum (X_i - \bar{X})^2 \tag{0}\\[4ex]
E[S^2] &= E\Big[\frac{1}{n} \sum (X_i - \bar{X})^2 \Big]\tag{1}\\[2ex]
&= E\Bigg[\frac{1}{n} \sum \limits_{i=1}^{n}\Big[~[(X_i - \mu)-(\bar{X}-\mu)]^2~\Bigg]\tag{2}\\[2ex]
&= E\Bigg[ \frac{1}{n} \sum \limits_{i=1}^{n} \Big[~(X_i-\mu)^2-2(X_i-\mu)(\bar{X}-\mu)+(\bar{X}-\mu)^2~\Big] ~\Bigg]\tag{3}\\[2ex]
&= E\Bigg[~\frac{1}{n} \Big[~\sum \limits_{i=1}^{n} (X_i - \mu)^2 - n(\bar{X} - \mu)^2 \Big]~\Bigg]\tag{4}\\[2ex]
&= \frac{1}{n} \sum \limits_{i=1}^{n} E\big[(X_i-\mu)^2\big] - E\big[(\bar{X}-\mu)^2\big]\tag{5}\\[2ex]
&= \sigma^2 - \sigma_X^2\tag{6}\\[2ex]
&= \sigma^2 - \frac{1}{n}\sigma^2\tag{7}\\[2ex]
&= \frac{n-1}{n}\sigma^2\tag{8}
\end{align}</span></p>
<p>Equation (8) shows that <span class="math-container">$S^2$</span> is a biased estimator of <span class="math-container">$\sigma^2$</span></p>
| Wim Nevelsteen | 799,896 | <p><span class="math-container">$\sigma$</span> is the population standard deviation of the random variable <span class="math-container">$X$</span>.</p>
<p><span class="math-container">$X_i$</span> represents the value of the i-th sample. If you have <span class="math-container">$n$</span> different such samples, the standard deviation of the <span class="math-container">$n$</span> samples is the random variable <span class="math-container">$S$</span>.</p>
<p>The average of <span class="math-container">$n$</span> samples is the random variable <span class="math-container">$\bar{X}$</span>. This variable will have a standard deviation <span class="math-container">$\sigma_X$</span>.</p>
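<p>The bias <span class="math-container">$E[S^2]=\frac{n-1}{n}\sigma^2$</span> derived in the question can be seen in a small simulation (an editor's addition; here the population is a standard normal, so <span class="math-container">$\sigma^2=1$</span>):</p>

```python
import random

def biased_var(sample):
    """S^2 with the 1/n normalization used in the question's equation (0)."""
    n = len(sample)
    m = sum(sample) / n
    return sum((x - m) ** 2 for x in sample) / n

def mean_biased_var(n, trials, rng):
    """Average of S^2 over many samples of size n from a standard normal."""
    total = 0.0
    for _ in range(trials):
        total += biased_var([rng.gauss(0.0, 1.0) for _ in range(n)])
    return total / trials

rng = random.Random(1)
est = mean_biased_var(5, 40000, rng)   # should come out close to (5 - 1)/5 = 0.8
```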
|
96,377 | <p>I have a polynomial with the coefficients of {a1, b1, b2}</p>
<pre><code>x = 1/8 (a1^4 E^(4 I τ ω) - 2 a1^2 E^( 2 I τ ω) (b2 Sqrt[1 - t] + b1 Sqrt[t])^2 + (b2 Sqrt[ 1 - t] + b1 Sqrt[t])^4);
</code></pre>
<p><a href="https://i.stack.imgur.com/6W2Ob.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6W2Ob.png" alt="enter image description here"></a></p>
<p>After expanding, it is :</p>
<pre><code>Expand[x]
b2^4/8 - 1/4 a1^2 b2^2 E^(2 I τ ω) + 1/8 a1^4 E^(4 I τ ω) + 1/2 b1 b2^3 Sqrt[1 - t] Sqrt[t] - 1/2 a1^2 b1 b2 E^(2 I τ ω) Sqrt[1 - t] Sqrt[t] + 3/4 b1^2 b2^2 t - (b2^4 t)/4 - 1/4 a1^2 b1^2 E^(2 I τ ω) t + 1/4 a1^2 b2^2 E^(2 I τ ω) t + 1/2 b1^3 b2 Sqrt[1 - t] t^(3/2) - 1/2 b1 b2^3 Sqrt[1 - t] t^(3/2) + ( b1^4 t^2)/8 - 3/4 b1^2 b2^2 t^2 + (b2^4 t^2)/8
</code></pre>
<p>There are two problems :</p>
<ol>
<li><p>The terms <code>1/2 b1 b2^3 Sqrt[1 - t] Sqrt[t]</code> and <code>-(1/2) b1 b2^3 Sqrt[1 - t] t^(3/2)</code> have the same coefficient b1 b2^3, but they do not combine automatically. I need all the terms with the same coefficient in {a1, b1, b2} to be combined, e.g.,
<pre><code>1/2 b1 b2^3 Sqrt[1 - t] Sqrt[t] - 1/2 b1 b2^3 Sqrt[1 - t] t^(3/2) = 1/2 b1 b2^3 (1 - t)^(3/2) Sqrt[t]
</code></pre></li>
<li><p>I need the polynomial be reordered in a descending order of the variables {a1, b1, b2}, as shown below, but this is done by hand. I hope how to do so automatically by Mathematica.</p>
<pre><code>1/8 a1^4 E^(4 I τ ω) - 1/4 a1^2 b1^2 E^(2 I τ ω) t - 1/2 a1^2 b1 b2 E^(2 I τ ω) Sqrt[1 - t] Sqrt[t] - 1/4 a1^2 b2^2 E^(2 I τ ω) + 1/4 a1^2 b2^2 E^(2 I τ ω) t + (b1^4 t^2)/8 + 1/2 b1^3 b2 Sqrt[1 - t] t^(3/2) + 3/4 b1^2 b2^2 t - 3/4 b1^2 b2^2 t^2 + 1/2 b1 b2^3 Sqrt[1 - t] Sqrt[t] - 1/2 b1 b2^3 Sqrt[1 - t] t^(3/2) + b2^4/8 - (b2^4 t)/4 + (b2^4 t^2)/8
</code></pre></li>
</ol>
<p>After reordering in a descending order, and combining all the terms with the same coefficient, I achieve the final result of</p>
<pre><code>1/8 a1^4 E^(4 I τ ω) - 1/4 a1^2 b1^2 E^(2 I τ ω) t - 1/2 a1^2 b1 b2 E^(2 I τ ω) Sqrt[1 - t] Sqrt[t] + 1/4 a1^2 b2^2 E^(2 I τ ω) (-1 + t) + (b1^4 t^2)/8 + 1/2 b1^3 b2 Sqrt[1 - t] t^(3/2) - 3/4 b1^2 b2^2 (-1 + t) t + 1/2 b1 b2^3 (1 - t)^(3/2) Sqrt[t] + 1/8 b2^4 (-1 + t)^2
</code></pre>
<p>Or in figure format:</p>
<p><a href="https://i.stack.imgur.com/PIEnb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PIEnb.png" alt="enter image description here"></a></p>
<p>I can do this work manually in this example with only 14 terms, but in my next step I need to process a polynomial with more than 30 terms. Doing this reordering and combining work automatically is very necessary.</p>
<p>I have checked the previous questions and answers on stackexchange, but my problems can not be solved. Thank you very much if you can help me!</p>
| Jason B. | 9,490 | <p>I think this would do what you are asking for:</p>
<pre><code>Normal[Series[x, {a1, 0, 4}, {b1, 0, 4}, {b2, 0, 4}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/If7gs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/If7gs.png" alt="enter image description here"></a></p>
<p>or in copyable form:</p>
<pre><code>(1/8)*a1^4*E^(4*I*τ*ω) + (1/8)*b2^4*(-1 + t)^2 + (1/2)*b1*b2^3*(1 - t)^(3/2)*Sqrt[t] + (1/2)*b1^3*b2*Sqrt[1 - t]*t^(3/2) + (b1^4*t^2)/8 +
a1^2*((1/4)*b2^2*E^(2*I*τ*ω)*(-1 + t) - (1/2)*b1*b2*E^(2*I*τ*ω)*Sqrt[1 - t]*Sqrt[t] - (1/4)*b1^2*E^(2*I*τ*ω)*t) + b1^2*b2^2*((3*t)/4 - (3*t^2)/4)
</code></pre>
|
222,480 | <p>How many $10$-digit numbers have two digits $1$, two digits $2$, three digits $3$, three digits $4$ so that between the two digits $1$ it has at least <strong>other two digits</strong> and between two digits $2$ it has at least <strong>other two digits</strong> (not necessarily distinct)? Thanks!</p>
| Community | -1 | <p>The motivation behind the Frobenius method is to seek power series solutions to ordinary differential equations.</p>
<p>Let $y(x) = \displaystyle \sum_{n=0}^{\infty} a_n x^n$. Then we get that $$y'(x) = \sum_{n=0}^{\infty} na_n x^{n-1}$$
$$3xy'(x) = \sum_{n=0}^{\infty} 3na_n x^{n}$$
$$y''(x) = \sum_{n=0}^{\infty} n(n-1)a_n x^{n-2}$$
$$xy''(x) = \sum_{n=0}^{\infty} n(n-1)a_n x^{n-1} = \sum_{n=0}^{\infty} n(n+1)a_{n+1} x^{n}$$
$$x^2y''(x) = \sum_{n=0}^{\infty} n(n-1)a_n x^{n}$$
The ODE is $$xy'' - x^2 y'' -3xy' - y = 0$$
Plugging in the appropriate series expansions, we get that
$$\sum_{n=0}^{\infty} \left(n(n+1)a_{n+1} - n(n-1)a_n - 3na_n - a_n\right)x^n = 0$$
Hence, we get that
$$n(n+1)a_{n+1} = (n(n-1) +3n+1)a_n = (n+1)^2 a_n \implies a_{n+1} = \dfrac{n+1}{n}a_n$$
First note that $a_0 = 0$. Choose $a_1$ arbitrarily. Then we get that $a_2 = 2a_1$, $a_3 = 3a_1$, $a_4 = 4a_1$ and in general, $a_{n} = na_1$.
Hence, the solution is given by
$$y(x) = a_1 \left(x+2x^2 + 3x^3 + \cdots\right)$$
This power series is valid only within $\vert x \vert <1$. In this region, we can simplify the power series to get
\begin{align}
y(x) & = a_1 x \left(1 + 2x + 3x^2 + \cdots \right)\\
& = a_1 x \dfrac{d}{dx} \left(x + x^2 + x^3 + \cdots \right)\\
& = a_1 x \dfrac{d}{dx} \left(\dfrac{x}{1-x}\right)\\
& = a_1 \dfrac{x}{(1-x)^2}
\end{align}</p>
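<p>One can also verify the closed form directly (an editor's addition): with $y=x/(1-x)^2$ one computes by hand $y'=(1+x)/(1-x)^3$ and $y''=(4+2x)/(1-x)^4$, and the combination $xy''-x^2y''-3xy'-y$ vanishes identically:</p>

```python
def y(x):
    return x / (1 - x) ** 2

def yp(x):
    """y' for y = x/(1-x)^2 (computed by hand)."""
    return (1 + x) / (1 - x) ** 3

def ypp(x):
    """y'' for y = x/(1-x)^2 (computed by hand)."""
    return (4 + 2 * x) / (1 - x) ** 4

def residual(x):
    """x y'' - x^2 y'' - 3x y' - y; vanishes identically for the solution."""
    return x * ypp(x) - x**2 * ypp(x) - 3 * x * yp(x) - y(x)
```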
|
395,618 | <p>If n squares are randomly removed from a $p \ \cdot \ q$ chessboard, what will be the expected number of pieces the chessboard is divided into? </p>
<p>Can anybody please provide how can I approach the problem? There are numerous cases and when I go through case consideration it becomes extremely complex.</p>
| user78391 | 78,391 | <pre><code>2 = 2 piece
3 = (3+1)/2 = 2 piece
4 = 2 piece
5 = (5+1)/2 = 3 piece
6 = 3 piece
7 = (7+1)/2 = 4 piece
8 = 4 piece
</code></pre>
<p>Series = (n+1)/2 pieces</p>
<p>Regards,
Yuvaraj</p>
|
33,389 | <p>Consider Schrödinger's <em>time-independent</em> equation
$$
-\frac{\hbar^2}{2m}\nabla^2\psi+V\psi=E\psi.
$$
In typical examples, the potential $V(x)$ has discontinuities, called <em>potential jumps</em>.</p>
<p>Outside these discontinuities of the potential, the wave function is required to be twice differentiable in order to solve Schrödinger's equation.</p>
<p>In order to control what happens at the discontinuities of $V$ the following assumption seems to be standard (see, for instance, Keith Hannabus' <em>An Introduction to Quantum Theory</em>):</p>
<blockquote>
<p><strong>Assumption</strong>: The wave function and its derivative are continuous at a potential jump.</p>
</blockquote>
<p><strong>Questions</strong>:</p>
<p>1) Why is it necessary for a (physically meaningful) solution to fulfill this condition?</p>
<p>2) Why is it, on the other hand, okay to abandon twofold differentiability?</p>
<p>Edit: One thing that just became clear to me is that the above assumption garanties for a well-defined probability/particle current.</p>
| José Figueroa-O'Farrill | 394 | <p>To answer your first question:</p>
<p>Actually the assumption is <em>not</em> that the wave function and its derivative are continuous. That follows from the Schrödinger equation once you make the assumption that the probability amplitude $\langle \psi|\psi\rangle$ remains finite. That is the physical assumption. This is discussed in Chapter 1 of the first volume of <em>Quantum mechanics</em> by Cohen-Tannoudji, Diu and Laloe, for example. (Google books only has the second volume in English, it seems.)</p>
<p>More generally, you may have potentials which are distributional, in which case the wave function may still be continuous, but not even once-differentiable.</p>
<p>To answer your second question:</p>
<p>Once you deduce that the wave function is continuous, the equation itself tells you that the wave function cannot be twice differentiable, since the second derivative is given in terms of the potential, and this is not continuous.</p>
|
33,389 | <p>Consider Schrödinger's <em>time-independent</em> equation
$$
-\frac{\hbar^2}{2m}\nabla^2\psi+V\psi=E\psi.
$$
In typical examples, the potential $V(x)$ has discontinuities, called <em>potential jumps</em>.</p>
<p>Outside these discontinuities of the potential, the wave function is required to be twice differentiable in order to solve Schrödinger's equation.</p>
<p>In order to control what happens at the discontinuities of $V$ the following assumption seems to be standard (see, for instance, Keith Hannabus' <em>An Introduction to Quantum Theory</em>):</p>
<blockquote>
<p><strong>Assumption</strong>: The wave function and its derivative are continuous at a potential jump.</p>
</blockquote>
<p><strong>Questions</strong>:</p>
<p>1) Why is it necessary for a (physically meaningful) solution to fulfill this condition?</p>
<p>2) Why is it, on the other hand, okay to abandon twofold differentiability?</p>
<p>Edit: One thing that just became clear to me is that the above assumption garanties for a well-defined probability/particle current.</p>
| Jiahao Chen | 1,674 | <p>Here is a tangential response to your first question: sometimes these discontinuities do have physical significance and are not just issues of mathematical trickery surrounding pathological cases. Wavefunctions for molecular Hamiltonians become pointy where the atomic nuclei lie, which indicate places where the 1/r Coulomb operator becomes singular. There are equations like the Kato cusp conditions (T. Kato, Comm. Pure Appl. Math. 10, 151 (1957)) that relate the magnitude of the discontinuity at the nucleus to the size of the nuclear charge. I have heard this explained as a result of requiring the energy (which is the Hamiltonian's eigenvalue) to remain finite everywhere, thus at places where the potential is singular, the kinetic energy operator must also become singular at those places. Since the kinetic energy operator also controls the curvature of the wavefunction, the wavefunction at points of discontinuity must change in a nonsmooth way.</p>
|
4,551,407 | <p>Here's the question:<br />
If we have m loaves of bread and want to divide them between n people equally what is the minimum number of cuts we should make?<br />
example:<br />
3 loaves of bread and 15 people the answer is 12 cuts.<br />
6 loaves of bread and 10 people the answer is 8 cuts.</p>
<p>for example 1, I found that I should cut each piece of bread 4 times so that we can have 15 pieces in total, but I can't find an algorithm for it. Any help would be appreciated.</p>
| Kevin Dietrich | 1,103,878 | <p>If you say that <span class="math-container">$y = f(x)$</span> you can solve it like this (homogeneous):</p>
<p><em>step 2: Assume a solution to this Euler-Cauchy equation will be proportional to for some constant <span class="math-container">$λ$</span>. Substitute <span class="math-container">$y(x) := x^{λ}$</span> into the differentialequation.</em>
<span class="math-container">$$
\begin{align*}
f(x) &= x^{2} \cdot y'' + 4 \cdot x \cdot y' + 2 \cdot y\\
y &= x^{2} \cdot y'' + 4 \cdot x \cdot y' + 2 \cdot y \quad\mid\quad \text{step 2}\\
0 &= x^{2} \cdot {x^{λ}}'' + 4 \cdot x \cdot {x^{λ}}' + x^{λ}\\
0 &= x^{2} \cdot \frac{\operatorname{d}^{2}}{\operatorname{d}x^{2}}x^{λ} + 4 \cdot x \cdot \frac{\operatorname{d}}{\operatorname{d}x}x^{λ} + x^{λ} \quad\mid\quad \frac{\operatorname{d}^{2}}{\operatorname{d}x^{2}}x^{λ} := λ \cdot (λ - 1) \cdot x^{λ - 2} \text{ and } \frac{\operatorname{d}}{\operatorname{d}x}x^{λ} = λ \cdot x^{λ - 1}\\
0 &= x^{2} \cdot λ \cdot (λ - 1) \cdot x^{λ - 2} + 4 \cdot x \cdot λ \cdot x^{λ - 1} + x^{λ}\\
0 &= λ^{2} \cdot x^{λ} + 3 \cdot λ \cdot x^{λ} + x^{λ}\\
0 &= (λ^{2} + 3 \cdot λ + 1) \cdot x^{λ} \quad\mid\quad\text{solve for } λ \text{ with assuming } x \ne 0 \\
0 &= (λ^{2} + 3 \cdot λ + 1) \cdot x^{λ} \quad\mid\quad\text{Zero product theorem}\\
0 &= (λ^{2} + 3 \cdot λ + 1) \cdot x^{λ} \quad\mid\quad\text{solve for } λ \\
0 &= λ^{2} + 3 \cdot λ + 1 \quad\mid\quad \text{use the pq formula}\\
λ &= - \frac{3}{2} \pm \sqrt{\frac{5}{4}}\\
λ &= - \frac{3}{2} \pm \frac{\sqrt{5}}{2}\\
\\
y_{1} &= \mathrm{c}_{1} \cdot x^{- \frac{3}{2} + \frac{\sqrt{5}}{2}}\\
y_{2} &= \mathrm{c}_{2} \cdot x^{- \frac{3}{2} - \frac{\sqrt{5}}{2}}\\
\\
y &= y_{1} + y_{2} = \mathrm{c}_{1} \cdot x^{- \frac{3}{2} + \frac{\sqrt{5}}{2}} + \mathrm{c}_{2} \cdot x^{- \frac{3}{2} - \frac{\sqrt{5}}{2}}
\end{align*}
$$</span></p>
<p>If you want to to it with a nonhomogeneous function...</p>
<p><em>step 3:</em> Try <span class="math-container">$t := \log(x) \Rightarrow x := e^{t}$</span>:
<span class="math-container">$$
\begin{align*}
f(x) &= x^{2} \cdot y'' + 4 \cdot x \cdot y' + 2 \cdot y\\
y &= x^{2} \cdot y'' + 4 \cdot x \cdot y' + 2 \cdot y \quad\mid\quad \text{step 2}\\
y &= e^{2 \cdot t} \cdot y'' + 4 \cdot e^{t} \cdot y' + 2 \cdot y\\
y(t) &= \frac{\operatorname{d}^{2}}{\operatorname{d}t^{2}}y(t) + 3 \cdot \frac{\operatorname{d}}{\operatorname{d}t}y(t) + 2 \cdot y(t)\\
0 &= \frac{\operatorname{d}^{2}}{\operatorname{d}t^{2}}y(t) + 3 \cdot \frac{\operatorname{d}}{\operatorname{d}t}y(t) + y(t)\\
0 &= \frac{\operatorname{d}^{2}}{\operatorname{d}t^{2}}e^{λ \cdot t} + 3 \cdot \frac{\operatorname{d}}{\operatorname{d}t}e^{λ \cdot t} + e^{λ \cdot t}\\
0 &= λ^{2} \cdot e^{λ \cdot t} + 3 \cdot λ \cdot e^{λ \cdot t} + e^{λ \cdot t}\\
0 &= (λ^{2} + 3 \cdot λ + 1) \cdot e^{λ \cdot t} \quad\mid\quad \text{Zero product theorem}\\
0 &= λ^{2} + 3 \cdot λ + 1 \quad\mid\quad \text{use the pq formula}\\
λ &= - \frac{3}{2} \pm \sqrt{\frac{5}{4}}\\
λ &= - \frac{3}{2} \pm \frac{\sqrt{5}}{2}\\
\\
y_{1}(t) &= \mathrm{c}_{1} \cdot e^{(- \frac{3}{2} + \frac{\sqrt{5}}{2}) \cdot t}\\
y_{2}(t) &= \mathrm{c}_{2} \cdot e^{(- \frac{3}{2} - \frac{\sqrt{5}}{2}) \cdot t}\\
\\
y &= x^{- \frac{3}{2} - \frac{\sqrt{5}}{2}} \cdot (\mathrm{c}_{2} + \mathrm{c}_{1} \cdot x^{\sqrt{5}})
\end{align*}
$$</span></p>
|
4,551,407 | <p>Here's the question:<br />
If we have m loaves of bread and want to divide them between n people equally what is the minimum number of cuts we should make?<br />
example:<br />
3 loaves of bread and 15 people the answer is 12 cuts.<br />
6 loaves of bread and 10 people the answer is 8 cuts.</p>
<p>for example 1, I found that I should cut each piece of bread 4 times so that we can have 15 pieces in total, but I can't find an algorithm for it. Any help would be appreciated.</p>
| user | 1,053,451 | <p>Treat it as if it's a quadratic and you're searching for roots (this is not true, it's a second order ODE, but you'll notice that the following method feels familiar to you:)</p>
<p>Rewrite it like this:
<span class="math-container">$$ x^2y^{(2)}+4xy^{(1)}+2y-f(x)=0. $$</span></p>
<p>Notice how this is similar to <span class="math-container">$ax^2+bx+c=0$</span>?</p>
<p>This has what is called a characteristic equation. The other user, Kevin Dietrich, has supplied the gist of the brute-force approach to solving it.</p>
<p>I am answering this because I want you to know the origin of the terminology, notation, and the fact this is freely taught at Paul's Online Differential Equation notes---found elsewhere online.</p>
<p>SOLUTION:</p>
<p>Using the HINT given above, this can be solved with observation of the product rule <span class="math-container">$(\alpha \beta)'=\alpha' \beta+\alpha \beta'$</span> for <span class="math-container">$$\alpha=y \implies \alpha'=y'$$</span> and <span class="math-container">$$\beta=x^2\implies \beta'=2x$$</span> which results in <span class="math-container">$$(\alpha \beta)'=f(x)$$</span> giving immediately <span class="math-container">$$\alpha \beta=\int f(x)dx$$</span> which should imply <span class="math-container">$\alpha=\frac{1}{\beta}\int f(x)dx$</span>. Basically, <span class="math-container">$y=\alpha$</span>.</p>
<p>I think this is correct, but I may be wrong. This is completely different from the usual brute force I initially suggested thanks to the HINT given by the other answer.</p>
<p>Update:
The original work is incomplete.</p>
<p><span class="math-container">$$(x^2y')'+(2xy)'=f(x)$$</span> is a transformed differential equation of form <span class="math-container">$(g(x)y')'+(g'(x)y)'=f(x)$</span>. This via letting <span class="math-container">$g(x)=x^2$</span> and consequently <span class="math-container">$\frac{d}{dx}g=2x$</span>. However, the whole equation still has outer derivatives, so we must integrate the sum: <span class="math-container">$g(x)y'+g'(x)y=\int f(x)dx.$</span> So, the original part was off by a bit (a single integral). The solution is therefore:</p>
<p><span class="math-container">$$g(x)y'+g'(x)y=\int f(x)dx\implies (gy)'=\int f(x)dx \implies gy=\int \int f dxdx$$</span>
<span class="math-container">$$y=\frac{1}{g}\int\int fdxdx$$</span></p>
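<p>A quick symbolic check (my addition, not part of the original post): the left side of the ODE is exactly <span class="math-container">$(x^2y)''$</span>, so <span class="math-container">$y=\frac{1}{g}\iint f$</span> does satisfy it. Here with the sample choice <span class="math-container">$f(x)=x$</span> (my choice) and constants of integration dropped:</p>

```python
import sympy as sp

x = sp.symbols("x", positive=True)
f = x  # sample right-hand side; any f works the same way

# y = (1/x^2) * double antiderivative of f (homogeneous terms omitted)
y = sp.integrate(sp.integrate(f, x), x) / x**2

lhs = x**2 * sp.diff(y, x, 2) + 4 * x * sp.diff(y, x) + 2 * y
print(sp.simplify(lhs - f))  # 0, i.e. the ODE is satisfied
```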
|
2,432,213 | <p>I am having a really hard time understanding this problem. I know that for uniqueness we need that the derivative is continuous and that the partial derivative is continuous. I also know that the lipschitz condition gives continuity. I can't figure out what to do with this problem though. </p>
<p><a href="https://i.stack.imgur.com/xktm5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xktm5.png" alt="enter image description here"></a></p>
| H. H. Rugh | 355,946 | <p>The problem has some inherent unboundedness in the way it is posed. You need to get rid of that in order to pursue. Some suggestions:</p>
<p>First pick an arbitrary $r_0>0$ and note that by compactness + continuity, $g([0,r_0])=[a,b]$ is a compact subset of $(-\infty,r_0]$. Therefore,
$$ R = \sup \left\{ |\phi(u)| : u \in g([0,r_0])\cap (-\infty,0])\right\} < +\infty$$
Intuitively, this gives a bound on the delayed $x$ when the delay goes negative.</p>
<p>Let $ M = R + 1+ |\phi(0)|$.
Again by compactness we have that:
$$ C = {\rm max}\; f([0,r_0]\times[-M,M]\times[-M,M]) +1 \in (0, + \infty)$$
This provides bounds upon $f$ on the 'relevant' domain. We now shrink $r$ to
take into account that $x$ should not become too big:
$$ r = {\rm min} \{ r_0, (R+1)/C, 1/(4L)\} >0$$
The first is clear, the second because we don't want an integrated $x$ to go beyond $M$ and the last because we need a contraction in the end. We
define</p>
<p>$$ {\cal C} = \{ x\in C ((-\infty,r] ; {\Bbb R}): x(t)=\phi(t), t\leq 0; |x(t)|\leq M, t\in [0,r]\}$$
This is a closed subset of the Banach space where $M=+\infty$ and
$\|x\| = \sup_{0\leq t\leq r} |x(t)|$ (note that the negative part is fixed, you don't have to consider it in the norm). Finally, we define as usual:
$$ Tx(t) = \phi(0) + \int_0^t f(s,x(s),x(g(s)))\, ds, \; \; 0\leq t\leq r .$$
Our choice of $r$ (calculation...) ensures that for $x,x_1,x_2\in {\cal C}$:
$$ \|Tx\|\leq M \; \; {\rm and} \; \; \|Tx_1-Tx_2\|\leq \frac12 \|x_1-x_2\|$$
so $T$ is a contraction on ${\cal C}$ and the rest is standard. One rather crucial point (which makes the problem tractable) is that the delay does not depend upon $x$, only upon $t$.</p>
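<p>To see the contraction at work numerically, here is a discretized Picard iteration on a toy instance of $T$ (my own example, not from the answer): $f(t,x,y)=y$, $g(t)=t/2$, $\phi\equiv 1$, i.e. the pantograph equation $x'(t)=x(t/2)$, $x(0)=1$.</p>

```python
import numpy as np

# Toy instance of T: x'(t) = x(t/2), x(0) = 1 on [0, r] (pantograph equation)
r, n = 0.5, 1001
ts = np.linspace(0.0, r, n)
dt = ts[1] - ts[0]

x = np.ones(n)                                # initial guess inside the ball
for _ in range(60):                           # Picard iteration: x <- Tx
    delayed = np.interp(ts / 2, ts, x)        # x(g(t)) with g(t) = t/2
    steps = (delayed[1:] + delayed[:-1]) / 2 * dt
    integral = np.concatenate(([0.0], np.cumsum(steps)))
    x_new = 1.0 + integral                    # (Tx)(t) = phi(0) + integral of f
    diff = float(np.max(np.abs(x_new - x)))
    x = x_new

print(x[0], diff < 1e-12)                     # 1.0 True: the iterates have converged
```

<p>The iterates settle to a fixed point of $T$, which is the (strictly increasing) solution on $[0,r]$.</p>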
|
4,073,821 | <p>If <span class="math-container">$\sum a_n$</span> converges, then does <span class="math-container">$\sum |a_n|$</span> converge as well? This is the same as "absolute convergence", where if <span class="math-container">$\sum a_n$</span> converges, then <span class="math-container">$\sum|a_n|$</span> might actually diverge, for example, <span class="math-container">$\sum\frac{(-1)^n}{n}$</span>. So this would serve as a counterexample to the question, and prove it to be wrong, correct?</p>
| Community | -1 | <p>Assume a series of positive terms <span class="math-container">$a_n$</span>. The corresponding alternating series <span class="math-container">$a_1-a_2+a_3-\cdots$</span> can be seen as the series with terms <span class="math-container">$b_1=a_1-a_2,b_2=a_3-a_4,b_3=a_5-a_6,\cdots$</span> As those terms are differences, they decrease faster than the <span class="math-container">$a$</span>'s. E.g.</p>
<p><span class="math-container">$$a_n=\dfrac1n$$</span> and <span class="math-container">$$b_n=\frac1{(2n-1)(2n)}=\Theta\left(\frac1{n^2}\right).$$</span></p>
<p>For this reason, we can have a series <span class="math-container">$a_n$</span> that diverges, while the <span class="math-container">$b_n$</span> converge.</p>
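<p>A quick numerical illustration (my addition): the grouped terms $b_n=\frac{1}{(2n-1)(2n)}$ decay like $1/n^2$, and their sum is the alternating harmonic series, which converges to $\ln 2$.</p>

```python
import math

# partial sum of b_n = 1/((2n-1)(2n)); the terms are Theta(1/n^2)
N = 10**5
s = sum(1.0 / ((2 * n - 1) * (2 * n)) for n in range(1, N + 1))
print(s)  # ≈ 0.6931 ≈ ln 2
```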
|
2,614,127 | <p>I have been trying to show the statement below using the $AC$ but I am starting to think that it is not strong enough to do it. </p>
<p><strong>Context:</strong> Let $\Gamma$ be an uncountable linearly ordered set with a smallest element (not necessarily well-ordered). </p>
<p>For each $\alpha\in\Gamma$, let $C_\alpha$ be a non-empty set such that $C_\alpha\supsetneqq C_\beta$ whenever $\alpha<\beta$.
For each $\alpha\in\Gamma$, let $P(C_\alpha)$ be a partition of $C_\alpha$ such that: whenever $\alpha<\beta$, for all $B\in P(C_\beta)$, there exists $A\in P(C_\alpha)$ such that $B\subsetneqq A$.</p>
<blockquote>
<p><strong>Statement:</strong> There exists $\{A_\alpha\}_{\alpha\in\Gamma}$ such that $A_\alpha\in P(C_\alpha)$ and $A_\alpha\supsetneqq A_\beta$ whenever
$\alpha<\beta$.</p>
</blockquote>
<p>By using the AC we can see that there exists $\{A_\alpha\}_{\alpha\in\Gamma}$ such that $A_\alpha\in P(C_\alpha)$, but (I think) there is nothing to ensure the monotonicity condition: $A_\alpha\supsetneqq A_\beta$ whenever $\alpha<\beta$.</p>
<p>Maybe the statement is a well-known result or conjecture that I am not aware of, I would appreciate some answer or reference.</p>
| Asaf Karagila | 622 | <p>The answer is negative. This is not provable, even when assuming the axiom of choice. Even under the assumption that $\Gamma$ is a well-ordered set.</p>
<p>Let $T$ be a tree of height $\omega_1$ without a branch (either an Aronszajn tree, assuming choice; or any counterexample to $\sf DC_{\omega_1}$ otherwise).</p>
<p>Let $C_\alpha$ be $T\setminus T\restriction\alpha$, namely all the nodes of height at least $\alpha$. The partition is easy, $P(C_\alpha)$ is simply the set of subtrees above each node in the $\alpha$th level of $T$.</p>
<p>Easily, the sets $C_\alpha$ are descending, and the partitions refine each other. But now, if $A_\alpha$ is as descending choice sequence, by the fact that each $A_\alpha$ has a unique root, this would give us a branch. But the assumption on $T$ is that it has no uncountable branches, which is a contradiction.</p>
|
19,373 | <p>I posted this question earlier today on the Mathematics site (<a href="https://math.stackexchange.com/q/3988907/96384">https://math.stackexchange.com/q/3988907/96384</a>), but was advised it would be better here.</p>
<p>I had a heated argument with someone online who claimed to be a school mathematics teacher of many years standing. The question which spurred this discussion was something along the lines of:</p>
<p>"A horseman was travelling from (location A) along a path through a forest to (location B) during the American War of Independence. The journey was of 22 miles. How far was it in kilometres?"</p>
<p>To my mind, the answer is trivially obtained by multiplying 22 by 1.6 to get 35.2 km, which can be rounded appropriately to 35 km.</p>
<p>I was roundly scolded by this ancient mathematics teacher for a) not using the official conversion factor of 1.60934 km per mile and b) not reporting the correct value as 35.405598 km.</p>
<p>Now I have serious difficulties with this analysis. My argument is: this is a man riding on horseback through a forest in a pre-industrial age. It would be impractical and impossible to measure such a distance to any greater precision than (at best) to the nearest 20 metres or so, even in this day and age. Yet the answer demanded was accurate to the nearest millimetre.</p>
<p>But when I argued this, I was told that it was not my business to round the numbers. I was to perform the conversion task given the numbers I was quoted, and report the result for the person asking the question to decide how accurately the numbers are to be interpreted.</p>
<p>Is that the way of things in school? As a trained engineer, my attitude is that it is part of the purview of anybody studying mathematics to be able to estimate and report appropriate limits of accuracy, otherwise you get laughably ridiculous results like this one.</p>
<p>I confess I have never had a good relationship with teachers, apart from my A-level physics teacher whom I adored, so I expect I will be given a hard time over my inability to understand the basics of what I have failed to learn during the course of the above.</p>
| practical man | 15,302 | <p><em>IF</em> the horse ride were 22.00000000 miles then the other person would be right.</p>
<p>Else if it were 22 miles then you should round the answer to zero decimal places.</p>
<p>Some people are illogically pedantic without any rational reason for what they promulgate.</p>
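<p>To make the rounding concrete, here is a small sketch (my own, not from the thread) that rounds a converted value back to the significant figures of the input:</p>

```python
import math

def round_sig(value, sig):
    """Round value to `sig` significant figures."""
    if value == 0:
        return 0.0
    return round(value, sig - 1 - int(math.floor(math.log10(abs(value)))))

miles = 22                  # given to two significant figures
km = miles * 1.60934        # = 35.40548, spuriously precise
print(round_sig(km, 2))     # 35.0 — the precision the input actually supports
```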
|
747,519 | <p><img src="https://i.stack.imgur.com/jYzfz.png" alt="enter image description here"></p>
<p>I tried this problem on my own, but got 1 out of 5. Now we are supposed to find someone to help us. Here is what I did:</p>
<p>Let $f:[a,b] \rightarrow \mathbb{R}$ be continuous on a closed interval $I$ with $a,b \in I$, $a \leq b$</p>
<p>If $f(a), f(b) \in f(I)$ let $f(a)\leq y \leq f(b)$. Then by IVT there exists $x$, $a\leq x \leq b$ where $f(x)=y$ $\Rightarrow$ The image is also an interval. </p>
<p>Show closed: Let m be the lowest upper bound and M the greatest lower bound of the image interval. $I=[a,b]$ must be a subset of $[M,m]$ and the function attains its bounds,
$m,M\in f(I)$. so $f(I)$ is a subset of $[M,m]$, thus is closed. </p>
<p>Can anyone provide a proof of this statement? Thanks!</p>
| Patrick Stevens | 259,262 | <p>Another way of showing "closed", because it's useful to be able to switch between the various definitions of these concepts: recall that continuous functions preserve the convergence of sequences, and that a closed set is precisely one which contains all its limit points.</p>
<p>Let $[f(c_i)]$ be a sequence in $f([a,b])$, and suppose it tends to $y \in \mathbb{R}$. Then the $c_i$ are elements of the closed bounded interval $[a,b]$, so they have a convergent subsequence $c_{n_i}$, say; by closure of $[a,b]$, the limit $x$ of the $c_{n_i}$ lies in $[a,b]$.</p>
<p>Now since $c_{n_i} \to x$, we have $f(c_{n_i}) \to f(x)$ by continuity of $f$, so the limit of $[f(c_{n_i})]$ lies in $f([a,b])$.
Finally, since $[f(c_{n_i})]$ is a subsequence of the convergent-by-assumption $[f(c_i)]$, it converges to the same limit; so $f(x) = y$.</p>
<p>This proves that $f([a,b])$ contains all its limit points.</p>
<p>The extreme value theorem tells you that $f([a,b])$ is bounded; it can also be used to prove that it's closed (which I did above in a more roundabout way). I'm sure there's a way to show "bounded" without that, but I haven't given any thought to that.</p>
<p>That $f([a,b])$ is an interval follows in one line from a certain very important theorem from any first course in real analysis.</p>
|
613,105 | <p>I was observing some nice examples of equalities containing the numbers $1,2,3$ like $\tan^{-1}1+\tan^{-1}2+\tan^{-1}3=\pi$ and $\log 1+\log 2+ \log 3=\log (1+2+3)$. I found out this only happens because $1+2+3=1*2*3=6$.<br> I wanted to find other examples in small numbers, but I failed. How can we find all of the solutions of $a+b+c=abc$ in natural numbers?The question seemed easy, but it seems difficult to find. I would prefer an elementary way to find them!<br><br> What I did: We know if $a+b+c=abc$, $a|a+b+c$ so $a|b+c$. Similarly, $b|a+c$ and $c|a+b$.<br> Other than that, if we multiply both sides by $b$, we get $b^2+1=(bc-1)(ab-1)$.<br> If we also divide both sides by $abc$, we get $\frac{1}{bc}+\frac{1}{ac}+\frac{1}{ab}=1$.<br><br> I don't know how to go further using any of these, but I think they are a good start. I would appreciate any help.</p>
| Chris Taylor | 4,873 | <p>If $a=0$ then you require $b+c=0$ and hence $b=c=0$.</p>
<p>Note that you can assume $a\leq b \leq c$. If $a, b, c \geq 2$ then $abc \geq 4c > c + b + a$. Hence at least one of $a,b,c$ is equal to $1$.</p>
<p>Wlog assume $a=1$, and look for solutions to $b+c+1 = bc$. If $b,c\geq 3$ then $bc \geq 3c > b + c + 1$, hence at least one of $b,c$ is less than $3$</p>
<p>Wlog assume $b=2$, and look for solutions to $c+3 = 2c$, which implies $c=3$.</p>
<p>So the only solutions are $(0,0,0)$ and $(1,2,3)$ and their permutations.</p>
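<p>A brute-force sweep over a bounded range (my addition) agrees with the case analysis; the argument above shows nothing outside a small range can work, since $abc$ outgrows $a+b+c$:</p>

```python
# search ordered triples a <= b <= c of naturals with a + b + c == a*b*c
limit = 50
sols = sorted(
    (a, b, c)
    for a in range(limit)
    for b in range(a, limit)
    for c in range(b, limit)
    if a + b + c == a * b * c
)
print(sols)  # [(0, 0, 0), (1, 2, 3)]
```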
|
794,389 | <p>Q1. I was fiddling around with squaring-the-square type algebraic maths, and came up with a family of arrangements of $n^2$ squares, with sides $1, 2, 3\ldots n^2$ ($n$ odd). Which seems like it would work with ANY odd $n$. It's so simple, surely it's well-known, but I haven't seen it in my (brief) web travels. I have a page with pics <a href="http://www.adamponting.com/squaring-the-plane/" rel="nofollow noreferrer">here</a> of the $n=7,9,11$ versions, and description of how to construct them; $n=11$ is:</p>
<p><img src="https://i.stack.imgur.com/jsBgg.png" alt="enter image description here"></p>
<p>It seems the same method could square the plane, well, fill greater than any specified area, no matter how huge, at least. Which is what 'infinite' means, practically, isn't it? Anyway, it seems, if somehow not known (which it must be, surely - if anyone has links etc to where it's discussed I would be very grateful) then it's another way of squaring the plane.</p>
<p>Q2. Each of these arrangements can be extended, but the ones I've tried (5 or 6) have a little gap to the south-east, (i.e. where the squares don't fit neatly together) but otherwise can be extended forever. Is there an $n$ for which there is no gap? There are more than 1 of each sized square in this arrangement, but still, it would be a nice tessellation with integer squares. <a href="http://www.adamponting.com/squaring-the-plane-ii/" rel="nofollow noreferrer">Here's</a> a picture of the 7x7 version extended, and detail of the centre.</p>
<p>Thanks for any answers or help.</p>
<hr>
<p>P.s. This was too long for the comments section:</p>
<p>I wrote to Jim Henle the other day asking about this method, he hadn't seen it, thought it was nice, but not plane-filling in the way his method is. He wrote in part:</p>
<blockquote>
<p>You are sort of "squaring the plane," but not in the sense that we did it. You are squaring larger and larger areas of the plane, but you don't square the whole thing. ... There are many meanings to "infinite". Aristotle distinguished between the "potential infinite" (more and more, without bound) and the "actual infinite" (all the numbers, all at once). Your procedure is the first sort, and ours is the second sort.</p>
</blockquote>
<p>Which is what I had thought. </p>
<p>But the more I think about it.. the less clear the difference seems. Well, e.g. 'there are an infinite number of primes' means: there is no highest one; any number you can say, there's a higher prime. That's how that is defined, spoken about, to my (non-mathematician's) understanding. Similarly, there are an infinite number of these $n\times n$ square groups, there's no biggest one, any area you name, there's a bigger one. The Henle's method consists in adding more squares, ideally forever, but practically, you stop at some point and say 'and so on forever'. The procedure requires an infinite number of steps. I can't quite see how this series of $n\times n$ squares is so different. You have to start drawing again with each new $n$, sure, but I can't see that matters so much - there are an infinite number of arrangements, and there's the same 'and so on forever'.. i.e. "there is, strictly speaking, no such thing as an infinite summation or process."</p>
<p>Nov 2015. [I can't add comments to questions below, not sure why.]
Ross M, sorry about the delay! It seems you might be confused between the 2 parts, my fault for combining them in one question. (I still haven't heard anything much about either 2 from anyone.)</p>
<p>The first part is the basic $n^2$ arrangements of squares. (The second part takes just one of these and tries to extend it outwards, wonders about the possibility and mathematics of there being no gaps, and has many more than 1 of each sized square)
I still don't quite see why 'having to rearrange each step' makes a huge difference to anything. Imagine I had a method of going from $n^2$ to $(n+1)^2$ squares by adding more around the edges. Then, according to what people seem to be saying, I 'could tile the whole thing'? The way I've done it has exactly the same area as that would be, just it has to be redrawn. I don't see how that affects whether it 'tiles the plane' or not, or anything else. If someone could explain that to me, I would be very grateful. </p>
| TonyK | 1,508 | <p>There is a big difference between "arbitrarily large" and "infinite". This example shows the difference clearly:</p>
<blockquote>
<p>For any positive integer $n$, there exists a strictly decreasing
sequence of positive integers of length $n$.</p>
</blockquote>
<p>True, obviously. But this is false:</p>
<blockquote>
<p>There exists a strictly decreasing sequence of positive integers of
infinite length.</p>
</blockquote>
<p>If you like, you can call these "potential infinite" and "actual infinite".</p>
|
1,815,662 | <blockquote>
<p>Prove that $X^4+X^3+X^2+X+1$ is irreducible in $\mathbb{Q}[X]$, but it has two different irreducible factors in $\mathbb{R}[X]$.</p>
</blockquote>
<p>I've tried to use the cyclotomic polynomial as:
$$X^5-1=(X-1)(X^4+X^3+X^2+X+1)$$</p>
<p>So I have that my polynomial is
$$\frac{X^5-1}{X-1}$$ and now i have to prove that is irreducible. </p>
<p><strong>The linear change of variables is OK (I don't know why), so I substitute $X$ by $X+1$ and then I have:
$$\frac{(X+1)^5-1}{X}=\frac{X^5+5X^4+10X^3+10X^2+5X}{X}=X^4+5X^3+10X^2+10X+5$$
And now we can apply the Eisenstein criterion with p=5. So my polynomial is irreducible in $\mathbb{Q}$</strong></p>
<p>Now let's prove that it has two different irreducible factors in $\mathbb{R}$</p>
<p>I've tried this way: $X^4+X^3+X^2+X+1=(X^2+AX+B)(X^2+CX+D)$
and solve the system. But solving the system is quite difficult. Is there another way?</p>
| Community | -1 | <p>Every polynomial splits completely over the complexes. There are only three possibilities:</p>
<ul>
<li>There are four real roots</li>
<li>There are two real roots and one pair of complex conjugate roots</li>
<li>There are two pairs of complex conjugate roots</li>
</ul>
<p>The roots of the polynomial are fifth roots of unity other than $1$, so there are no real roots. Thus we are in the third case, and the factorization consists of two irreducible real quadratics.</p>
|
1,815,662 | <blockquote>
<p>Prove that $X^4+X^3+X^2+X+1$ is irreducible in $\mathbb{Q}[X]$, but it has two different irreducible factors in $\mathbb{R}[X]$.</p>
</blockquote>
<p>I've tried to use the cyclotomic polynomial as:
$$X^5-1=(X-1)(X^4+X^3+X^2+X+1)$$</p>
<p>So I have that my polynomial is
$$\frac{X^5-1}{X-1}$$ and now i have to prove that is irreducible. </p>
<p><strong>The linear change of variables is OK (I don't know why), so I substitute $X$ by $X+1$ and then I have:
$$\frac{(X+1)^5-1}{X}=\frac{X^5+5X^4+10X^3+10X^2+5X}{X}=X^4+5X^3+10X^2+10X+5$$
And now we can apply the Eisenstein criterion with p=5. So my polynomial is irreducible in $\mathbb{Q}$</strong></p>
<p>Now let's prove that it has two different irreducible factors in $\mathbb{R}$</p>
<p>I've tried this way: $X^4+X^3+X^2+X+1=(X^2+AX+B)(X^2+CX+D)$
and solve the system. But solving the system is quite difficult. Is there another way?</p>
| Jyrki Lahtonen | 11,619 | <p>A different route to the factorization over the reals (obviously the end result is the same as in Egreg's post, but I give the factors explicitly).</p>
<p>Let $p(x)$ be your polynomial. By a direct calculation we see that
$$
(x^2+\frac x2+1)^2=x^4+x^3+\frac94x^2+x+1=p(x)+\frac54 x^2.
$$
This calculation is aided by palindromic symmetry of both $p(x)$ and this quadratic. Anyway, this gives us the factorization
$$
\begin{aligned}
p(x)&=(x^2+\frac x2+1)^2-(\frac{\sqrt5}2\,x)^2\\
&=(x^2+\frac{1-\sqrt5}2\, x+1)(x^2+\frac{1+\sqrt5}2\,x+1)
\end{aligned}
$$
by the usual
$$
a^2-b^2=(a-b)(a+b)
$$
formula.</p>
<p>So the Golden ratio (not surprisingly given that the zeros are vertices of a regular pentagon) makes an appearance.</p>
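<p>As a quick check (my addition), expanding the two quadratic factors symbolically recovers the original quartic:</p>

```python
import sympy as sp

x = sp.symbols("x")
phi = (1 + sp.sqrt(5)) / 2            # the golden ratio
f1 = x**2 + (1 - phi) * x + 1         # note (1 - sqrt(5))/2 = 1 - phi
f2 = x**2 + phi * x + 1
print(sp.expand(f1 * f2))             # x**4 + x**3 + x**2 + x + 1
```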
|
167,575 | <p>I have 6 sets of 4D points. Here is an example of one set :</p>
<pre><code>{{30., 5., 111.925, 113.569}, {30., 7.5, 114.7, 158.286}, {30., 10., 115.625, 206.023},
{30., 12.5, 115.625, 257.528}, {30., 15., 117.475, 294.663}, {30., 17.5, 119.325, 328.03},
{30., 20., 121.175, 357.982}, {30., 22.5, 122.1, 393.646}, {30., 25., 122.1, 437.384},
{30., 27.5, 122.1, 481.123}}
</code></pre>
<p>I want to plot the x,y coordinates of the points on the 2D plane and use the z coordinate to define the size of the symbol (bubble radius or area) and the last coordinate to define a color for that bubble. So the color will be different depending on the fourth coordinate. Any help would be appreciated !</p>
<p>I would like to have a 4D graphic like that :
<a href="https://i.stack.imgur.com/wk30Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wk30Z.png" alt="Bubble chart 4D"></a></p>
| OkkesDulgerci | 23,291 | <p>Just join all your data and use J.M's code.</p>
<pre><code>data = {{30., 5., 111.925, 113.569}, {30., 7.5, 114.7, 158.286}, {30.,
10., 115.625, 206.023}, {30., 12.5, 115.625, 257.528}, {30., 15.,
117.475, 294.663}, {30., 17.5, 119.325, 328.03}, {30., 20.,
121.175, 357.982}, {30., 22.5, 122.1, 393.646}, {30., 25., 122.1,
437.384}, {30., 27.5, 122.1, 481.123}};
data2 = Join[{data}, {data}, {data}];
data2[[2, All, 1]] += 30;
data2[[2, All, 2]] += 10;
data2[[2, All, 3]] += 15;
data2[[2, All, 4]] += 100;
data2[[3, All, 1]] += 60;
data2[[3, All, 2]] += 15;
data2[[3, All, 3]] += 25;
data2[[3, All, 4]] += 500;
data2 = Join @@ data2;
sc = {"ThermometerColors", MinMax[data2[[All, -1]]]};
cf = ColorData[sc];
Legended[Graphics[{cf[#4], Disk[{#, #2}, #3/30]} & @@@ data2,
Frame -> True, ImageSize -> 600, GridLines -> Automatic,
GridLinesStyle -> Directive[Gray, Dotted]], BarLegend[sc]]
</code></pre>
<p><a href="https://i.stack.imgur.com/CRXRZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CRXRZ.png" alt="enter image description here"></a></p>
<p><strong>Edit</strong> Here is a workaround. I am not sure this is what you want. You can use <code>Ellipse</code> instead of <code>Disk</code> and scale radius differently to overcome distortion.</p>
<pre><code>p1 = ListPlot[{{-1, -1}}, Frame -> True, Axes -> False,
PlotRange -> {{0, 100}, {0, 50}}, ImageSize -> 500,
AspectRatio -> 1/3, GridLines -> Automatic,
GridLinesStyle -> Directive[Gray, Dotted]];
p2 = Graphics[{cf[#4], Ellipsoid[{#, #2}, {#3/30, #3/20}]} & @@@
data2];
Legended[Show[{p1, p2}], BarLegend[sc]]
</code></pre>
<p><a href="https://i.stack.imgur.com/BqNF4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BqNF4.png" alt="enter image description here"></a></p>
|
3,443,082 | <p>Find all the positive integral solutions of, <span class="math-container">$\tan^{-1}x+\cos^{-1}\dfrac{y}{\sqrt{y^2+1}}=\sin^{-1}\dfrac{3}{\sqrt{10}}$</span></p>
<p>Assuming <span class="math-container">$x\ge1,y\ge1$</span> as we have to find positive integral solutions of <span class="math-container">$(x,y)$</span></p>
<p><span class="math-container">$$\tan^{-1}x=\tan^{-1}3-\tan^{-1}\dfrac{1}{y}$$</span></p>
<p>As <span class="math-container">$3>0$</span> and <span class="math-container">$\dfrac{1}{y}>0$</span>
<span class="math-container">$$\tan^{-1}x=\tan^{-1}\left(\dfrac{3-\dfrac{1}{y}}{1+\dfrac{3}{y}}\right)$$</span>
<span class="math-container">$$\tan^{-1}x=\tan^{-1}\dfrac{3y-1}{y+3}$$</span></p>
<p><span class="math-container">$$x=\dfrac{3y-1}{y+3}$$</span></p>
<p><span class="math-container">$y+3\in[4,\infty)$</span> as <span class="math-container">$y\ge1$</span>, <span class="math-container">$3y-1\in [2,\infty)$</span> as <span class="math-container">$y\ge1$</span></p>
<p>For <span class="math-container">$x$</span> to be positive integer, <span class="math-container">$3y-1$</span> should be multiple of <span class="math-container">$y+3$</span></p>
<p><span class="math-container">$$3y-1=m(y+3) \text { where } m\in Z^{+}$$</span>
<span class="math-container">$$3y-my=3m-1$$</span>
<span class="math-container">$$(3-m)y=3m-1$$</span></p>
<p>Here R.H.S is positive, so L.H.S should also be positive.</p>
<p>So <span class="math-container">$3-m>0$</span>, hence <span class="math-container">$m<3$</span></p>
<p>So possible values of <span class="math-container">$m$</span> are {<span class="math-container">$1$</span>,<span class="math-container">$2$</span>}.</p>
<p>For <span class="math-container">$m=1$</span>, <span class="math-container">$$3y-1=y+3$$</span>
<span class="math-container">$$2y=4$$</span>
<span class="math-container">$$y=2$$</span></p>
<p><span class="math-container">$$x=\dfrac{3\cdot2-1}{2+3}$$</span>
<span class="math-container">$$x=1$$</span></p>
<p>For <span class="math-container">$m=2$</span>, <span class="math-container">$$3y-1=2(y+3)$$</span>
<span class="math-container">$$3y-1=2y+6$$</span>
<span class="math-container">$$y=7$$</span></p>
<p><span class="math-container">$$x=\dfrac{3\cdot7-1}{7+3}$$</span>
<span class="math-container">$$x=\dfrac{20}{10}$$</span>
<span class="math-container">$$x=2$$</span></p>
<p>Is there some other nicer way to solve this problem.</p>
| lab bhattacharjee | 33,337 | <p>Another way</p>
<p>As we need <span class="math-container">$x,y>0$</span></p>
<p>If <span class="math-container">$x/y<1$</span></p>
<p><span class="math-container">$$\tan^{-1}x+\tan^{-1}(1/y)=\tan^{-1}\dfrac{xy+1}{y-x}$$</span></p>
<p><span class="math-container">$$xy+1=3(y-x)$$</span></p>
<p><span class="math-container">$$\iff x=\dfrac{3y-1}{y+3}=3-\dfrac{10}{y+3}$$</span></p>
<p>So, <span class="math-container">$y+3(>3), $</span> must divide <span class="math-container">$10$</span> and must honor <span class="math-container">$x<y$</span></p>
<p>If <span class="math-container">$x/y >1,x>y$</span> as <span class="math-container">$x,y>0$</span></p>
<p><span class="math-container">$$\tan^{-1}x+\tan^{-1}\dfrac1y>\tan^{-1}y+\cot^{-1}y=\dfrac\pi2>\tan^{-1}3$$</span></p>
|
3,167,571 | <p>Let us consider a $10\times 10$ square and write in every unit square the numbers from $1$ to $100$ such that every two consecutive numbers are in squares which share a common edge. Then there are two perfect squares on the same line or column. Can you give me a hint? How do I start?</p>
| nonuser | 463,553 | <p>Well you could try with Am-Gm inequality:</p>
<p><span class="math-container">$$ 5\sqrt[5]{p_1p_2...p_5x_1(x_2-x_1)...(x_5-x_4)}\leq 1$$</span></p>
<p>So <span class="math-container">$$p_1p_2...p_5 \leq {1\over 5^5x_1(x_2-x_1)...(x_5-x_4)}$$</span></p>
<p>and this value is achievable if $$ p_1 ={1\over 5x_1}$$</p>
<p><span class="math-container">$$ p_2 ={1\over 5(x_2-x_1)}$$</span>
<span class="math-container">$$ p_3 ={1\over 5(x_3-x_2)}$$</span>
<span class="math-container">$$...$$</span></p>
|
65,059 | <p>I have two points ($P_1$ & $P_2$) with their coordinates given in two different frames of reference ($A$ & $B$). Given these, what I'd like to do is derive the transformation to be able to transform any point $P$ from one to the other.</p>
<p>There is no third point, but there <em>is</em> an extra constraint, which is that the y axis of Frame $B$ is parallel to the $X$-$Y$ plane of Frame $A$ (see sketch below). I <em>believe</em> that is enough information to be able to do the transformation.</p>
<p><img src="https://i.stack.imgur.com/2d6QH.png" alt="Two frames"></p>
<p>Also:</p>
<ul>
<li>The points are the same distance apart in both frames (no scaling).</li>
<li>The points don't coincide.</li>
<li>The origins don't necessarily coincide.</li>
</ul>
<p>As you may have gathered, I'm <em>not</em> a mathematician (ultimately this will end up as code), so please be gentle...</p>
<p><sub>I've seen this question (<a href="https://math.stackexchange.com/questions/23197/finding-a-rotation-transformation-from-two-coordinate-frames-in-3-space">Finding a Rotation Transformation from two Coordinate Frames in 3-Space</a>), but it's not <em>quite</em> the same as my problem, and unfortunately I'm not good enough at math to extrapolate from that case to mine.</sub></p>
<p><strong>EDIT</strong> I've updated the diagram, which makes it a bit cluttered, but (I hope) shows all the 'bits': $P3_B$ is what I'm trying to calculate...</p>
| Benjol | 16,163 | <p>OK, I've been thinking about this (like an engineer, not a mathematician), and here's my (half-baked) take:</p>
<p>I take Frame A, and translate it (TA) such that its origin is at P1, then rotate it (RA) around Z and Y such that P2 is on the X axis: this gives me a new Frame A'.</p>
<p>I do the <em>same</em> thing with Frame B, translate (TB), and rotate (RB), which gives me Frame B'.</p>
<p>At this point, Frame A' = Frame B', and I have a 'route' from A to B:</p>
<p>$$TA \rightarrow RA \rightarrow -RB \rightarrow -TB$$</p>
<p>It's not the answer, but it's a start. Please tell me if I'm completely up the creek.</p>
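<p>Here is a numpy sketch of that route (my own implementation; every name is hypothetical). Note that it fixes the roll about the $P_1P_2$ axis arbitrarily, so it is indeed only a start — the extra y-axis constraint is not imposed — but points on the $P_1P_2$ axis are mapped correctly regardless:</p>

```python
import numpy as np

def axis_rotation(p1, p2):
    """Rotation taking the direction p2 - p1 onto the +x axis (Rz, then Ry)."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    yaw = np.arctan2(d[1], d[0])
    Rz = np.array([[ np.cos(yaw), np.sin(yaw), 0.0],
                   [-np.sin(yaw), np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])                       # zero out the y component
    e = Rz @ d
    pitch = np.arctan2(e[2], e[0])
    Ry = np.array([[ np.cos(pitch), 0.0, np.sin(pitch)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(pitch), 0.0, np.cos(pitch)]])  # zero out the z component
    return Ry @ Rz

def a_to_b(p, p1a, p2a, p1b, p2b):
    """Map a point from frame A to frame B via the common intermediate frame A' = B'."""
    Ra = axis_rotation(p1a, p2a)   # TA then RA
    Rb = axis_rotation(p1b, p2b)   # TB then RB, undone below via transpose
    return np.asarray(p1b, float) + Rb.T @ (Ra @ (np.asarray(p, float) - np.asarray(p1a, float)))
```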
|
3,340,093 | <p>Is the following statement true?</p>
<blockquote>
<p>Two real numbers a and b are equal iff for every ε > 0, |a − b| < ε.</p>
</blockquote>
<p>I got that if a and b are equal then |a-b|=0 which is less than ε.
But I'm not sure if the converse also holds.</p>
| Community | -1 | <p>This is outright proven in <a href="https://rads.stackoverflow.com/amzn/click/com/1493927116" rel="nofollow noreferrer" rel="nofollow noreferrer"><em>Understanding Analysis</em> (2016 2 edn)</a>. pp 9 - 10.</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/q2dvi.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q2dvi.jpg" alt="enter image description here"></a></p>
</blockquote>
|
1,199,746 | <p>Let $f : (−\infty,0) → \mathbb{R}$ be the function given by $f(x) = \frac{x}{|x|}$. Use the $\epsilon -\delta$ definition of a $\lim\limits_{x \to 0^-} f(x) = -1.$</p>
<p>Workings:</p>
<p>Informal Thinking:
We want $|f(x) - L| < \epsilon$</p>
<p>$\left|\frac{x}{|x|} - -1\right| < \epsilon$</p>
<p>$\left|\frac{x}{|x|} + 1 \right| < \epsilon$</p>
<p>$\left|\frac{x + |x|}{|x|} \right| < \epsilon$</p>
<p>$x + |x| < |x|\epsilon$</p>
<p>$|x| < |x| \epsilon - x$</p>
<p>Take $\delta = |x|\epsilon - x$</p>
<p>Proof:</p>
<p>Suppose $\epsilon > 0$ and let $\delta = |x|\epsilon - x$</p>
<p>So $0 < |x| < \delta = |x| \epsilon - x$</p>
<p>I'm wondering if what I did so far is correct and what I should do next. Any help will be appreciated.</p>
| graydad | 166,967 | <p>Since you are approaching $x$ from the left, that means $x<0$. As such, we have $|x| = -x$. So, when you get to your third line we have $$\left|\frac{x + |x|}{|x|} \right|=\left|\frac{x + (-x)}{|x|} \right| = 0 < \varepsilon$$ The inequality clearly holds for all $x<0$ and for all $\varepsilon>0$. Can you find a $\delta$ now?</p>
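<p>A tiny numeric illustration (my addition): for $x<0$ the quantity inside the absolute value is identically zero, so any $\delta$ works.</p>

```python
# for x < 0, x/|x| = -1 exactly, so |x/|x| - (-1)| = 0 < eps for every eps > 0
xs = [-10.0, -1.0, -0.5, -1e-9]
vals = [abs(x / abs(x) + 1) for x in xs]
print(vals)  # [0.0, 0.0, 0.0, 0.0]
```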
|
3,700,938 | <p>I know that the equations are equivalent by doing the math with the same value for x, but I don't understand the rules for changing the order of operations.<br>
When it is not the first addition or subtraction happening in the equation, do parentheses turn the addition into subtraction and vice versa? Are there any other rules?</p>
<p><span class="math-container">$x^2+x-x-1 = (x^2+x)-(x+1)$</span>? </p>
<p>What if you put the parentheses around the two <span class="math-container">$x$</span>s in the middle?<br>
<span class="math-container">$x^2+(x-x)+1$</span>? Should that be (x+x) in the middle?</p>
| Prime Mover | 466,895 | <p>Let's simplify it, and suppose you have <span class="math-container">$x^2 - x - 1$</span>.</p>
<p>Examine what you are doing. You start with <span class="math-container">$x^2$</span>.</p>
<p>Then you subtract <span class="math-container">$x$</span> from <span class="math-container">$x^2$</span>.</p>
<p>Then you subtract <span class="math-container">$1$</span> from what you have got after the previous step, that is, <span class="math-container">$x^2 - x$</span>.</p>
<p>So you have subtracted first <span class="math-container">$x$</span>, then <span class="math-container">$1$</span>, from <span class="math-container">$x^2$</span>.</p>
<p>What have you in total removed from <span class="math-container">$x^2$</span>? You have removed <span class="math-container">$x$</span> and <span class="math-container">$1$</span>, that is, <span class="math-container">$x+1$</span>, from <span class="math-container">$x^2$</span>.</p>
<p>So subtracting <span class="math-container">$x$</span> and then <span class="math-container">$1$</span> from <span class="math-container">$x^2$</span> is exactly the same as subtracting <span class="math-container">$x + 1$</span> from <span class="math-container">$x^2$</span>.</p>
<p>There you see, <span class="math-container">$x^2 - x - 1 \equiv x^2 - (x + 1)$</span>.</p>
<p>You can do it generally, and see that <span class="math-container">$a - b - c \equiv a - (b + c)$</span>, whatever <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span> are.</p>
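<p>The general rule can be spot-checked mechanically (my addition):</p>

```python
import random

# a - b - c == a - (b + c) for any numbers a, b, c
for _ in range(1000):
    a, b, c = (random.randint(-100, 100) for _ in range(3))
    assert a - b - c == a - (b + c)
print("rule verified on 1000 random triples")
```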
|
1,319,767 | <p>If we know that $\frac{2^n}{n!}>0$ for every $n\in \mathbb{N}$ and $$\frac{2^n}{n!}=\frac{2}{1}\frac{2}{2}...\frac{2}{n}$$ how can we bound this sequence from above?</p>
| Atvin | 215,617 | <p><strong>Hint:</strong> <a href="http://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="nofollow">Stirling's approximation</a></p>
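<p>Besides Stirling, there is an elementary bound (my addition): every factor $\frac{2}{k}$ with $k\ge 2$ is at most $1$, so the whole product never exceeds $\frac{2}{1}\cdot\frac{2}{2}=2$. A quick check:</p>

```python
from math import factorial

vals = [2.0**n / factorial(n) for n in range(1, 40)]
print(max(vals))  # 2.0, attained at n = 1 and n = 2; the sequence then decreases
```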
|