qid | question | author | author_id | answer
|---|---|---|---|---|
1,174,359 | <p>I have the following problem, where $n$ is a positive integer $(n \ge 1)$:</p>
<p>Prove that $\frac{1}{2n}\le\frac{1\cdot 3\cdot 5\cdots(2n-1)}{2\cdot 4\cdots 2n}$</p>
<p>I know that I must start with the base step, showing that $P(1)$ is true, as follows:
$\frac{1}{2\cdot 1} = \frac{1}{2} \le \frac{1}{2}$, so $P(1)$ is true.</p>
<p>Now follows the induction step where I must show that "if $P(k)$ then $P(k+1)$" where
$P(k)$ is $\frac{1}{2k}\le\frac{1\cdot 3\cdot 5\cdots(2k-1)}{2\cdot 4\cdots 2k}$
and $P(k+1)$ is $\frac{1}{2(k+1)}\le\frac{1\cdot 3\cdot 5\cdots(2k+1)}{2\cdot 4\cdots 2(k+1)}$</p>
<p>I will very much appreciate any help about how to continue.</p>
| abel | 9,252 | <p>look at what the function $$y=f(x) = (5x)^{1/2} \text{ does at } x = 16/5, y = 4.$$ the derivative of $f$ is $$f'(x) = \frac12 (5x)^{-1/2}\times 5,\quad
f'(16/5) = \frac 52 \times \frac 14= \frac58 $$</p>
<p>so far we have at $$x = {16}/{5}, y=f(16/5) = 4, f'(16/5) = \frac58.$$ therefore
$$ y = 4, f^{-1}(4) = 16/5, \left(f^{-1}\right)'(4) = \frac85$$</p>
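As a quick numerical sanity check (my own addition, not part of the original answer), the inverse-derivative value can be verified with a central difference; the explicit inverse $f^{-1}(y)=y^2/5$ is read off from $y=\sqrt{5x}$:

```python
import math

f = lambda x: math.sqrt(5 * x)   # f(x) = (5x)^(1/2)
finv = lambda y: y * y / 5       # explicit inverse, since y^2 = 5x

h = 1e-6
# central-difference derivative of the inverse at y = 4
num = (finv(4 + h) - finv(4 - h)) / (2 * h)
assert abs(f(16 / 5) - 4) < 1e-12   # f(16/5) = 4
assert abs(num - 8 / 5) < 1e-6      # (f^{-1})'(4) = 8/5 = 1/f'(16/5)
```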
|
3,856,180 | <p>Let <span class="math-container">$X = \{0 ,1\}^{\mathbb N}$</span> be the metric space. Can anyone please tell me how to define a continuous injective function from <span class="math-container">$X = \{0 ,1\}^{\mathbb N}$</span> to the Cantor set?</p>
<p>Can anyone please give an idea ?</p>
| Henno Brandsma | 4,280 | <p>The Cantor set is the intersection <span class="math-container">$$\bigcap_{n=1}^\infty C_n$$</span></p>
<p>where <span class="math-container">$C_0 = [0, 1]$</span> and <span class="math-container">$C_{n+1} = \frac{1}{3}C_n \cup \frac{2}{3}C_n$</span> for each <span class="math-container">$n\ge 0$</span>. Every <span class="math-container">$C_n$</span> consists of <span class="math-container">$2^n$</span> disjoint parts of length <span class="math-container">$\frac{1}{3^n}$</span> each. (Shown by induction).</p>
<p>So if <span class="math-container">$(x_n)_{n=1}^\infty$</span> is a sequence of <span class="math-container">$0,1$</span>'s we can use it to determine whether you take a left or a right turn at the next <span class="math-container">$C_n$</span>, as it were. In the end we get an intersection of subintervals of the <span class="math-container">$C_n$</span>, which form a decreasing sequence of sets of diameter <span class="math-container">$\frac{1}{3^n}$</span>, whose intersection is a unique point (by Cantor's intersection theorem). This then defines the image of the sequence.</p>
<p>Continuity is clear: when we have a small open interval around a point of intersection <span class="math-container">$y$</span>, we can find <span class="math-container">$N$</span> so large that it contains a whole subinterval of that stage's <span class="math-container">$C_N$</span> that contains <span class="math-container">$y$</span> and then the product basic set that just fixes the first <span class="math-container">$N$</span> coordinates (so as to land in that subinterval) maps inside the open interval, showing continuity.</p>
<p>Think about it. It's usually presented as a ternary expansion map, or even as a formula like <span class="math-container">$$f((x_n)_n) = \sum_n \frac{1}{3^n}2x_n$$</span> etc. but the visual description is clearer to me. The Cantor set comes from the branches of a splitting tree and the <span class="math-container">$0,1$</span>'s encode the path through that tree.</p>
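A small sketch of the formula version mentioned above (my own illustration, not from the answer): on finite $0,1$-prefixes, $f((x_n)_n)=\sum_n \frac{2x_n}{3^n}$ already shows injectivity, since distinct prefixes land on distinct points of $[0,1]$.

```python
from fractions import Fraction
from itertools import product

def f(bits):
    """Map a finite 0/1 prefix (implicitly padded by zeros) into the Cantor set."""
    return sum(Fraction(2 * b, 3 ** (n + 1)) for n, b in enumerate(bits))

images = [f(bits) for bits in product((0, 1), repeat=3)]
# distinct prefixes map to distinct points: injectivity on length-3 prefixes
assert len(set(images)) == 8
# all images lie in [0, 1)
assert min(images) == 0 and max(images) < 1
```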
|
667,293 | <p>We are given an $n \times m$ rectangular matrix with a light bulb in every cell, together with the information whether each bulb is ON or OFF.</p>
<p>Now I am required to switch OFF all the bulbs, but I can perform only one operation, which is as follows:</p>
<ul>
<li>I can simultaneously flip all the bulbs from ON to OFF and vice versa in any submatrix.</li>
</ul>
<p>I need to switch OFF all the bulbs in a minimum number of moves and tell the number of moves needed as well as the moves themselves. Is there an efficient algorithm for this?</p>
<p>EXAMPLE: Let us assume $n=2$ and $m=3$. The initial grid is as follows, where $0$ stands for OFF and $1$ for ON:</p>
<p>$$\begin{matrix}1 & 1 & 1 \\ 0 & 1 & 1 \end{matrix}$$</p>
<p>Now we can switch OFF all bulbs in 2 moves, which are as follows:</p>
<p>Move 1: Flip bulbs in subarray from $(1,1)$ to $(1,1)$</p>
<p>Move 2: Flip bulbs in subarray from $(1,2)$ to $(2,3)$</p>
| J. J. | 3,776 | <p>I don't know of an efficient algorithm that produces the minimal number of moves, but there's an algorithm that gives pretty good results.</p>
<p>The algorithm is based on the following observation: From the original $m \times n$ matrix construct a derived matrix of size $(m+1) \times (n+1)$ by considering all $2\times2$ submatrices of the original matrix (imagine a border of zeros around it) and marking $0$ if the $2 \times 2$ submatrix has an even number of $1$s and $1$ if it has an odd number of $1$s. Now in the derived matrix a move on the original matrix corresponds to a move that switches just the four corners of a rectangle.</p>
<p>This all is best explained by an example. For your original matrix
$$\begin{matrix}1 & 1 & 1 \\ 0 & 1 & 1\end{matrix}$$
the derived matrix would be
$$\begin{matrix}1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1\end{matrix}$$
After the first move it becomes
$$\begin{matrix}1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0\end{matrix}$$
and after the second one it is cleared.</p>
<p>The algorithm then looks for rectangles in the derived matrix with as many corners $1$ as possible and removes them in a greedy fashion. This is not optimal, but leads to quite good results.</p>
<p>Note in particular that it follows that we must make at least $u/4$ moves, where $u$ is the number of $1$s in the derived matrix.</p>
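The derived-matrix construction is easy to check in code. This is my own sketch of the observation above (not from the original answer), reproducing the matrices worked out in the example:

```python
def derived(M):
    """XOR-parity of every 2x2 window of M padded with a border of zeros."""
    m, n = len(M), len(M[0])
    P = [[0] * (n + 2)] + [[0] + row + [0] for row in M] + [[0] * (n + 2)]
    return [[P[i][j] ^ P[i][j + 1] ^ P[i + 1][j] ^ P[i + 1][j + 1]
             for j in range(n + 1)] for i in range(m + 1)]

M = [[1, 1, 1],
     [0, 1, 1]]
# matches the derived matrix shown in the example
assert derived(M) == [[1, 0, 0, 1],
                      [1, 1, 0, 0],
                      [0, 1, 0, 1]]
```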
|
2,426,892 | <blockquote>
<p>Between which two integers does <span class="math-container">$\sqrt{2017}$</span> fall? </p>
</blockquote>
<p>Since <span class="math-container">$2017$</span> is a prime, there's not much I can do with it. However, <span class="math-container">$2016$</span> (the number before it) and <span class="math-container">$2018$</span> (the one after) are not, so I tried to factorise them. But that didn't work so well either, because they too are not perfect squares, so if I multiply them by a number to make them perfect squares, they're no longer close to <span class="math-container">$2017.$</span> How can I solve this problem?</p>
<p>Update:
Okay, since <span class="math-container">$40^2 = 1600$</span> and <span class="math-container">$50^2 = 2500$</span>, I just tried <span class="math-container">$45$</span> and <span class="math-container">$44$</span> and they happened to be the answer - but I want to be more mathematical than that... </p>
| eyeballfrog | 395,748 | <p>You could use the root extraction algorithm to find it directly. It's sort of like long division.</p>
<ol>
<li>Starting from the decimal place, divide the number into pairs of digits. So $20\,17$.</li>
<li>Find the largest integer whose square is at most the first pair. $4^2 < 20 < 5^2$. This is the first digit of the square root.</li>
<li>Subtract off the square and bring down the next two digits. $20 - 4^2 = 4 \Longrightarrow 417$. This is our remainder.</li>
<li>Find the largest $d$ such that the product $(20\cdot r + d)\cdot d$ is at most the current remainder, where $r$ is the part of the root already found. $(20\cdot 4 + 4)\cdot 4 < 417 < (20 \cdot 4 + 5)\cdot 5$, so $d= 4$.</li>
<li>Add $d$ to the end of the digits already found, subtract the product from the current remainder, and bring down the next two digits. So we have 44 for our root and $417 - 84\cdot4 = 81 \Longrightarrow 8100$ for our new remainder.</li>
<li>Repeat 4 and 5 until you have enough digits or the remainder and all remaining digits of the number are zero. Since we now have enough digits for the integer part, we can stop here.</li>
</ol>
<p>So $44 < \sqrt{2017} < 45$.</p>
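The steps above can be sketched as a short program (my own implementation, not part of the original answer):

```python
def isqrt_digits(n):
    """Digit-by-digit square root: returns floor(sqrt(n)) for n >= 0."""
    s = str(n)
    if len(s) % 2:                  # split into pairs from the decimal point
        s = "0" + s
    pairs = [int(s[i:i + 2]) for i in range(0, len(s), 2)]
    root = rem = 0
    for p in pairs:
        rem = rem * 100 + p         # bring down the next two digits
        d = 0                       # largest d with (20*root + d)*d <= rem
        while (20 * root + d + 1) * (d + 1) <= rem:
            d += 1
        rem -= (20 * root + d) * d
        root = root * 10 + d
    return root

assert isqrt_digits(2017) == 44     # so 44 < sqrt(2017) < 45
```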
<p>I do want to comment on one suggestion you had, though.</p>
<blockquote>
<p>However, 2016 (the number before it) and 2018 (the one after) are not, so I tried to factorise them. But that didn't work so well either, because they too are not perfect squares</p>
</blockquote>
<p>$2016 = 2^5\cdot3^2\cdot7$, which is divisible by a lot of perfect squares. Trial division by some of those perfect squares may result in a quotient that is itself close to a perfect square, which would give a guess as to what $\sqrt{2016}$ is. $2016 = 2^2\cdot504 = 3^2\cdot224 = 4^2\cdot126 = 6^2\cdot56 = 12^2\cdot14$. We should then recognize $224 \approx 225 = 15^2$ and $126\approx 121 = 11^2$. This gives $44^2 < 2016 < 45^2$, so clearly $44^2 < 2017 < 45^2$ as well.</p>
|
2,040,293 | <p>I am trying to follow this tutorial: <a href="http://ctms.engin.umich.edu/CTMS/index.php?example=InvertedPendulum&section=SystemModeling" rel="nofollow noreferrer">http://ctms.engin.umich.edu/CTMS/index.php?example=InvertedPendulum&section=SystemModeling</a></p>
<p>I am stuck to understand how to make a state-space representation from these transfer functions</p>
<p>$$ \frac{\Phi(s)}{U(s)} = \frac{\frac{ml}{q}s}{s^3+\frac{b(I+ml^2)}{q}s^2-\frac{(M+m)mgl}{q}s-\frac{bmgl}{q}} \\ \frac{X(s)}{U(s)} = \frac{\frac{(I+ml^2)s^2-gml}{q}}{s^4+\frac{b(I+ml^2)}{q}s^3-\frac{(M+m)mgl}{q}s^2-\frac{bmgl}{q}s} $$</p>
<p>The text gives a hint "<em>The linearized equations of motion from above can also be represented in state-space form if they are rearranged into a series of first order differential equations. Since the equations are linear, they can then be put into the standard matrix form shown below.</em>"</p>
<p>But I do not understand this hint, I tried to research on to reduce the order like on <a href="http://tutorial.math.lamar.edu/Classes/DE/ReductionofOrder.aspx" rel="nofollow noreferrer">http://tutorial.math.lamar.edu/Classes/DE/ReductionofOrder.aspx</a> but maybe you can give me more input to my brain.</p>
<p>Solution:
$$ \begin{bmatrix} \dot x \\ \ddot x \\ \dot \phi \\ \ddot \phi \end{bmatrix}
= \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & \frac{-(I+ml^2)b}{I(M+m)+Mml^2} & \frac{m^2gl^2}{I(M+m)+Mml^2} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & \frac{-mlb}{I(M+m)+Mml^2} & \frac{mgl(M+m)}{I(M+m)+Mml^2} & 0 \end{bmatrix} \begin{bmatrix} x \\ \dot x \\ \phi \\ \dot \phi \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{I+ml^2}{I(M+m)+Mml^2} \\ 0 \\ \frac{ml}{I(M+m)+Mml^2} \end{bmatrix} u $$</p>
<p>$$ y = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ \dot x \\ \phi \\ \dot \phi \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix} u $$</p>
| Aleksejs Fomins | 250,854 | <p>Warning: very hand-wavy argument :D</p>
<p>1) A real number to a real power can be defined as the limit of the number raised to rational powers. If some law holds for rational powers, it holds for real powers in the limit.</p>
<p>$x^r = \lim_{\frac{a}{b} \rightarrow r} x^{\frac{a}{b}}$</p>
<p>2) A real number to a rational power is defined by taking the root of the denominator's order, followed by the power of the numerator's order (or vice versa).</p>
<p>$x^{\frac{a}{b}} = (\sqrt[b]{x})^a$</p>
<p>3) For integer powers, additivity is justified simply by counting the factors being multiplied together. For integer roots it is a little more tricky:</p>
<p>$(\sqrt[b_1]{x})^{a_1} \cdot (\sqrt[b_2]{x})^{a_2} = \sqrt[b_1b_2]{\left((\sqrt[b_1]{x})^{a_1} \cdot (\sqrt[b_2]{x})^{a_2}\right)^{b_1b_2}} = \sqrt[b_1b_2]{x^{a_1b_2} \cdot x^{a_2b_1}} = \sqrt[b_1b_2]{x^{a_1b_2+a_2b_1}} = x^{\frac{a_1}{b_1} + \frac{a_2}{b_2}}$</p>
|
4,038,392 | <p>This question is inspired by the problem <a href="https://projecteuler.net/problem=748" rel="nofollow noreferrer">https://projecteuler.net/problem=748</a></p>
<p>Consider the Diophantine equation
<span class="math-container">$$\frac{1}{x^2}+\frac{1}{y^2}=\frac{k}{z^2}$$</span>
<span class="math-container">$k$</span> is a squarefree number. <span class="math-container">$A_k(n)$</span> is the number of solutions of the equation such that <span class="math-container">$1 \leq x+y+z \leq n$</span>, <span class="math-container">$x \leq y$</span> and <span class="math-container">$\gcd(x,y,z)=1$</span>. This equation has infinitely many solutions for <span class="math-container">$k=1$</span> and for every <span class="math-container">$k>1$</span> that can be expressed as a sum of two perfect squares.</p>
<p>Let <span class="math-container">$$A_k=\lim_{n \to \infty}\frac{A_k(n)}{\sqrt{n}}$$</span></p>
<p><span class="math-container">\begin{array}{|c|c|c|c|c|}
\hline
k& A_k\left(10^{12}\right)& A_k\left(10^{14}\right)& A_k\left(10^{16}\right)& A_k\left(10^{18}\right)& A_k \\ \hline
1& 127803& 1277995& 12779996& 127799963& 0.12779996...\\ \hline
2& 103698& 1037011& 10369954& 103699534& 0.1036995...\\ \hline
5& 129104& 1291096& 12911049& 129110713& 0.129110...\\ \hline
10& 90010& 900113& 9000661& 90006202& 0.0900062...\\ \hline
13& 103886& 1038829& 10388560& 103885465& 0.103885...\\ \hline
17& 86751& 867550& 8675250& 86752373& 0.086752...\\ \hline
\end{array}</span></p>
<p>From these data, it seems that these limits converge. I wonder if it is possible to write them in terms of known constants or some sort of infinite series. For Pythagorean triples, there are about <span class="math-container">$\frac{n}{2\pi}$</span> triples with hypotenuse <span class="math-container">$\leq n$</span>.</p>
<p>The Python code to calculate <span class="math-container">$A_1(n)$</span>:</p>
<pre><code>from math import gcd  # Euclidean algorithm from the standard library
N = 10 ** 14
cnt = 0
for a in range(1, 22000):
a2 = a * a
for b in range(1, a):
if (a + b) % 2 == 1 and gcd(a, b) == 1:
b2 = b * b
x = 2 * a * b * (a2 + b2)
y = a2 * a2 - b2 * b2
z = 2 * a * b * (a2 - b2)
if x + y + z > N:
continue
cnt += 1
print(cnt)
</code></pre>
<p>The Python code to calculate <span class="math-container">$A_2(n)$</span>:</p>
<pre><code>from math import gcd  # Euclidean algorithm from the standard library
N = 10 ** 14
cnt = 1 # (1, 1, 1)
for a in range(1, 22000):
a2 = a * a
a4 = a2 * a2
for b in range(1, a):
if gcd(a, b) == 1:
b2 = b * b
b4 = b2 * b2
x = 2 * a * b * (a2 + b2) - (a4 - b4)
y = 2 * a * b * (a2 + b2) + (a4 - b4)
z = 6 * a2 * b2 - (a4 + b4)
if x > 0 and y > 0 and z > 0 and x + y + z <= N and gcd(x, gcd(y, z)) == 1:
cnt += 1
x = (a4 - b4) - 2 * a * b * (a2 + b2)
y = 2 * a * b * (a2 + b2) + (a4 - b4)
z = (a4 + b4) - 6 * a2 * b2
if x > 0 and y > 0 and z > 0 and x + y + z <= N and gcd(x, gcd(y, z)) == 1:
cnt += 1
print(cnt)
</code></pre>
| piepie | 365,969 | <p>This is a partial answer. Consider the case <span class="math-container">$k=1$</span>. The parametric solutions are given by <span class="math-container">$(x,y,z)=\left(2ab(a^2+b^2),a^4-b^4,2ab(a^2-b^2)\right)$</span>, where <span class="math-container">$\gcd(a,b)=1$</span>, <span class="math-container">$a>b>0$</span> and <span class="math-container">$a+b$</span> is odd. In the triples <span class="math-container">$(x,y,z)$</span> generated by this parametrization, it is not necessarily always <span class="math-container">$x\leq y$</span>. For example, <span class="math-container">$(20,15,12)$</span> is a triple generated by this parametrization, and <span class="math-container">$(15,20,12)$</span> will not be generated. This parametrization generates only the unique solutions. So we can have two possible cases, either <span class="math-container">$0<a^2-b^2\leq 2ab$</span> or <span class="math-container">$0<2ab\leq a^2-b^2$</span>.</p>
<p>The sum of the triples = <span class="math-container">$x+y+z=4a^3b+a^4-b^4 \leq n$</span>.</p>
<p>Let us consider the first case. Denote the number of solutions by <span class="math-container">$A_1^1(n)$</span> and <span class="math-container">$A_1^1=\lim_{n \to \infty}\frac{A_1^1(n)}{\sqrt{n}}$</span>.</p>
<p><span class="math-container">$$0<a^2-b^2\leq 2ab \implies \sqrt{2}-1\leq\frac{b}{a}=x<1$$</span>
Number of pairs <span class="math-container">$(a,b)$</span> satisfying above constraints and also <span class="math-container">$4a^3b+a^4-b^4 \leq n$</span> can be approximated by
<span class="math-container">\begin{align}
\begin{split}
p(n)&\approx\int_{\frac{b}{a}=\sqrt{2}-1}^{1}\int_{4a^3b+a^4-b^4 \leq n}a \ da\ db\\
&=\int_{x=\sqrt{2}-1}^{1}\int_{0<a^4\leq \frac{n}{1+4x-x^4}}a \ da\ dx \\
&=\frac{\sqrt{n}}{2}\int_{x=\sqrt{2}-1}^{1}\frac{1}{\sqrt{1+4x-x^4}}\ dx
\end{split}
\end{align}</span></p>
<p>The natural density of coprime pairs is <span class="math-container">$\frac{6}{\pi^2}$</span>, so the condition <span class="math-container">$\gcd(a,b)=1$</span> contributes a multiplicative factor of <span class="math-container">$\frac{6}{\pi^2}$</span>. Among all the coprime pairs <span class="math-container">$(a,b)$</span> such that <span class="math-container">$a>b>0$</span>, the fraction with <span class="math-container">$a+b$</span> odd is <span class="math-container">$\frac{2}{3}$</span>, so we must multiply by this factor as well.</p>
<p><span class="math-container">\begin{align}
\begin{split}
A_1^1 &=\lim_{n \to \infty}\frac{A_1^1(n)}{\sqrt{n}}\\
&= \lim_{n \to \infty} \frac{2}{3} \cdot \frac{6}{\pi^2} \cdot\frac{p(n)}{\sqrt{n}}\\
&= \frac{2}{\pi^2} \int_{x=\sqrt{2}-1}^{1}\frac{dx}{\sqrt{1+4x-x^4}} \\
\end{split}
\end{align}</span>
Similarly, it can be shown that for the other case exactly the same limit exists. So, <span class="math-container">$$A_1=2A_1^1=\frac{4}{\pi^2} \int_{x=\sqrt{2}-1}^{1}\frac{dx}{\sqrt{1+4x-x^4}} \approx 0.1277999513464289...$$</span></p>
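The closed-form value of <span class="math-container">$A_1$</span> can be checked numerically. Below is my own Simpson-rule sketch (pure standard library, not part of the answer) of <span class="math-container">$\frac{4}{\pi^2}\int_{\sqrt 2-1}^{1}\frac{dx}{\sqrt{1+4x-x^4}}$</span>; the integrand is smooth on the interval, so a modest number of panels suffices:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

g = lambda x: 1 / math.sqrt(1 + 4 * x - x ** 4)
A1 = 4 / math.pi ** 2 * simpson(g, math.sqrt(2) - 1, 1)
# agrees with the value 0.1277999513464289... stated above
assert abs(A1 - 0.12779995) < 1e-6
```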
<p>Using a similar approach it can be shown that
<span class="math-container">\begin{align}
\begin{split}
A_2&=\frac{2}{\pi^2} \left[ \int_{x=0}^{\sqrt{2}-1}\frac{dx}{\sqrt{3-6x^2-x^4}} +\int_{x=\sqrt{2}-1}^{1}\frac{dx}{\sqrt{4x+6x^2+4x^3-1-x^4}} \right]\\
& \approx 0.1036994744684913...
\end{split}
\end{align}</span>
<span class="math-container">\begin{align}
\begin{split}
A_{13}&=\frac{52}{7\pi^2} \int_{x=(\sqrt{13}-3)/2}^{(\sqrt{26}-1)/5}\frac{dx}{\sqrt{20x+36x^2-5-7x^4}}\approx 0.1038855856479065...
\end{split}
\end{align}</span></p>
|
216,421 | <p>How do I go about proving this? Do I have to show total boundedness? (I don't know how to use the finiteness of the residue field, and this seems like something it might be relevant to.)</p>
| Martin Brandenburg | 1,650 | <p>Since this was asked in the comments: Compactness refers to the topology on $R$, which is induced by the absolute value, which is again induced by the valuation.</p>
<p>Here is a hint for the solution: Think of power series with coefficients in the residue field and apply Tychonoff's theorem.</p>
|
255,660 | <p>I want to get a solution of an equation using NSolve.</p>
<p><span class="math-container">$BesselI[1,x]/(x*BesselI[0,x])=0.2$</span></p>
<p>So I plugged this equation to NSolve:</p>
<pre><code>NSolve[BesselI[1,x]/(x*BesselI[0,x])==0.2, x]
</code></pre>
<p>But when I use this, Mathematica just returns the same expression unevaluated. I know from the plot that this equation has a solution:</p>
<p><a href="https://i.stack.imgur.com/VmbyF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VmbyF.png" alt="enter image description here" /></a></p>
<p>Could you let me know how to solve this problem?</p>
<p>Any helps will be appreciated. Thank you!</p>
| Bob Hanlon | 9,362 | <p>Bound the value for <code>x</code></p>
<pre><code>Solve[{BesselI[1, x]/(x*BesselI[0, x]) == 1/5, -5 < x < 5}, x]
(* {{x -> Root[{(-5) BesselI[1, #] +
BesselI[
0, #] #& , -4.38411711031472304526702680222165674734`18.}]}, {x ->
Root[{(-5) BesselI[1, #] + BesselI[0, #] #& ,
4.38411711031472304526702680222165674734`18.}]}} *)
NSolve[{BesselI[1, x]/(x*BesselI[0, x]) == 1/5, -5 < x < 5}, x]
(* {{x -> -4.38412}, {x -> 4.38412}} *)
</code></pre>
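For cross-checking outside Mathematica, here is my own pure-Python sketch (not part of the answer): it evaluates the modified Bessel functions by their power series and finds the positive root by bisection. The truncation at 60 terms is an arbitrary choice that is ample for |x| ≤ 5.

```python
import math

def bessel_i(nu, x, terms=60):
    """Power series for I_nu(x), integer nu >= 0."""
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

f = lambda x: bessel_i(1, x) / (x * bessel_i(0, x)) - 0.2

lo, hi = 1.0, 5.0            # f(1) > 0 > f(5): a sign change brackets the root
for _ in range(60):          # plain bisection; f is decreasing here
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)

assert abs(lo - 4.38412) < 1e-4   # matches NSolve's x -> 4.38412
```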
|
3,653,148 | <p>Let <span class="math-container">$w$</span> be a primitive 5th root of unity. Then the difference equation <span class="math-container">$$x_nx_{n+2}=x_n-(w^2+w^3)x_{n+1}+x_{n+2}$$</span> generates a cycle of period 5 for general initial values:
<span class="math-container">$$u,v,\frac{u-(w^2+w^3)v}{u-1},\frac{uv-(w^2+w^3)(u+v)}{(u-1)(v-1)},\frac{v-(w^2+w^3)u}{v-1},u,v, ...$$</span> </p>
<p>For equations of the form <span class="math-container">$$x_nx_{n+2}=w^{a+b}x_n-(w^a+w^b)x_{n+1}+x_{n+2},\text{ for }w^a+w^b\ne 0$$</span> with the same globally periodic property, I can show that the only possible periods are 5,8,12,18 and 30.</p>
<p>Curiously, the 'same' equation works for all these periods:
<span class="math-container">$$x_nx_{n+2}=w^5x_n-(w^2+w^3)x_{n+1}+x_{n+2},$$</span>
where <span class="math-container">$w$</span> is a primitive <span class="math-container">$p$</span>th root of unity for <span class="math-container">$p=5,8,12,18,30$</span>. Is this a fluke or is there a way of seeing why this 'family' of equations always generate cycles? </p>
| Aravind | 4,959 | <p>Too long for a comment:
This is to show one periodic sequence that satisfies a similar recurrence in a natural way.</p>
<p>The recurrence <span class="math-container">$x_nx_{n+2}=x_n+tx_{n+1}+x_{n+2}-(1+t)$</span> is 6-periodic, for every <span class="math-container">$t \neq 0$</span> (and non-degenerate initial values).</p>
<p>We start by observing that the sequence <span class="math-container">$u,v,v/u,1/u,1/v,u/v,u,v,\ldots$</span> is 6-periodic, and satisfies the recurrence
<span class="math-container">$y_ny_{n+2}=y_{n+1}$</span>.
Now, we can check that the recurrence <span class="math-container">$y_ny_{n+2}=ry_{n+1}$</span> is also periodic for any non-zero choice of <span class="math-container">$r$</span>.
In this latter recurrence, substitute <span class="math-container">$y_n=r(x_n-1)$</span> and <span class="math-container">$r=1/t$</span> to get the first recurrence stated above.</p>
<p>The OP's first recurrence also simplifies in form with the substitution <span class="math-container">$y_n=x_n-1$</span>, but I don't see a similar natural construction for period 5 (or other periods).</p>
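The stated 6-periodicity is easy to verify with exact rational arithmetic. This is my own sketch (not part of the answer), iterating the recurrence solved for $x_{n+2}$:

```python
from fractions import Fraction as F

def orbit(u, v, t, steps=6):
    """Iterate x_{n+2} = (x_n + t*x_{n+1} - (1+t)) / (x_n - 1)."""
    xs = [u, v]
    for _ in range(steps):
        x0, x1 = xs[-2], xs[-1]
        xs.append((x0 + t * x1 - (1 + t)) / (x0 - 1))
    return xs

# two non-degenerate choices of (u, v, t)
for u, v, t in [(F(3), F(5), F(2)), (F(7, 2), F(4), F(1, 3))]:
    xs = orbit(u, v, t)
    assert xs[6] == xs[0] and xs[7] == xs[1]   # period 6
```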
|
3,653,148 | <p>Let <span class="math-container">$w$</span> be a primitive 5th root of unity. Then the difference equation <span class="math-container">$$x_nx_{n+2}=x_n-(w^2+w^3)x_{n+1}+x_{n+2}$$</span> generates a cycle of period 5 for general initial values:
<span class="math-container">$$u,v,\frac{u-(w^2+w^3)v}{u-1},\frac{uv-(w^2+w^3)(u+v)}{(u-1)(v-1)},\frac{v-(w^2+w^3)u}{v-1},u,v, ...$$</span> </p>
<p>For equations of the form <span class="math-container">$$x_nx_{n+2}=w^{a+b}x_n-(w^a+w^b)x_{n+1}+x_{n+2},\text{ for }w^a+w^b\ne 0$$</span> with the same globally periodic property, I can show that the only possible periods are 5,8,12,18 and 30.</p>
<p>Curiously, the 'same' equation works for all these periods:
<span class="math-container">$$x_nx_{n+2}=w^5x_n-(w^2+w^3)x_{n+1}+x_{n+2},$$</span>
where <span class="math-container">$w$</span> is a primitive <span class="math-container">$p$</span>th root of unity for <span class="math-container">$p=5,8,12,18,30$</span>. Is this a fluke or is there a way of seeing why this 'family' of equations always generate cycles? </p>
| Pavel Kozlov | 143,912 | <p>Let us prove that for all nonzero <span class="math-container">$a,b\in \mathbb{C}$</span> the difference equation <span class="math-container">$$x_nx_{n+2}=ax_n+bx_{n+1}+x_{n+2}$$</span>
never generates a cycle of length <span class="math-container">$8$</span>.
Let us denote <span class="math-container">$u=x_0, v=x_1$</span>. </p>
<p>Suppose the contrary. We know that
<span class="math-container">$$x_{n+2}=\frac{ax_n+bx_{n+1}}{x_n-1}, x_{n-1}=\frac{bx_n+x_{n+1}}{x_{n+1}-a},$$</span>
so for every integer <span class="math-container">$k$</span> we can represent <span class="math-container">$x_k$</span> as rational function in variables <span class="math-container">$u,v$</span>:
<span class="math-container">$$x_k=\frac{P_{k}(u,v)}{Q_k(u,v)}.$$</span></p>
<p>If the equality <span class="math-container">$x_{-4}=x_4$</span> is true then <span class="math-container">$P_{-4}(u,v)Q_{4}(u,v)=P_{4}(u,v)Q_{-4}(u,v)$</span>.</p>
<p>Calculations in Wolfram Mathematica gives the next result for the left side</p>
<p><a href="https://i.stack.imgur.com/RBYRW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RBYRW.png" alt="Left side"></a></p>
<p>and for the right side</p>
<p><a href="https://i.stack.imgur.com/Dz3z4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dz3z4.png" alt="Right side"></a></p>
<p>Looking at some coefficients at <span class="math-container">$u^iv^j$</span> we get
<span class="math-container">$$[u^0v^1]: 2a^2b-ab^3=-2a^6b+a^5b^3,$$</span>
<span class="math-container">$$[u^1v^1]: -a^7+a^6b^2=-a^3+3a^2b^2-ab^4,$$</span>
<span class="math-container">$$[u^3v^2]: b-ab+b^2-ab^2=-a^2b+a^3b-ab^2+a^2b^2.$$</span></p>
<p>These equations, after excluding some terms, turn into (we remember that <span class="math-container">$a,b\not =0$</span>):</p>
<p><span class="math-container">$$ (a^4+1)(2a-b^2)=0, \tag{1}$$</span>
<span class="math-container">$$[u^1v^1]: b^4+(a^5-3a)b^2+a^2-a^6=0, \tag{2}$$</span>
<span class="math-container">$$[u^3v^2]: (a-1)(b(a+1)+a^2+1)=0. \tag{3}$$</span></p>
<p>If <span class="math-container">$a=1$</span>, then from <span class="math-container">$(1)$</span> we have that <span class="math-container">$b^2=2$</span>. In this case we have
<span class="math-container">$$P_{-4}(u,v)Q_{4}(u,v)-P_{4}(u,v)Q_{-4}(u,v)\not =0,$$</span> as it equals to</p>
<p><a href="https://i.stack.imgur.com/RAZUK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RAZUK.png" alt="1, \sqrt{2} case"></a></p>
<p>So <span class="math-container">$a\not =1$</span>, and equation <span class="math-container">$(3)$</span> gives us <span class="math-container">$a\not =-1$</span> and the next relationship
<span class="math-container">$$b=-\frac{a^2+1}{a+1}. \tag{4}$$</span></p>
<p>Let us consider the equation <span class="math-container">$(1)$</span> and suppose that <span class="math-container">$b^2=2a$</span>. Then from <span class="math-container">$(2)$</span> we have that <span class="math-container">$a^2-a^6=0$</span>. As <span class="math-container">$a\not =0,\pm 1$</span> then <span class="math-container">$a=\pm i$</span>. But in that case <span class="math-container">$a=\pm i$</span>, so <span class="math-container">$b=0$</span> by <span class="math-container">$(4)$</span>, contradiction with our assumption.</p>
<p>Hence <span class="math-container">$a^4+1=0$</span>, so for some odd <span class="math-container">$k$</span> we have
<span class="math-container">$$a=e^{ik\pi/4}=\frac{\pm 1 \pm i}{\sqrt{2}}. \tag{5}$$</span></p>
<p>Joining together <span class="math-container">$b^2=2a$</span> and equation <span class="math-container">$(4)$</span>, we get that <span class="math-container">$a$</span> is also a root of the polynomial <span class="math-container">$x^4-2x^3-2x^2-2x+1$</span>, but that is not true, so we reach the final contradiction.</p>
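The final contradiction can be checked numerically (my own sketch, not part of the proof): combining $b^2=2a$ with $(4)$ gives $(a^2+1)^2 = 2a(a+1)^2$, i.e. $a^4-2a^3-2a^2-2a+1=0$, and no primitive 8th root of unity satisfies this:

```python
import cmath

# the four primitive 8th roots of unity: exactly the a with a^4 = -1
roots = [cmath.exp(1j * cmath.pi * k / 4) for k in (1, 3, 5, 7)]
p = lambda a: a**4 - 2*a**3 - 2*a**2 - 2*a + 1   # from (a^2+1)^2 = 2a(a+1)^2

for a in roots:
    assert abs(a**4 + 1) < 1e-12    # indeed a^4 + 1 = 0
    assert abs(p(a)) > 1e-6         # but p(a) != 0: contradiction
```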
|
2,602,799 | <p>This is exactly what is written in Walter Rudin chapter 2, Theorem 2.41:</p>
<p>If $E$ is not closed, then there is a point $\mathbf{x}_o \in \mathbb{R}^k$ which is a limit point of $E$ but not a point of $E$. For $n = 1,2,3, \dots $ there are points $\mathbf{x}_n \in E$ such that $|\mathbf{x}_n-\mathbf{x}_o| < \frac{1}{n}$. Let $S$ be the set of these points $\mathbf{x}_n$. Then $S$ is infinite (otherwise $|\mathbf{x}_n-\mathbf{x}_o|$ would have a constant positive value, for infinitely many $n$), $S$ has $\mathbf{x}_o$ as a limit point, and $S$ has no other limit point in $\mathbb{R}^k$. For if $\mathbf{y} \in \mathbb{R}^k, \mathbf{y} \neq \mathbf{x}_o$, then
\begin{align}
|\mathbf{x}_n-\mathbf{y}| \geq{} &|\mathbf{x}_o-\mathbf{y}| - |\mathbf{x}_n-\mathbf{x}_o|\\
\geq {} & |\mathbf{x}_o-\mathbf{y}| - \dfrac{1}{n} \geq \dfrac{1}{2} |\mathbf{x}_o-\mathbf{y}|
\end{align}
for all but finitely many $n$. This shows that $\mathbf{y}$ is not a limit point of $S$.</p>
<p>The question:</p>
<p>I'm stuck on understanding why $S$ is infinite.
Also, I need clarification on why the last inequality holds. Could someone help, please?</p>
| Atbey | 327,944 | <p>If $S$ were finite, then you could take the minimum of $|x_n - x_0|$ over the finitely many $x_n\in S$. (The minimum is not $0$ since $x_0\notin S$.) But this contradicts the fact that there are $x_n$'s satisfying $|x_n-x_0| < 1/n$ for all $n$.</p>
<p>For the last inequality;</p>
<p>\begin{equation}
|\mathbf{x}_o-\mathbf{y}| - \dfrac{1}{n} \geq \dfrac{1}{2}|\mathbf{x}_o-\mathbf{y}|
\iff \dfrac{1}{2}|\mathbf{x}_o-\mathbf{y}| \geq \dfrac{1}{n}
\end{equation}
and this is true for all but finitely many $n$, since $\dfrac{1}{2}|\mathbf{x}_o-\mathbf{y}|$ is a fixed positive number and $1/n\to 0$.</p>
|
4,176,152 | <p>I tried to simplify it</p>
<p><span class="math-container">$$f(z)=\frac{3z^4 -2z^3 +8z^2 -2z +5}{z-i}$$</span>
<span class="math-container">$$f(z)=\frac{(3z^2 -2z +5)(z^2+1)}{z-i}$$</span>
<span class="math-container">$$f(z)=\frac{(3z^2 -2z +5)(z+i)(z-i)}{z-i}$$</span></p>
<p><span class="math-container">$$f(z)=(3z^2-2z+5)(z+i)$$</span></p>
<p>How to prove that it won't be continuous at <span class="math-container">$i$</span> from here on?</p>
| mjw | 655,367 | <p><span class="math-container">$$f(z) = \left\{ \begin{aligned} (3z^2-2z+5)(z+i), \quad z & \ne i \\
\text{undefined}, \quad z & =i
\end{aligned} \right.$$</span></p>
<p>Since <span class="math-container">$f(z)$</span> is undefined at <span class="math-container">$z=i$</span>, it cannot be continuous there.</p>
<p>It is, though, a removable discontinuity. Just define it to be <span class="math-container">$\lim_{z\to i} f(z).$</span></p>
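A quick numerical check (my addition, not the answer's), using the question's own simplification $f(z)=(3z^2-2z+5)(z+i)$: near the puncture $z=i$, the original fraction approaches the simplified form's value there.

```python
frac = lambda z: (3*z**4 - 2*z**3 + 8*z**2 - 2*z + 5) / (z - 1j)
simplified = lambda z: (3*z**2 - 2*z + 5) * (z + 1j)

limit = simplified(1j)        # (2 - 2i) * 2i = 4 + 4i
z = 1j + 1e-6                 # a point close to the puncture
assert abs(frac(z) - limit) < 1e-4
assert abs(limit - (4 + 4j)) < 1e-12
```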
|
4,176,152 | <p>I tried to simplify it</p>
<p><span class="math-container">$$f(z)=\frac{3z^4 -2z^3 +8z^2 -2z +5}{z-i}$$</span>
<span class="math-container">$$f(z)=\frac{(3z^2 -2z +5)(z^2+1)}{z-i}$$</span>
<span class="math-container">$$f(z)=\frac{(3z^2 -2z +5)(z+i)(z-i)}{z-i}$$</span></p>
<p><span class="math-container">$$f(z)=(3z^2-2z+5)(z+i)$$</span></p>
<p>How to prove that it won't be continuous at <span class="math-container">$i$</span> from here on?</p>
| ultralegend5385 | 818,304 | <p>Recall the limit definition of continuity.</p>
<blockquote>
<p>A function <span class="math-container">$f$</span> is continuous at <span class="math-container">$x=c$</span> iff <span class="math-container">$$\lim_{x\to c}f(x)=f(c)$$</span></p>
</blockquote>
<p>When <span class="math-container">$f(c)$</span> is not defined, it doesn't make sense to compare it to the limit. So, the function is not continuous. <span class="math-container">$\square $</span></p>
<p>However, it is a removable discontinuity as we can <strong>define</strong> (this function is different from the original function) <span class="math-container">$f(z)$</span> to be the original fraction for <span class="math-container">$x\neq i$</span> and <span class="math-container">$f(i)$</span> as the limit of <span class="math-container">$f(z)$</span> as <span class="math-container">$z\to i$</span>.</p>
<hr>
<p>Also, kudos to you for posting the problem with context and details. A warm welcome to MathSE!</p>
<p>Hope this helps. Ask anything if not clear :)</p>
|
22,340 | <p>Prove that for all natural numbers $n$, the following expression is divisible by $7$:</p>
<p>$$15^n+6$$</p>
<p><strong>Base.</strong> We prove the statement for $n = 1$</p>
<p>$15 + 6 = 21 = 7 \cdot 3$, so it is true.</p>
<p><strong>Inductive step.</strong></p>
<p><em>Induction Hypothesis.</em> We assume the result holds for $k$. That is, we assume that</p>
<p>$15^k+6$</p>
<p>is divisible by 7</p>
<p><em>To prove:</em> We need to show that the result holds for $k+1$, that is, that</p>
<p>$15^{k+1}+6=15^k\cdot 15+6$</p>
<p>and I don't know what to do </p>
| lhf | 589 | <p>By induction hypothesis, you have $15^k=7t-6$.</p>
|
22,340 | <p>Prove that for all natural numbers $n$, the following expression is divisible by $7$:</p>
<p>$$15^n+6$$</p>
<p><strong>Base.</strong> We prove the statement for $n = 1$</p>
<p>$15 + 6 = 21 = 7 \cdot 3$, so it is true.</p>
<p><strong>Inductive step.</strong></p>
<p><em>Induction Hypothesis.</em> We assume the result holds for $k$. That is, we assume that</p>
<p>$15^k+6$</p>
<p>is divisible by 7</p>
<p><em>To prove:</em> We need to show that the result holds for $k+1$, that is, that</p>
<p>$15^{k+1}+6=15^k\cdot 15+6$</p>
<p>and I don't know what to do </p>
| Bill Dubuque | 242 | <p>Often textbook solutions to induction problems like this are magically "pulled out of a hat" - completely devoid of intuition. Below I explain the intuition behind the induction in this proof. Namely, I show that the proof easily reduces to the completely trivial induction that <span class="math-container">$\rm\ \color{#c00}{1^n \equiv 1}$</span>.</p>
<p>Since <span class="math-container">$\rm\ 15^n + 6 = 15^n-1 + 7\:,\: $</span> it suffices to show that <span class="math-container">$\rm\ 7\ |\ 15^n - 1\:.\: $</span> The base case <span class="math-container">$\rm\ n=1\ $</span> is clear. The inductive step, slightly abstracted, is simply the following</p>
<p><span class="math-container">$\ \ \ \ \ \ \ \begin{align} &7\ |\ \ \color{#0a0}{c\ -1},\ \ \ \color{#90f}{d\ -\ 1}\ \ \Rightarrow\ \ 7\ |\ cd\,-\,1 = (\color{#0a0}{c-1})\ d + \color{#90f}{d-1}\\[.2em]
{\rm thus} \ \ \ \ &7\ |\ 15-1,\ 15^n-1\ \ \Rightarrow\ \ 7\ |\ 15^{n+1}-1\end{align}$</span></p>
<p><span class="math-container">$\rm Said\ \ mod\ 7,\ \ 15\equiv 1\ \Rightarrow\ 15^n\equiv \color{#c00}{1^n\equiv 1}\ $</span> by inductively multiplying ("powering") using this:</p>
<p><strong>Lemma</strong> <span class="math-container">$\rm\ \ \ \ \ A\equiv a,\ \ B\equiv b\ \Rightarrow\ AB\equiv ab\ \ (mod\ m)\quad\ $</span> [<strong>Congruence Product Rule</strong>]</p>
<p><strong>Proof</strong> <span class="math-container">$\rm\ \ m\: |\: A-a,\:\:\ B-b\ \Rightarrow\ m\ |\ (A-a)\ B + a\ (B-b)\ =\ AB - ab $</span></p>
<p>Notice how this transformation from divisibility form to congruence arithmetic form has served to reduce the induction to the triviality <span class="math-container">$\rm\, \color{#c00}{1^n \equiv 1}$</span>. Many induction problems can similarly be reduced to trivial inductions by appropriate conceptual preprocessing. Always think before calculating!</p>
<p>See <a href="https://math.stackexchange.com/a/1179145/242">here</a> and <a href="https://math.stackexchange.com/a/794719/242">here</a> for much further discussion on this topic.</p>
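<p>A quick machine check of both the divisibility claim and the congruence reduction $15 \equiv 1 \pmod 7$ used above (a small sketch; the loop bound 200 is arbitrary):</p>

```python
# Check 7 | 15^n + 6 for a range of n, together with the congruence
# reduction 15 ≡ 1 (mod 7) that drives the induction.
for n in range(1, 200):
    assert (15 ** n + 6) % 7 == 0
    assert pow(15, n, 7) == 1  # 15^n ≡ 1^n ≡ 1 (mod 7)
```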
|
1,713,778 | <p>Let $P=\{p_1,p_2,\ldots ,p_n\}$ the set of the first $n$ prime numbers and let $S\subseteq P$. Let
$$A=\prod_{p\in S}p$$ and $$B=\prod_{p\in P-S}p.$$ Show that if
$A+B<p_{n+1}^2$, then the number $A+B$ is prime.
Also, if
$$1<|A-B|<p_{n+1}^2,$$ then the number $|A-B|$ is prime.</p>
| Community | -1 | <p>I believe due to Bertrand's postulate there is a finite (and relatively small) number of pairs $(n, S)$ for which $A + B < p_{n+1}^2$. An interesting satellite problem is to identify all such pairs; then the proposed problem is solved by inspection.</p>
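<p>The finite search alluded to here is easy to automate. The sketch below (with ad hoc helper names and plain trial division) enumerates every split of the first $n$ primes for small $n$, keeps the pairs with $A+B < p_{n+1}^2$, and confirms that each qualifying $A+B$ is prime; for $n \ge 6$ no split qualifies, since $A+B \ge 2\sqrt{p_1\cdots p_n} > p_{n+1}^2$.</p>

```python
from itertools import combinations

def first_primes(m):
    """First m primes by trial division against the primes found so far."""
    ps, n = [], 2
    while len(ps) < m:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

ps = first_primes(8)            # p_1, ..., p_8
found = []
for n in range(1, 8):           # P = {p_1, ..., p_n}, bound p_{n+1}^2
    P, nxt = ps[:n], ps[n]
    for r in range(n + 1):
        for S in combinations(P, r):
            A = 1
            for p in S:
                A *= p
            B = 1
            for p in P:
                if p not in S:
                    B *= p
            if A + B < nxt * nxt:
                found.append(A + B)

# every qualifying sum is prime, exactly as the problem claims
assert found and all(is_prime(s) for s in found)
```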
|
3,183,274 | <p>This is a reinterpretation of my old question <a href="https://math.stackexchange.com/questions/3177594/fit-data-to-function-gt-frac1001-alpha-e-beta-t-by-using-least-s">Fit data to function $g(t) = \frac{100}{1+\alpha e^{-\beta t}}$ by using least squares method (projection/orthogonal families of polynomials)</a>. I need to understand things in terms of orthogonal projections and inner products and the answers were for common regression techniques.</p>
<blockquote>
<p>t --- 0 1 2 3 4 5 6</p>
<p>F(t) 10 15 23 33 45 58 69</p>
<p>Adjust <span class="math-container">$F$</span> by a function of the type <span class="math-container">$$g(t) = \frac{100}{1+\alpha
e^{-\beta t}}$$</span> by the discrete least squares method</p>
</blockquote>
<p>First of all, we cannot work with the function <span class="math-container">$g(t)$</span> as it is. The way I'm trying to see the problem is via projections. </p>
<p>So let's try to transform the problem like this:</p>
<p><span class="math-container">$$\frac{100}{g(t)}-1 = \alpha e^{-\beta t}\implies \ln \left(\frac{100}{g(t)}-1\right) = \ln \alpha -\beta t$$</span></p>
<p>Since we want to fit the function to the points, we want to minimize the distance of the function from the set of points, that is:</p>
<p><span class="math-container">$$\min_{\alpha,\beta} \left(\ln\left(\frac{100}{g(t)}-1\right)-\ln\alpha + \beta t\right)$$</span></p>
<p>Without using derivatives and equating things to <span class="math-container">$0$</span>, there's a way to see this problem as an orthogonal projection problem. </p>
<p>I know I need to end up with something like this:</p>
<p><span class="math-container">$$\langle \ln\left(\frac{100}{g(t)}-1\right)-\ln\alpha + \beta t, 1\rangle = 0\\ \langle \ln\left(\frac{100}{g(t)}-1\right)-\ln\alpha + \beta t, t\rangle=0$$</span> </p>
<p>And I know this comes from the knowledge that our minimum is related to some projection and this projection lives in a space where the inner product with <span class="math-container">$span\{1, t\}$</span> (because of <span class="math-container">$\ln\alpha,\beta t$</span>), gives <span class="math-container">$0$</span>.</p>
<p>In order to end up with </p>
<p><span class="math-container">$$\begin{bmatrix}
\langle 1,1\rangle & \langle t,1\rangle \\
\langle 1,t\rangle & \langle t,t\rangle \\
\end{bmatrix} \begin{bmatrix}
\ln \alpha \\
-\beta \\
\end{bmatrix}= \begin{bmatrix}
\langle \ln\left(\frac{100}{g(t)}-1\right) , 1\rangle \\
\langle \ln\left(\frac{100}{g(t)}-1\right) , t\rangle \\
\end{bmatrix}$$</span></p>
<p>Where the inner product is </p>
<p><span class="math-container">$$\langle f,g\rangle = \sum f_i g_i $$</span></p>
<p>(Why this inner product?)</p>
<p>Can someone tell me what reasoning leads to the inner products above, whether I did everything right, and how to finish the exercise?</p>
| Max | 2,633 | <p>Linear regression <strong>is</strong> linear algebra in disguise.</p>
<p>You are searching for a function <span class="math-container">$$l(t)= c_1 +c_2t$$</span> (where in your case <span class="math-container">$c_1= \ln \alpha$</span> and <span class="math-container">$c_2=-\beta$</span>), that is a linear combination of functions <span class="math-container">$v_1(t)=1$</span> and <span class="math-container">$v_2(t)=t$</span>. Your goal is to minimize <span class="math-container">$$e(l,h)=\sum (l(t_i)-h(t_i))^2$$</span> (where in your case <span class="math-container">$h(t)=\ln \left(\frac{100}{g(t)}-1 \right)$</span>).</p>
<p>The "sum of squares" formula is suggestive of the Pythagorean theorem/norm on some vector space. We want to view <span class="math-container">$e(l,h)$</span> as a squared distance on, say, the vector space <span class="math-container">$F$</span> of functions <span class="math-container">$f: \mathbb{R}\to\mathbb{R}$</span>, coming from the dot product</p>
<p><span class="math-container">$$<f,g>=\sum_i f(t_i) g(t_i)$$</span></p>
<p>(Recall that square distance between two vectors in a vector space with a dot product is <span class="math-container">$d(u,v)^2=<u-v, u-v>$</span>, so we recover <span class="math-container">$e=d^2$</span> from the dot product above.) </p>
<p>A slight problem is that on this vector space of functions <span class="math-container">$F$</span> the "distance" <span class="math-container">$d(l,h)=\sqrt{e(l,h)}$</span> is not really a distance, since it vanishes as soon as <span class="math-container">$l(t_i)=h(t_i)$</span> for all <span class="math-container">$i$</span> (in math-speak we get only a pseudometric, not a metric). We can either ignore this, or use the standard solution which is to work on the quotient space <span class="math-container">$V=F/F_0$</span> of functions modulo the subspace <span class="math-container">$F_0=\{f: \mathbb{R}\to\mathbb{R} \mid f(t_i)=0 \text{ for all } i\}$</span> -- the ones that are "distance zero from the origin". This has the advantage that <span class="math-container">$V$</span> is now a finite dimensional vector space (of dimension equal to the number of data points), so we can be more confident using standard linear algebra. Note that <span class="math-container">$V$</span> has the dot product <span class="math-container">$<f,g>=\sum_i f(t_i) g(t_i)$</span>.</p>
<p>In any case, we are now looking for a function <span class="math-container">$l(t)= c_1 +c_2t$</span> that is closest to <span class="math-container">$h(t)$</span> in the sense of the Euclidean distance <span class="math-container">$d$</span>, that is a point in subspace spanned by <span class="math-container">$1, t$</span> (in <span class="math-container">$F$</span>, or more precisely by their equivalence classes in <span class="math-container">$V$</span>). We can forget all the complicated setup, and just think: given a point <span class="math-container">$h$</span> and a plane spanned by two vectors, how do we find a point <span class="math-container">$l$</span> in the plane closest to <span class="math-container">$h$</span>? Of course we must project <span class="math-container">$h$</span> onto the plane! That is, <span class="math-container">$l$</span> must be such that <span class="math-container">$h-l$</span> is orthogonal to the plane, meaning orthogonal to both spanning vectors. Thus, we are looking for <span class="math-container">$l=c_1+tc_2$</span> such that <span class="math-container">$<h-l, 1>=0$</span> and <span class="math-container">$<h-l, t>=0$</span> (where the dot product is still <span class="math-container">$<f,g>=\sum_i f(t_i) g(t_i)$</span>). These are the equations in your question.</p>
<p>Now you just need to solve them. To do so, plug in <span class="math-container">$l=c_1+c_2 t$</span> and rewrite the equations as </p>
<p><span class="math-container">$<h,1>=c_1<1,1>+c_2<1,t>$</span></p>
<p><span class="math-container">$<h,t>=c_1<1,t>+c_2<t,t>$</span></p>
<p>This is a linear system with 2 equations and 2 unknowns, which you can write as the matrix equation -- the one you have in the question.</p>
<p>To finish the exercise just compute all the dot products (for example in your case <span class="math-container">$<1,1>=\sum_i 1 \cdot 1=7$</span>, <span class="math-container">$<1,t>=\sum_i 1 \cdot i=0+1+\ldots+6=21$</span>, <span class="math-container">$<t,t>=91$</span>, <span class="math-container">$<h, 1>=\sum_{i=0}^6 h(i)$</span>, <span class="math-container">$<h, t>=\sum_{i=0}^6 h(i) \cdot i$</span>) and solve the 2 by 2 linear system by whatever method you like (Gaussian elimination, or multiplying by <span class="math-container">$\begin{bmatrix}7&21\\21&91\end{bmatrix}^{-1}=\frac{1}{196}\begin{bmatrix}91&-21\\-21&7\end{bmatrix}$</span>, or even Cramer's rule, which Yuri used in another answer). You will get <span class="math-container">$c_1= \ln \alpha$</span> and <span class="math-container">$c_2=-\beta$</span>, and hence can solve for <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> as well.</p>
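<p>The whole computation can be carried out numerically. The sketch below applies the recipe to the data from the question, with $h_i = \ln(100/F_i - 1)$ computed from the data values $F(t_i)$, and solves the $2\times 2$ normal equations by Cramer's rule; the resulting fit tracks the data closely.</p>

```python
import math

t = list(range(7))                                 # t = 0, 1, ..., 6
F = [10, 15, 23, 33, 45, 58, 69]
h = [math.log(100.0 / f - 1.0) for f in F]         # linearized data

# Gram entries <1,1>, <1,t>, <t,t> and right-hand sides <h,1>, <h,t>
s1, st, stt = len(t), sum(t), sum(x * x for x in t)
sh, sht = sum(h), sum(x * y for x, y in zip(t, h))

det = s1 * stt - st * st                           # Cramer's rule for the 2x2 system
c1 = (sh * stt - st * sht) / det                   # c1 = ln(alpha)
c2 = (s1 * sht - st * sh) / det                    # c2 = -beta

alpha, beta = math.exp(c1), -c2
g = lambda x: 100.0 / (1.0 + alpha * math.exp(-beta * x))
assert max(abs(g(x) - f) for x, f in zip(t, F)) < 2.0   # the fit tracks the data
```

One obtains $\beta \approx 0.50$ and $\alpha \approx 9.2$.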
|
3,637,085 | <p>I have found this limit in <a href="https://oeis.org/A019609" rel="nofollow noreferrer">https://oeis.org/A019609</a> and I was wondering how to prove it (if it is actually correct):
<span class="math-container">$$\lim_{n\to\infty} \frac{4n}{a^2_n}=\pi e$$</span>
where
<span class="math-container">$$a_1=0,a_2=1, a_{n+2}=a_{n+1}+\frac{a_n}{2n}.$$</span></p>
<p>By computer evaluation, it is correct to <span class="math-container">$2$</span> digits after the decimal point at about <span class="math-container">$n\approx 24100$</span>, so if it is correct, it converges really slowly. </p>
<p>I've attempted to prove this by first considering generating function <span class="math-container">$f(x)=\sum_{n \geq 1}a_nx^n$</span> and then trying to get asymptotics of its coefficients. By using recurrence, we get <span class="math-container">$f(x)/x^2-1=f(x)/x+\sum \frac{a_n}{2n}x^n$</span>, and after differentiation we get differential equation which solves to <span class="math-container">$$f(x)=\frac{e^{-x/2}x^2}{(1-x)^{3/2}}.$$</span> Now I think this is a step away from getting asymptotics of <span class="math-container">$a_n$</span>, but I don't know how. Can anybody show how to finish this? Or maybe there is another way?</p>
<p>Also, I don't think it is useful, but here is at least a closed form obtained from <span class="math-container">$f(x)$</span> using the binomial series and the exponential series:
<span class="math-container">$$
a_n=\sum_{i=0}^{n-2}\frac{(-1)^n}{2^i i!}\binom{-3/2}{n-i-2}.
$$</span></p>
<p>Closest to this question seems to be <a href="https://math.stackexchange.com/questions/1664933/mirror-algorithm-for-computing-pi-and-e-does-it-hint-on-some-connection-b">Mirror algorithm for computing $\pi$ and $e$ - does it hint on some connection between them?</a>, where there are two sequences approaching <span class="math-container">$\pi$</span> and <span class="math-container">$e$</span> and solutions seem to use same approach using generating functions, so this seems to be on the right track.</p>
| md5 | 301,549 | <p>You can get the asymptotics of the coefficients of the generating function:</p>
<p><span class="math-container">$$f(z)=\frac{e^{-z/2} z^2}{(1-z)^{3/2}}$$</span></p>
<p>using standard tools of singularity analysis from analytic combinatorics (see e.g. section B.VI of <a href="http://algo.inria.fr/flajolet/Publications/book.pdf" rel="nofollow noreferrer">Flagolet and Sedgewick's book</a>). What you need is:</p>
<p><span class="math-container">$$[z^n](1-z)^{-\alpha}\underset{n\to\infty}{\sim} \frac{n^{\alpha-1}}{\Gamma(\alpha)}$$</span></p>
<p>And some transfer theorem, namely that under mild conditions on the regularity of <span class="math-container">$f$</span> on the unit disk (satisfied here), <span class="math-container">$f(z)\underset{z\to 1}{\sim} C(1-z)^{-\alpha}$</span> implies that <span class="math-container">$[z^n] f(z)\underset{n\to\infty}{\sim} Cn^{\alpha-1}/\Gamma(\alpha)$</span>. Basically it allows you to say directly:</p>
<p><span class="math-container">$$[z^n] f(z)\underset{n\to\infty}{\sim} e^{-1/2}\frac{\sqrt{n}}{\Gamma(3/2)}=2\sqrt{\frac{n}{e\pi}}$$</span></p>
<p>which gives the intended asymptotics.</p>
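<p>Since convergence is slow, a direct numeric check of the resulting asymptotics needs a fairly large $n$. The sketch below (an illustration, not part of the original answer; $N$ is chosen arbitrarily large) iterates the recurrence and compares $4n/a_n^2$ with $\pi e \approx 8.5397$.</p>

```python
import math

N = 250_000
a_prev, a_curr = 0.0, 1.0            # a_1 = 0, a_2 = 1
for n in range(1, N - 1):            # a_{n+2} = a_{n+1} + a_n / (2n)
    a_prev, a_curr = a_curr, a_curr + a_prev / (2 * n)

# a_curr now holds a_N; by the singularity analysis 4N / a_N^2 -> pi * e
approx = 4 * N / a_curr ** 2
assert abs(approx - math.pi * math.e) < 0.01
```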
|
3,733,229 | <p>In a probability space, it is said that a set of events should be <span class="math-container">$\sigma$</span>-algebra, meaning:</p>
<p><a href="https://i.stack.imgur.com/BgP8O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BgP8O.png" alt="definition of sigma-algebra" /></a>
<em>This is from <a href="https://cims.nyu.edu/%7Ecfgranda/pages/stuff/probability_stats_for_DS.pdf" rel="nofollow noreferrer">Probability and Statistics for Data Science by Prof. Fernandez-Granda</a></em></p>
<p>But a <span class="math-container">$\sigma$</span>-algebra does not necessarily contain all possible sets of outcomes and is not always equal to the power set of the sample space, so how does the third condition hold?</p>
| Philipp123 | 704,651 | <p><span class="math-container">$\Omega\in\mathcal{F}$</span> does not imply <span class="math-container">$A\in\mathcal{F}$</span> for every <span class="math-container">$A \subset \Omega$</span>.
A <span class="math-container">$\sigma$</span>-algebra is a set of sets. For example, it can happen that <span class="math-container">$\{1,2\}\in \mathcal{F}$</span> but <span class="math-container">$\{1\}\notin \mathcal{F}$</span>.</p>
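<p>A concrete finite illustration (a small sketch): on $\Omega=\{1,2,3\}$ the collection $\mathcal F = \{\emptyset,\{1\},\{2,3\},\Omega\}$ satisfies the $\sigma$-algebra axioms, yet the subset $\{2\}\subset\Omega$ is not in it.</p>

```python
omega = frozenset({1, 2, 3})
F = {frozenset(), frozenset({1}), frozenset({2, 3}), omega}

assert omega in F                                  # contains the whole space
assert all(omega - A in F for A in F)              # closed under complements
assert all(A | B in F for A in F for B in F)       # closed under unions
assert frozenset({2}) not in F                     # ...but {2} is not an event
```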
|
833,143 | <p>Wolfram alpha solves $\sqrt{x+1}\ge\sqrt{x+2}+\sqrt{x+3}$ for $x$, and answers $x=-2/3(3+\sqrt{3})$. How did it do it? Thanks!</p>
| georg | 144,937 | <p>I'd say it's okay: the number $X=-2/3(3+\sqrt{3})$ is the solution of the <strong>equation</strong> $\sqrt{x+1} = \sqrt{x+2}+\sqrt{x+3}$ (the boundary case of the inequality), taken with the principal complex square root:</p>
<p>$\sqrt{X+1}=i\sqrt{1/3(3+2\sqrt{3})}$</p>
<p>$\sqrt{X+2}+\sqrt{X+3}=i\sqrt{1/3 (3+2\sqrt{3})}$</p>
<p>The desired inequality comes with <strong>no explicitly stated condition</strong> that keeps the square roots real (such as $x \ge -1$), so there is no reason for Wolfram Alpha to exclude solutions with $x < -1$.</p>
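<p>This is easy to confirm with complex arithmetic (a small check, not part of the original answer): with the principal square root, both sides agree at $X=-\tfrac23(3+\sqrt3)$, while for every real $x\ge -1$ the strict inequality fails in the other direction.</p>

```python
import cmath
import math

X = -2.0 / 3.0 * (3.0 + math.sqrt(3.0))
lhs = cmath.sqrt(X + 1)
rhs = cmath.sqrt(X + 2) + cmath.sqrt(X + 3)
assert abs(lhs - rhs) < 1e-12          # equality holds (both sides purely imaginary)

# For real x >= -1 all three roots are real and the inequality fails
for k in range(100):
    x = -1.0 + 0.1 * k
    assert math.sqrt(x + 1) < math.sqrt(x + 2) + math.sqrt(x + 3)
```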
|
1,399,935 | <p>I'm reading Kleene's introduction to logic and in the beginning he mentions something that I have thought about for a while. The question is how can we treat logic mathematically without using logic in the treatment? He mentions that in order to deal with this what we do is that we separate the logic we are studying from the logic we are using to study it (which is the <em>object language and metalanguage, respectively</em>). How does this answer the question? Aren't we still using logic to build logic? </p>
<p>And I have a feeling that the answer is to some extent that we use simpler logics to build more complex ones, but then don't we run into a paradox of what 's the simplest logic, for won't any system of logic no matter how simple be a metalanguage for a simpler language? </p>
| mathreadler | 213,607 | <p>You can use arithmetic to do logic.</p>
<p>true is 1</p>
<p>false is 0</p>
<p>$a \cdot b \Leftrightarrow$ logical and</p>
<p>$1-a \Leftrightarrow$ logical not</p>
<p>$a+b>0 \Leftrightarrow$ logical inclusive or</p>
<p>$a+b=1 \Leftrightarrow$ logical exclusive or</p>
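<p>These encodings can be checked exhaustively over the two truth values (a small demonstration):</p>

```python
# Verify the arithmetic encodings of the four connectives over {0, 1}.
for a in (0, 1):
    for b in (0, 1):
        assert (a * b == 1) == (bool(a) and bool(b))        # and
        assert (1 - a == 1) == (not bool(a))                # not
        assert (a + b > 0) == (bool(a) or bool(b))          # inclusive or
        assert (a + b == 1) == (bool(a) != bool(b))         # exclusive or
```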
|
2,645,611 | <blockquote>
<p>Prove that:
<span class="math-container">$$\frac{(n+0)!}{0!}+\frac{(n+1)!}{1!}+...+\frac{(n+n)!}{n!}=\frac{(2n+1)!}{(n+1)!}$$</span></p>
</blockquote>
<h3>My work so far:</h3>
<p><span class="math-container">$$\frac{(n+0)!}{0!}+\frac{(n+1)!}{1!}+...+\frac{(n+n)!}{n!}=\frac{(2n+1)!}{(n+1)!}$$</span>
<span class="math-container">$$\frac{(n+0)!}{n!0!}+\frac{(n+1)!}{n!1!}+...+\frac{(n+n)!}{n!n!}=\frac{(2n+1)!}{n!(n+1)!}$$</span>
<span class="math-container">$$\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+...+\binom{2n}{n}=\binom{2n+1}{n}$$</span></p>
<p><em>How to prove the last equality?</em></p>
| Felix Marin | 85,343 | <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\sum_{k = 0}^{n}{\pars{n + k}! \over k!} & =
n!\sum_{k = 0}^{n}{n + k \choose k} =
n!\sum_{k = 0}^{n}{-n - 1 \choose k}\pars{-1}^{k} =
n!\sum_{k = 0}^{n}\pars{-1}^{k}{-n - 1 \choose -n - 1 - k}
\\[5mm] & =
n!\sum_{k = 0}^{n}\pars{-1}^{k}\bracks{z^{-n - 1 - k}}
\pars{1 + z}^{-n - 1} =
n!\bracks{z^{-n - 1}}{1 \over \pars{1 + z}^{n + 1}}
\sum_{k = 0}^{n}\pars{-z}^{k}
\\[5mm] & =
n!\bracks{z^{-n - 1}}{1 \over \pars{1 + z}^{n + 1}}\,
{\pars{-z}^{n + 1} - 1 \over -z - 1} =
-n!\bracks{z^{-n - 1}}
{1 - \pars{-1}^{n}z^{n + 1} \over \pars{1 + z}^{n + 2}}
\\[5mm] & =
-n!\bracks{z^{n + 1}}z
{z^{n + 1} - \pars{-1}^{n} \over \pars{1 + z}^{n + 2}} =
\pars{-1}^{n}\,n!\bracks{z^{n}}\pars{1 + z}^{-n - 2} =
\pars{-1}^{n}\,n!{-n - 2 \choose n}
\\[5mm] & =
\pars{-1}^{n}\,n!{2n + 1 \choose n}\pars{-1}^{n} = \bbx{\pars{2n + 1}! \over \pars{n + 1}!}
\end{align}</p>
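<p>Both forms of the identity (the factorial form from the question and the binomial form used above) are easy to sanity-check numerically:</p>

```python
from math import comb, factorial

for n in range(0, 25):
    # sum_{k=0}^{n} (n+k)!/k! = (2n+1)!/(n+1)!
    lhs = sum(factorial(n + k) // factorial(k) for k in range(n + 1))
    assert lhs == factorial(2 * n + 1) // factorial(n + 1)
    # equivalent binomial form (hockey-stick style)
    assert sum(comb(n + k, k) for k in range(n + 1)) == comb(2 * n + 1, n)
```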
|
1,837,220 | <p>In this post:
<a href="https://math.stackexchange.com/questions/1056058/computing-int-sqrt14x2-dx">Computing $\int \sqrt{1+4x^2} \, dx$</a>
someone mentioned Euler substitution to compute the following integral:</p>
<p>$$\int \sqrt{1+4x^2} \, dx$$</p>
<p>I tried to follow this advice and got a very nice result: I substituted $\sqrt{1+4x^2}=t-2x$, which after squaring and simplifying gives $x=\frac{t^2-1}{4t}$ and $t-2x=\frac{t^2+1}{2t}$; the derivative is then $\frac{dx}{dt}=\frac{t^2+1}{4t^2}$ and the whole integral becomes:</p>
<p>$$\int \sqrt{1+4x^2} \, dx = \int \frac{(t^2+1)^2}{8t^3} \, dt$$</p>
<p>Could you please check my solution? It seems a lot easier than all these trigonometric substitutions (so easy that it is suspicious...).
Thanks in advance.</p>
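<p>As a numeric sanity check of the substitution (an added sketch, with a hand-rolled Simpson rule): with $t = 2x+\sqrt{1+4x^2}$ mapping $[0,1]$ to $[1,\,2+\sqrt5]$, both integrals give the same value, which also matches the closed form $\tfrac{\sqrt5}{2}+\tfrac14\operatorname{asinh}(2)$.</p>

```python
import math

def simpson(f, a, b, n=10_000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# Original integral over [0, 1] ...
I1 = simpson(lambda x: math.sqrt(1 + 4 * x * x), 0.0, 1.0)
# ... and the transformed one over t in [1, 2 + sqrt(5)]
I2 = simpson(lambda t: (t * t + 1) ** 2 / (8 * t ** 3), 1.0, 2.0 + math.sqrt(5.0))

assert abs(I1 - I2) < 1e-9
assert abs(I1 - (math.sqrt(5) / 2 + math.asinh(2) / 4)) < 1e-9
```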
| Behrouz Maleki | 343,616 | <p><strong>Hint</strong>
$$x=\frac{1}{2}\tan\theta$$
$$I=\frac12\int\sec^3\theta\,d\theta $$</p>
|
1,528,235 | <p>Recall that <a href="http://en.wikipedia.org/wiki/Tetration" rel="noreferrer">tetration</a> ${^n}x$ for $n\in\mathbb N$ is defined recursively: ${^1}x=x,\,{^{n+1}}x=x^{({^n}x)}$. </p>
<p>Its inverse function with respect to $x$ is called <a href="http://en.wikipedia.org/wiki/Tetration#Super-root" rel="noreferrer">super-root</a> and denoted $\sqrt[n]y_s$ (the index $_s$ is not a variable, but is part of the notation — it stands for "super"). For $y>1, \sqrt[n]y_s=x$, where $x$ is the unique solution of ${^n}x=y$ satisfying $x>1$. It is known that $\lim\limits_{n\to\infty}\sqrt[n]2_s=\sqrt{2}$. We are interested in the convergence speed. It appears that the following limit exists and is positive:
$$\mathcal L=\lim\limits_{n\to\infty}\frac{\sqrt[n]2_s-\sqrt2}{(\ln2)^n}\tag1$$
Numerically,
$$\mathcal L\approx0.06857565981132910397655331141550655423...\tag2$$</p>
<hr>
<p>Can we prove that the limit $(1)$ exists and is positive? Can we prove that the digits given in $(2)$ are correct? Can we find a closed form for $\mathcal L$ or at least a series or integral representation for it?</p>
| mick | 39,261 | <p>$$\mathcal L=\lim\limits_{n\to\infty}\frac{\sqrt[n]2_s-\sqrt2}{(\ln2)^n}\tag1$$</p>
<p>This limit is only possible if</p>
<p>$$\lim\limits_{n\to\infty}\frac{\sqrt[n+1]2_s - \sqrt 2}{\sqrt[n]2_s - \sqrt 2}= \ln2$$</p>
<p>To show this, use l'Hôpital's rule.</p>
<p>With $f(x) = x^{f(x)}$ we get:</p>
<p>$ \frac{D x^{f(x)}} {f ' (x)} = \frac{ \sqrt 2 ^2 2\ln(\sqrt2) }{2} = \ln2$.</p>
<p>QED</p>
<p>This is part of the answer that justifies the RHS of the limit</p>
<p>$$\mathcal L=\lim\limits_{n\to\infty}\frac{\sqrt[n]2_s-\sqrt2}{(\ln2)^n}\tag1$$</p>
<p>With thanks to Tommy1729 for hints.</p>
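<p>The successive-ratio claim can be tested numerically. The sketch below (with ad hoc helpers <code>tower</code> and <code>super_root</code>, a plain bisection) computes $\sqrt[n]2_s$ for moderate $n$ and checks that the ratio of successive errors is already close to $\ln 2$.</p>

```python
import math

def tower(x, n):
    """Power tower x^x^...^x of height n."""
    v = x
    for _ in range(n - 1):
        v = x ** v
        if v > 100.0:          # tower is blowing up; enough for the bisection test
            return v
    return v

def super_root(y, n, lo=1.0, hi=1.5):
    """Solve tower(x, n) = y for x in (lo, hi); tower is increasing in x > 1."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tower(mid, n) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d20 = super_root(2.0, 20) - math.sqrt(2.0)
d21 = super_root(2.0, 21) - math.sqrt(2.0)
ratio = d21 / d20                       # should approach ln 2 ≈ 0.693
assert d20 > 0 and d21 > 0
assert abs(ratio - math.log(2.0)) < 0.1
```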
|
4,587,657 | <p>The call function is defined as</p>
<p><span class="math-container">$$
\text{call}: \begin{cases}
(\mathbb{R}^{I}\times I) \to \mathbb{R} \\
(f,x) \mapsto f(x)
\end{cases}
$$</span></p>
<p>is "<span class="math-container">$\text{call}$</span>" a measurable function? In other words: for a random field <span class="math-container">$Z:\mathbb{R}^n\to\mathbb{R}$</span> and a random location <span class="math-container">$X\in\mathbb{R}^n$</span>, is <span class="math-container">$Z(X)$</span> a random variable?</p>
<p>To address a comment: This is not simply a question of chaining measurable functions, as both <span class="math-container">$Z$</span> and <span class="math-container">$X$</span> depend on <span class="math-container">$\omega$</span>. That is, we really have
<span class="math-container">$Z(\omega):\mathbb{R}^n\to\mathbb{R}$</span>, so <span class="math-container">$\omega\mapsto Z(\omega)(X(\omega))=\text{call}(Z(\omega), X(\omega))$</span> needs to be measurable.</p>
<p>I would guess this should be correct though, at least for continuous random fields, but I am not sure what to search for. In a programming context "call" would be appropriate (cf. <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/call" rel="nofollow noreferrer">https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/call</a>). I am mostly looking for a reference to cite.</p>
| FShrike | 815,585 | <p>This post assumes <span class="math-container">$\Bbb R^I$</span> is taken with the compact-open topology, and is given the Borel sigma-algebra. <span class="math-container">$\Bbb R^I$</span> is used to denote the set of <em>continuous</em> functions <span class="math-container">$I\to\Bbb R$</span>, so only addresses a special subcase of the problem.</p>
<hr />
<p><a href="https://assets.pubpub.org/6d1dqgg9/51597355090422.pdf" rel="nofollow noreferrer">Reference</a>.</p>
<p>You mean the domain to be <span class="math-container">$\Bbb R^I\times I$</span>, by the way.</p>
<p>The answer is <em>yes</em> since <span class="math-container">$I$</span> is locally compact Hausdorff in the sense that it is Hausdorff and there is a compact local base at every point. That means the evaluation is continuous, so in particular it is Borel measurable.</p>
<p>That is one condition for the exponential topology on <span class="math-container">$X^I$</span> (for any other space <span class="math-container">$X$</span>) to exist. In fact, the compact-open topology on <span class="math-container">$V^U$</span> is exponential iff. the evaluation (‘call’): <span class="math-container">$$V^U\times U\overset{\rm{ev}}{\longrightarrow}V$$</span>Is continuous.</p>
<p>By an ‘exponential topology’, I mean a topology on the set <span class="math-container">$V^U$</span> such that the adjunction pairing: <span class="math-container">$$f\mapsto\mathrm{ev}\circ(f\times1):W\times U\to V$$</span>Among continuous <span class="math-container">$f:W\to V^U$</span> defines a bijection <span class="math-container">$C(W,V^U)\cong C(W\times U,V)$</span> for all spaces <span class="math-container">$W$</span>. Under certain conditions, such as when one works in the category of compactly generated Hausdorff spaces, that last bijection is even a homeomorphism!</p>
<p>Note: the book I reference does dip a little into compactly generated Hausdorff spaces, but it doesn’t do so very well or in great detail / rigour. Also, be warned that conflicting definitions of ‘compactly generated’ can cause serious problems! There is a paper by Strickland, if I remember correctly, that covers compactly generated spaces and their nice categorical properties in good detail.</p>
|
2,259,109 | <p>If the value of $f(z_0)$ or $f^\prime(z_0)$ is a complex number, is $f(z)$ then analytic at $z_0$?</p>
| PM. | 416,252 | <p>$f(z)$ is analytic at a point if it is <em>differentiable</em> at the point, <em>and</em> on some region containing the point.</p>
|
64,406 | <p>It's often said that there are only two nonabelian groups of order 8 up to isomorphism, one is the quaternion group, the other given by the relations $a^4=1$, $b^2=1$ and $bab^{-1}=a^3$. </p>
<p>I've never understood why these are the only two. Is there a reference or proof walkthrough on how to show any nonabelian group of order 8 is isomorphic to one of these? </p>
| Arturo Magidin | 742 | <p>Here's the proof that there are exactly five nonisomorphic groups of order $p^3$ for every prime $p$, as it appears in Marshall Hall's <strong>Theory of Groups</strong>.</p>
<ol>
<li><p>The abelian case is easy: you have $C_{p^3}$, $C_{p^2}\times C_p$, and $C_{p}\times C_p\times C_p$.</p></li>
<li><p>The nonabelian case. There can be no element of order $p^3$, because then the group is cyclic. </p>
<ul>
<li><p>If all elements are of order $p$, then $p$ must be odd (otherwise the group is abelian). The center of $G$ is of order $p$ (since the quotient must be of order $p^2$ and isomorphic to $C_p\times C_p$); let $x$ and $y$ be elements of $G$ whose images generate the two cyclic factors of $G/Z(G)$. Then $x^p = y^p = 1$, and $x^{-1}y^{-1}xy\neq 1$ (otherwise, $G$ would be abelian), but must be central; so $z=x^{-1}y^{-1}xy$ generates $Z(G)$. So $G$ is given by
$$G = \langle x,y,z\mid x^p=y^p=z^p=1, xy=yxz,\ xz=zx,\ yz=zy\rangle.$$</p></li>
<li><p>If there is an element $a$ of order $p^2$, then $A=\langle a\rangle$ is a maximal abelian subgroup of $G$, and it is normal (since its index is the smallest prime that divides $|G|$). Let $b\notin A$. Then $b^p\in A$, and $b^{-1}ab=a^r$ for some $r$; since $G$ is nonabelian, $r\neq 1$. Since $b^p$ commutes with $a$, $a^{r^p}=a$, so $r^p\equiv 1\pmod{p^2}$. From Fermat's Little Theorem, $r^p\equiv r\pmod{p}$, so $r\equiv 1\pmod{p}$. </p>
<p>Write $r=1+sp$, and let $j$ be such that $js\equiv 1\pmod{p}$. Then
$$b^{-j}ab^j = a^{r^j} = a^{(1+sp)^j} = a^{1+spj} =a^{1+p}.$$
Since $(j,p)=1$, $b^j\notin A$, so replacing $b$ with $b^j$, we may assume that $b^{-1}ab=a^{1+p}$.</p>
<p>Now, $b^p = a^t$, and $t$ must be a multiple of $p$, because $b$ is not of order $p^3$. Write $t=up$, so $b^p=a^{up}$. Then we have:
$$\begin{align*}
(ba^{-u})^p &= b^pa^{-u(1+(1+p)+(1+p)^2 + \cdots + (1+p)^{p-1})}\\
&= b^p a^{-up-up(1+2+\cdots + p-1)}\\
&= b^p a^{-up-up\binom{p}{2}}.
\end{align*}$$</p>
<ul>
<li><p>If $p$ is odd, then $up\binom{p}{2}$ is a multiple of $p^2$, so we get $(ba^{-u})^p = b^pa^{-up} = b^pb^{-p} = 1$. Setting $c=ba^{-u}$ we get $c^{-1}ac = b^{-1}ab$, so the group is presented by
$$\langle a,c\mid a^{p^2} = c^p = 1,\ ac = ca^{1+p}\rangle.$$</p></li>
<li><p>If $p$ is $2$, however, we get $(ba^{-u})^2 = b^2a^{-up-up} = b^2$. We have two possibilities: it could be that $b^2=1$, in which case we get the same presentation as above:
$$\langle a,b\mid a^{4} = b^2 = 1,\ ab=ba^3\rangle.$$
Or it could be that $b^2=a^2$; we <em>must</em> have $b^{-1}ab=a^3$ (it cannot equal $a$, because then $a$ and $b$ commute and $G$ is abelian), so the group is given by
$$\langle a,b\mid a^4=1,\ a^2=b^2,\ ab=ba^3\rangle.$$</p></li>
</ul></li>
</ul></li>
</ol>
<p>Burnside uses essentially the same approach, though he only deals explicitly with odd $p$; the classification for groups of order $p^3$ takes up about two pages (one paragraph, but he invokes results covering two previous pages). He then proceeds to those of order $p^4$; that takes four and a half pages (plus invoking stuff that covers at least one previous page). He only lists those of order $2^3$ and $2^4$. </p>
|
1,634,807 | <p>The full question:</p>
<p>Given an equivalence relation $\sim$ on $\Bbb N$ defined by $a \sim b$, for $a,b\in\Bbb N$, if and only if $a=b\cdot 10^k$ for some $k\in\Bbb Z$, give a complete set of equivalence class representatives.</p>
<p>I am having trouble visualising these. I'm thinking you would need the whole set $\Bbb Z$. Does anyone have any ideas?</p>
<p>Help would be greatly appreciated! (this is my first post as well, so please say if anything is unclear)</p>
| Ethan Bolker | 72,858 | <p>Here's another answer that may be easier to "visualize". Your equivalence relation says that $a$ and $b$ are equivalent if you can get $a$ by appending zeroes to $b$ or deleting them from the end of $b$. That's the same as saying $a$ and $b$ start with the same sequence of digits before the trailing zeroes (if any) begin. Then the integers with no trailing zeroes represent each class just once.</p>
<p>Since $0$ is in a class by itself, it's the representative of its class.</p>
<p>Those are exactly the numbers in @dREaM 's correct answer.</p>
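<p>In code the canonical representative is just "strip trailing zeros" (a small sketch; the helper name <code>rep</code> is ad hoc):</p>

```python
def rep(n):
    """Strip trailing zeros; 0 is its own (singleton-class) representative."""
    while n != 0 and n % 10 == 0:
        n //= 10
    return n

# numbers related by a power of 10 share a representative
assert rep(45) == rep(450) == rep(45000) == 45
assert rep(10) == rep(100) == 1
assert rep(7) == 7 and rep(0) == 0
assert rep(45) != rep(451)
```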
|
641,744 | <p>I am obliged to find rank of matrix $A$ depending on $ p $.</p>
<p>I know how to do this using Gauss elimination method but I would like to try solve this using minors. I know that the rank of matrix is equal to degree of the biggest non-zero minor.</p>
<p>$A=\left(
\begin{array}{ccc}
p-1&p-1&1&1\\
1&p^{2}-1&1&p-1\\
1&p-1&p-1& 1
\end{array}
\right)$</p>
<p>So I am taking this sub-matrix:
$A_{1}=\left(
\begin{array}{ccc}
p-1&1&1\\
1&1&p-1\\
1&p-1& 1
\end{array}
\right)$</p>
<p>I compute $\det(A_{1})=-p^{2}+5p-6$.</p>
<p>$-p^{2}+5p-6=0 \iff p=2 \lor p = 3 $</p>
<p>So now I have two candidate values where the rank of the matrix might not equal 3. Now I will test the rank for $p=2$ and $p=3$. Do I have to check only one minor? Am I doing this right?</p>
| Bman72 | 119,527 | <p>This is your matrix. You have already checked that the rank of the matrix is 3 for $p \neq 2,3$.</p>
<p>$$A=\left(
\begin{array}{ccc}
p-1&p-1&1&1\\
1&p^{2}-1&1&p-1\\
1&p-1&p-1& 1
\end{array}
\right)$$</p>
<p>Now you have to check what happens for p=2 or 3. For p = 2 you have</p>
<p>$$A=\left(
\begin{array}{ccc}
1&1&1&1\\
1&3&1&1\\
1&1&1& 1
\end{array}
\right)\to \left(
\begin{array}{ccc}
1&1&1&1\\
0&0&0&2\\
0&0&0& 0
\end{array}
\right)$$</p>
<p>I've obtained the second matrix by subtracting the first line from each of the other lines and by moving the second column to the end of the matrix. You can now easily see that the rank of your matrix is $2$ for $p=2$. For $p = 3$ you obtain:</p>
<p>$$A=\left(
\begin{array}{cccc}
2&2&1&1\\
1&8&1&2\\
1&2&2& 1
\end{array}
\right) \to \left(
\begin{array}{cccc}
2&2&1&1\\
0&14&1&3\\
0&-6&1&-1
\end{array}
\right)$$</p>
<p>where I've replaced the second line by twice the second line minus the first line, and the third line by the third line minus the second line.</p>
<p>$$\left(
\begin{array}{cccc}
2&2&1&1\\
0&14&1&3\\
0&-6&1&-1
\end{array}
\right) \to \left(
\begin{array}{cccc}
2&2&1&1\\
0&14&1&3\\
0&0&20&4
\end{array}
\right)$$</p>
<p>where I've replaced the third line by $14$ times the third line plus $6$ times the second line. You now see that for $p=3$ the rank is $3$</p>
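If you want to double-check the whole case analysis by machine, here is a sketch using exact rational row reduction (the `rank` helper and the sampled values of $p$ are mine, not part of the answer):

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact Gauss elimination over the rationals (no float error)."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def A(p):
    return [[p - 1, p - 1, 1, 1],
            [1, p * p - 1, 1, p - 1],
            [1, p - 1, p - 1, 1]]

for p in [-1, 0, 1, 2, 3, 4]:
    print(p, rank(A(p)))   # rank 3 for every sampled p except p = 2, where it is 2
```

The output confirms the conclusion: over the sampled values, the rank drops to $2$ only at $p=2$.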
|
4,347,174 | <p>In the book Superlinear Parabolic Problems Blow-up, Global Existence and Steady States, page 493 this equation appears in which the book says it uses Young's inequality</p>
<p><span class="math-container">$$ |\Omega| u^p \leq ku^q +\epsilon(k)u $$</span></p>
<p>where <span class="math-container">$\epsilon(k) = C(|\Omega|,p,q) k^{{-(p-1)}/{(q-p)}}$</span>.
The only things that are given are that <span class="math-container">$p<q$</span> (<span class="math-container">$p$</span> and <span class="math-container">$q$</span> are not conjugate exponents) and <span class="math-container">$k>0$</span> (<span class="math-container">$k$</span> can be taken large enough). I am failing to see how this is an application of the classical <a href="https://en.wikipedia.org/wiki/Young%27s_inequality_for_products" rel="nofollow noreferrer">Young inequality</a>.
Is this really an application of the classical Young inequality, or is it an application of some modification of it?</p>
<p><strong>attempt:</strong> The best I've achieved so far using Young's inequality was</p>
<p><span class="math-container">$$ u^p = [\left( k \frac{(p-1)}{q} \right)^{-(p-1)/q}u][\left( k \frac{(p-1)}{q} \right)^{(p-1)/q}u^{p-1}] \leq \dfrac{q}{(p-1)}[\left( k \frac{(p-1)}{q} \right)^{(p-1)/q}u^{p-1}]^{q/(p-1)} + \dfrac{q}{(q-p+1)}[\left( k \frac{(p-1)}{q} \right)^{-(p-1)/q}u]^{q/(q-p+1)} \leq ku^q+Ck^{-(p-1)/(q-p+1)}u^{q/(q-p+1)}.$$</span></p>
| Glitch | 74,045 | <p>I assume that <span class="math-container">$1 \le p$</span> in the problem, as I believe the inequality is false without this assumption. Let's assume this.</p>
<p>The estimate is trivial if <span class="math-container">$p=1$</span> since you're free to make the constant on the second term larger than <span class="math-container">$|\Omega|$</span>.</p>
<p>Assume then that <span class="math-container">$1 < p < q$</span>. Then write<br />
<span class="math-container">$$
u^{p} = u^{p-1/r} u^{1/r}
$$</span>
for <span class="math-container">$1 < r < \infty$</span> and apply the standard Young inequality to bound
<span class="math-container">$$
u^{p-1/r} u^{1/r} \le \frac{1}{r'} u^{r'(p-1/r)} + \frac{1}{r} u.
$$</span>
Now, <span class="math-container">$r' = r/(r-1)$</span>, so
<span class="math-container">$$
r'(p-1/r) = \frac{rp-1}{r-1}.
$$</span>
We now want to choose <span class="math-container">$1 < r < \infty$</span> such that <span class="math-container">$(rp-1)/(r-1) =q$</span>. Simple algebra shows this is equivalent to
<span class="math-container">$$
r = \frac{q-1}{q-p}
$$</span>
and this satisfies <span class="math-container">$r>1$</span> precisely due to the assumptions that <span class="math-container">$1 < p < q$</span>. We have thus proved that
<span class="math-container">$$
u^p \le \frac{p-1}{q-1} u^q + \frac{q-p}{q-1} u.
$$</span>
We're now most of the way there as we have our "basic" Young inequality. We now apply this to <span class="math-container">$\varepsilon u$</span>, and divide through by <span class="math-container">$\varepsilon^p$</span> to see that
<span class="math-container">$$
u^p \le \frac{p-1}{q-1} \varepsilon^{q-p} u^q + \frac{q-p}{q-1} \varepsilon^{1-p} u.
$$</span>
We can now get your desired bound by choosing <span class="math-container">$\varepsilon$</span> appropriately.</p>
|
4,164,820 | <p>CGM = Continuous Geometric Mean. Heuristics and mathematics are described in:</p>
<ul><li><a href="https://math.stackexchange.com/questions/4162869/subset-as-arithmetic-mean-of-geometric-means-not-really">Subset as arithmetic mean of geometric means. Not really?</a></li></ul>
A shortcut to the question as presented here is:
<span class="math-container">$$
\operatorname{CGM}(\vec{r}) = \exp\left(-\!\!\!\!\!\!\int_0^1 \ln(\left|\vec{p}(t)-\vec{r}\right|^2)\,dt\right)
$$</span>
where <span class="math-container">$\,\vec{p}(t)\,$</span> is a <b>circle</b> in the plane and <span class="math-container">$\,\vec{r} = (x,y)\,$</span> is another point in the plane.
<br>A similar exercise for the circle, as done previously for a straight line segment, leads to the following expression:
<span class="math-container">$$
\operatorname{CGM}(\xi,\eta)=
\exp\left( \frac{1}{2\pi}-\!\!\!\!\!\!\int_0^{2\pi}\ln\left|[\cos(t)-\xi]^2+[\sin(t)-\eta]^2\right|\,dt\right)
$$</span>
where dimensionless <span class="math-container">$\xi=x/R$</span> , <span class="math-container">$\eta=y/R$</span> ; <span class="math-container">$(x,y)=$</span> coordinates in the plane, <span class="math-container">$R=$</span> radius of circle.
<br>When this expression is fed into <a href="https://www.maplesoft.com/products/Maple/" rel="nofollow noreferrer">MAPLE</a> (version 8),
then surprisingly we get outcome <b>one</b>, independent of any <span class="math-container">$(x,y,R)$</span> values:
<pre>
> exp(int(ln((cos(t)-xi)^2+(sin(t)-eta)^2),t=0..2*Pi,'continuous')/(2*Pi));
<pre><code> 1
</code></pre>
</pre>
On the other hand trying to proceed by substituting polar coordinates:
<span class="math-container">$$
x = r\cos(\phi) \quad ; \quad y = r\sin(\phi) \\
\exp\left(\frac{1}{2\pi}-\!\!\!\!\!\!\int_0^{2\pi} \ln\left|\left[\cos(t)-\frac{r}{R}\cos(\phi)\right]^2
+\left[\sin(t)-\frac{r}{R}\sin(\phi)\right]^2\right|\,dt\right)
\quad \Longrightarrow \\ \operatorname{CGM}(\rho) =
\exp\left(\frac{1}{2\pi}-\!\!\!\!\!\!\int_0^{2\pi} \ln\left|\rho^2+1-2\rho.\cos(t-\phi)\right|\,dt\right)
\quad \mbox{with} \quad \rho=r/R
$$</span>
Because of circle symmetry we can choose <span class="math-container">$\,\phi=0\,$</span> without loss of generality.<br>
Care should be taken if a zero is present in the argument of the logarithm, resulting in a singularity at that place:
<span class="math-container">$$
\rho^2+1-2\rho.\cos(t)=0 \quad \Longrightarrow \quad \rho = \cos(t)\pm\sqrt{\cos^2(t)-1}
\\ \Longrightarrow \quad t = 0 \quad \mbox{and} \quad \rho = 1
$$</span>
There are two approaches to this special case.<br>The easiest one is to ignore the Cauchy principal value and remember that
the special case represents the value zero, according to the heuristics in the referenced
<a href="https://math.stackexchange.com/questions/4162869/subset-as-arithmetic-mean-of-geometric-means-not-really">Q&A</a>.
<br>The second approach is to accept the Cauchy principal value as being essential.
With our numerical method, being the <i>trapezium rule</i>, integration at the interval <span class="math-container">$[0,\Delta t]$</span> is then calculated by:
<span class="math-container">$$
\frac{f_1+f_2}{2}\Delta t \quad \mbox{with} \quad f_2 = \ln|2-2\cos(\Delta t)| \approx \ln|2-2(1-\Delta t^2/2)| = 2\ln|\Delta t|
$$</span>
On the other hand the integral at that place can be calculated more (or less) exactly:
<span class="math-container">$$
\frac{f_1+f_2}{2}\Delta t \approx \int_0^{\Delta t}\ln|2-2\cos(t)|\,dt \approx
\int_0^{\Delta t}2\ln|t|\,dt = 2(\Delta t\,\ln|\Delta t| - \Delta t) \\
\frac{f_1+2\ln|\Delta t|}{2}\Delta t \approx 2(\ln|\Delta t|-1)\Delta t \quad \Longrightarrow \quad f_1 \approx 2\ln|\Delta t|-4
$$</span>
Having programmed all this (in Delphi Pascal) we get the following output.
Mind the two bold printed values at <span class="math-container">$I(1.0)$</span> : the first one without and the second one with the Cauchy principal value.
<pre>
NUMERICALLY CONJECTURED
I(0.0) = 1.00000000000000E+0000 = 1.00000000000000E+0000
I(0.1) = 1.00000000000000E+0000 = 1.00000000000000E+0000
I(0.2) = 1.00000000000000E+0000 = 1.00000000000000E+0000
I(0.3) = 1.00000000000000E+0000 = 1.00000000000000E+0000
I(0.4) = 1.00000000000000E+0000 = 1.00000000000000E+0000
I(0.5) = 1.00000000000000E+0000 = 1.00000000000000E+0000
I(0.6) = 9.99999999999999E-0001 = 1.00000000000000E+0000
I(0.7) = 1.00000000000000E+0000 = 1.00000000000000E+0000
I(0.8) = 1.00000000000000E+0000 = 1.00000000000000E+0000
I(0.9) = 1.00000000000000E+0000 = 1.00000000000000E+0000
<b>I(1.0) = 0.00000000000000E+0000 = 0.00000000000000E+0000
I(1.0) = 9.99967575938953E-0001 = 1.00000000000000E+0000</b>
I(1.1) = 1.21000000000000E+0000 = 1.21000000000000E+0000
I(1.2) = 1.44000000000000E+0000 = 1.44000000000000E+0000
I(1.3) = 1.69000000000000E+0000 = 1.69000000000000E+0000
I(1.4) = 1.96000000000000E+0000 = 1.96000000000000E+0000
I(1.5) = 2.24999999999999E+0000 = 2.25000000000000E+0000
I(1.6) = 2.55999999999999E+0000 = 2.56000000000000E+0000
I(1.7) = 2.89000000000002E+0000 = 2.89000000000000E+0000
I(1.8) = 3.24000000000002E+0000 = 3.24000000000000E+0000
I(1.9) = 3.61000000000001E+0000 = 3.61000000000000E+0000
I(2.0) = 4.00000000000000E+0000 = 4.00000000000000E+0000
I(2.1) = 4.41000000000001E+0000 = 4.41000000000000E+0000
I(2.2) = 4.84000000000001E+0000 = 4.84000000000000E+0000
I(2.3) = 5.29000000000000E+0000 = 5.29000000000000E+0000
I(2.4) = 5.75999999999996E+0000 = 5.76000000000000E+0000
I(2.5) = 6.24999999999998E+0000 = 6.25000000000000E+0000
I(2.6) = 6.75999999999999E+0000 = 6.76000000000001E+0000
I(2.7) = 7.29000000000010E+0000 = 7.29000000000001E+0000
I(2.8) = 7.84000000000007E+0000 = 7.84000000000001E+0000
I(2.9) = 8.41000000000004E+0000 = 8.41000000000001E+0000
I(3.0) = 9.00000000000000E+0000 = 9.00000000000001E+0000
</pre>
The numerical experiments give rise to the following <b>conjecture</b> (contradictory on purpose):
<span class="math-container">$$
\operatorname{CGM}(\rho) = \begin{cases} 1 & \mbox{for} \quad \rho \le 1 \\ 0 & \mbox{for} \quad \rho = 1 \\
\rho^2 & \mbox{for} \quad \rho \ge 1 \end{cases}
$$</span>
Is there someone who can prove this conjecture analytically?<br>I have seen something with complex analysis in
<a href="https://groups.google.com/g/sci.math/c/j23Nz9Tzlzs" rel="nofollow noreferrer"><i>sci.math</i></a> a long time ago (2008) but
didn't quite understand the argument.
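For whatever it is worth, the conjecture away from ρ = 1 is also reproduced by a plain midpoint rule, which never samples the singular point t = 0 (a sketch, not part of the original experiment):

```python
import math

def cgm_circle(rho, n=20000):
    """Midpoint-rule value of exp( (1/2pi) * int_0^{2pi} ln|rho^2+1-2 rho cos t| dt ),
    valid for rho != 1, where the integrand is smooth and periodic."""
    h = 2 * math.pi / n
    s = sum(math.log(abs(rho * rho + 1 - 2 * rho * math.cos((k + 0.5) * h)))
            for k in range(n))
    return math.exp(s * h / (2 * math.pi))

for rho in [0.3, 0.9, 1.5, 3.0]:
    print(rho, cgm_circle(rho), max(1.0, rho * rho))   # last two columns agree
```

For smooth periodic integrands the midpoint rule converges very fast, so the match with the conjectured values $1$ (for ρ < 1) and ρ² (for ρ > 1) is essentially to machine precision.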
<p>
<h3>CGM for a Sphere</h3>
In view of further development of the theory, it seems to be advantageous to define the Continuous Geometric Mean slightly differently, namely as:
<span class="math-container">$$
\operatorname{CGM}(\vec{r}) = \exp\left(- -\!\!\!\!\!\!\int_0^1 \ln(\left|\vec{p}(t)-\vec{r}\right|^2)\,dt\right)
$$</span>
For our unit circle (with Cauchy principal value) then we have instead:
<span class="math-container">$$
\operatorname{CGM}(\rho) = \begin{cases} 1 & \mbox{for} \quad \rho \le 1 \\ 1/\rho^2 & \mbox{for} \quad \rho \ge 1 \end{cases}
$$</span>
We seek to generalize the Continuous Geometric Mean for a circle to the same for the surface of a unit sphere:
<span class="math-container">$$
\operatorname{CGM}(\vec{r}) = \exp\left(-\iint \ln(\left|\vec{p}-\vec{r}\right|^2)\,dA/(4\pi)\right)
$$</span>
Expressed in spherical coordinates and specializing (without loss of generality) for <span class="math-container">$\,\vec{r} = (0,0,\rho)\,$</span>:
<span class="math-container">$$
\vec{p}-\vec{r} = \begin{bmatrix} \sin(\theta)\cos(\phi) \\ \sin(\theta)\sin(\phi) \\ \cos(\theta)-\rho \end{bmatrix}
\quad ; \quad dA = \sin(\theta)\,d\theta\,d\phi \\
\left|\vec{p}-\vec{r}\right|^2 = \sin^2(\theta)+\left[\,\cos(\theta)-\rho\,\right]^2 = 1-2\rho\cos(\theta)+\rho^2 \\
\iint \ln(\left|\vec{p}-\vec{r}\right|^2)\,dA / (4\pi)=
\frac{2\pi}{4\pi} -\!\!\!\!\!\!\!\int_0^\pi \ln\left|\rho^2+1-2\rho\,\cos(\theta)\right|\,\sin(\theta)\,d\theta
\\ \Longrightarrow \quad
\operatorname{CGM}(\rho) = \exp\left(-\frac{1}{2} -\!\!\!\!\!\!\!\int_0^{\pi} \ln\left|\rho^2+1-2\rho\,\cos(\theta)\right|\,\sin(\theta)\,d\theta\right)
$$</span>
Surprisingly enough, integration is much easier in 3-D, when compared with the 2-D case. Substitution of <span class="math-container">$\,t = \cos(\theta)\,$</span> gives a short route to the solution:
<span class="math-container">$$
-\!\!\!\!\!\!\int_0^{\pi} \ln\left|\rho^2+1-2\rho\,\cos(\theta)\right|\,\sin(\theta)\,d\theta =
-\!\!\!\!\!\!\int_{-1}^{+1}\ln\left|\rho^2+1-2\rho\,t\right|\,dt = \\
\frac{1}{2\rho}\left[\;u\ln|u|-u\;\right]_{u=1+\rho^2-2\rho}^{u=1+\rho^2+2\rho} \quad \Longrightarrow \\
\operatorname{CGM}(\rho) = \exp\left(-\left[(1+\rho)^2\ln\left|(1+\rho)^2\right|-(1-\rho)^2\ln\left|(1-\rho)^2\right|-4\rho\right]/(4\rho)\right)
$$</span>
If we make a graph of the two functions - red for 2-D, black for 3-D - then there is another surprise: the graphs coincide for large values of the normed radius <span class="math-container">$\rho$</span>.
<p><a href="https://i.stack.imgur.com/dDg8Y.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dDg8Y.jpg" alt="enter image description here" /></a></p>
<p>Confirmation is found with MAPLE, series expansion for <span class="math-container">$q=1/\rho\to 0$</span>:</p>
<pre>
> f(q) := exp(-((1+1/q)^2*ln((1+1/q)^2)-(1-1/q)^2*ln((1-1/q)^2)-4/q)/(4/q));
> series(f(q),q=0);
</pre>
<p>Output:
<span class="math-container">$$
q^{2}-\frac{1}{3}\,q^{4}+O\left(q^{6}\right)
$$</span></p>
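A quick independent check of the closed form against direct quadrature of the t-integral (midpoint rule; ρ = 1 excluded, since the closed form has a removable log singularity there):

```python
import math

def cgm_sphere(rho):
    """Closed form: exp(-[(1+rho)^2 ln(1+rho)^2 - (1-rho)^2 ln(1-rho)^2 - 4 rho]/(4 rho))."""
    a, b = (1 + rho)**2, (1 - rho)**2
    num = a * math.log(a) - b * math.log(b) - 4 * rho
    return math.exp(-num / (4 * rho))

def cgm_sphere_numeric(rho, n=100000):
    """exp(-(1/2) * int_{-1}^{+1} ln|rho^2 + 1 - 2 rho t| dt), midpoint rule."""
    h = 2.0 / n
    s = sum(math.log(abs(rho * rho + 1 - 2 * rho * (-1 + (k + 0.5) * h)))
            for k in range(n))
    return math.exp(-0.5 * s * h)

for rho in [0.3, 0.8, 1.5, 3.0]:
    print(rho, cgm_sphere(rho), cgm_sphere_numeric(rho))   # the two columns agree
```

At ρ = 3 the closed form gives about 0.107, close to the 2-D value 1/ρ² ≈ 0.111, consistent with the series expansion above.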
| Jean Marie | 305,862 | <p>This is a direct consequence of the following formula:</p>
<p><span class="math-container">$$
\frac{1}{2 \pi}\int_0^{2\pi} \ln(a+ b\cos\theta)d\theta = \ln \tfrac12(a + \sqrt{a^2 - b^2})\tag{1}$$</span></p>
<p>(valid for <span class="math-container">$a>|b|$</span>) established for example <a href="https://math.stackexchange.com/questions/801136/find-the-value-of-the-integral-int-02-pi-lnab-sin-xdx-where-0-lt-a-lt">here</a>.</p>
<p>Indeed, taking <span class="math-container">$a=\rho^2+1$</span> and <span class="math-container">$b=-2\rho$</span>, (1) gives:</p>
<p><span class="math-container">$$
\frac{1}{2 \pi}\int_0^{2\pi} \ln(\rho^2+1-2\rho\cos\theta)d\theta = \ln \tfrac12(\rho^2+1 + \sqrt{(\rho^2+1)^2 - 4\rho^2})=$$</span></p>
<p><span class="math-container">$$=\ln \tfrac12\left(\rho^2+1 + \sqrt{(\rho^2-1)^2}\right)=\ln \tfrac12(\rho^2+1 + |\rho^2-1|)$$</span></p>
<p><span class="math-container">$$=\begin{cases}\ln \tfrac12(\rho^2+1 -(\rho^2-1))=\ln 1=0 &\text{ if } \rho<1\\
\ln \tfrac12(\rho^2+1 +(\rho^2-1))=\ln (\rho^2) &\text{ if } \rho > 1
\end{cases}\tag{2}$$</span></p>
<p>(the case <span class="math-container">$\rho=1$</span> has to be treated apart).</p>
<p>It remains to take the exponential in (2) to retrieve the results you have found experimentally.</p>
<p><strong>Remark:</strong> I just discovered a closer question <a href="https://math.stackexchange.com/q/650513">here</a> with very interesting answers.</p>
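Formula (1) itself is straightforward to test numerically for a few admissible pairs (a, b) with a > |b| (the sample values below are arbitrary):

```python
import math

def mean_log(a, b, n=20000):
    """(1/2pi) * int_0^{2pi} ln(a + b cos t) dt by the midpoint rule (needs a > |b|)."""
    h = 2 * math.pi / n
    return sum(math.log(a + b * math.cos((k + 0.5) * h))
               for k in range(n)) * h / (2 * math.pi)

def closed_form(a, b):
    """ln( (a + sqrt(a^2 - b^2)) / 2 ), the right-hand side of (1)."""
    return math.log((a + math.sqrt(a * a - b * b)) / 2)

for a, b in [(2.0, 1.0), (5.0, -3.0), (1.25, 1.0)]:
    print(a, b, mean_log(a, b), closed_form(a, b))   # last two columns agree
```

The pair (1.25, 1.0) is a nice edge case: the closed form is exactly ln 1 = 0, mirroring the "CGM = 1 inside the circle" phenomenon.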
|
439,941 | <p>I ran into this question and I am finding it very difficult to solve:</p>
<blockquote>
<p>How many different expressions can you get by inserting parentheses into:
$$x_{1}-x_{2}-\cdots-x_{n}\quad ?$$</p>
</blockquote>
<p>For example:</p>
<p>$$\begin{align*}
x_{1}-(x_{2}-x_{3}) &= x_{1}-x_{2}+x_{3}\\
(x_{1}-x_{2})-x_{3}&=x_{1}-x_{2}-x_{3}\\
x_{1}-(x_{2}-x_{3})-x_{4})&=x_{1}-x_{2}+x_{3}+x_{4}\\
\end{align*}$$</p>
<p>I'm really desperate for a full answer. I've been working on this for 3 hours. Thanks in advance.</p>
| Brian M. Scott | 12,042 | <p>HINT: No matter how you parenthesize the expression, when you clear the parentheses, the first two terms will be $x_1-x_2$. Show by induction that the remaining $n-2$ signs can be any combination of plus and minus signs, meaning that for $n\ge 2$ you get $2^{n-2}$ distinct expressions.</p>
|
3,621,223 | <p>I use software called Substance Designer, which has a Pixel Processor where I can assign to every pixel of an image a gray-scale value defined by a series of operations.</p>
<p>I am basically trying to generate a <a href="https://i.stack.imgur.com/6jhYa.jpg" rel="nofollow noreferrer">"normal gradient"</a> generated by the normals of a semi-ellipse with a given <strong>a</strong> and <strong>b</strong> semi-major and semi-minor axis. </p>
<p>This semi-ellipse is <strong>origin-centered</strong> and has its principal axes parallel to the x and y axes.</p>
<p>For all points P(x,y) with y≥0, I want to find the angle or direction θ of the outwards-facing ellipse normal that passes through that point, both when a>b and, if possible, when b>a.</p>
<p><a href="https://i.stack.imgur.com/LA7x1.jpg" rel="nofollow noreferrer">Here is a visual representation of what I am after, although I only need the values for y>0</a></p>
<p><a href="https://commons.wikimedia.org/wiki/File:Evolute1.gif" rel="nofollow noreferrer">I am trying to visualize the blue tangents on the evolute when y> 0. All the points on the same normal will have the same value.</a></p>
<p>Thanks </p>
| Anonymous Coward | 770,101 | <p>Consider an axis-aligned ellipse centered at origin, height <span class="math-container">$2$</span> (semimajor axis <span class="math-container">$1$</span>), width <span class="math-container">$2\chi$</span> (semiminor axis <span class="math-container">$\chi \lt 1$</span>), parametrized using angle variable <span class="math-container">$\varphi$</span>:
<span class="math-container">$$\vec{p}(\varphi) = \left [ \begin{matrix} p_x(\varphi) \\ p_y(\varphi) \end{matrix} \right ] = \left [ \begin{matrix} \chi \cos \varphi \\ \sin\varphi \end{matrix} \right ] \tag{1}\label{EQ1}$$</span>
The eccentricity <span class="math-container">$\epsilon$</span> of this ellipse is
<span class="math-container">$$\epsilon = \sqrt{1 - \chi^2} \quad \iff \quad \chi = \sqrt{1 - \epsilon^2} \tag{2}\label{EQ2}$$</span>
The tangent vector <span class="math-container">$\vec{t}(\varphi)$</span> is
<span class="math-container">$$\vec{t}(\varphi) = \nabla\vec{p}(\varphi) = \left [ \begin{matrix}
\displaystyle \frac{d p_x(\varphi)}{d \varphi} \\
\displaystyle \frac{d p_y(\varphi)}{d \varphi} \\
\end{matrix} \right ] = \left [ \begin{matrix}
t_x(\varphi) \\ t_y(\varphi) \end{matrix} \right ] = \left [ \begin{matrix} - \chi \sin \varphi \\ \cos \varphi \end{matrix} \right ]\tag{3}\label{EQ3}$$</span>
The normal vector <span class="math-container">$\vec{n}(\varphi)$</span> is the tangent vector rotated <span class="math-container">$90^o$</span> clockwise (since the angle parameter traverses the ellipse counterclockwise):
<span class="math-container">$$\vec{n}(\varphi) = \left [ \begin{matrix} t_y(\varphi) \\ -t_x(\varphi) \end{matrix} \right ] = \left [ \begin{matrix} n_x(\varphi) \\ n_y(\varphi) \end{matrix} \right ] = \left [ \begin{matrix} \cos\varphi \\ \chi\sin\varphi \end{matrix} \right ] \tag{4}\label{EQ4}$$</span>
The angle <span class="math-container">$\vec{n}(\varphi)$</span> makes with the <span class="math-container">$x$</span> axis with <span class="math-container">$0^o \le \varphi \le 180^o$</span> is <span class="math-container">$\theta$</span>,
<span class="math-container">$$\theta = \operatorname{arctan}\left( \frac{n_y(\varphi)}{n_x(\varphi)} \right) = \operatorname{arctan}\left( \chi \frac{\sin\varphi}{\cos\varphi} \right) = \operatorname{arctan}\left(\chi \tan\varphi\right)$$</span></p>
<p>This means that for an axis-aligned ellipse that is taller than it is wide, the relationship between the angle parameter <span class="math-container">$\varphi$</span> and the angle between the ellipse normal and the positive <span class="math-container">$x$</span> axis <span class="math-container">$\theta$</span> is
<span class="math-container">$$\theta = \operatorname{arctan}\left(\sqrt{1 - \epsilon^2} \tan\varphi\right) \tag{5a}\label{EQ5a}$$</span>
and conversely (by solving above for <span class="math-container">$\varphi$</span>),
<span class="math-container">$$\varphi = \operatorname{arctan}\left(\frac{1}{\sqrt{1-\epsilon^2}}\tan\theta\right)\tag{5b}\label{EQ5b}$$</span></p>
<hr>
<p>This same ellipse also fulfills
<span class="math-container">$$\frac{x^2}{\chi^2} + \frac{y^2}{1^2} = 1$$</span>
which we can solve for <span class="math-container">$y$</span>:
<span class="math-container">$$y(x) = \pm \sqrt{1 - \frac{x^2}{\chi^2}} = \pm\sqrt{1 - \frac{x^2}{1 - \epsilon^2}}$$</span>
Since <span class="math-container">$\eqref{EQ1}$</span> says that <span class="math-container">$x = \chi\cos\varphi$</span>, the angle parameter <span class="math-container">$\varphi$</span> is
<span class="math-container">$$\varphi = \operatorname{arccos}\left(\frac{x}{\chi}\right) = \operatorname{arccos}\left(\frac{x}{\sqrt{1 - \epsilon^2}}\right)$$</span>
and the relationship between the angle <span class="math-container">$\theta$</span> and the coordinate <span class="math-container">$x$</span> simplifies to
<span class="math-container">$$\theta = \operatorname{arctan}\left(\frac{1}{x}\sqrt{(1 - \epsilon^2)(1 - \epsilon^2 - x^2)}\right)$$</span>
in the positive quadrant (<span class="math-container">$x \ge 0$</span>, <span class="math-container">$y \ge 0$</span>), since <span class="math-container">$\chi\tan\varphi = \chi\sqrt{\chi^2 - x^2}/x$</span> with <span class="math-container">$\chi^2 = 1-\epsilon^2$</span>.</p>
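A quick numerical check of (4), (5a) and (5b), with an arbitrarily chosen aspect ratio χ (the script just recomputes the answer's formulas):

```python
import math

chi = 0.6                        # semiminor axis; the ellipse is (chi*cos(phi), sin(phi))
eps = math.sqrt(1 - chi**2)      # eccentricity, eq. (2)

for phi in [0.3, 0.9, 1.4]:      # first quadrant, so all arctan branches agree
    nx, ny = math.cos(phi), chi * math.sin(phi)                    # normal, eq. (4)
    theta = math.atan(math.sqrt(1 - eps**2) * math.tan(phi))       # eq. (5a)
    phi_back = math.atan(math.tan(theta) / math.sqrt(1 - eps**2))  # eq. (5b)
    print(abs(math.atan2(ny, nx) - theta), abs(phi_back - phi))    # both ~ 0
```

The angle of the normal computed directly with `atan2` matches (5a), and (5b) recovers the parameter φ, confirming the two relations are inverses in the first quadrant.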
|
2,325,436 | <p>I was reading <em>Introduction to quantum mechanics</em> by David J. Griffiths and came across following paragraph:</p>
<blockquote>
<p><span class="math-container">$3$</span>. The eigenvectors of a hermitian transformation span the space.</p>
<p>As we have seen, this is equivalent to the statement that any hermitian matrix can be diagonalized. <strong>This rather technical fact is</strong>, in a sense, <strong>the mathematical support on which much of quantum mechanics leans</strong>. It turns out to be a thinner reed than one might have hoped, because <strong>the proof does not carry over to infinite-dimensional spaces.</strong></p>
</blockquote>
<p>My thoughts:</p>
<p>If much of quantum mechanics leans on it, but the proof does not carry over to infinite-dimensional spaces, then hermitian transformations on infinite-dimensional spaces are suspect.</p>
<p>But there is an infinite set of separable solutions for e.g. the particle in a box. So the Hamiltonian for that system has a spectrum with an infinite number of eigenvectors and acts on an infinite-dimensional space.</p>
<p>If we can't prove that this infinite set of eigenvectors spans the space, then how can we use completeness all the time?</p>
<p>Am I missing something here? Any misconceptions?</p>
<p>I'd appreciate any help.</p>
| Jonas Dahlbæk | 161,825 | <p>To be quite frank, while the textbook by Griffiths may be a decent introduction to quantum mechanics, you should not rely on it for anything mathematics related.</p>
<p>That hermitian matrices can be orthogonally diagonalized is not a 'rather technical fact'. It is a completely standard result (known to both mathematicians and non-mathematicians), which is covered in any introductory linear algebra course. It finds application in a wide range of subjects, not just quantum mechanics.</p>
<p>There is a vast mathematical literature devoted to making mathematical sense of quantum mechanical concepts. Saying that this massive field, which lies in the intersection between physics and mathematics, is simply concerned with hermitian matrices, and then stating that the results do not carry over to the infinite dimensional setting... I don't know what to say. At the very least, it is an extreme misrepresentation of the actual state of affairs.</p>
<p>When should you expect to have a complete set of eigenvectors of a Hamiltonian? Usually, this happens if you consider a confining potential. Examples include the particle in a box, like you mentioned, and the harmonic oscillator.</p>
<p>When should you not expect to have a complete set of eigenvectors? Usually, this happens if your system can 'break apart', becoming essentially a non-interacting free system. Examples include the free particle.</p>
<p>When should you expect to have eigenvectors, but not a complete set? If your system has bound states, but it is also possible to break it apart. Examples include the hydrogen atom, with bound states below the ionization threshold.</p>
<p>Since you mention chemistry, it is in order to mention that for many applications, in practice one often considers only the space spanned by the bound states. In this subspace, the eigenvectors do of course constitute a complete set. This is what is usually done in introductory quantum mechanics courses, where one calculates the energy levels of the hydrogen atom. Often, it is not even mentioned that the spectrum of the hydrogen atom contains the positive real line along with these negative eigenvalues! On the other hand, this fact is not really important if you only care about the bound states.</p>
<p>If you are interested in the mathematics of quantum mechanics, one of the key topics is spectral theory of unbounded (self-adjoint) operators. The classics <a href="http://www.springer.com/cn/book/9783540586616" rel="noreferrer">'Perturbation Theory for Linear Operators'</a> by Kato and <a href="https://www.elsevier.com/books/book-series/methods-of-modern-mathematical-physics" rel="noreferrer">'Methods of Modern Mathematical Physics'</a> by Reed and Simon are recommendable. For more recent texts, I would also recommend <a href="http://www.springer.com/in/book/9789400747524" rel="noreferrer">'Unbounded Self-adjoint Operators in Hilbert Space'</a> by Schmüdgen and <a href="http://www.springer.com/cn/book/9783540430780" rel="noreferrer">'Quantum Mathematical Physics'</a> by Thirring. Of course, all of these texts require a firm background in mathematics, with a focus on functional analysis.</p>
<p>Finally there are a bunch of questions on this site which are related to your question:</p>
<p><a href="https://math.stackexchange.com/questions/2093413/when-schrodinger-operator-has-discrete-spectrum">When Schrodinger operator has discrete spectrum?</a></p>
<p><a href="https://math.stackexchange.com/questions/301217/why-is-the-spectrum-of-the-schr%C3%B6dinger-operator-discrete">why is the spectrum of the schrödinger operator discrete?</a></p>
<p><a href="https://math.stackexchange.com/questions/1778441/spectrum-of-laplace-operator-with-potential-acting-on-l2-mathbb-r-is-discre?noredirect=1&lq=1">Spectrum of Laplace operator with potential acting on $L^2(\mathbb R)$ is discrete</a></p>
<p><a href="https://math.stackexchange.com/questions/1191714/when-the-point-spectrum-is-discrete?rq=1">When the point spectrum is discrete?</a></p>
<p><a href="https://math.stackexchange.com/questions/170507/proving-compactness-of-resolvent-of-an-operator">Proving Compactness of Resolvent Of an Operator</a></p>
<p><a href="https://math.stackexchange.com/questions/1895457/operators-with-compact-resolvent">Operators with compact resolvent</a></p>
<p><a href="https://math.stackexchange.com/questions/978351/selfadjointness-of-the-dirac-operator-on-the-infinite-dimensional-hilbert-space/980737#980737">Selfadjointness of the Dirac operator on the infinite-dimensional Hilbert space</a></p>
<p><a href="https://math.stackexchange.com/questions/976591/hermitian-and-self-adjoint-operators-on-infinite-dimensional-hilbert-spaces/976638#976638">Hermitian and self-adjoint operators on infinite-dimensional Hilbert spaces</a></p>
|
1,661,439 | <p>I'm trying to take the Lagrange polynomial $P_3(x)$ that passes through four points $(x_1,y_1), (x_2,y_2), (x_3,y_3)$, and $(x_4,y_4)$, and integrate it (maybe deriving Simpson's rule in the process!).
All the x-values are a distance h apart from each other.
Once I did some simplifications using h, here's what I ended up with:
$$P_3(x) = {(x-x_2)(x-x_3)(x-x_4)y_1\over -6h^3} + {(x-x_1)(x-x_3)(x-x_4)y_2\over 2h^3} + {(x-x_1)(x-x_2)(x-x_4)y_3\over -2h^3} + {(x-x_1)(x-x_2)(x-x_3)y_4\over 6h^3}$$
I'm trying to find a substitution to make these integrals easier. When I choose a u-substitution, should I try to make the integrands go from -1 to 1? And how would I scale the u-substitution to make that happen?
Throw out the next step that you think I should take!
Thanks!</p>
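One convenient normalization: substitute u = (x − x₁)/h, so the nodes become u = 0, 1, 2, 3 and dx = h du; then each Lagrange basis polynomial can be integrated exactly over [0, 3], for instance with rational arithmetic (a sketch; the helper functions are mine):

```python
from fractions import Fraction as F

def poly_mul(p, q):
    """Multiply two coefficient lists (lowest degree first)."""
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def integrate(p, lo, hi):
    """Exact integral of a coefficient-list polynomial over [lo, hi]."""
    return sum(c * (F(hi)**(k + 1) - F(lo)**(k + 1)) / (k + 1) for k, c in enumerate(p))

nodes = [0, 1, 2, 3]
weights = []
for i in nodes:
    basis = [F(1)]                      # ell_i(u) = prod over j != i of (u - j)/(i - j)
    for j in nodes:
        if j != i:
            basis = poly_mul(basis, [F(-j, i - j), F(1, i - j)])
    weights.append(integrate(basis, 0, 3))

print([str(w) for w in weights])        # ['3/8', '9/8', '9/8', '3/8']
```

Multiplying these weights by h and pairing them with y₁, …, y₄ gives Simpson's 3/8 rule.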
| user5713492 | 316,404 | <p>Why does this keep getting bumped by Community? Well, let's try doing the integrals:
<span class="math-container">$$\begin{align}I_1&=\int_{x_1}^{x_4}(x-x_2)(x-x_3)(x-x_4)dx\\
&=\left[\frac12(x-x_2)(x-x_3)(x-x_4)^2-\frac16(2x-x_2-x_3)(x-x_4)^3+\frac1{24}(2)(x-x_4)^4\right]_{x_1}^{x_4}\\
&=-\left[9h^4-\frac{27}2h^4+\frac{27}4h^4\right]=-\frac94h^4\end{align}$$</span>
To do this integral we used <a href="https://en.wikipedia.org/wiki/Integration_by_parts#Tabular_integration_by_parts" rel="nofollow noreferrer">tabular integration</a>. Our table would look something like this:
<span class="math-container">$$\begin{array}{r|c|l}+&(x-x_2)(x-x_3)&(x-x_4)\\
-&(2x-x_2-x_3)&\frac12(x-x_4)^2\\
+&2&\frac16(x-x_4)^3\\
-&0&\frac1{24}(x-x_4)^4\end{array}$$</span>
To get each term in the integral we multiply plus or minus (as indicated in the first column) the expression in the second column with the expression in the next row in the third column. When the polynomial in the second column is differentiated down to zero we can stop integrating the function in the third column because there will be no more subsequent terms. Tabular integration helped here because it forced the term values to zero at the upper bound and we didn't have to fully expand the polynomial to get the derivatives. Of course we had to keep in mind that <span class="math-container">$x_2-x_1=x_3-x_2=x_4-x_3=h$</span> and <span class="math-container">$x_3-x_1=x_4-x_2=2h$</span> and <span class="math-container">$x_4-x_1=3h$</span>. Since we will choose <span class="math-container">$(x-x_1)$</span> for the integrated factor in the last three integrals we can lump their tables together as follows:
<span class="math-container">$$\begin{array}{r|c|c|c|l}+&(x-x_3)(x-x_4)&(x-x_2)(x-x_4)&(x-x_2)(x-x_3)&(x-x_1)\\
-&(2x-x_3-x_4)&(2x-x_2-x_4)&(2x-x_2-x_3)&\frac12(x-x_1)^2\\
+&2&2&2&\frac16(x-x_1)^3\\
-&0&0&0&\frac1{24}(x-x_1)^4\end{array}$$</span>
So now
<span class="math-container">$$\begin{align}I_2=&\int_{x_1}^{x_4}(x-x_1)(x-x_3)(x-x_4)dx\\
&=\left[\frac12(x-x_3)(x-x_4)(x-x_1)^2-\frac16(2x-x_3-x_4)(x-x_1)^3+\frac1{24}(2)(x-x_1)^4\right]_{x_1}^{x_4}\\
&=\left[0-\frac92h^4+\frac{27}4h^4\right]=\frac94h^4\end{align}$$</span>
<span class="math-container">$$\begin{align}I_3=&\int_{x_1}^{x_4}(x-x_1)(x-x_2)(x-x_4)dx\\
&=\left[\frac12(x-x_2)(x-x_4)(x-x_1)^2-\frac16(2x-x_2-x_4)(x-x_1)^3+\frac1{24}(2)(x-x_1)^4\right]_{x_1}^{x_4}\\
&=\left[0-9h^4+\frac{27}4h^4\right]=-\frac94h^4\end{align}$$</span>
<span class="math-container">$$\begin{align}I_4=&\int_{x_1}^{x_4}(x-x_1)(x-x_2)(x-x_3)dx\\
&=\left[\frac12(x-x_2)(x-x_3)(x-x_1)^2-\frac16(2x-x_2-x_3)(x-x_1)^3+\frac1{24}(2)(x-x_1)^4\right]_{x_1}^{x_4}\\
&=\left[9h^4-\frac{27}2h^4+\frac{27}4h^4\right]=\frac94h^4\end{align}$$</span>
Then
<span class="math-container">$$\begin{align}\int_{x_1}^{x_4}f(x)dx&=-\frac{y_1}{6h^3}I_1+\frac{y_2}{2h^3}I_2-\frac{y_3}{2h^3}I_3+\frac{y_4}{6h^3}I_4\\
&=\frac38hy_1+\frac98hy_2+\frac98hy_3+\frac38hy_4=\frac38h(y_1+3y_2+3y_3+y_4)\end{align}$$</span>
Exact for cubic polynomials. This is the familiar Simpson's <span class="math-container">$\frac38$</span> rule. This problem was small enough that disciplined organization of the arithmetic kept it from getting too far out of hand.</p>
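As a sanity check, the resulting rule integrates any cubic exactly (the sample polynomial and step below are chosen arbitrarily):

```python
def simpson_38(f, x1, h):
    """Integral of f from x1 to x1+3h by the 3/8 rule derived above."""
    y = [f(x1 + k * h) for k in range(4)]
    return 3 * h / 8 * (y[0] + 3 * y[1] + 3 * y[2] + y[3])

f = lambda x: 2 * x**3 - 5 * x**2 + x - 7                  # arbitrary cubic
F = lambda x: x**4 / 2 - 5 * x**3 / 3 + x**2 / 2 - 7 * x   # its antiderivative
x1, h = 1.0, 0.4
print(simpson_38(f, x1, h), F(x1 + 3 * h) - F(x1))         # equal up to rounding
```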
|
3,460,843 | <p>I understand that the way to calculate the cube root of <span class="math-container">$i$</span> is to use Euler's formula and divide <span class="math-container">$\frac{\pi}{2}$</span> by <span class="math-container">$3$</span> and find <span class="math-container">$\frac{\pi}{6}$</span> on the complex plane; however, my question is why the following solution doesn't work. </p>
<p>So <span class="math-container">$(-i)^3 = i$</span>, but why can I not cube root both sides and get <span class="math-container">$-i=(i)^{\frac{1}{3}}$</span>. Is there a rule where equality is not maintained when you root complex numbers or is there something else I am violating and not realizing?</p>
| an4s | 533,556 | <p>Substitute <span class="math-container">$t = \dfrac x2\implies x = 2t,\mathrm dx = 2\mathrm dt$</span>.</p>
<p>Therefore,</p>
<p><span class="math-container">$$\int\dfrac {-3}{x^2 + 4}\,\mathrm dx = -\int\dfrac 6{4t^2 + 4}\,\mathrm dt = -\int\frac6{4(t^2 + 1)}\,\mathrm dt = -\frac 32\int\frac1{t^2 + 1}\,\mathrm dt$$</span></p>
<p><span class="math-container">$\displaystyle\int\dfrac 1{t^2 + 1}\,\mathrm dt$</span> results in <span class="math-container">$\arctan t + C$</span>. Reverse substitution to get
<span class="math-container">$$\int\dfrac {-3}{x^2 + 4}\,\mathrm dx = -\dfrac32\arctan\left(\dfrac x2\right)+C.$$</span></p>
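<p>A quick numerical check (not in the original answer): the derivative of <span class="math-container">$-\frac32\arctan(x/2)$</span>, estimated by a central difference, should match the integrand at arbitrary sample points.</p>

```python
import math

# Confirm d/dx [-(3/2) * atan(x/2)] = -3 / (x^2 + 4) at a few sample points.
def F(x):
    return -1.5 * math.atan(x / 2)

def integrand(x):
    return -3 / (x * x + 4)

h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
              for x in [-3.0, -1.0, 0.0, 0.5, 2.0, 7.0])
```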
|
1,705,159 | <blockquote>
<p>Find necessary and sufficient conditions for a Mobius transformation <span class="math-container">$T(z)=\frac{az+b}{cz+d}$</span> to map the unit circle to itself. So if <span class="math-container">$\gamma$</span> is a circle, <span class="math-container">$T(\gamma)=\gamma$</span>.</p>
<p>I've worked out the necessary conditions. Namely, if <span class="math-container">$T(\gamma)=\gamma$</span>, then</p>
<ol>
<li><p><span class="math-container">$|a|^2+|b|^2=|c|^2+|d|^2$</span></p>
</li>
<li><p><span class="math-container">$a\bar{b}=\bar{d}c$</span></p>
</li>
<li><p><span class="math-container">$\bar{a}b=d\bar{c}$</span></p>
</li>
</ol>
<p>Source: Conway's Complex Functions of One Variable</p>
</blockquote>
<p>How does one go about showing sufficiency? Should I simply assume conditions 1),2) and 3) and try to prove that <span class="math-container">$T(\gamma)=\gamma$</span>? If so I can simply claim that all the implications I used to get these conditions also work backwards. Or just show that <span class="math-container">$|\frac{az+b}{cz+d}|=1$</span> by these conditions, which is rather simple. Is that all there is to this? I just wish the whole "necessary and sufficient" language was scrapped for some direct notation.</p>
<p>As an aside, I'm wondering if I'm using the words "necessary" and "sufficient" correctly in this context. Is what I showed in the first part the necessary conditions (that's what makes sense to me semantically, because they are necessary once I've assumed the map), or are they the sufficient conditions?</p>
| Alan Muniz | 289,217 | <p>The map $T(z) = \frac{az+b}{cz+d}$ sends the unit circle to itself if and only if for any $\zeta$ in the circle, $|T(\zeta)|=1$. Now you just have to translate this into conditions on the coefficients.</p>
<p>$|T(\zeta)|=1$ is equivalent to
$$
|a|^2 + |b|^2 + 2Re(a\bar{b} \zeta) = |c|^2 +|d|^2+ 2Re(c\bar{d}\zeta)
$$</p>
<p>Since $T$ is determined by the image of three points, just evaluate the equation above for three distinct values of $\zeta$ and you'll get the necessary and sufficient conditions. For $\zeta = 1, i, -1$, you'll get the conditions 1) and 2). </p>
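<p>A quick numerical illustration (the specific coefficients are an arbitrary choice, not from the answer): taking <span class="math-container">$c=\bar b$</span> and <span class="math-container">$d=\bar a$</span> satisfies the conditions, and the resulting map does send the unit circle to itself.</p>

```python
import cmath

# One convenient family satisfying the conditions: c = conj(b), d = conj(a).
# Then |a|^2+|b|^2 = |c|^2+|d|^2 and a*conj(b) = conj(d)*c automatically.
a, b = 2 + 1j, 0.5 - 0.3j
c, d = b.conjugate(), a.conjugate()

def T(z):
    return (a * z + b) / (c * z + d)

# Sample points zeta on the unit circle and check |T(zeta)| = 1.
max_dev = max(abs(abs(T(cmath.exp(2j * cmath.pi * k / 100))) - 1)
              for k in range(100))
```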
|
95,741 | <p>I wonder if there is any difference between mapping and a function. Somebody told me that the only difference is that mapping can be from any set to any set, but function must be from $\mathbb R$ to $\mathbb R$. But I am not ok with this answer. I need a simple way to explain the differences between mapping and function to a lay man together with some illustration (if possible).</p>
<p>Thanks for any help.</p>
| Gaussler | 129,649 | <p>To me, function and map mean two entirely different things. A function is just a set-theoretic construction, something that assigns to each object in a set some unique object of another set. A map, on the other hand, is a construction from <em>category theory</em> rather than <em>set theory</em>. It means more or less the same thing as <em>morphism</em>: a function that preserves the structure in whatever category we are working in. So a map is not just a map, it is a map of something:</p>
<ul>
<li>A map of groups or rings is a homomorphism</li>
<li>A map of vector spaces is a linear function</li>
<li>A map of topological spaces is a continuous function</li>
<li>A map of smooth manifolds is a smooth function</li>
<li>A map of measurable spaces is a measurable function</li>
<li>A map of varieties is a morphism</li>
<li>A map of sets is any function</li>
</ul>
<p>Note that I deliberately avoided the term “map” in the predicates here. That is because the “map” parts of terms like “continuous map” and “linear map” are actually redundant; a linear map is really just a map (of vector spaces), and a continuous map is just a map (of topological spaces). Consequently, I avoid many of these redundant terms and simply say “let $f\colon X\to Y$ be a map” when it is clear from the context which category I currently think of $X$ and $Y$ as being objects of. I am particularly pleased to avoid the long and complicated term “homomorphism.”</p>
<p>On the other hand, I use the word “function” when I want to think of it as my object of study rather than a method of carrying structure from one object to another. Thus I would always call members of $\mathscr L^p$ spaces (which is the space of functions rather than the space $L^p$ of equivalence classes of functions) by the word “function,” even though they are measurable and hence can be thought of as maps of measurable spaces. Similarly, I would mostly call elements of polynomial rings or coordinate rings “functions” unless I am interested in some structure they preserve. </p>
<p>So to sum up: A map is a function preserving some structure, namely the structure of whatever category we are working in. The “function” part is just the underlying set-theoretic object, which is more or less the same thing as a map of sets. (Note, however, that I am well aware that not all people follow this convention.)</p>
|
3,987,718 | <p>Let <span class="math-container">$L \in \mathbb{R}$</span> and let <span class="math-container">$f$</span> be a function that is differentiable on a deleted neighborhood of <span class="math-container">$x_{0} \in \mathbb{R}$</span> such that <span class="math-container">$\lim_{x \to x_{0}}f'(x)=L$</span>.</p>
<p>Find a function satisfying the above, and such that <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x_{0}$</span>.</p>
<p>--</p>
<p>So I think that I do not completely understand when a function is indeed differentiable at <span class="math-container">$x_{0}$</span> and when it's not, and why in both cases I can still find its <span class="math-container">$f'$</span>?</p>
<p>I will appreciate some explanation about that.</p>
<p>Moreover, I thought of <span class="math-container">$f(x)=x^x$</span> or <span class="math-container">$f(x)=\ln(x^x)$</span>.</p>
<p>If I understand it correctly, then both my <span class="math-container">$f$</span>'s are not differentiable at <span class="math-container">$x_{0}=0$</span>, because:</p>
<p><span class="math-container">$f'(0)=\lim_{x \to 0}\frac{x^x-0^0}{x-0}$</span> which is undefined?</p>
<p>or</p>
<p><span class="math-container">$f'(0)=\lim_{x \to 0}\frac{\ln(x^x)-\ln(0^0)}{x-0}$</span> which is undefined?</p>
<p>Thanks a lot!</p>
| Community | -1 | <p>In general, you have the following theorem.</p>
<blockquote>
<p>Suppose <span class="math-container">$f:(a,b)\to\mathbb{R}$</span> is continuous and differentiable on <span class="math-container">$(a,b)\setminus\{c\}$</span> for some <span class="math-container">$c\in (a,b)$</span>. If <span class="math-container">$\lim_{x\to c}f'(x)$</span> exists, then <span class="math-container">$f$</span> is differentiable at <span class="math-container">$x=c$</span> and
<span class="math-container">$$
f'(c)=\lim_{x\to c}f'(x).
$$</span></p>
</blockquote>
<p>(This is a good exercise in applying the mean value theorem.)</p>
<p>So in order to find your desired example, <span class="math-container">$f$</span> must <em>not</em> be continuous at <span class="math-container">$x=c$</span>. This is easy: just move it somewhere else. For example,
<span class="math-container">$$
f(x)=\begin{cases}
Lx&x\ne 0\\
1&x=0
\end{cases}.
$$</span></p>
<hr />
<p>Notes.</p>
<p>If a function <span class="math-container">$f$</span> is not defined at <span class="math-container">$x=x_0$</span> or it is defined but not continuous at <span class="math-container">$x_0$</span>, then you can immediately tell that <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x=x_0$</span>.</p>
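<p>A small numerical illustration of the counterexample (with the arbitrary choice <span class="math-container">$L=2$</span>): the derivative away from <span class="math-container">$0$</span> has a limit, yet the difference quotient at <span class="math-container">$0$</span> blows up.</p>

```python
# f(x) = L*x for x != 0, but f(0) = 1 (discontinuous at 0); here L = 2.
L = 2.0

def f(x):
    return L * x if x != 0 else 1.0

# Away from 0 the derivative is identically L, so lim_{x->0} f'(x) = L ...
deriv_near_zero = (f(0.001 + 1e-7) - f(0.001 - 1e-7)) / 2e-7

# ... but the difference quotient at 0 is (2h - 1)/h, which diverges.
def quotient(h):
    return (f(h) - f(0)) / h

q1, q2 = quotient(1e-6), quotient(1e-9)
```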
|
3,625,069 | <p>How to solve <span class="math-container">$10\sqrt{10\sqrt[3]{10\sqrt[4]{10...}}}$</span>?</p>
<p>I tried to solve this problem by letting <span class="math-container">$x=10\sqrt{10\sqrt[3]{10\sqrt[4]{10...}}}$</span> to observe the pattern.</p>
<p>Based on the pattern, the result is</p>
<p><span class="math-container">$\dfrac{x^{n!}}{10^{((((1)(2)+1)4+1)...n+1)}}$</span> where <span class="math-container">$n$</span> is a positive integer approaching infinity. </p>
<p>This is where I got stuck.</p>
| lab bhattacharjee | 33,337 | <p><span class="math-container">$$\lim_{n\to\infty}10^{\sum_{r=1}^n\dfrac1{r!}}=10^{\lim_{n\to\infty}\sum_{r=1}^n\dfrac1{r!}}$$</span></p>
<p>Now <span class="math-container">$\lim_{n\to\infty}\sum_{r=1}^n\dfrac1{r!}=e-1$</span></p>
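<p>A numerical check (added for illustration): evaluating a truncated tower from the inside out agrees with <span class="math-container">$10^{e-1}$</span>.</p>

```python
import math

# Truncated tower 10*sqrt(10*cbrt(10*...(10)^(1/N)...)), evaluated inside out.
N = 25
v = 10.0                    # stand-in for the (negligible) deep tail
for k in range(N, 1, -1):   # apply the k-th root at each level going outward
    v = (10.0 * v) ** (1.0 / k)
v = 10.0 * v                # the outermost factor of 10

# log10 of the tower is sum_{r>=1} 1/r! = e - 1, so the value is 10^(e-1).
target = 10 ** (math.e - 1)
```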
|
4,084,517 | <p>In geometry of 2D and 3D, it's not uncommon for people to call a square or rectangle a <code>Box</code> in the field I work in. This makes naming things easier since it's clear what's in a folder of 'boxes'.</p>
<p>Does the similar name exist for a circle and a sphere? Interestingly we have circles in 3D that could be defined as the intersection of a geometric primitive with a sphere, like a plane slicing through a sphere to form a circle. However searching for a unifying name (even if it's not that good of a name) has been unfruitful.</p>
<p>Does a name exist for this classification? As in, a name for circular or spherical objects defined by some distance from (for convenience here) the origin?</p>
<p>Some things seem easy, like a hyperplane is going down a dimension, so it ends up working for almost any dimension. I was hoping there was something like this that would either work for 2 and 3 dimensions, or better, work for N dimensions.</p>
| Jean Marie | 305,862 | <p>There is a general geometric framework called the "space of spheres" covering spheres in any dimension, in particular</p>
<ul>
<li><p>ordinary circles for dimension 2 or even</p>
</li>
<li><p>line segments <span class="math-container">$[a,b] \subset \mathbb{R}$</span> for dimension 1.</p>
</li>
</ul>
<p>The important fact is that these spaces are endowed with a quadratic form that I am going to explain in the case of circles, but that can be extended to any dimension.</p>
<p>As a circle can be defined by two kinds of equations:</p>
<p><span class="math-container">$$(x-a)^2+(y-b)^2=R^2 \ \ \iff \ \ x^2+y^2-2ax-2by+c=0\tag{1}$$</span></p>
<p>we will consider that a circle is defined by coordinates <span class="math-container">$(a,b,c)$</span>.</p>
<p>Due to the equivalence in (1), we have</p>
<p><span class="math-container">$$R^2=a^2+b^2-c>0\tag{2}$$</span></p>
<p>The "set of circles" can be considered as a kind of 3D space deprived of the interior of a certain paraboloid.</p>
<p>Moreover, the RHS of (2) will give a kind of "norm" (please note the double quotes) for a circle:</p>
<p><span class="math-container">$$\|(a,b,c)\|^2=a^2+b^2-c\tag{3}$$</span></p>
<p>or, introducing a 4th (projective) dimension <span class="math-container">$d$</span> for the homogeneization of expression (3).</p>
<p><span class="math-container">$$\|(a,b,c)\|^2=a^2+b^2-cd=a^2+b^2+\tfrac12(c-d)^2-\tfrac12(c+d)^2$$</span></p>
<p>which is a true quadratic form with signature <span class="math-container">$(+,+,+,-)$</span> (Minkovski space structure)</p>
<p>For more about this space, see first my answer <a href="https://math.stackexchange.com/q/3640023">here</a>, then the very nice article <a href="https://www.researchgate.net/publication/266349529_Hyperbolic_Delaunay_triangulations_and_Voronoi_diagrams_made_practical/figures?lo=1" rel="nofollow noreferrer">here</a> showing application to Voronoi tessellations and Voronoi triangulations.</p>
<p>For more, see the paragraph "Space of spheres" in the marvellous book "Geometry II" of Marcel BERGER (Springer, 1987).</p>
|
4,049,293 | <p>I am learning about the cross entropy, defined by Wikipedia as
<span class="math-container">$$H(P,Q)=-\text{E}_P[\log Q]$$</span>
for distributions <span class="math-container">$P,Q$</span>.</p>
<p>I'm not happy with that notation, because it implies symmetry, <span class="math-container">$H(X,Y)$</span> is often used for the joint entropy and lastly, I want to use a notation which is consistent with the notation for entropy:
<span class="math-container">$$H(X)=-\text{E}_P[\log P(X)]$$</span></p>
<p>When dealing with multiple distributions, I like to write <span class="math-container">$H_P(X)$</span> so it's clear with respect to which distribution I'm taking the entropy. When dealing with multiple random variables, I think it's sensible to make precise the random variable with respect to which the expectation is taken by using the subscript <span class="math-container">$_{X\sim P}$</span>. My notation for entropy thus becomes
<span class="math-container">$$H_{X\sim P}(X)=-\text{E}_{X\sim P}[\log P(X)]$$</span></p>
<p>Now comes the point I don't understand about the definition of cross entropy: Why doesn't it reference a random variable <span class="math-container">$X$</span>? Applying analogous reasoning as above, I would assume that cross entropy has the form <span class="math-container">\begin{equation}H_{X\sim P}(Q(X))=-\text{E}_{X\sim P}[\log Q(X)]\tag{1}\end{equation}</span>
however, Wikipedia makes no mention of any such random variable <span class="math-container">$X$</span> in the article on cross entropy. It speaks of</p>
<blockquote>
<p>the cross-entropy between two probability distributions <span class="math-container">$p$</span> and <span class="math-container">$q$</span></p>
</blockquote>
<p>which, like the notation <span class="math-container">$H(P,Q)$</span>, implies a function whose argument is a pair of distributions, whereas entropy <span class="math-container">$H(X)$</span> is said to be a function of a random variable. In any case, to take an expected value I need a (function of) a random variable, which <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are not.</p>
<p>Comparing the definitions for the discrete case:
<span class="math-container">$$H(p,q)=-\sum_{x\in\mathcal{X}}p(x)\log q(x)$$</span>
and
<span class="math-container">$$H(X)=-\sum_{i=1}^n P(x_i)\log P(x_i)$$</span></p>
<p>where <span class="math-container">$\mathcal{X}$</span> is the support of <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>, there would only be a qualitative difference if the events <span class="math-container">$x_i$</span> didn't cover the whole support (though I could just choose an <span class="math-container">$X$</span> which does).</p>
<p>My questions boil down to the following:</p>
<ol>
<li><p>Where is the random variable necessary to take the expected value which is used to define the cross entropy <span class="math-container">$H(P,Q)=-\text{E}_{P}[\log Q]$</span></p>
</li>
<li><p>If I am correct in my assumption that one needs to choose a random variable <span class="math-container">$X$</span> to compute the cross entropy, is the notation I used for (1) free of ambiguities.</p>
</li>
</ol>
| mechanodroid | 144,766 | <p>You can cut some work by just looking at the remainder mod <span class="math-container">$3$</span>.</p>
<p>Let <span class="math-container">$n=3q+r$</span>. Easy calculation gives
<span class="math-container">$$\frac15(3q+r)^5+\frac13(3q+r)^3+\frac{7}{15}(3q+r) = \text{some integer} + \frac{243q^5+7q}{5} + \frac{r^5}5+\frac{r^3}3+\frac{7r}{15}.$$</span></p>
<p>By Little Fermat we have <span class="math-container">$q^5 \equiv q \pmod{5}$</span> so
<span class="math-container">$$243q^5+7q \equiv 250q \equiv 0\pmod{5}.$$</span>
The second term
<span class="math-container">$$\frac{r^5}5+\frac{r^3}3+\frac{7r}{15}$$</span>
is the same as the original expression but you only have to check <span class="math-container">$r=0,1,2$</span> which is almost trivial.</p>
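<p>The claimed integrality is easy to confirm with exact rational arithmetic (an illustrative check, not part of the argument):</p>

```python
from fractions import Fraction

def g(n):
    # n^5/5 + n^3/3 + 7n/15 computed exactly as a rational number
    return Fraction(n**5, 5) + Fraction(n**3, 3) + Fraction(7 * n, 15)

all_integer = all(g(n).denominator == 1 for n in range(-60, 61))
```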
|
4,049,293 | <p>I am learning about the cross entropy, defined by Wikipedia as
<span class="math-container">$$H(P,Q)=-\text{E}_P[\log Q]$$</span>
for distributions <span class="math-container">$P,Q$</span>.</p>
<p>I'm not happy with that notation, because it implies symmetry, <span class="math-container">$H(X,Y)$</span> is often used for the joint entropy and lastly, I want to use a notation which is consistent with the notation for entropy:
<span class="math-container">$$H(X)=-\text{E}_P[\log P(X)]$$</span></p>
<p>When dealing with multiple distributions, I like to write <span class="math-container">$H_P(X)$</span> so it's clear with respect to which distribution I'm taking the entropy. When dealing with multiple random variables, I think it's sensible to make precise the random variable with respect to which the expectation is taken by using the subscript <span class="math-container">$_{X\sim P}$</span>. My notation for entropy thus becomes
<span class="math-container">$$H_{X\sim P}(X)=-\text{E}_{X\sim P}[\log P(X)]$$</span></p>
<p>Now comes the point I don't understand about the definition of cross entropy: Why doesn't it reference a random variable <span class="math-container">$X$</span>? Applying analogous reasoning as above, I would assume that cross entropy has the form <span class="math-container">\begin{equation}H_{X\sim P}(Q(X))=-\text{E}_{X\sim P}[\log Q(X)]\tag{1}\end{equation}</span>
however, Wikipedia makes no mention of any such random variable <span class="math-container">$X$</span> in the article on cross entropy. It speaks of</p>
<blockquote>
<p>the cross-entropy between two probability distributions <span class="math-container">$p$</span> and <span class="math-container">$q$</span></p>
</blockquote>
<p>which, like the notation <span class="math-container">$H(P,Q)$</span>, implies a function whose argument is a pair of distributions, whereas entropy <span class="math-container">$H(X)$</span> is said to be a function of a random variable. In any case, to take an expected value I need a (function of) a random variable, which <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are not.</p>
<p>Comparing the definitions for the discrete case:
<span class="math-container">$$H(p,q)=-\sum_{x\in\mathcal{X}}p(x)\log q(x)$$</span>
and
<span class="math-container">$$H(X)=-\sum_{i=1}^n P(x_i)\log P(x_i)$$</span></p>
<p>where <span class="math-container">$\mathcal{X}$</span> is the support of <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>, there would only be a qualitative difference if the events <span class="math-container">$x_i$</span> didn't cover the whole support (though I could just choose an <span class="math-container">$X$</span> which does).</p>
<p>My questions boil down to the following:</p>
<ol>
<li><p>Where is the random variable necessary to take the expected value which is used to define the cross entropy <span class="math-container">$H(P,Q)=-\text{E}_{P}[\log Q]$</span></p>
</li>
<li><p>If I am correct in my assumption that one needs to choose a random variable <span class="math-container">$X$</span> to compute the cross entropy, is the notation I used for (1) free of ambiguities.</p>
</li>
</ol>
| J. W. Tanner | 615,567 | <p>To show that <span class="math-container">$15$</span> divides <span class="math-container">$n(3n^4+5n^2+7)$</span>, show that <span class="math-container">$3$</span> and <span class="math-container">$5$</span> do.</p>
<p><span class="math-container">$3$</span> does because either <span class="math-container">$3$</span> divides <span class="math-container">$n$</span> or <span class="math-container">$n^2\equiv1\pmod3$</span> by <a href="https://en.wikipedia.org/wiki/Fermat%27s_little_theorem" rel="nofollow noreferrer">Fermat's little theorem</a>,</p>
<p>in which case <span class="math-container">$3n^4+5n^2+7\equiv5n^2+7\equiv5+7=12\equiv0\pmod3.$</span></p>
<p><span class="math-container">$5$</span> does because either <span class="math-container">$5$</span> divides <span class="math-container">$n$</span> or <span class="math-container">$n^4\equiv1\pmod5$</span> by Fermat's little theorem,</p>
<p>in which case <span class="math-container">$3n^4+5n^2+7\equiv3n^4+7\equiv3+7=10\equiv0\pmod5$</span>.</p>
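<p>A brute-force confirmation over a range of integers (illustrative only):</p>

```python
# n(3n^4 + 5n^2 + 7) should be divisible by 15 for every integer n.
divisible = all(n * (3 * n**4 + 5 * n**2 + 7) % 15 == 0 for n in range(-100, 101))
```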
|
3,841,806 | <p>Using spherical coordinates I have to find the volume of a cone <span class="math-container">$z=\sqrt{x^2+y^2}$</span> inscribed in a sphere <span class="math-container">$(x-1)^2+y^2+z^2=4.$</span></p>
<p>I can't find <span class="math-container">$\rho$</span> because the center of the sphere is displaced from the origin.</p>
<hr />
<p>I tried solving it using Mathematica, but I did something wrong somewhere:
<a href="https://i.stack.imgur.com/CtRvq.png" rel="nofollow noreferrer">enter image description here</a></p>
| zkutch | 775,801 | <ol>
<li><p>Another variant is to use little "shifted" spherical coordinates
<span class="math-container">$$\begin{array}{}
x-1=\rho\cos\theta\sin\phi\\
y=\rho\sin\theta\sin\phi\\
z=\rho\cos\phi
\end{array}$$</span>
Then the sphere equation becomes <span class="math-container">$\rho^2 = 4$</span>, and the Jacobian will be the same.</p>
</li>
<li><p>Another way is to stay in Cartesian coordinates and find the projection of the intersection of the sphere and the cone, which is <span class="math-container">$\left( x-\frac{1}{2} \right)^2+y^2=\frac{7}{4}$</span>. The volume inside the cone bounded by the sphere is then
<span class="math-container">$$\int\limits_{\frac{1-\sqrt{7}}{2}}^{\frac{1+\sqrt{7}}{2}}\int\limits_{-\sqrt{\frac{7}{4}-\left( x-\frac{1}{2} \right)^2}}^{\sqrt{\frac{7}{4}-\left( x-\frac{1}{2} \right)^2}}\int\limits_{\sqrt{x^2+y^2}}^{\sqrt{4-(x-1)^2-y^2}}\,dz\,dy\,dx$$</span></p>
</li>
</ol>
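<p>A rough numerical evaluation of this triple integral (added as a sketch; the midpoint rule in polar coordinates centered at <span class="math-container">$(1/2,0)$</span> and the grid resolutions are arbitrary choices):</p>

```python
import math

# V = double integral over the disk (x-1/2)^2 + y^2 <= 7/4 of
#     sqrt(4-(x-1)^2-y^2) - sqrt(x^2+y^2),
# evaluated with a midpoint rule in polar coordinates about (1/2, 0).
def volume(n_cells):
    rmax = math.sqrt(7.0) / 2.0
    dr, dt = rmax / n_cells, 2.0 * math.pi / n_cells
    total = 0.0
    for i in range(n_cells):
        r = (i + 0.5) * dr
        for j in range(n_cells):
            t = (j + 0.5) * dt
            x, y = 0.5 + r * math.cos(t), r * math.sin(t)
            top = math.sqrt(max(0.0, 4.0 - (x - 1.0) ** 2 - y * y))
            bottom = math.sqrt(x * x + y * y)
            total += (top - bottom) * r      # polar area element r dr dt
    return total * dr * dt

v_coarse, v_fine = volume(120), volume(240)
```

Refining the grid should change the estimate only slightly, which gives a cheap consistency check.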
|
1,534,694 | <p>I tried to solve for the following limit: </p>
<p>$$\lim_{x\rightarrow \infty} (e^{2x}+x)^{1/x}$$
and I reached the indeterminate form:
$${4e^{2x}}\over {4e^{2x}}$$
if I plug in, I will get another indeterminate form! </p>
| user | 505,767 | <p>We simply have</p>
<p><span class="math-container">$$(e^{2x}+x)^{1/x}=e^{2x/x}\left(1+\frac{x}{e^{2x}}\right)^{1/x}\to e^2\cdot1^0=e^2$$</span></p>
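<p>A numerical sanity check (illustrative): for moderately large <span class="math-container">$x$</span> the expression is already extremely close to <span class="math-container">$e^2$</span>.</p>

```python
import math

# (e^{2x} + x)^{1/x} should approach e^2 as x grows.
def g(x):
    return (math.exp(2 * x) + x) ** (1 / x)

e2 = math.e ** 2
val_50, val_100 = g(50.0), g(100.0)
```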
|
1,563,518 | <p>Give an example of a natural number $n > 1$ and a polynomial $f(x) ∈ \Bbb Z_n[x]$ of degree $> 0$ that is a unit in $\Bbb Z_n[x]$.</p>
<p>I am trying to understand how units work in polynomial rings. My book doesn't really define it and I need a bit of help with this.</p>
| AnotherPerson | 185,237 | <p>Your solution is correct. What you have determined is that your solution is an element in the equivalence class of $-32$ mod $77$. Since these are just elements of the form $77k-32$, for $k\in \mathbb {Z}$, we can just let $k=1$ and obtain the smallest positive solution, namely $45$.</p>
<p>Note that typically when we denote the equivalence classes mod $n $, we use the non-negative numbers $0,1, 2, ..., n-1$. If we added on another $77$, this still gives us a multiple of $77$ but just with a remainder of $45$, and so we can also think of this as the equivalence class of $77k+45$. </p>
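<p>A one-line check of the arithmetic (illustrative):</p>

```python
# 77k - 32 always lies in the class of -32 mod 77; k = 1 gives the least
# non-negative representative, 45.
least = (-32) % 77
same_class = all((77 * k - 32) % 77 == least for k in range(-5, 6))
```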
|
2,250,339 | <p>The Mad Hatter sets up what he believes is a zero-knowledge protocol. The integer n is the product of two large primes p and q and he wants to prove to the March Hare that he knows the factorization of n without revealing to anyone the actual factors $p$, $q$. He devises the following procedure: </p>
<p>March Hare chooses a random integer $x \pmod n$, computes $y \equiv x^2 \pmod n$, and sends $y$ to Mad Hatter.
Mad Hatter then computes the square roots $z_i$ of $y$ modulo $n$ and sends $z = \min(z_i)$ to March Hare who verifies that $z^2 \equiv y \pmod n$. The Hatter offers to repeat this $10$ times with different values of $y$.</p>
<ol>
<li><p>Should the March Hare be convinced that the Hatter knows $p$ and $q$? <br>
My guess: <strong>yes</strong></p></li>
<li><p>Is March Hare able to use all this information to factor $n$? (In which cases this isn’t a zero knowledge procedure at all).
<br> My guess: <strong>no</strong></p></li>
<li>The Knave of Hearts, who eavesdropped the entire sequence, recovers all ten values of $\{y_i,z_i\}$. Can he use this to obtain any information about $p$ and $q$? <br>
My guess: <strong>no</strong></li>
</ol>
<p>Does this look right?</p>
| miracle173 | 11,206 | <p>The answer to the second question is yes. The March hare could use this information to factor.</p>
<p>We have</p>
<p>$$x^2-y^2 \equiv (x+y)(x-y) \equiv 0 \pmod{ pq}$$</p>
<p>There is a chance that $p|(x+y)$ but $q\nmid (x+y)$ or that $q|(x-y)$ but $p\nmid (x-y)$. If one has some multiples of p one can calculate the gcd of these multiples to find p.</p>
<p>From this one can also deduce that the Mad Hatter knows p and q, because he can derive them with this method in the same way as the March Hare.</p>
<p>So to 1 and 2 the answer is yes.</p>
<p>The Knave can't use this information. If he could, then he could factor $n$ without the help of the March Hare and the Mad Hatter.</p>
<p>The Knave could select $19$ random integers in $\{1,...,n/2\}$. There is a $50:50$ chance that for at least $10$ of these $19$ integers $i$, the integer $i$ is the smallest root of $i^2$. Assume that this is the case for our $19$ integers. Then apply the factorization algorithm that the Knave would use on $10$ intercepted numbers to all $10$-element subsets of the $19$ integers.</p>
<p>But it is not clear how to do this for an arbitrary sequence of numbers, because the work of this approach grows exponentially with the length of the sequence.</p>
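<p>The gcd trick above can be sketched on a toy modulus (the values below are illustrative; a real protocol would use large primes):</p>

```python
import math

# Toy example with n = 7 * 11: a square root z of y = x^2 (mod n) with
# z != +-x (mod n) reveals a factor via gcd(z - x, n).
n, x = 77, 2
y = x * x % n

# All square roots of y modulo n (brute force is fine for a toy modulus).
roots = [z for z in range(n) if z * z % n == y]

factor = None
for z in roots:
    if z not in (x % n, (-x) % n):
        g = math.gcd(z - x, n)
        if 1 < g < n:
            factor = g
            break
```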
|
1,652,929 | <p>The question I'm trying to solve is $$\left(y-4y^6\right)=\left(y^4+5x\right)y'$$ where $y(0)=1$ </p>
<p>I want to find the solution explicitly for $x$. I found the integrating factor to be $u=y^{-6}$. Multiplying the equation by the integrating factor, I get $(y^{-5}-4)+(-y^{-2}-5xy^{-6})y'=0$ and then I solved $\int \:(y^{-5}-4)dx=xy^{-5}-4x+g(y)$ and $\int \left(\:-y^2-5xy^{-6}\right)dy=xy^{-5}-y^{3}/3+h(x)$, combining the two equations, I got $C=-1/3$ and $h(x)=-4x$ and $g(y)=-y^3/3$. Then solving the equation, I got $$x=\frac{y^5-y^8}{12y^5-3}$$ But this is not the correct answer. Could someone tell me what's wrong?</p>
| mickep | 97,236 | <p>The comments do not seem to lead to anything fruitful. I give you the first step, and then you confirm that you have the same in your solution, OK?</p>
<p>Write the differential equation as
$$
x'-\frac{5}{y-4y^6}x=\frac{y^3}{1-4y^5}
$$
Thus, an integrating factor is
$$
\exp\Bigl(\int-\frac{5}{y-4y^6}\,dy\Bigr).
$$
I get it to be</p>
<blockquote class="spoiler">
<p>$$-4+\frac{1}{y^5}.$$</p>
</blockquote>
<p>Using this, I get the final solution to be</p>
<blockquote class="spoiler">
<p>$$x=\frac{(1-y)y^4}{4y^5-1}.$$</p>
</blockquote>
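<p>A numerical spot-check of the final solution (added for illustration): it should satisfy <span class="math-container">$x'=(y^4+5x)/(y-4y^6)$</span> and pass through <span class="math-container">$x=0$</span> at <span class="math-container">$y=1$</span>.</p>

```python
# Final solution x(y) = (1 - y) y^4 / (4 y^5 - 1), checked against the ODE
# dx/dy = (y^4 + 5x) / (y - 4 y^6) with a central difference.
def x_of_y(y):
    return (1 - y) * y**4 / (4 * y**5 - 1)

def rhs(y):
    return (y**4 + 5 * x_of_y(y)) / (y - 4 * y**6)

h = 1e-6
residual = max(abs((x_of_y(y + h) - x_of_y(y - h)) / (2 * h) - rhs(y))
               for y in [0.9, 1.0, 1.1, 1.3, 2.0])   # avoid the singular points
x_at_1 = x_of_y(1.0)
```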
|
2,818,427 | <p>Let $f \in \mathrm{End} (\mathbb{C^2})$ be defined by its image on the standard basis $(e_1,e_2)$: </p>
<p>$f(e_1)=e_1+e_2$</p>
<p>$f(e_2)=e_2-e_1$</p>
<p>I want to determine all eigenvalues of f and the bases of the associated eigenspaces.</p>
<p>First of all, what does the transformation matrix of $f$ look like?
Is it </p>
<p>$\begin{pmatrix}1 &-1 \\1 &1 \end{pmatrix}$?</p>
| Zarrax | 3,035 | <p>If you are looking for an equation of the form $y'' + p(x)y' + q(x)y = 0$, then you can plug in your two solutions $y = x^2$ and $y = e^{-x}$, giving two equations in two unknowns $p(x)$ and $q(x)$. You can then use standard linear algebra techniques to find $p(x)$ and $q(x)$.</p>
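<p>A small numerical sketch of that procedure (the sample points and the little 2×2 solver are illustrative): at each fixed $x$, plugging the two solutions into $y''+p(x)y'+q(x)y=0$ gives a linear system for $p(x)$ and $q(x)$.</p>

```python
import math

# Plugging y = x^2 and y = e^{-x} into y'' + p y' + q y = 0 at a fixed x gives
#   2 + 2x*p + x^2*q = 0          (from y = x^2, y' = 2x, y'' = 2)
#   1 -    p +     q = 0          (from y = e^{-x}, after dividing by e^{-x})
def solve_pq(x):
    a11, a12, b1 = 2 * x, x * x, -2.0
    a21, a22, b2 = -1.0, 1.0, -1.0
    det = a11 * a22 - a12 * a21          # Cramer's rule on the 2x2 system
    p = (b1 * a22 - a12 * b2) / det
    q = (a11 * b2 - b1 * a21) / det
    return p, q

def residuals(x):
    p, q = solve_pq(x)
    r1 = 2 + p * 2 * x + q * x * x       # y = x^2 equation
    r2 = math.exp(-x) * (1 - p + q)      # y = e^{-x} equation
    return abs(r1), abs(r2)

p3, q3 = solve_pq(3.0)
r1, r2 = residuals(2.0)
```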
|
3,367,588 | <p>I've been studying Numerical Linear Algebra, Lloyd, 1997. I've came across the below incomprehensible paragraph.</p>
<blockquote>
<p>"Methods like Householder reflections and Gaussian elimination would
solve linear systems of equations exactly in a finite number of steps
if they could be implemented in exact arithmetic. By contrast, any
eigenvalue solver must be iterative. The goal of an eigenvalue solver
is to produce sequences of numbers that converge rapidly toward
eigenvalues."</p>
</blockquote>
<p>In the above, the author differentiates between "a method with a finite number of steps" and "an iterative method", but it seems to me that these phrases have a similar meaning. Could anyone elaborate on the difference?</p>
| Theo Bendit | 248,286 | <p>An iterative method involves creating a sequence of estimates that converge to the desired result, often employing some kind of recursively-defined sequence. A classic example is <a href="https://en.wikipedia.org/wiki/Newton%27s_method" rel="nofollow noreferrer">Newton's method</a>, which takes a differentiable function <span class="math-container">$f$</span>, an initial point <span class="math-container">$x_0$</span>, and defines a sequence
<span class="math-container">$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},$$</span>
a sequence which (often) converges to a root of the function <span class="math-container">$f$</span>. There is no guarantee that <span class="math-container">$f(x_n) = 0$</span> for any particular <span class="math-container">$n$</span>, but <span class="math-container">$f(x_n)$</span> will (in many circumstances) become closer and closer to <span class="math-container">$0$</span>.</p>
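<p>A compact illustration of that iteration (the function and starting point are arbitrary choices):</p>

```python
# Newton iteration for f(x) = x^2 - 2; in exact arithmetic no iterate equals
# sqrt(2), but the sequence converges toward it very rapidly.
def newton_step(x):
    f, fprime = x * x - 2.0, 2.0 * x
    return x - f / fprime

x = 1.0
iterates = [x]
for _ in range(6):
    x = newton_step(x)
    iterates.append(x)

err = abs(iterates[-1] - 2 ** 0.5)
```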
<p>Compare this to a method like Gaussian elimination. When applied to an <span class="math-container">$n \times m$</span> matrix, it takes no more than <span class="math-container">$n^2$</span> row operations before the matrix is in its unique reduced row-echelon form. We don't get convergence to the matrix we want, instead we get exactly the matrix we want, within a finite number of steps.</p>
<p>I hope that this clears things up.</p>
|
124,280 | <p>Show that the sequence ($x_n$) defined by $$x_1=1\quad \text{and}\quad x_{n+1}=\frac{1}{x_n+3} \quad (n=1,2,\ldots)$$ converges and determine its limit.</p>
<p>I tried to show that ($x_n$) is a Cauchy sequence, or that ($x_n$) is a decreasing (or increasing) and bounded sequence, but I failed at every step.</p>
| Unoqualunque | 17,703 | <p>You can also find the explicit form of $a_n.$ The following argument is taken from Kaczor-Novak, Problems in Mathematical Analysis, vol.1, AMS pub. p.228</p>
<p>The equation $x^2+3x-1=0$ has two solutions $a > 0 >b.$ It is easy to observe
$$
\frac{x_{n+1}-a}{x_{n+1}-b}=\frac{a}{b} \frac{x_n-a}{x_n-b}.
$$
Write the numerator and the denominator separately and simplify.
Proceeding inductively we come to
$$
\frac{x_{n+1}-a}{x_{n+1}-b} = \left(\frac ab\right)^n \frac{x_1-a}{x_1-b}.
$$
Solve for $x_{n+1}$ and, using $|a/b|<1$, find the limit. 
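<p>A quick numerical confirmation (illustrative): iterating the recurrence converges to the positive root $a=\frac{\sqrt{13}-3}{2}$.</p>

```python
import math

# x_{n+1} = 1/(x_n + 3) starting from x_1 = 1 should converge to the
# positive root a of x^2 + 3x - 1 = 0.
a = (math.sqrt(13) - 3) / 2
x = 1.0
for _ in range(60):
    x = 1 / (x + 3)
```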
|
1,707,853 | <p>To be more precise than the title, the function is actually piecewise</p>
<p>$$
f(x,y) = \begin{cases}
\frac{x^3+y^3}{x^2+y^2} & (x,y) \ne (0,0) \\
0 & (x,y) = (0,0) \\
\end{cases}
$$</p>
<p>I checked that the function is continuous at $(0,0)$, so I then calculated the partial derivative with respect to $x$ as</p>
<p>$$
f_1(x,y) = \frac{x^4-2xy^3+3x^2y^2}{(x^2+y^2)^2} \tag{1}
$$</p>
<p>This is undefined at $(0,0)$, so I then tried to find the limit around accumulation points. Let $S_1$ be the points on the $x$ axis</p>
<p>$$
\lim_{x \to 0} f_1(x,0) = \frac{x^4}{(x^2)^2} = 1 \tag{2}
$$</p>
<p>Let $S_2$ be the points on the line $y = x$</p>
<p>$$
\lim_{x \to 0} f_1(x,x) = \frac{x^4-2x^4+3x^4}{(x^2+x^2)^2} = \frac{2x^4}{(2x^2)^2} = \frac{1}{2} \tag{3}
$$</p>
<p>So, the limits are different around different accumulation points. That's where I'm confused because the answer should be $1$.</p>
| Community | -1 | <p>By definition of $f_1(x_0,y_0)$ you have:
$$f_1(0,0)=\lim_{h\to 0}\frac{f(h,0)-f(0,0)}{h}$$<br>
So
$$f_1(0,0)=\lim_{h\to 0}\frac{\frac{h^3+0}{h^2+0}}{h}=1$$</p>
|
47,974 | <p>I am interested in the following question:</p>
<p>Is it known that <span class="math-container">$2$</span> is a primitive root modulo <span class="math-container">$p$</span> for infinitely many primes <span class="math-container">$p$</span>?</p>
<p>There is some information about Artin's conjecture in <a href="https://en.wikipedia.org/w/index.php?title=Artin%27s_conjecture_on_primitive_roots&oldid=374854868" rel="nofollow noreferrer">Wikipedia</a>.
I need to know if it is up-to-date and if one can say something about the case <span class="math-container">$n=2$</span>.</p>
| Igor Potapov | 98,281 | <p>I do not know a direct answer to your problem, but I will think about it,
as it is close to the area of problems we are currently working on.</p>
<p>Although I would like to comment that not all questions are "reasonably efficiently decidable" for SL(2,Z), as most of the problems known to me for
SL(2,Z) are at least NP-hard, meaning that none of these problems have efficient (polynomial-time) solutions unless P=NP.</p>
<p>The membership problem for the identity matrix is NP-hard for a finitely generated matrix semigroup from SL(2,Z), and there is not even a simple NP brute-force algorithm, as there is a class of semigroups where the shortest product reaching the identity has exponential length.
See: Paul C. Bell, Igor Potapov: On the Computational Complexity of Matrix Semigroup Problems. Fundam. Inform. 116(1-4): 1-13 (2012)
Similar results exist for the Mortality problem (membership of the zero matrix): Paul C. Bell, Mika Hirvensalo, Igor Potapov: Mortality for 2×2 Matrices Is NP-Hard. MFCS 2012: 148-159</p>
<p>Recently we proved that the Vector Reachability Problem for SL(2, Z) is decidable (in general it is harder than membership, as there is an infinite set of matrices that can connect two points, and decidability of membership does not imply decidability of the vector reachability problem).
Potapov, Pavel Semukhin: Vector Reachability Problem in SL(2, Z). MFCS 2016: 84:1-84:14 <a href="http://drops.dagstuhl.de/opus/volltexte/2016/6492/" rel="nofollow">http://drops.dagstuhl.de/opus/volltexte/2016/6492/</a></p>
<p>The other fresh 2016 result is the proof that the membership problem for a nonsingular matrix semigroup from $\mathbb{Z}^{2\times 2}$ is decidable.
<a href="http://arxiv.org/abs/1604.02303" rel="nofollow">http://arxiv.org/abs/1604.02303</a></p>
<p>In relation to the membership problem in SL(2,Z), we recently managed to improve
the complexity of the membership problem from PSPACE to NP. The main idea of the paper is that we are able to operate effectively with compressed versions of matrices from SL(2,Z) by applying a number of new results about structural properties of matrix products. </p>
<hr>
<p>On the undecidability side, the membership problem for a semigroup generated by matrices from SL(2,H), or even for a semigroup of double quaternions, is undecidable: <a href="http://www.sciencedirect.com/science/article/pii/S0890540108000771" rel="nofollow">http://www.sciencedirect.com/science/article/pii/S0890540108000771</a></p>
<p>Moreover, the membership of the identity matrix for a semigroup generated by matrices from SL(4,Z) is undecidable:
Paul C. Bell, Igor Potapov: On the Undecidability of the Identity Correspondence Problem and its Applications for Word and Matrix
Semigroups. Int. J. Found. Comput. Sci. 21(6): 963-978 (2010).
arXiv version: arxiv.org/abs/0902.1975
See Problem 10.3 <a href="http://press.princeton.edu/math/blondel/solutions.html" rel="nofollow">http://press.princeton.edu/math/blondel/solutions.html</a>,
but the same problem for 3x3 matrices is still open.</p>
|
478,566 | <p>I'm reading a book about combinatorics. Even though the book is about combinatorics there is a problem in the book that I can think of no solutions to it except by using number theory.</p>
<p>Problem: Is it possible to put $+$ or $-$ signs in such a way that $\pm 1 \pm 2 \pm \cdots \pm 100 = 101$?</p>
<p>My proof is kinda simple. Let's work in mod $2$. We'll have:</p>
<p>$\pm 1 \pm 2 \pm \cdots \pm 100 \equiv 101 \mod 2$
but since $+1 \equiv -1 \mod 2$ and there are exactly $50$ odd numbers and $50$ even numbers from $1$ to $100$ we can write:</p>
<p>$(1 + 0 + \cdots + 1 + 0 \equiv 50\times 1 \equiv 0) \not\equiv (101\equiv 1) \mod 2$ which is contradictory. </p>
<p>Therefore, it's not possible to choose $+$ or $-$ signs in any way to make them equal.</p>
<p>Now is there a combinatorial proof of that fact except what I have in mind?</p>
| Brian M. Scott | 12,042 | <p>You can rephrase essentially the same argument in the following terms:</p>
<p>Suppose that there were such a pattern of plus and minus signs. Let $P$ be the set of positive terms, and let $N$ be the set of negative terms together with the number $101$. Then $\sum P-\sum N=0$, so $\sum P=\sum N$, and $\{P,N\}$ is a partition of $\{1,2,\ldots,101\}$ into two sets with equal sum. But $\sum_{k=1}^{101}k=\frac12\cdot101\cdot102=101\cdot51$ is odd, so this is impossible.</p>
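<p>The parity obstruction is easy to confirm by machine. Below is a Python sketch of my own: the brute force checks the analogous claim $\pm1\pm2\pm3\pm4\ne 5$ (since $2^{100}$ sign patterns are out of reach), and the first check confirms that the relevant total is odd:</p>

```python
from itertools import product

# parity obstruction: a balanced partition of {1,...,101} needs an even total
total = sum(range(1, 102))
print(total % 2)  # 1, i.e. the total 5151 is odd, so no equal-sum partition exists

# brute-force the small analog: can signs make +-1 +-2 +-3 +-4 equal 5?
# (1 + 2 + 3 + 4 + 5 = 15 is odd, so the same argument forbids it)
hits = [signs for signs in product([1, -1], repeat=4)
        if sum(s * k for s, k in zip(signs, range(1, 5))) == 5]
print(hits)  # [] -- no sign pattern works
```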
|
1,479,095 | <blockquote>
<p>For $f:[0,1]\to \mathbb{R}$ let $E\subset\left\{x \mid f'(x) \text{ exists}\right\}$. Prove that if $|E|=0$, then $|f(E)|=0$.</p>
</blockquote>
<p>My attempt:</p>
<p>Let $E_{nk}=\left\{x\in [0,1] : \frac{|f(x+h)-f(x)|}{|h|}\leq n \text{ whenever } 0<|h|< \frac{1}{k} \right\}$. </p>
<p>I am not sure where to go from here, but I think that $E\subset \bigcup E_{nk}$, but I am not sure.</p>
<p>Any hints? Thank you.</p>
| Giovanni | 263,115 | <blockquote>
<p><strong>Claim:</strong> Let $E_n = \{x \in E : |f'(x)|\le n\}$, then $\lambda(f(E_n)) \le n\lambda(E_n)$.</p>
</blockquote>
<p>Notice that $E = \bigcup E_n$, hence the desired result follows from the fact that</p>
<p>$$\lambda(f(E_n)) \le n\lambda(E_n) \le n\lambda(E) = 0.$$ Indeed, to conclude from here, we can use the fact that $f(E) \subset \bigcup f(E_n)$, which is a countable union of null sets.</p>
<p><strong>Proof of Claim:</strong> Let $\epsilon > 0$ be given. Notice that the fact that the derivative exists and that it is bounded in $E_n$ means that for $x \in E_n$ $$\lim_{y \to x}\frac{f(y) - f(x)}{y - x} = f'(x),$$ hence there exists $\delta > 0$ such that for every $y \in (x - \delta, x + \delta) \cap [0,1]$ $$|f(y) - f(x)| \le (n + \epsilon)|y - x|. \tag 1$$ </p>
<p>The idea we get from $(1)$ is that a scalar multiple of the length of an interval can be used to control the length of its image. Essentially we want to exploit the fact that $f$ behaves as a Lipschitz function on $E_n$. Unfortunately this does not hold for every interval, but it works for intervals in $E_n$. </p>
<p>The next thing to do is to prove that we have "enough" (very tiny) intervals in $E_n$ to make the previous argument work. Indeed, let's define </p>
<p>$$E_{n,m} := \big\{x \in E_n : \lambda(f(I)) \le (n + \epsilon)\lambda(I), \text{where}\ I\subset [0,1]\ \text{is an interval}, x \in I,\text{and}\ \lambda(I) < \frac 1m \big\}.$$ </p>
<p>Then $E_{n,m} \subset E_{n,m+1}$ and $E_n = \bigcup_m E_{n,m}$. This last equality is not obvious at all: to prove that $E_n \subset \bigcup_m E_{n,m}$ we need to show that for every $x \in E_n$ we can find $m$ for which $x \in E_{n,m}$. This can be done reasoning as in the first step of the proof, choosing $m$ accordingly ($2m^{-1} < \delta$ works).</p>
<p>We can now focus on the $E_{n,m}$. The trick that I have seen used in this situation is the following: by outer regularity consider $U_{n,m}$ open such that $\lambda(E_{n,m}) + \epsilon \ge \lambda(U_{n,m})$. Since $U_{n,m}$ is open we can decompose it into countably many disjoint open intervals of length less than $\frac 1m$. Let's call this family of intervals $\mathcal{I}$. Notice that we can apply $(1)$ to each element in $\mathcal{I}$! This implies that</p>
<p>\begin{align}
\lambda(f(E_{n,m})) \le &\ \lambda\Big(\bigcup_{I \in \mathcal{I}}f(I)\Big) \le \sum_{I \in \mathcal{I}}\lambda(f(I)) \le (n + \epsilon)\sum_{I \in \mathcal{I}}\lambda(I) \\
= &\ (n + \epsilon)\lambda\Big(\bigcup_{I \in \mathcal{I}} I\Big) = (n + \epsilon)\lambda(U_{n,m}) \\ \le &\ (n + \epsilon)(\lambda(E_{n,m}) + \epsilon).
\end{align}
Notice that here it is important that the intervals are disjoint.</p>
<p>Recall that $E_{n,m} \uparrow E_n$ to get that $\lambda(f(E_{n,m})) \le (n + \epsilon)(\lambda(E_{n}) + \epsilon)$ and conclude letting $m \to \infty$ and then $\epsilon \to 0^+$. $\blacksquare$</p>
<hr>
<p>As a remark, I should mention that I am not worrying about measurability since everything is a subset of the null set $E$. On the other hand the claim still holds without any assumption on the set $E_n$ defined as above (including measurability). In this case you can use the exact same argument replacing the Lebesgue measure with the outer measure. (you can then reintroduce the measure when you define the (measurable) open sets $U_{n,m}$.</p>
|
232,424 | <p>Are there any claims and counterclaims to mathematics being in some certain cases a result of common sense thinking? Or can some mathematical results be figured out using just pure common sense i.e. no mathematical methods? </p>
<p>I'd also appreciate any mentions relating to sciences, social sciences or ordinary life.</p>
| Robert Israel | 8,508 | <p>"Common sense" in mathematics is not very common.<br>
Many things seem very counter-intuitive, at least until you train your intuition properly.
The untrained intuition is lost when dealing with, for example, infinite sets, or geometry in more than $3$ dimensions. However, one example of "common sense" that does come to mind is
the Pigeonhole Principle in combinatorics.</p>
|
232,424 | <p>Are there any claims and counterclaims to mathematics being in some certain cases a result of common sense thinking? Or can some mathematical results be figured out using just pure common sense i.e. no mathematical methods? </p>
<p>I'd also appreciate any mentions relating to sciences, social sciences or ordinary life.</p>
| Godot | 38,875 | <p>Common sense is the backbone of the whole of mathematics.</p>
<p>It is fair to say that nowadays all branches of mathematics are axiomatic theories.
To start building an axiomatic theory you must decide what your axioms are, what your axiom schemes are, and what your rules of inference are. When you have finished setting up those things you can, in some sense, forget about common sense. But to make the right (right = at least interesting; usually you know what is right or what you need) choice of axioms, axiom schemes and rules of inference you will need common sense, because it is your only tool at that moment of the very beginning! You cannot create something from nothing (unless you are God :), and you cannot start from nowhere. Common sense is the right starting point for mathematics, even if mathematics is capable of taking you far, far beyond it.</p>
|
232,424 | <p>Are there any claims and counterclaims to mathematics being in some certain cases a result of common sense thinking? Or can some mathematical results be figured out using just pure common sense i.e. no mathematical methods? </p>
<p>I'd also appreciate any mentions relating to sciences, social sciences or ordinary life.</p>
| Ronnie Brown | 28,586 | <p>I would like to write about the problem of "expression", from my own experience. In the 1960s, it seemed to me that, from a commonsensical viewpoint, there should be some way of expressing that
<img src="https://i.stack.imgur.com/7a8Ig.jpg" alt="array"></p>
<p>in the above diagram the big square should be the "composition" of all the little squares. Then I found that Charles Ehresmann had defined double categories, which did the job. </p>
<p>The next question was: what is a commutative cube? For a square with sides <span class="math-container">$a,b,c,d$</span> the answer might be <span class="math-container">$ab=cd$</span>. But if we want the "faces" of a cube to commute? For all this to be significant one has to move from sets with a total operation to sets with partial operations! </p>
<p>The point I am trying to make is that one function of mathematics is to develop language for rigorous expression, deduction and calculation, and this may take a while to develop. For example Descartes' notion of a graph of a function is now a commonplace, even common sense; but it may take an intellectual leap to make something into "common sense". </p>
<p>Another example is the introduction of the zero, and Arabic numerals. </p>
<p>Are higher dimensions than <span class="math-container">$3$</span> common sense? See the book "Flatland"! (downloadable). </p>
<p>Worth discussing is: have there been revolutions in mathematics? See discussions on the work of Thomas Kuhn on "Revolutions in Science". </p>
|
2,321,667 | <p>Patrick Suppes in his book <a href="http://rads.stackoverflow.com/amzn/click/0486406873" rel="nofollow noreferrer">Introduction to Logic</a> on page 63 asks the reader to prove the statement
$$\forall x\forall y\forall z(xPy\land yPz\to xPz)$$ from the theory which he calls "Theory of rational behavior". The statement is based on the notion of weak preference $xQy$, its two properties (lines 1 and 2) and a definition of strict preference $xPy$ (line 3):
$$\begin{array}{llll}
\{1\}&(1)&\forall x\forall y\forall z(xQy\land yQz\to xQz)&\text{Transitive property} \\
\{2\}&(2)&\forall x\forall y(xQy \lor yQx)&\text{Axiom of order}\\
\{3\}&(3)&\forall x\forall y(xPy\leftrightarrow \neg yQx)&\text{Definition of strict preference}\\
\{4\}&(4) & xPy\land yPz & \text{Assumption} \\
\{3,4\}&(5) & \neg yQx\land \neg zQy & \text{from (3)(4) using U.S.} \\
\{2,3,4\}&(6) & xQy & \text{from (2)(5) using U.S.} \\
\{2,3,4\}&(7) & yQz & \text{from (2)(5) using U.S.} \\
\{1,2,3,4\}&(8) & xQz & \text{from (1)(6)(7)} \\
\end{array}
$$
U.S. stands here for the <em>Rule of Universal Specification</em></p>
<p>$xPz$ is equal to $\neg zQx$ by the definition of strict preference on line $3$. So we want to show that $\neg zQx$ logically follows from the premises $\{1,2,3,4\}$ and then use conditioning on line $(4)$ and <em>The Rule of Universal Generalisation</em> to prove the given statement. But from $xQz\land(xQz \lor zQx)$ we cannot conclude $\neg zQx$ because according to the <em>Axiom of order</em> both $xQz$ and $zQx$ can be <em>true</em> together.</p>
<p>I've tried the method of interpretations to check validity of the statement that has to be proven but haven't found any, such that its antecedent would be <em>true</em> and conclusion would be <em>false</em>.</p>
<p>If my derivation is fine so far, I'm looking for tips which will help me to get to the finish line here. Will appreciate any feedback.</p>
| User4407 | 81,495 | <p>I came to the same conclusion using your approach, namely, that the <em>axiom of order</em> wasn't enough to prove $\neg zQx$. However, I found that it could be accomplished by using the <em>transitive property</em> along with <em>modus tollens</em>. That leads to the following idea: $\neg yQx \to (yQz \to \neg zQx)$. That's a summary of lines 14 through 17:</p>
<p>\begin{array}{lllll}
& \{1\} & 1. & \forall x\forall y\forall z(xQy\land yQz\to xQz) & \text{ Transitive property }\\
& \{2\} & 2. & \forall x\forall y(xQy \lor yQx) & \text{ Axiom of order }\\
& \{3\} & 3. & \forall x\forall y(xPy\leftrightarrow \neg yQx) & \text{ Definition of strict preference }\\
& \{4\} & 4. & aPb\land bPc & \text{ Assum. }\\
& \{4\} & 5. & aPb & \text{ $\land$E }\\
& \{4\} & 6. & bPc & \text{ $\land$E }\\
& \{3\} & 7. & aPb\leftrightarrow \neg bQa & \text{ 3 UE }\\
& \{3\} & 8. & bPc\leftrightarrow \neg cQb & \text{ 3 UE }\\
& \{3,4\} & 9. & \neg bQa & \text{ 7 $\leftrightarrow$E, 5,7 MP }\\
& \{3,4\} & 10. & \neg cQb & \text{ 8 $\leftrightarrow$E, 6,8 MP }\\
& \{2\} & 11. & bQc \lor cQb & \text{ 2 UE }\\
& \{2\} & 12. & \neg cQb \to bQc & \text{ 11 MI }\\
& \{2,3,4\} & 13. & bQc & \text{ 10,12 MP }\\
& \{1\} & 14. & (bQc \land cQa) \to bQa & \text{ 1 UE }\\
& \{1,3,4\} & 15. & \neg (bQc \land cQa) & \text{ 9,14 MT }\\
& \{1,3,4\} & 16. & \neg bQc \lor \neg cQa & \text{ 15 DM }\\
& \{1,3,4\} & 17. & bQc \to \neg cQa & \text{ 16 MI }\\
& \{1,2,3,4\} & 18. & \neg cQa & \text{ 13,17 MP }\\
& \{3\} & 19. & \neg cQa \to aPc & \text{ 3 UE, $\leftrightarrow$E }\\
& \{1,2,3,4\} & 20. & aPc & \text{ 18,19 MP }\\
& \{1,2,3\} & 21. & (aPb\land bPc) \to aPc & \text{ 4,20 CP }\\
& \{1,2,3\} & 22. & \forall x\forall y\forall z(xPy\land yPz\to xPz) & \text{ 21 UI }\\
\end{array}</p>
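<p>As a machine check of this derivation, the following Python sketch (my own, not from Suppes) enumerates every relation $Q$ on a three-element domain, keeps those satisfying axioms (1) and (2), defines $P$ by (3), and verifies that $P$ is transitive in every such model:</p>

```python
from itertools import product

dom = range(3)
pairs = [(x, y) for x in dom for y in dom]

models = 0
for bits in product([False, True], repeat=len(pairs)):
    Q = dict(zip(pairs, bits))
    # axiom (1): Q is transitive
    if any(Q[(x, y)] and Q[(y, z)] and not Q[(x, z)]
           for x in dom for y in dom for z in dom):
        continue
    # axiom (2): Q is total (xQy or yQx for all x, y)
    if any(not Q[(x, y)] and not Q[(y, x)] for x in dom for y in dom):
        continue
    # definition (3): xPy iff not yQx
    P = {(x, y): not Q[(y, x)] for (x, y) in pairs}
    # the theorem: P is transitive in this model
    assert not any(P[(x, y)] and P[(y, z)] and not P[(x, z)]
                   for x in dom for y in dom for z in dom)
    models += 1

print(models, "models checked, all satisfy the theorem")
```

<p>This is of course only a finite sanity check on a three-element domain, not a substitute for the derivation itself.</p>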
|
38,193 | <p>For simplicity, let me pick a particular instance of Gödel's Second Incompleteness
Theorem:</p>
<p>ZFC (Zermelo-Fraenkel Set Theory plus the Axiom of Choice, the usual foundation of mathematics) does not prove Con(ZFC), where Con(ZFC) is a formula that expresses that
ZFC is consistent.</p>
<p>(Here ZFC can be replaced by any other sufficiently good, sufficiently strong set of axioms,
but this is not the issue here.)</p>
<p>This theorem has been interpreted by many as saying "we can never know whether mathematics is consistent" and has encouraged many people to try and prove that ZFC (or even PA) is in fact inconsistent. I think a mainstream opinion in mathematics (at least among mathematician who think about foundations) is that we believe that there is no problem with
ZFC, we just can't prove the consistency of it.</p>
<p>A comment that comes up every now and then (also on mathoverflow), which I tend to agree with, is this:</p>
<p>(*) "What do we gain if we could prove the consistency of (say ZFC) inside ZFC? If ZFC were inconsistent, it would prove its consistency just as well."</p>
<p>In other words, there is no point in proving the consistency of mathematics by a mathematical proof, since if mathematics were flawed, it would prove anything, for instance its own non-flawedness.
Hence such a proof would not actually improve our trust in mathematics (or ZFC, following the particular instance).</p>
<p>Now here is my question: Does the observation (*) imply that the only advantage of the Second Incompleteness Theorem over the first one is that we now have a specific sentence
(in this case Con(ZFC)) that is undecidable, which can be used to prove theorems like
"the existence of an inaccessible cardinal is not provable in ZFC"?
In other words, does this reduce the Second Incompleteness Theorem to a mere technicality
without any philosophical implication that goes beyond the First Incompleteness Theorem
(which states that there is some sentence <span class="math-container">$\phi$</span> such that neither <span class="math-container">$\phi$</span> nor <span class="math-container">$\neg\phi$</span> follow from ZFC)?</p>
| Andreas Blass | 6,794 | <p>For the philosophical point encapsulated in (*) in the question, it seems that corollaries of the second incompleteness theorem are more relevant than the theorem itself. If we had doubts about the consistency of ZFC, then a proof of Con(ZFC) carried out in ZFC would indeed be of little use. But a proof of Con(ZFC) carried out in a more reliable system, like Peano arithmetic or primitive recursive arithmetic, would (before Gödel) have been useful, and I think this is what Hilbert was hoping for. Gödel's second incompleteness theorem tells us that this sort of thing can't happen (unless even the more reliable system is inconsistent).</p>
|
4,196,583 | <p>More precisely:</p>
<blockquote>
<p><strong>Definition.</strong><br />
A subset <span class="math-container">$S \subset \Bbb R$</span> is called <em>good</em> if the following hold:</p>
<ol>
<li>if <span class="math-container">$x, y \in S$</span>, then <span class="math-container">$x + y \in S,$</span> and</li>
<li>if <span class="math-container">$(x_n)_{n = 1}^\infty \subset S$</span> is a sequence in <span class="math-container">$S$</span> and <span class="math-container">$\sum_{n = 1}^\infty x_n$</span> converges, then <span class="math-container">$\sum_{n = 1}^\infty x_n \in S$</span>.</li>
</ol>
</blockquote>
<p>In other words, a good subset is closed under finite sums and countable sums whenever the sum does exist.</p>
<blockquote>
<p><strong>Question:</strong> What are all the good subsets of <span class="math-container">$\Bbb R$</span>?</p>
</blockquote>
<hr />
<p><strong>Origin</strong></p>
<p><a href="https://math.stackexchange.com/questions/4194005/">This question</a> was asked recently and <a href="https://math.stackexchange.com/users/152568/conifold">Conifold</a> had <a href="https://math.stackexchange.com/questions/4194005/does-1-frac14-frac19-frac116-cdots-frac-pi26-imply-that-the-rational#comment8699081_4194005">commented</a> how the only subsets of <span class="math-container">$\Bbb R$</span> closed under countable summations are <span class="math-container">$\varnothing$</span> and <span class="math-container">$\{0\}$</span>. It was then natural to ask "closed under countable summation, assuming it exists".</p>
<hr />
<p><strong>My thoughts</strong></p>
<p>Here are some examples of familiar sets which are good: <span class="math-container">$\varnothing$</span>, <span class="math-container">$\{0\}$</span>, <span class="math-container">$\Bbb Z_{\geq 0}$</span>, <span class="math-container">$\Bbb Z_{> 0}$</span>, <span class="math-container">$\Bbb Z$</span>, <span class="math-container">$n\Bbb Z$</span>, <span class="math-container">$\Bbb R$</span>.<br />
We even have the following:<br />
<span class="math-container">$$r \Bbb Z := \{rn : n \in \Bbb Z\},$$</span>
where <span class="math-container">$r$</span> is any real number.</p>
<p>But the examples apart from <span class="math-container">$\Bbb R$</span> are good for a trivial reason: Those are sets that are closed under finite summation and have the property that they are discrete enough so that the only convergent sums are those where the terms are eventually <span class="math-container">$0$</span>. (In the case of <span class="math-container">$\Bbb Z_{> 0}$</span>, there is no such sum.)</p>
<p>On the same note, intervals of the form <span class="math-container">$[a, \infty)$</span> and <span class="math-container">$(a, \infty)$</span> are good for <span class="math-container">$a > 0$</span>.</p>
<p>In general, suppose that <span class="math-container">$S$</span> satisfies the following: <span class="math-container">$S$</span> is closed under finite sums and there exists <span class="math-container">$\epsilon > 0$</span> such that <span class="math-container">$|s| > \epsilon$</span> for all <span class="math-container">$s \in S$</span>. Then, <span class="math-container">$S$</span> and <span class="math-container">$S \cup \{0\}$</span> are good.</p>
<p>Another example: <span class="math-container">$[0, \infty)$</span> and <span class="math-container">$(0, \infty)$</span> are good and do not follow the criteria above.</p>
<hr />
<p>The following are some examples of <em>not</em> good sets: <span class="math-container">$\Bbb Q$</span>, <span class="math-container">$\Bbb R \setminus \Bbb Q$</span>, <span class="math-container">$\Bbb R \setminus \Bbb Z$</span>, a proper cofinite subset of <span class="math-container">$\Bbb R$</span>, any bounded set apart from <span class="math-container">$\{0\}$</span> and <span class="math-container">$\varnothing$</span>. In fact, excluding <span class="math-container">$\Bbb Q$</span>, the other ones are not even closed under finite sums.<br />
In the same vein as <span class="math-container">$\Bbb Q$</span>, we also have the set of real algebraic numbers which is not good (but is indeed closed under finite sums).</p>
<p>Here's a nontrivial one: Consider the set <span class="math-container">$$B = \left\{\frac{1}{2^k} : k \in \Bbb Z_{> 0}\right\}.$$</span> Then, <em>any</em> countable subset of <span class="math-container">$\Bbb R$</span> that contains <span class="math-container">$B$</span> is not good.<br />
<em>Proof.</em> <span class="math-container">$(0, 1]$</span> is uncountable and every element in it can be written as a sum of elements of <span class="math-container">$B$</span>. (Binary expansions.) <span class="math-container">$\Box$</span></p>
<p>A bit more thought actually shows that more is true: Since <span class="math-container">$(0, 1]$</span> is contained in the set of all possible sums, any good set containing <span class="math-container">$B$</span> must contain all of <span class="math-container">$(0, \infty)$</span>.</p>
<p>Another one: let <span class="math-container">$(a_n)_{n \ge 1}$</span> be any real sequence such that <span class="math-container">$\sum a_n$</span> converges conditionally. Then, the only good subset containing <span class="math-container">$\{a_n\}_{n \ge 1}$</span> is <span class="math-container">$\Bbb R$</span>, by the Riemann rearrangement theorem.</p>
<hr />
<p><strong>Additional comments</strong></p>
<p>There are some variants that come to mind. Not sure if any of them are any more interesting. But I'd be happy with an answer that answers only one of the following variants as well.</p>
<ol>
<li>What if I exclude point 1. from my definition? Let's call such a set nice.<br />
In that case, the set <span class="math-container">$\{1\}$</span> is nice but not good. (Of course, if <span class="math-container">$0 \in S$</span>, then nice is equivalent to good.) What are the nice subsets of <span class="math-container">$\Bbb R$</span>? What are the nice subsets which are not good?</li>
<li>What if I consider only those sums which have converge absolutely?</li>
</ol>
<hr />
<p><strong>More observations</strong><br />
(These are edits, which I'm adding later)</p>
<p>Here are some additional observations:</p>
<ol>
<li>Arbitrary intersection of good sets is good and <span class="math-container">$\Bbb R$</span> is good. Thus, it makes sense to talk about the smallest good set containing a given subset of <span class="math-container">$\Bbb R$</span>.<br />
So, given a subset <span class="math-container">$A \subset \Bbb R$</span>, let us call this smallest good set to be the <em>good set generated by <span class="math-container">$A$</span></em> and notationally denote it as <span class="math-container">$\langle A \rangle$</span>.<br />
(In particular, <span class="math-container">$A$</span> is good iff <span class="math-container">$A = \langle A \rangle$</span>.)</li>
<li><span class="math-container">$A \subset B \implies \langle A \rangle \subset \langle B \rangle$</span>.</li>
<li><span class="math-container">$\langle (0, \epsilon) \rangle = (0, \infty)$</span> and similar symmetric results.</li>
<li>Suppose <span class="math-container">$S \subset (0, \infty)$</span> is dense in <span class="math-container">$(0, \infty)$</span>, then <span class="math-container">$\langle S \rangle = (0, \infty)$</span>.<br />
Indeed, pick <span class="math-container">$a_0 \in (0, \infty)$</span>.<br />
Then, there exists <span class="math-container">$s_1 \in (a_0/2, a_0)$</span>.<br />
Put <span class="math-container">$a_1 := a - s_1$</span>. Then, <span class="math-container">$a_1 \in (0, a_0/2)$</span>.<br />
Now pick <span class="math-container">$s_2 \in (a_1/2, a_1)$</span> and so on. Then, <span class="math-container">$\sum s_n = a_0$</span>.<br />
Symmetric results apply. In particular, the only dense good subset of <span class="math-container">$\Bbb R$</span> (or <span class="math-container">$\Bbb R^+$</span> or <span class="math-container">$\Bbb R^-$</span>) is the whole set.</li>
<li>Suppose <span class="math-container">$S \subset (0, \infty)$</span> contains arbitrarily small elements (i.e., <span class="math-container">$S \cap (0, \epsilon) \neq \varnothing$</span> for all <span class="math-container">$\epsilon > 0)$</span>), then <span class="math-container">$\langle S \rangle = (0, \infty)$</span>.<br />
To see this, let <span class="math-container">$a > 0$</span> be arbitrary. Pick <span class="math-container">$s_1 \in (0, a)$</span>.<br />
Let <span class="math-container">$n_1$</span> be the largest positive integer such that <span class="math-container">$n_1 s_1 < a$</span>.<br />
Then pick <span class="math-container">$s_2 \in (0, a - n_1s_1)$</span>. Let <span class="math-container">$n_2$</span> be the largest such <span class="math-container">$n_2s_2 < a - n_1s_1$</span> and so on.<br />
Then, <span class="math-container">$$\underbrace{s_1 + \cdots + s_1}_{n_1} + \underbrace{s_2 + \cdots + s_2}_{n_2} + \cdots = a.$$</span></li>
</ol>
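<p>Observation 4's greedy construction is easy to simulate. In the Python sketch below (my own illustration, not part of the question), density of $S$ is modeled by drawing each term uniformly from $(a_n/2, a_n)$; the remainder at least halves each step, so the partial sums converge to the target:</p>

```python
import random

def greedy_sum(target, steps=60, rng=random.Random(0)):
    """Observation 4: repeatedly pick s in (remaining/2, remaining) from a
    dense set (modeled here by a uniform draw) and subtract; the picked
    terms sum to target."""
    remaining = target
    total = 0.0
    for _ in range(steps):
        s = rng.uniform(remaining / 2, remaining)  # density guarantees such an s
        total += s
        remaining -= s  # remaining at least halves each step
    return total

print(greedy_sum(3.7))  # approximately 3.7, up to a ~2^-60 remainder
```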
| qualcuno | 362,866 | <p>Not quite what you want, but too long for a comment:</p>
<blockquote>
<p><strong>Proposition.</strong> If <span class="math-container">$S$</span> is a subgroup, then <span class="math-container">$S$</span> is good iff it is closed.</p>
</blockquote>
<p><em>Proof.</em> Suppose that <span class="math-container">$S$</span> is good and consider a
converging sequence <span class="math-container">$s_n \to s$</span> with each <span class="math-container">$s_n \in S$</span>; setting <span class="math-container">$t_n :=
s_{n+1}-s_n \in S$</span> we obtain</p>
<p><span class="math-container">$$S \ni \sum_{n \geq 1}t_n = \lim_{n\to \infty}(s_{n+1}-s_1) = s-s_1.$$</span></p>
<p>Thus <span class="math-container">$s = (s-s_1) + s_1 \in S$</span>. Conversely, suppose that <span class="math-container">$S$</span> is closed and take a converging series <span class="math-container">$\sum_{n \geq 1}s_n$</span> with terms belonging to <span class="math-container">$S$</span>. Since each partial sum is in <span class="math-container">$S$</span>, the series is a limit point of <span class="math-container">$S$</span> (and thus belongs to <span class="math-container">$S$</span>). <span class="math-container">$\square$</span></p>
<p>A closed subgroup of <span class="math-container">$\Bbb R$</span> is of the form <span class="math-container">$\alpha \Bbb Z$</span> for some <span class="math-container">$\alpha > 0$</span>, <span class="math-container">$\Bbb R$</span> or <span class="math-container">$\{0\}$</span>, see <a href="https://math.stackexchange.com/questions/505429/closed-subgroups-of-mathbbr">here</a>.</p>
<p>Also, a remark: by <a href="https://en.wikipedia.org/wiki/Riemann_series_theorem" rel="nofollow noreferrer">Riemann's rearranging theorem</a> any good subset that contains a sequence whose series is conditionally convergent must be <span class="math-container">$\Bbb R$</span>.</p>
<p>Maybe if we consider the subgroup generated by a good subset <span class="math-container">$S$</span> we can get some more information. It is not obvious to me whether this remains to be a good subset, though.</p>
|
4,117,409 | <blockquote>
<p>Prove or disprove: if for every <span class="math-container">$n\in\Bbb{N}, |a_{n+1}-a_n|<\frac{1}{n^2}$</span> then <span class="math-container">$a_n$</span> converges.</p>
</blockquote>
<p>I think this is true, and tried using the Cauchy criterion - I take some $\varepsilon > 0$. There exists $N$ s.t. $\varepsilon > \frac{1}{N^2}$. So, for every $m>n>N$, we get that:
<span class="math-container">$|a_m-a_n|\leq|a_m-a_{m-1}|+...+|a_{n+1}-a_n|<\frac{1}{m^2}+...+\frac{1}{n^2}<\frac{m-n}{N^2}$</span>. Now I think the righthand side tends to <span class="math-container">$0$</span>, but this feels like cheating. Am I correct or is there something I'm missing?</p>
| TheSilverDoe | 594,484 | <p>No need to use Cauchy criterion here.</p>
<p>The fact that <span class="math-container">$|a_{n+1}-a_n| \leq 1/n^2$</span> implies that the <em>series</em> <span class="math-container">$\sum (a_{n+1}-a_n)$</span> is absolutely convergent, hence convergent, hence the sequence <span class="math-container">$(a_n)$</span> converges.</p>
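<p>A numerical illustration of this telescoping argument (a sketch of my own, with adversarially alternating increments of size just under $1/n^2$):</p>

```python
# build a_n with |a_{n+1} - a_n| < 1/n^2 using alternating increments
terms = []
a = 0.0
for n in range(1, 200001):
    terms.append(a)
    a += ((-1) ** n) * 0.999 / n**2  # |increment| < 1/n^2

# tail bound: |a_m - a_n| <= sum_{k >= n} 1/k^2 <= 1/(n-1), so (a_n) is Cauchy
print(terms[100000], terms[-1])  # the two values agree to several digits
```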
|
3,866,285 | <p>Suppose I'd like to find the coefficient of <span class="math-container">$x^{l}$</span> in the expansion of <span class="math-container">$(1+x+x^{2}+...+x^{n})^{m}$</span>, where <span class="math-container">$n$</span> and <span class="math-container">$m$</span> are given positive integers, for some given integer <span class="math-container">$l$</span> such that <span class="math-container">$n < l < mn$</span>. Is there a straightforward formula (in terms of some multinomial coefficient, or similar)?</p>
| user158293 | 158,293 | <p>Using the ideas of a linked post, namely <span class="math-container">$\left(\sum_{s=0}^n x^s\right)^m=(1-x^{n+1})^m (1-x)^{-m}$</span>, I found a much more concise answer.
Again using $L$ in place of $l$, the coefficient of $x^L$ is
$\sum_{k=0}^{\min(m,\,\lfloor L/(n+1)\rfloor)}(-1)^k\binom{m}{k}\binom{m+L-k(n+1)-1}{L-k(n+1)}$</p>
|
854,438 | <p>I've read in a lot of places how there was a "foundational crisis" in defining the "foundations of mathematics" in the 20th century. Now, I understand that mathematics was very different then, I suppose the ideas of Church, Godel, and Turing were either in their infancy, or not well-known, but I still hear this kind of language a lot today, and I don't know why.</p>
<p>From my perspective, mathematics is essentially the study of any kind of formal system understandable by an intelligent, yet finite being. By definition then, the only reasonable "foundation for all of mathematics" must be a Turing complete language, and the choice of what specific language is basically arbitrary (except for concerns of elegance). The idea of creating a finite set of axioms that describes "all of mathematics" seems fruitless to me, unless what is being described is a Turing complete system.</p>
<p>Is this idea of finding a foundation for all of "mathematics" still prominent today? I suppose I can understand why this line of reasoning was once prominent, but I don't see why it is relevant today. Is the continuum hypothesis true? Well, do you want it to be?</p>
| Kile Kasmir Asmussen | 72,934 | <p>Mathematics is a science of discovery. Its domain is the Mathematical Principle, a peculiar entity or feature of the universe we inhabit; one might formulate the principle as follows:</p>
<blockquote>
<p>Given a string of symbols and well defined symbolic transformations, applying a well specified series of transformations to the initial string always yields the same result.</p>
</blockquote>
<p>All mathematics is symbol manipulation (usually we prefer to work in syntax trees, but there is an isomorphism between syntax trees and strings.) We usually start by designating a number of strings as starting points (axioms) and a number of transformations (inference rules), and then we add the rule that any derivable string is a valid starting point too (theorems.)</p>
<p>Now, this very general definition includes weird systems such as the MIU system by Douglas Hofstadter. Usually we are a bit more refined.</p>
<p>In the more refined form we use a stratified syntax of objects and statements, with functions mapping objects to objects, predicates or relations mapping objects to statements, and logical connectives mapping statements to statements.</p>
<p>The most common inference rule in this system is modus ponens in the logic, and it turns out most interesting things can be derived from that.</p>
<p>The question is now what objects to use and how to gain objects to use. The usual way to gain objects is using quantifiers; the universal and existential are common, and most other quantifiers can be reduced to these two.</p>
<p>Now, what objects do we use? It is all well and fine to write down a lot of formulae in first-order logic, but we need to actually make them <em>do</em> something, and this is where the foundational crisis comes in.</p>
<p>Because they just plain didn't know what to do here. If you pick up Russell & co.'s Principia Mathematica, you'll find a lot of outdated ideas, because the modern techniques just plain weren't invented back then.</p>
<p>What ideas? The purely set-theoretic ordered pair. Principia uses several dozen pages on duplicating all the set axioms but for relations. Only in 1914 did Norbert Wiener come up with {{{a},{}},{{b}}} for ordered pairs, and only in 1921 did Kazimierz Kuratowski come up with {{a},{a,b}}, using the axiom of pairing.</p>
<p>Really it was Zermelo & Fraenkel and Gödel who saved the day, with the eponymous ZF axioms and the Completeness Theorem solidifying the basis for first-order logic. This was as late as the 1920s. We have only had well-founded mathematics for eight decades!</p>
<p>The whole deal comes down to Model Theory, which studies the interplay of a universe of elements and a first-order syntax (composed of the two quantifiers, a logic, equality and some relations and predicates, some constants, and some functions.)</p>
<p>The power of ZF comes from its ability to quote formulas and prove their validity for specific sets. Using this we can quote each single axiom of Peano Arithmetic and prove it is true for the von Neumann set-theoretic naturals; and suddenly Peano Arithmetic has a model. ZF is incredibly powerful because, like this, it contains almost every theory, and many nice properties can be proven of theories and universes in general.</p>
|
4,249,794 | <p>I have to determine whether this graph is bipartite or not:</p>
<p><img src="https://i.stack.imgur.com/AjxQzl.png" alt="" /></p>
<p>I have found an answer but I am not sure about it. If we divide the vertex set into <span class="math-container">$\{a,d,c,h\}$</span> and <span class="math-container">$\{b,f,e,g\}$</span>, then it fulfills the bipartite property.
Is it correct?</p>
| Holy Moly | 959,258 | <p>Note that a graph is bipartite iff all its cycles have even length. Thus your graph is bipartite. To check that all cycles of your graph are actually of even length, we may apply the following result that can be proved by induction. Given a graph <span class="math-container">$G$</span> built up from even cycles connected by one edge, all cycles of <span class="math-container">$G$</span> are even.</p>
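<p>Since the picture is not reproduced here, the edge list below is hypothetical, chosen only to be consistent with the proposed bipartition <span class="math-container">$\{a,d,c,h\}$</span> / <span class="math-container">$\{b,f,e,g\}$</span>; the checker itself (BFS 2-coloring, equivalent to the no-odd-cycle criterion above) is general:</p>

```python
# Bipartiteness test by 2-coloring with BFS; a graph is bipartite
# iff such a coloring exists iff it has no odd cycle.
from collections import deque

def is_bipartite(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:   # odd cycle found
                    return False, {}
    return True, color

# Hypothetical edges, all running between {a,d,c,h} and {b,f,e,g}:
edges = [('a','b'), ('b','c'), ('c','e'), ('e','d'),
         ('d','f'), ('f','h'), ('h','g'), ('g','a')]
ok, coloring = is_bipartite(edges)
print(ok, coloring)                          # True, with the two color classes
print(is_bipartite([(1,2), (2,3), (3,1)])[0])  # a triangle (odd cycle): False
```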
|
59,954 | <p>I can rather easily imagine that some mathematician/logician had the idea to symbolize "it <strong>E</strong> xists" by $\exists$ - a reversed E - and after that some other (imitative) mathematician/logician had the idea to symbolize "for <strong>A</strong> ll" by $\forall$ - a reversed A. Or vice versa. (Maybe it was one and the same person.)</p>
<p>What is hard (for me) to imagine is, how the one who invented $\forall$ could fail to consider the notations $\vee$ and $\wedge$ such that today $(\forall x \in X) P(x)$ must be spelled out $\bigwedge_{x\in X} P(x)$ instead of $\bigvee_{x\in X}P(x)$? (Or vice versa.)</p>
<p>Since I know that this is not a real question, let me ask it like this: Where can I find more about this observation?</p>
| GEdgar | 442 | <p>The four types of propositions used in the classical syllogisms were called A, E, I, O. Statements of type A were "All p are q", and statements of type I were "Some p are q". So of course, centuries later, mathematicians (who had a classical education) used A and E for these quantifiers, then later turned them upside down to avoid confusion with letters used for other things. </p>
<p>By the way: E and O were the negative forms, "No p are q" and "Some p are not q" = "Not all p are q"; the traditional mnemonic is that A and I come from the vowels of the Latin <em>affirmo</em>, and E and O from <em>nego</em>.</p>
|
2,089,502 | <blockquote>
<p>How many numbers are there from $1$ to $1400$ which maintain these conditions:
when divided by $5$ the remainder is $3$ and when divided by $7$ the remainder is $2$?</p>
</blockquote>
<p>How can I start? I am a newbie in modular arithmetic. I can just figure out that the number $= 5k_1+3 = 7k_2+2$. </p>
| Bernard | 202,857 | <p><em>Mean value theorem</em>: there exists $c$ between $x$ and $y$ such that $\;\cos x-\cos y=-\sin c(x-y)$, hence
$$\lvert\cos x-\cos y\rvert=\lvert\sin c\rvert\lvert x-y\rvert\le\lvert x-y\rvert. $$</p>
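<p>Back to the original counting question: a brute-force check is immediate, and it agrees with the modular analysis, since the two congruences combine (by the Chinese Remainder Theorem) to <span class="math-container">$n \equiv 23 \pmod{35}$</span>, giving <span class="math-container">$1400/35 = 40$</span> solutions:</p>

```python
# Count n in [1, 1400] with n % 5 == 3 and n % 7 == 2.
hits = [n for n in range(1, 1401) if n % 5 == 3 and n % 7 == 2]
print(len(hits), hits[:3])   # 40 solutions; the smallest is 23, then steps of 35
```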
|
1,612,808 | <p>Suppose that $X$ is a finite $G$-set. A group $G$ is of prime power order if $|G|=p^n$ for $p$ prime.</p>
<p>The fixed point set $X_G=\{x\in X : gx=x$ $\forall g\in G\}$.</p>
<p>I'm asked to prove that $|X|=|X_G|$ (mod $p$), but I'm unsure of how I should start.</p>
| Tsemo Aristide | 280,301 | <p>Hint: To show this, remark that if $x$ is not fixed by $G$, the cardinality of $Orb_x$, the orbit of $x$, is $\mid Orb_x\mid =\mid G\mid/\mid G_x\mid$, where $G_x$ is the subgroup of elements of $G$ which fix $x$. Lagrange implies that $\mid G_x\mid$ divides $\mid G\mid=p^n$, so $\mid Orb_x\mid \equiv 0 \pmod p$ since $G_x\neq G$. The result follows from the fact that the cardinality of $X$ is the sum of the cardinalities of the disjoint orbits: $\mid X\mid =\mid X_G\mid+\sum_{\text{orbits with } G_x\neq G}\mid Orb_x \mid$; since each such $\mid Orb_x\mid \equiv 0 \pmod p$, we get $\mid X_G\mid \equiv \mid X\mid \pmod p$.</p>
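<p>A toy numerical illustration of the congruence (the permutation below is an arbitrary choice generating a cyclic group of order <span class="math-container">$p=3$</span>):</p>

```python
# G = <sigma> cyclic of order 3 acting on X = {0,...,7};
# sigma has two 3-cycles and two fixed points, so |X| = 8 and |X_G| = 2,
# and indeed 8 and 2 are congruent mod 3.
p = 3
X = list(range(8))
sigma = {0: 1, 1: 2, 2: 0, 3: 4, 4: 5, 5: 3, 6: 6, 7: 7}  # order 3

# Since G is generated by sigma, x is fixed by G iff sigma(x) == x.
fixed = [x for x in X if sigma[x] == x]

def orbit(x):
    o, y = {x}, sigma[x]
    while y != x:
        o.add(y)
        y = sigma[y]
    return o

# Each non-trivial orbit has size divisible by p.
sizes = sorted(len(orbit(x)) for x in X)
print(len(X) % p, len(fixed) % p)   # both 2
```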
|
30,220 | <p>Jeremy Avigad and Erich Reck claim that one factor leading to abstract mathematics in the late 19th century (as opposed to concrete mathematics or hard analysis) was <em>the use of more abstract notions to obtain the same results with fewer calculations.</em></p>
<p>Let me quote them from their remarkable historical <a href="https://www.andrew.cmu.edu/user/avigad/Papers/infinite.pdf" rel="nofollow noreferrer">paper</a> "Clarifying the nature of the infinite: the development of metamathematics and proof theory".</p>
<blockquote>
<p>The gradual rise of the opposing viewpoint, with its emphasis on conceptual
reasoning and abstract characterization, is elegantly chronicled by <a href="http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Stein</a> (<a href="https://web.archive.org/web/20140415224643/http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Wayback Machine</a>),
as part and parcel of what he refers to as the “second birth” of mathematics.
The following quote, from Dedekind, makes the difference of opinion very clear:</p>
</blockquote>
<blockquote>
<blockquote>
<p>A theory based upon calculation would, as it seems to me, not offer
the highest degree of perfection; it is preferable, as in the modern
theory of functions, to seek to draw the demonstrations no longer
from calculations, but directly from the characteristic fundamental
concepts, and to construct the theory in such a way that it will, on
the contrary, be in a position to predict the results of the calculation
(for example, the decomposable forms of a degree).</p>
</blockquote>
</blockquote>
<blockquote>
<p>In other words, from the Cantor-Dedekind point of view, abstract conceptual
investigation is to be preferred over calculation.</p>
</blockquote>
<p><strong>What are concrete examples from concrete fields avoiding calculations by the use of abstract notions?</strong> (Here "calculation" means any type of routine technicality.) Category theory and topoi may provide some examples.</p>
<p>Thanks in advance.</p>
| Daniel Litt | 6,950 | <p>A wonderful example is the proof of the Poincare Lemma I sketch <a href="http://www.thehcmr.org/issue1_2/poincare_lemma.pdf" rel="nofollow noreferrer">here</a> (<a href="https://web.archive.org/web/20141017035427/http://www.thehcmr.org/issue1_2/poincare_lemma.pdf" rel="nofollow noreferrer">Wayback Machine</a>), as compared to the proof in e.g. Spivak's Calculus on Manifolds. The latter is extremely computational and, IIRC, not illuminating; it proves the de Rham cohomology of a star-shaped domain vanishes. The former proof shows (the stronger result) that the de Rham complex <span class="math-container">$\Lambda_{DR}(M)$</span> is null-homotopic for <span class="math-container">$M$</span> contractible; while this does involve some computation, it is very simple and conceptual. The proof is about half a page long in total, and could probably be shortened. It was shown to me by Professor Dennis Gaitsgory; I haven't seen it elsewhere, though I'm sure it is in the literature.</p>
<p>You can skip to the end of the paper (page 26) to see the proof; much of it is aimed at an undergraduate audience that has not yet seen any homological algebra, or even more conceptual linear algebra.</p>
<p>Essentially, the proof works by 1) Noting that the de Rham complex construction is functorial, via pullback of differential forms; 2) Noting that a homotopy of maps <span class="math-container">$M\to N$</span> induces a homotopy of maps of chain complexes; and 3) Noting that for <span class="math-container">$M$</span> contractible, <span class="math-container">$\operatorname{id}_M$</span> is homotopic to a constant map, and thus the pullback via <span class="math-container">$\operatorname{id}_M$</span> is both zero and the identity on cohomology.</p>
|
1,414,316 | <p>I am trying to optimize distance from point to plane using Lagrange multiplier.</p>
<p>Usually for such problems you are given specific point like (1,2,3) in 3D, and then an exact plane which is just the subject of Lagrange. But what I have here doesn't specify values for point and plane.</p>
<p>It says the problem happens in a D-dimensional space. It denotes the point as <span class="math-container">$X$</span> and the plane as <span class="math-container">$w^Tx+b=0$</span>, where <span class="math-container">$w^T$</span> is the transpose of <span class="math-container">$w$</span> (which is just an n-by-1 matrix, I suppose), and requires me to use Lagrange multipliers to optimize the distance. The final result should be expressed in terms of <span class="math-container">$w$</span>, <span class="math-container">$b$</span> and <span class="math-container">$X$</span>. Without specific values I am really lost as to how to approach this kind of problem. Any suggestions?</p>
| Noah Schweber | 28,111 | <p>Note that in Asaf's answer, the elementary embedding $j: M\rightarrow N$ does not "live" (that is, is not definable in) $M$.</p>
<p>By contrast, if we have an elementary embedding $j: M\rightarrow N$ which is definable in $M$ (from parameters in $M$), then $crit(j)$ <strong>is</strong> inaccessible, in fact measurable, in $M$: letting $\kappa=crit(j)$, we form the ultrafilter $$\mathcal{U}=\{S\subseteq \kappa: \kappa\in j(S)\},$$ and note that this ultrafilter is $<\kappa$-closed (since $j(\bigcup_{\alpha<\mu}A_\alpha)=\bigcup_{\alpha<\mu} j(A_\alpha)$ for $\mu<\kappa$, since $\kappa=crit(j)$ so $j(\mu)=\mu$); since $\mathcal{U}$ exists in $M$ (since $j$ is definable), we have $\kappa$ is measurable in $M$.</p>
<p>Even if $j$ is not definable in $M$, however, we may form the ultrafilter $\mathcal{U}$ described above; it might not live in $M$, is all that changes. In inner model theory, we are often interested in embeddings which, though maybe not definable over their domain model, nonetheless yield reasonably tame ("amenable") ultrafilters.</p>
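<p>Returning to the original point-to-plane question: the Lagrange-multiplier computation yields the closed form <span class="math-container">$d=\lvert w^TX+b\rvert/\lVert w\rVert$</span>, attained at the orthogonal projection of <span class="math-container">$X$</span> onto the plane. A numeric sanity check (the particular vectors below are arbitrary):</p>

```python
# Distance from a point X to a hyperplane w.x + b = 0 in R^D,
# via the closed form d = |w.X + b| / ||w||, checked against the
# explicit minimizer (the orthogonal projection of X onto the plane).
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

w = [1.0, 2.0, 2.0]
b = -3.0
X = [4.0, 1.0, 0.0]

norm_w = math.sqrt(dot(w, w))
d = abs(dot(w, X) + b) / norm_w            # closed form

lam = (dot(w, X) + b) / dot(w, w)          # multiplier from the stationarity condition
x_star = [xi - lam * wi for xi, wi in zip(X, w)]

print(d)                                   # 1.0 for these numbers
print(dot(w, x_star) + b)                  # ~0: x_star lies on the plane
print(math.dist(X, x_star))                # ~d: the projection realizes the distance
```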
|
432,208 | <p>I want to grasp the moving frames method but I find some obstacles. I don't know if this question is suitable for MO, if it is not the case please let me know and I will move it.<br />
I am aware there are other related questions here like <a href="https://mathoverflow.net/questions/337294/moving-frames-method-for-non-matrix-lie-group">this one</a> or <a href="https://mathoverflow.net/questions/336149/about-the-cartans-moving-frame-method?rq=1">this one</a>, but they don't answer my doubts.</p>
<p>Given a Lie group <span class="math-container">$G$</span> and a homogeneous space <span class="math-container">$X\equiv G/H$</span>, the goal of the moving frames method is to study submanifolds <span class="math-container">$M$</span> of <span class="math-container">$X$</span>. In particular we want to know if two given submanifolds <span class="math-container">$M$</span> and <span class="math-container">$\tilde{M}$</span> are "congruent", in the sense that there is a "movement" <span class="math-container">$g\in G$</span> such that <span class="math-container">$g(M)=\tilde{M}$</span>.</p>
<p>I know I have a <span class="math-container">$\mathfrak{g}$</span>-valued differential form on <span class="math-container">$G$</span> which is left invariant, the Maurer-Cartan form <span class="math-container">$\theta$</span>. The left-invariance of <span class="math-container">$\theta$</span> allows us to show (see <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwjxmv-g3tf6AhUCcxoKHQC6DLcQFnoECAsQAQ&url=https%3A%2F%2Fprojecteuclid.org%2Fjournals%2Fduke-mathematical-journal%2Fvolume-41%2Fissue-4%2FOn-Cartans-method-of-Lie-groups-and-moving-frames-as%2F10.1215%2FS0012-7094-74-04180-5.full&usg=AOvVaw3bbvtFV6C_skrb20r37Tg9" rel="nofollow noreferrer">griffiths1974cartan</a>, lemma (1.3)) that given two maps <span class="math-container">$f,\tilde{f}$</span> from, let's say, <span class="math-container">$\mathbb{R}^n$</span> to <span class="math-container">$G$</span> then <span class="math-container">$\tilde{f}(x)=g\cdot f(x)$</span> if and only if <span class="math-container">$\tilde{f}^*(\theta)=f^*(\theta)$</span>. This way <span class="math-container">$f^*(\theta)$</span> provide a set of invariants to characterize submanifolds of <span class="math-container">$G$</span> (not of <span class="math-container">$X$</span>!).</p>
<p>Suppose our submanifolds <span class="math-container">$M$</span> and <span class="math-container">$\tilde{M}$</span> are parametrized respectively by maps <span class="math-container">$\alpha:\mathbb{R}^n \to X$</span> and <span class="math-container">$\tilde{\alpha}:\mathbb{R}^n \to X$</span>. If we are in a particular case, for example <span class="math-container">$G=E(2)$</span>, <span class="math-container">$X=\mathbb{E}^2$</span> and <span class="math-container">$M,\tilde{M}$</span> curves, we have a "canonical" way to lift <span class="math-container">$\alpha$</span> and <span class="math-container">$\tilde{\alpha}$</span> to <span class="math-container">$f_{\alpha}, f_{\tilde{\alpha}}:\mathbb{R} \to G$</span> (the unitary tangent vector and its orthogonal, together with the curve point itself). This way, if
<span class="math-container">$$
f_{\alpha}^*(\theta)={f}_{\tilde{\alpha}}^*(\theta)
$$</span>
we conclude that the "curves of frames" <span class="math-container">$f_{\alpha}$</span> and <span class="math-container">$f_{\tilde{\alpha}}$</span> are congruent, and therefore <span class="math-container">$\alpha$</span> and <span class="math-container">$\tilde{\alpha}$</span> are congruent.</p>
<p>The key fact here is, I think, that the assignment
<span class="math-container">$$
\alpha \mapsto f_{\alpha}
$$</span>
is <span class="math-container">$G$</span>-invariant, in the sense that <span class="math-container">$\tilde{\alpha}=g\alpha$</span> if and only if <span class="math-container">$f_{\tilde{\alpha}}=g f_{\alpha}$</span>. Otherwise we could have congruent curves in <span class="math-container">$E(2)$</span> which couldn't be detected by the invariants (because of the "bad assignment" of frames to the curves).</p>
<p><strong>Question 1</strong><br />
Back to the general case: can we always find such a "canonical lift"? Is there a method to find it? Or is the moving frames method restricted to a bunch of particular cases?<br />
<span class="math-container">$\blacksquare$</span></p>
<p><strong>Question 2</strong><br />
Can you provide at least a brief list of examples of these assignments? For example:</p>
<ul>
<li>Curves in <span class="math-container">$\mathbb{R}^3$</span> with Euclidean movements: the Frenet frame.</li>
<li>Surfaces in <span class="math-container">$\mathbb{R}^3$</span> with Euclidean movements: a frame made with the point, the normal vector and two ortogonal vectors aligned with the principal directions of the surface (or is not this necessary?).</li>
<li>...</li>
</ul>
<p><span class="math-container">$\blacksquare$</span></p>
<p>I have read the article of Griffiths, the corresponding chapter of <em>Cartan for Beginners</em> and <em>From Frenet to Cartan</em>, but I am still blocked with these doubts.</p>
| Robert Bryant | 13,972 | <p>I think you might want to read a couple of articles on the moving frame that carefully discuss this issue (and show that it is more subtle than most people realize).</p>
<p>The first is a paper by Mark Green, <em>The moving frame, differential invariants and rigidity theorems for curves in homogeneous spaces</em> (Duke Math. J. 45 (1978), no. 4, 735–779). Mark discusses the 1-dimensional case in great detail and shows, by example, that, in fact, there are cases in which a `canonical' lifting by Cartan's method does not exist, and one must extend the method by introducing a family of liftings in order to find all of the invariants.</p>
<p>The second is a set of lecture notes by Gary Jensen,
<em>Higher order contact of submanifolds of homogeneous spaces</em>,
(Lecture Notes in Mathematics, Vol. 610. Springer-Verlag, Berlin-New York, 1977.) He discusses the foundations of the subject in great detail and gives a number of illuminating examples.</p>
<p>Of course, there are lots of more recent papers and articles, but I think that these two illustrate a lot of the issues and point out some of the pitfalls that aren't apparent at first sight.</p>
|
238,128 | <p>Let $G$ be an abelian group. <br/>
Show that $\{x\in{G} | |x| < \infty\}$ is a subgroup of $G$. Give an example of a non-abelian group where this fails to be a subgroup.</p>
| André Nicolas | 6,312 | <p><strong>Hint:</strong> Here is one of many ways of constructing an example. Let $G$ be the group of permutations of the integers. Let $f$ be the permutation that takes any integer $x$ to $-x$, and $g$ the permutation that takes any integer $x$ to $1-x$. </p>
<p>Both $f$ and $g$ have order $2$. Now consider the permutation $gf$, meaning $f$, followed by $g$. Show that $gf$ does not have finite order. </p>
<p>If you prefer matrices, let $A=\begin{pmatrix}-1 &0\\0&1\end{pmatrix}$ and $B=\begin{pmatrix}-1 &1\\0&1\end{pmatrix}$.</p>
<p>Then $A^2$ and $B^2$ are both the identity matrix. But $BA$ has infinite order. To see this, check what $BA$ does to the vector $\begin{pmatrix}n \\1\end{pmatrix}$ </p>
<p><strong>Remark:</strong> For the Abelian case, we need to show closure under product and inverse. For product, note that if $a^m=e$ and $b^n=e$, then $(ab)^{mn}=a^{mn}b^{mn}=e$. Inverse is easier, since in any group, Abelian or not, the inverse of $a$ has the same order as $a$. </p>
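<p>Both counterexamples are easy to verify by direct computation (the matrix helpers below are ad hoc):</p>

```python
# Check: A and B have order 2, but BA = [[1,1],[0,1]] has infinite order
# (its powers are [[1,k],[0,1]]), so in a non-abelian group the elements
# of finite order need not be closed under multiplication.
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
A = [[-1, 0], [0, 1]]
B = [[-1, 1], [0, 1]]

BA = matmul(B, A)
powers = [I]
for _ in range(10):
    powers.append(matmul(powers[-1], BA))

print(matmul(A, A) == I, matmul(B, B) == I)   # True True
print(BA, powers[10])                          # [[1, 1], [0, 1]] [[1, 10], [0, 1]]

# Same phenomenon with the permutations f(x) = -x and g(x) = 1 - x:
f = lambda x: -x
g = lambda x: 1 - x
gf = lambda x: g(f(x))                         # x -> 1 + x, a translation
print(gf(0), gf(gf(0)), f(f(0)) == 0)          # 1 2 True
```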
|
4,275,780 | <blockquote>
<p>If <span class="math-container">$0<a<b$</span> and <span class="math-container">$0<c<d$</span> then <span class="math-container">$\frac{c+a}{d+a} <\frac{c+b}{d+b}.$</span></p>
</blockquote>
<p>I get to <span class="math-container">$$d+a<d+b \Longrightarrow \frac{1}{d+b} < \frac{1}{d+a}$$</span> but that inequality seems opposite of what I am trying to prove. Any advice is appreciated.</p>
| Aryaman Maithani | 427,810 | <p><span class="math-container">$\renewcommand{\iff}{\Leftrightarrow}$</span>Here's a dumb way which requires no clever insight:<br />
Since <span class="math-container">$a, b, c, d > 0$</span>, we see that
<span class="math-container">$$\frac{c+a}{d+a} < \frac{c+b}{d+b} \iff(c + a)(d + b) < (d + a)(c + b).$$</span>
Thus, it suffices to prove that the right side is true. Multiplying the brackets and cancelling the common terms, this becomes equivalent to showing that
<span class="math-container">$$cb + ad < db + ac.$$</span>
Rearranging shows that the above is equivalent to
<span class="math-container">$$ad - ac < db - cb.$$</span>
Note
<span class="math-container">$$ad - ac < db - cb \iff a(d - c) < b(d - c) \iff 0 < (b - a)(d - c).$$</span>
The rightmost statement is true because <span class="math-container">$b - a > 0$</span> and <span class="math-container">$d - c > 0$</span>.</p>
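<p>A quick randomized sanity check of the claim (integers keep the comparison exact via cross-multiplication; this is evidence, not a proof):</p>

```python
# Sample random integers 0 < a < b and 0 < c < d and confirm
# (c+a)/(d+a) < (c+b)/(d+b) every time.  Comparing by cross-multiplication
# (valid since all denominators are positive) avoids floating point.
import random

random.seed(0)
ok = all(
    (c + a) * (d + b) < (c + b) * (d + a)
    for _ in range(10_000)
    for a, b in [sorted(random.sample(range(1, 1000), 2))]
    for c, d in [sorted(random.sample(range(1, 1000), 2))]
)
print(ok)   # True
```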
|
4,275,780 | <blockquote>
<p>If <span class="math-container">$0<a<b$</span> and <span class="math-container">$0<c<d$</span> then <span class="math-container">$\frac{c+a}{d+a} <\frac{c+b}{d+b}.$</span></p>
</blockquote>
<p>I get to <span class="math-container">$$d+a<d+b \Longrightarrow \frac{1}{d+b} < \frac{1}{d+a}$$</span> but that inequality seems opposite of what I am trying to prove. Any advice is appreciated.</p>
| zwim | 399,263 | <p>There is this important general inequality (working for positive denominators), i.e. the compound fraction comes between the bounds:</p>
<p><span class="math-container">$$\frac AB<\frac CD\implies \frac AB<\frac{A+C}{B+D}<\frac CD$$</span></p>
<p>See here for a proof <a href="https://math.stackexchange.com/a/3373549/399263">https://math.stackexchange.com/a/3373549/399263</a>, but it is basically using the fact that <span class="math-container">$AD-BC<0$</span> once reduced to common denominator.</p>
<p>You can use it here by following this chain:</p>
<p><span class="math-container">$$\frac cd<1=\frac aa\implies \frac {c+a}{d+a}<1=\frac{b-a}{b-a}\implies \frac{c+a}{d+a}<\frac{c+a+b-a}{d+a+b-a}=\frac{c+b}{d+b}$$</span></p>
|
4,275,780 | <blockquote>
<p>If <span class="math-container">$0<a<b$</span> and <span class="math-container">$0<c<d$</span> then <span class="math-container">$\frac{c+a}{d+a} <\frac{c+b}{d+b}.$</span></p>
</blockquote>
<p>I get to <span class="math-container">$$d+a<d+b \Longrightarrow \frac{1}{d+b} < \frac{1}{d+a}$$</span> but that inequality seems opposite of what I am trying to prove. Any advice is appreciated.</p>
| David | 979,748 | <p>By the rearrangement inequality, <span class="math-container">$bc+ad\lt ac+bd.$</span> Thus, <span class="math-container">$(c+a)(d+b)\lt(c+b)(d+a).$</span></p>
|
344,166 | <p>I was for some time curious about William Feller's probability tract (first volume); luckily, I could lay my hands on it recently and I find it of superb quality. It provides a complete exposition of elementary (no measures) probability. The book is rigorous "hard" math but doesn't shy away from giving a solid intuitive feeling. The author discusses a topic, mentions an example, and proposes different scenarios that give back more math. His first chapter on the "nature of probability" is essential. It gives a good feeling for what statistical probability means, and why/how it was defined as it is. </p>
<p>Question: I'm looking for other math books on fundamental mathematics (algebra, real analysis, etc.) - essential mathematics that is not very advanced (algebraic geometry, for example) - of high quality, like Feller's probability text. Feller might not be used anymore, but it's full of exercises that would make it a working textbook written by a master.</p>
<p>To be specific and not too general: I'm looking for inspiring Feller-style books in real analysis and abstract algebra. Rudin is good, but it's not a master book. I don't know much about the available abstract algebra textbooks/master expositions. </p>
| Community | -1 | <p>Here is a link to a free down-load of virtually verbatim lecture notes for a real analysis course taught by Fields Medal winner Vaughan Jones. They were my first introduction to real math - beautiful presentation, lots of motivation:</p>
<p><a href="https://sites.google.com/site/math104sp2011/lecture-notes" rel="nofollow">https://sites.google.com/site/math104sp2011/lecture-notes</a></p>
<p>Another nice book on real analysis is Pugh's "Real Mathematical Analysis." An unsung hero, again lots of motivation, excellent pictures for a real feel, and plenty of examples.</p>
<p>In addition to Paul Garrett's excellent notes, here is a link to great material by Keith Conrad:</p>
<p><a href="http://www.math.uconn.edu/~kconrad/blurbs/" rel="nofollow">http://www.math.uconn.edu/~kconrad/blurbs/</a></p>
<p>What I especially like here aside from the great presentation is the constant pointing out of anticipated misconceptions and many, many examples looking at the topic from many sides. They are relatively short and cover a wide range of primarily algebraic topics at many levels.</p>
|
344,166 | <p>I was for some time curious about William Feller's probability tract (first volume); luckily, I could lay my hands on it recently and I find it of superb quality. It provides a complete exposition of elementary (no measures) probability. The book is rigorous "hard" math but doesn't shy away from giving a solid intuitive feeling. The author discusses a topic, mentions an example, and proposes different scenarios that give back more math. His first chapter on the "nature of probability" is essential. It gives a good feeling for what statistical probability means, and why/how it was defined as it is. </p>
<p>Question: I'm looking for other math books on fundamental mathematics (algebra, real analysis, etc.) - essential mathematics that is not very advanced (algebraic geometry, for example) - of high quality, like Feller's probability text. Feller might not be used anymore, but it's full of exercises that would make it a working textbook written by a master.</p>
<p>To be specific and not too general: I'm looking for inspiring Feller-style books in real analysis and abstract algebra. Rudin is good, but it's not a master book. I don't know much about the available abstract algebra textbooks/master expositions. </p>
| goblin GONE | 42,339 | <p>Goldrei's <a href="http://rads.stackoverflow.com/amzn/click/0412606100" rel="nofollow">Classic Set Theory For Guided Independent Study</a>. I don't necessarily think he's the greatest expositor, but his educational philosophy is spot on. For instance, he starts with the real number system and asks: how do we know this system exists? One possible answer is: because the set of all Dedekind cuts of rational numbers can be made into a real number system in a natural way. Okay, but how do we know a rational number system exists? Easy: we can build rational numbers as equivalence classes of pairs of integers. But wait! Perhaps there is no integer number system. But that can't be, because we can build integers out of naturals. Okay, but maybe there doesn't exist a natural number system.</p>
<p>At this point, the reader has an epiphany. The existence of all the major number systems can be demonstrated using set theory alone - that is, if we can build a natural number system. So if we can build such a system, then WOW! Set theory is POWERFUL. It's only at this late stage in the game that Goldrei actually starts talking about the ZFC axioms. And it works great!</p>
<p>Too many math books start with axioms, or esoteric definitions, without giving the reader any intuition about why they should care. Goldrei's book is a breath of fresh air in this regard. Truly, a remarkable book.</p>
|
1,653,106 | <p>I was following a calculus tutorial that factored the equation $x^4-16$ into $(x^2 +4) (x+2)(x-2)$.</p>
<p>Why is the factorization of $x^4-16 = (x^2 + 4)(x+2)(x-2)$ rather than $(x^2 - 4)(x^2 +4)$? </p>
| Jay Lee | 166,706 | <p>Both products equal $x^4-16$, but $(x^2-4)$ factors further as $(x+2)(x-2)$, while $(x^2+4)$ is irreducible over the reals; so $(x^2+4)(x+2)(x-2)$ is the complete real factorization, which is why it is preferred over $(x^2-4)(x^2+4)$.</p>
|
1,653,106 | <p>I was following a calculus tutorial that factored the equation $x^4-16$ into $(x^2 +4) (x+2)(x-2)$.</p>
<p>Why is the factorization of $x^4-16 = (x^2 + 4)(x+2)(x-2)$ rather than $(x^2 - 4)(x^2 +4)$? </p>
| hkBst | 261,514 | <p>Actually you could factor all the way to complex roots:</p>
<p>$$x^4-16 = (x^2-4)(x^2+4) = (x-2)(x+2)(x-2i)(x+2i).$$</p>
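<p>Both factorizations multiply back out to <span class="math-container">$x^4-16$</span>, as a direct coefficient computation confirms (polynomials are represented as ascending coefficient lists; the helper is ad hoc):</p>

```python
# Multiply out (x^2+4)(x+2)(x-2) and (x^2-4)(x^2+4) and compare
# with x^4 - 16; polynomials as ascending coefficient lists.
def mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

target = [-16, 0, 0, 0, 1]                    # x^4 - 16
full = mul(mul([4, 0, 1], [2, 1]), [-2, 1])   # (x^2+4)(x+2)(x-2)
partial = mul([-4, 0, 1], [4, 0, 1])          # (x^2-4)(x^2+4)
print(full == target, partial == target)      # True True

# And over the complexes, 2i and -2i are roots as well:
p = lambda x: x**4 - 16
print(p(2j), p(-2j))                          # both zero (as complex numbers)
```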
|
1,630,655 | <p>Not sure what to do / how to start this (showing that $504$ divides $n^9-n^3$)... The prime factorization of 504 is: $2 \cdot2 \cdot 2 \cdot 3 \cdot 3 \cdot 7$</p>
| lhf | 589 | <p>Write $504 = 2^3\cdot3^2\cdot7$ and
$n^9-n^3=n^3(n^6-1)=n^3(n^2-1)(n^4+n^2+1)$.</p>
<p>Then:</p>
<ul>
<li><p>For $m=2^3$, we have $n^2-1 \equiv 0$ for $n$ odd and $n^3 \equiv 0$ for $n$ even.</p></li>
<li><p>For $m=3^2$, we have $\phi(m)=6$ and so $n^6-1 \equiv 0$ when $n$ is not a multiple of $3$ and $n^3 \equiv 0$ when $n$ is a multiple of $3$.</p></li>
<li><p>For $m=7$, we have $\phi(m)=6$ and so $n^6-1 \equiv 0$ when $n$ is not a multiple of $7$ and $n^3 \equiv 0$ when $n$ is a multiple of $7$.</p></li>
</ul>
<p>The last two come from Fermat's theorem.</p>
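<p>The conclusion is easy to confirm by brute force over a range of integers:</p>

```python
# Verify 504 | n^9 - n^3 for many integers (including negatives).
assert 504 == 2**3 * 3**2 * 7
bad = [n for n in range(-500, 501) if (n**9 - n**3) % 504 != 0]
print(bad)   # []: no counterexamples in this range
```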
|
322,858 | <p>Let <span class="math-container">$G$</span> be a split semisimple real Lie group in characteristic zero, and let <span class="math-container">$B=TU$</span> be a Borel subgroup with unipotent radical <span class="math-container">$U$</span> and Levi <span class="math-container">$T$</span>. Fix an ordering on the roots <span class="math-container">$\Phi^+$</span> of <span class="math-container">$T$</span> in <span class="math-container">$U$</span>, and for each root subgroup <span class="math-container">$U_{\alpha}$</span> of <span class="math-container">$U$</span>, let <span class="math-container">$u_{\alpha}: \mathbb R \rightarrow U_{\alpha}$</span> be an isomorphism.</p>
<p>For all <span class="math-container">$\alpha, \beta \in \Phi^+$</span>, there exist unique real numbers <span class="math-container">$C_{\alpha,\beta,i,j}$</span> (depending on the <span class="math-container">$u_{\alpha}$</span> and the ordering) such that for all <span class="math-container">$x, y \in \mathbb R$</span>,</p>
<p><span class="math-container">$$u_{\alpha}(x) u_{\beta}(y) u_{\alpha}(x)^{-1} = u_{\beta}(y) \prod\limits_{\substack{i,j>0\\ i\alpha + j \beta \in \Phi^+}} u_{i\alpha+j\beta}(C_{\alpha,\beta,i,j}x^iy^j)$$</span></p>
<p>I want to work out some examples with unipotent groups of exceptional semisimple groups, and am looking for table of structure constants for the root system G2. Does anyone know a reference where an ordering on the roots is chosen and these constants are written down?</p>
| Jim Humphreys | 4,231 | <p>Probably the earliest reference is the 1956-58 Chevalley seminar, available online in typed format, which has been republished in 2005 as a typeset book edited by P. Cartier: see Chapter 21. (No special assumption about the characteristic of the field is needed.) My own later treatment of the classification of simple algebraic groups followed the same method in GTM 21 (<em>Linear Algebraic Groups</em>, Springer, 1975, 33.5). A similar approach was taken in SGA3, as indicated by Peter McNamara. (The later more elegant approach to the classification due to Takeuchi was worked out in Jantzen's book as well as Springer's textbook.) </p>
<p>Note too that the Lie algebra calculations were done in my earlier book GTM 9, first in the characteristic 0 setting: see 19.3. </p>
|
2,579,156 | <p>I found the solution of this series on Wolfram Alpha:
<a href="http://www.wolframalpha.com/input/?i=sum+1%2F(2k%2B1)%2F(2k%2B2)+from+1+to+n" rel="nofollow noreferrer">http://www.wolframalpha.com/input/?i=sum+1%2F(2k%2B1)%2F(2k%2B2)+from+1+to+n</a></p>
<p>$ \sum\limits_{k=1}^{n} \left(\frac{1}{2k+1} - \frac{1}{2k+2}\right) = \sum\limits_{k=1}^{n} \frac{1}{(2k+1)(2k+2)} = \frac{1}{2} \left(H_{n+\frac{1}{2}} - H_{n+1} -1 + \text{ln}(4)\right)$</p>
<p>Can someone tell me how to prove this in terms of harmonic numbers?</p>
| Dr. Wolfgang Hintze | 198,592 | <p>This is not an answer to the specific question but a long comment which gives a derivation of a much simpler formula for the sum in question. </p>
<p>Let</p>
<p>$$s = \sum _{k=1}^n \left(\frac{1}{2 k+1}-\frac{1}{2 k+2}\right)$$</p>
<p>Adding and subtracting a sum of even terms we get for $s$</p>
<p>$$\begin{aligned}
s &= \sum _{k=1}^n \left(\frac{1}{2 k}+\frac{1}{2 k+1}\right)-\sum _{k=1}^n \left(\frac{1}{2 k}+\frac{1}{2 k+2}\right)\\
&= \left(\frac{1}{2}+\frac{1}{3}+ \dots + \frac{1}{2n+1}\right)-\frac{1}{2}\left(1+\frac{1}{2}+\dots+\frac{1}{n}\right) -\frac{1}{2}\left(\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{n+1}\right)\\
&= (H_{2n+1}-1) -\frac{1}{2}H_n -\frac{1}{2}(H_{n+1}-1)\\
&= H_{2n+1} -\frac{1}{2}(H_n +H_{n+1})-\frac{1}{2}
\end{aligned}$$</p>
<p>Those who wish can simplify this further using the relation $H_{n+1}=H_n + \frac{1}{n+1}$.</p>
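<p>The identity is easy to sanity-check. Below is a small Python sketch (an addition for illustration, not part of the original answer) comparing the partial sum with the closed form using exact rational arithmetic:</p>

```python
from fractions import Fraction

def H(n):
    # n-th harmonic number H_n = 1 + 1/2 + ... + 1/n, computed exactly
    return sum(Fraction(1, k) for k in range(1, n + 1))

def s(n):
    # partial sum sum_{k=1}^n (1/(2k+1) - 1/(2k+2))
    return sum(Fraction(1, 2 * k + 1) - Fraction(1, 2 * k + 2) for k in range(1, n + 1))

for n in (1, 2, 10, 50):
    closed = H(2 * n + 1) - Fraction(1, 2) * (H(n) + H(n + 1)) - Fraction(1, 2)
    assert s(n) == closed  # exact rational comparison, no rounding
```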
|
113,854 | <p>I'm importing a *.pdb file containing a single protein. <em>Mathematica</em> automatically produces a plot of the protein.</p>
<p>I want to specify the color of each residue independently, in this plot. Is this possible?</p>
<p>Additionally, I would like to change the type of plot to "cartoon". How can I do this?</p>
| J. M.'s persistent exhaustion | 50 | <p>Here is a very hack-ish way of coloring each residue sequentially. The trick is in constructing a <code>Blend[]</code> function where the color corresponding to each residue appears twice in the first argument. A slight shift is apparently needed to match up the colors, even if only approximately.</p>
<pre><code>n = StringLength[First[Import["ExampleData/1PPT.pdb", "Sequence"]]];
clist = PadRight[{}, n, Values[Association[ColorData[97, "ColorRules"]]]];
Import["ExampleData/1PPT.pdb",
ColorFunction -> (Directive[GrayLevel[1/10], Specularity[1/5, 10],
Glow[Blend[Riffle[clist, clist], # - 1/(2 n)]]] &),
Lighting -> "Classic"]
</code></pre>
<p><img src="https://i.stack.imgur.com/zfNJo.png" alt="candied peptide"></p>
|
2,807,356 | <blockquote>
<p><strong>If $z_1,z_2$ are two complex numbers such that $\vert
z_1+z_2\vert=\vert z_1\vert+\vert z_2\vert$,then it is necessary that</strong> </p>
<p>$1)$$z_1=z_2$</p>
<p>$2)$$z_2=0$</p>
<p>$3)$$z_1=\lambda z_2$for some real number $\lambda.$</p>
<p>$4)$$z_1z_2=0$ or $z_1=\lambda z_2$ for some real number $\lambda.$</p>
</blockquote>
<p>From Boolean logic we know that if $p\implies q$ then $q$ is necessary for $p$.</p>
<p>For $1)$taking $z_1=1$ and $z_2=2$ then $\vert 1+2 \vert=\vert 1\vert+\vert 2\vert$ but $1\neq 2$.So,$(1)$ is false.</p>
<p>For $2)$taking $z_1=1$ and $z_2=2$ then $\vert 1+2 \vert=\vert 1\vert+\vert 2\vert$ but $2\neq 0$.So,$(2)$ is false.</p>
<p>I'm not getting how to prove or disprove options $(3)$ and $(4)?$</p>
<p>Need help</p>
| Peter Szilas | 408,605 | <p>A bit of geometry:</p>
<p>Consider $z_1, z_2$ vectors in the complex plane.</p>
<p>Vector addition: </p>
<p>$\vec {OA} =z_1$; $\vec {AB} =z_2$; and $\vec {OB} = z_1+z_2.$</p>
<p>If $OAB$ is a genuine (non-degenerate) triangle, we must have:</p>
<p>$|z_1|+|z_2|> |z_1+z_2|$, i.e. the sum of the lengths of two sides is strictly greater than the third side.</p>
<p>Since we are given equality instead, the triangle must degenerate: $z_1$ and $z_2$ must be collinear, i.e. $z_1=\lambda z_2$ for some real $\lambda$ (assuming $z_2\neq 0$).</p>
<p>1) $\lambda =0$, trivial.</p>
<p>2)$\lambda <0:$</p>
<p>$z_1=\lambda z_2.$</p>
<p>$|1+\lambda|\, |z_2| \ne (1+|\lambda|)|z_2|$, since $|1+\lambda| < 1+|\lambda|$ for $\lambda<0$.</p>
<p>Ruled out.</p>
<p>3) $\lambda >0$ , ok.</p>
<p>Hence a necessary condition : </p>
<p>$z_1= \lambda z _2$ with $\lambda \ge 0$.</p>
<p>Now check your options.</p>
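<p>For illustration, here is a small Python sketch of the necessary condition $z_1=\lambda z_2$ with $\lambda \ge 0$ (my addition; the example values are chosen arbitrarily):</p>

```python
def equality_holds(z1, z2, tol=1e-12):
    # test |z1 + z2| == |z1| + |z2| up to floating-point tolerance
    return abs(abs(z1 + z2) - (abs(z1) + abs(z2))) < tol

z2 = 3 - 4j
assert equality_holds(2.5 * z2, z2)       # lambda = 2.5 >= 0: equality holds
assert equality_holds(0 * z2, z2)         # lambda = 0: trivial case
assert not equality_holds(-1.0 * z2, z2)  # lambda < 0: ruled out
assert not equality_holds(1j * z2, z2)    # non-real ratio: ruled out
```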
|
2,476,717 | <p>Let $f: \mathbb R^n \rightarrow \mathbb R^n$ with arbitrary norm $\|\cdot\|$. It exists a $x_0 \in \mathbb R^n$ and a number $r \gt 0 $ with </p>
<p>$(1)$ on $B_r(x_0)=$ {$x\in \mathbb R^n: \|x-x_0\| \leq r$} $f$ is a contraction with Lipschitz constant L</p>
<p>$(2)$ it applies $\|f(x_0)-x_0\| \le (1-L)r$</p>
<p>The sequence $(x_k)_{k\in\mathbb N}$ is defined by $x_{k+1}=f(x_k).$</p>
<p>How do I show that $x_k \in B_r(x_0) \forall k \in \mathbb N$?</p>
<p>How do I show that $f$ has a unique Fixed point $x_f$ with $x_f = \lim_{k \rightarrow\infty} x_k$?</p>
<p>I know this has something to do with Banach but I am totally clueless on how to prove this. Any help is welcome. Thanks. </p>
| Surb | 154,545 | <p>For $x\in B_r(x_0)$, we have
$$\|f(x)-x_0\|=\|f(x)-f(x_0)+f(x_0)-x_0\|\leq \|f(x)-f(x_0)\|+\|f(x_0)-x_0\|\\
\leq L\|x-x_0\|+(1-L)r \leq Lr+(1-L)r=r$$
thus $f(x)\in B_r(x_0)$.</p>
<p>To show that $\lim_{k\to\infty}x_k=x_f$ and $f$ has a unique fixed point in $B_r(x_0)$, you can indeed use the Banach fixed point theorem on $f|_{B_r(x_0)}$: the closed ball $B_r(x_0)$ is complete (a closed subset of $\mathbb R^n$), $f$ maps it into itself by the computation above, and $L\in (0,1)$ since $f$ is a contraction.</p>
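<p>A small numerical illustration of both claims (a sketch I added; the concrete map $f(x)=\tfrac12 x+(0.1,0.1)$ is my choice, not from the question). Here $L=\tfrac12$, $x_0=(0,0)$ and $r=\tfrac12$ satisfy condition $(2)$, the iterates never leave $B_r(x_0)$, and they converge to the fixed point $(0.2,0.2)$:</p>

```python
import math

L, r = 0.5, 0.5
x0 = (0.0, 0.0)

def f(x):
    # a contraction on R^2 with Lipschitz constant L = 1/2 (example map)
    return (0.5 * x[0] + 0.1, 0.5 * x[1] + 0.1)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# condition (2): ||f(x0) - x0|| <= (1 - L) r
assert dist(f(x0), x0) <= (1 - L) * r

x = x0
for _ in range(60):
    x = f(x)
    assert dist(x, x0) <= r + 1e-12   # the iterates never leave B_r(x0)

assert dist(x, (0.2, 0.2)) < 1e-9     # convergence to the fixed point
```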
|
4,605,888 | <p>Let's say I have a uniformly distributed random number sequence whose values are in the range [1, <strong>m</strong>]. Each value has a chance of <strong>p</strong> = 1/<strong>m</strong> appearing. Take a sample of size <strong>s</strong> from that sequence. For a given value in the sample, let <strong>n</strong> be the number of duplicates in the rest of the sample. Let <strong>c</strong> be the count of numbers in the sample that have the same <strong>n</strong>.</p>
<p>For example:</p>
<ul>
<li>if <strong>n</strong> = 0, then <strong>c</strong> is the number of unique values in the sample.</li>
<li>if <strong>n</strong> = 1, then <strong>c</strong> is the number of values that have one same value in the rest of the sample.</li>
</ul>
<p>So for a given <strong>n</strong>, how to calculate the expected <strong>c</strong>?</p>
<p>Below are some simulated results.</p>
<p>for <strong>m</strong> = 65536, <strong>s</strong> = 1000</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>n</th>
<th>c</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>978</td>
</tr>
<tr>
<td>1</td>
<td>22</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>for <strong>m</strong> = 65536, <strong>s</strong> = 10,000</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>n</th>
<th>c</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>8598</td>
</tr>
<tr>
<td>1</td>
<td>1284</td>
</tr>
<tr>
<td>2</td>
<td>114</td>
</tr>
<tr>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>4</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>for <strong>m</strong> = 65536, <strong>s</strong> = 100,000</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>n</th>
<th>c</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>21806</td>
</tr>
<tr>
<td>1</td>
<td>33326</td>
</tr>
<tr>
<td>2</td>
<td>24915</td>
</tr>
<tr>
<td>3</td>
<td>13076</td>
</tr>
<tr>
<td>4</td>
<td>4865</td>
</tr>
<tr>
<td>5</td>
<td>1566</td>
</tr>
<tr>
<td>6</td>
<td>350</td>
</tr>
<tr>
<td>7</td>
<td>96</td>
</tr>
<tr>
<td>8</td>
<td>0</td>
</tr>
</tbody>
</table>
</div> | user51547 | 51,547 | <p>Let <span class="math-container">$X_i$</span> denote the number of values in the sequence that take on value <span class="math-container">$i$</span>, where <span class="math-container">$1 \leq i \leq m$</span>, and let <span class="math-container">$Z_n$</span> be the number of values in the sequence that are duplicates of exactly <span class="math-container">$n$</span> other values.</p>
<p>What we're interested in computing is <span class="math-container">$\mathbb{E} Z_n$</span>, where
<span class="math-container">$$ Z_n = \sum_{i=1}^m (n+1)\mathbb{1}\{X_i=n+1\}.$$</span></p>
<p>By linearity of expectation, this is
<span class="math-container">$$\mathbb{E}Z_n = \sum_{i=1}^m (n+1)P(X_i=n+1)=m(n+1)P(X_1=n+1).$$</span></p>
<p>Since <span class="math-container">$X_i\sim \text{Binom}(s,1/m)$</span>, we have</p>
<p><span class="math-container">$$P(X_1=n+1)=\binom{s}{n+1} \left(\frac{1}{m}\right)^{n+1}\left(1-\frac{1}{m}\right)^{s-n-1},$$</span></p>
<p>therefore
<span class="math-container">$$\mathbb{E}Z_n = (n+1)\binom{s}{n+1}\left(\frac{1}{m}\right)^{n}\left(1-\frac{1}{m}\right)^{s-n-1}.$$</span></p>
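<p>As a quick sanity check (a sketch added here, not part of the original answer), the formula can be evaluated with exact binomial coefficients; for the question's <span class="math-container">$m = 65536$</span>, <span class="math-container">$s = 100{,}000$</span> case it reproduces the expected-value column of the table below:</p>

```python
from math import comb

def expected_c(n, m, s):
    # E[Z_n] = (n+1) * C(s, n+1) * (1/m)^n * (1 - 1/m)^(s-n-1)
    return (n + 1) * comb(s, n + 1) * (1 / m) ** n * (1 - 1 / m) ** (s - n - 1)

m, s = 65536, 100_000
for n in range(9):
    print(n, round(expected_c(n, m, s), 1))
# n = 0 gives about 21743.1 and n = 1 about 33177.5, matching the table
```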
<p>For m = 65536, s = 100,000:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>n</th>
<th>c</th>
<th><span class="math-container">$\mathbb{E}Z_n$</span></th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>21806</td>
<td>21743.1</td>
</tr>
<tr>
<td>1</td>
<td>33326</td>
<td>33177.5</td>
</tr>
<tr>
<td>2</td>
<td>24915</td>
<td>25312.3</td>
</tr>
<tr>
<td>3</td>
<td>13076</td>
<td>12874.3</td>
</tr>
<tr>
<td>4</td>
<td>4865</td>
<td>4911.1</td>
</tr>
<tr>
<td>5</td>
<td>1566</td>
<td>1498.7</td>
</tr>
<tr>
<td>6</td>
<td>350</td>
<td>381.1</td>
</tr>
<tr>
<td>7</td>
<td>96</td>
<td>83.1</td>
</tr>
<tr>
<td>8</td>
<td>0</td>
<td>15.8</td>
</tr>
</tbody>
</table>
</div> |
1,494,022 | <p>Find a closed expression in terms of $n$.
$$\sum_{k=1}^n(k!)(k^2+k+1); n=1,2,3...$$<br>
Any idea how to do this? I'm new to this, so a little explanation would be helpful. Thanks in advance!</p>
| SchrodingersCat | 278,967 | <p>$$\sum_{k=1}^n(k!)(k^2+k+1)=\sum_{k=1}^n(k!)(k^2+2k+1-k)$$
$$=\sum_{k=1}^n(k!)[(k+1)^2-k]$$
$$=\sum_{k=1}^n(k+1)!(k+1)-k(k)!$$
$$=\sum_{k=1}^n(k+1)!(k+2-1)-(k+1-1)(k)!$$
$$=\sum_{k=1}^n[(k+2)!-(k+1)!-(k+1)!+(k)!]$$
$$=\sum_{k=1}^n[(k+2)!-(k+1)!]-\sum_{k=1}^n[(k+1)!-(k)!]$$
$$=(n+2)!-2-[(n+1)!-1]$$
$$=(n+2)!-(n+1)!-1$$</p>
<p>This is the required sum.</p>
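<p>A quick numerical check of the telescoped closed form (an added sketch, not part of the original answer):</p>

```python
from math import factorial

def direct(n):
    # the sum as originally stated
    return sum(factorial(k) * (k * k + k + 1) for k in range(1, n + 1))

def closed(n):
    # the telescoped closed form (n+2)! - (n+1)! - 1
    return factorial(n + 2) - factorial(n + 1) - 1

for n in range(1, 15):
    assert direct(n) == closed(n)
print(closed(5))  # 7! - 6! - 1 = 5040 - 720 - 1 = 4319
```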
|
713,104 | <p>Are there any combinatorial games whose order (in the usual addition of combinatorial games) is finite but neither $1$ nor $2$?</p>
<p>Finding examples of games of order $2$ is easy (for example any impartial game), but I have not been able to think up an example with finite order where the order did not come from some sort of symmetry (for example even though Domineering is not impartial, it is easy to see that any square board will give a game of order $1$ or $2$), and such a symmetry only gives $1$ or $2$ as the possible orders.</p>
| Mark Bennet | 2,906 | <p>There is a general formula $$\sum_{k=1}^n\frac {k(k+1)\dots(k+r-1)}{r!}=\frac {n(n+1)\dots(n+r)}{(r+1)!}$$</p>
<p>Which can be proved by induction - base case $n=1$, both sides of the equation are equal to $1$.</p>
<p>Then $$\frac {n\left[(n+1)\dots(n+r)\right]}{(r+1)!}+\frac {\left[(n+1)\dots(n+r)\right]}{r!}=\frac {\left[(n+1)\dots(n+r)\right](n+r+1)}{(r+1)!}$$</p>
<p>Another way of writing this, which invites a combinatorial proof is:</p>
<p>$$\sum_{k=1}^n\binom {k+r-1}r=\binom {n+r}{r+1}$$</p>
<p>The sum of integers is the case $r=1$ and the sum of triangular numbers is the case $r=2$.</p>
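<p>The binomial form above is the hockey-stick identity, and it is easy to verify numerically for small parameters (a sketch added for illustration, not part of the original answer):</p>

```python
from math import comb

def lhs(n, r):
    # sum_{k=1}^{n} C(k + r - 1, r)
    return sum(comb(k + r - 1, r) for k in range(1, n + 1))

for n in range(1, 12):
    for r in range(1, 6):
        assert lhs(n, r) == comb(n + r, r + 1)
```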
|
1,234,726 | <p>How many lattice paths are there from $(0, 0)$ to $(10, 10)$ that do not pass through the point $(5, 5)$ but do pass through $(3, 3)$?</p>
<p>What I have so far:</p>
<p>The number of lattice paths from $(0,0)$ to $(n,k)$ is equal to the binomial coefficient $\binom{n+k}n$ (according to Wikipedia). So the number of lattice paths from $(0, 0)$ to $(10, 10)$ is $\binom{20}{10}$, and the number of lattice paths through $(5, 5)$ is $\binom{10}{5}$, and the number of lattice paths that pass through $(3, 3)$ is $\binom{6}{3}$. What next?</p>
| Hemant Rupani | 230,966 | <p>From (0,0) to (3,3) there are $\binom{6}{3}=20$ paths.</p>
<p>From (3,3) to (10,10) there are $\binom{14}{7}=3432$ paths.</p>
<p>From (3,3) to (5,5) there are $\binom{4}{2}=6$ paths, and from (5,5) to (10,10) there are $\binom{10}{5}=252$ paths, so $6\cdot 252$ paths run from (3,3) through (5,5) to (10,10). These are the paths we must exclude.</p>
<p>Hence the total number of admissible lattice paths is $20\cdot 3432-20\cdot 6\cdot 252=68640-30240=38400$.</p>
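<p>The count can also be confirmed by brute force, enumerating all $\binom{20}{10}=184756$ monotone paths (a verification sketch I added, not part of the original answer):</p>

```python
from itertools import combinations

def passes_through(rights, a):
    # a monotone path visits (a, a) iff exactly a of its first 2a steps go right;
    # `rights` is the set of step indices (0..19) at which the path moves right
    return sum(1 for i in rights if i < 2 * a) == a

count = 0
for rights in combinations(range(20), 10):  # choose which 10 of 20 steps go right
    if passes_through(rights, 3) and not passes_through(rights, 5):
        count += 1
print(count)  # 38400
```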
|
4,234,095 | <p>I need to show that <span class="math-container">$[\mathbb{Q}(2^{1/4},2^{1/6}):\mathbb{Q}] = 12$</span>, i.e. that this field extension has degree <span class="math-container">$12$</span>. It is possible to show that the degree is at least <span class="math-container">$12$</span> because it is divisible by <span class="math-container">$6$</span> and <span class="math-container">$4$</span> by finding the minimal polynomial of the simple field extensions of <span class="math-container">$2^{1/4}$</span> and <span class="math-container">$2^{1/6}$</span>, but I am not sure how to bound the inequality in the other direction.</p>
<p>Another way to approach this problem might just be to explicitly find the basis, but I think there should be a way to find a bound on the inequality.</p>
| Alex Wertheim | 73,817 | <p>You've already done the hard work. Let <span class="math-container">$K = \mathbb{Q}(2^{1/4}, 2^{1/6})$</span>, let <span class="math-container">$\alpha = 2^{1/12}$</span>, and let <span class="math-container">$L = \mathbb{Q}(\alpha)$</span>. Clearly, <span class="math-container">$K \subset L$</span>, since <span class="math-container">$2^{1/4} = \alpha^{3}$</span> and <span class="math-container">$2^{1/6} = \alpha^{2}$</span>. You've already shown that <span class="math-container">$[K:\mathbb{Q}] \geqslant 12$</span>, and it's not hard to show (e.g., via Eisenstein) that <span class="math-container">$[L:\mathbb{Q}] = 12$</span> by computing the minimal polynomial of <span class="math-container">$\alpha$</span> over <span class="math-container">$\mathbb{Q}$</span>. Since <span class="math-container">$K$</span> is a <span class="math-container">$\mathbb{Q}$</span>-subspace of <span class="math-container">$L$</span>, we conclude that <span class="math-container">$[K:\mathbb{Q}] \leqslant 12$</span>, which gives the desired equality.</p>
|
376,861 | <p>A knot can be represented with a <a href="http://katlas.math.toronto.edu/wiki/MorseLink_Presentations" rel="nofollow noreferrer">Morse link presentation</a>, as a combination of cups, caps and crossings (which is not uniquely determined by the knot, of course):</p>
<p><a href="https://i.stack.imgur.com/47mxu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/47mxu.png" alt="enter image description here" /></a></p>
<p>Two Morse link presentations of the same knot can be related by a sequence of the following moves:</p>
<ul>
<li>Swapping the order of two independent operations</li>
</ul>
<p><a href="https://i.stack.imgur.com/OZ2dq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OZ2dq.png" alt="enter image description here" /></a></p>
<ul>
<li>Pulling a cap, a cup or a crossing through two crossings (generalization of Reidemeister type 3)
<a href="https://i.stack.imgur.com/barWn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/barWn.png" alt="enter image description here" /></a></li>
<li>Cancelling out two successive crossings of alternating types (Reidemeister type 2)</li>
</ul>
<p><a href="https://i.stack.imgur.com/6BpPZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6BpPZ.png" alt="enter image description here" /></a></p>
<ul>
<li>Twisting a cap or a cup (Reidemeister type 1)</li>
</ul>
<p><a href="https://i.stack.imgur.com/sy37Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sy37Z.png" alt="enter image description here" /></a></p>
<ul>
<li>Introducing or eliminating a zig-zag</li>
</ul>
<p><a href="https://i.stack.imgur.com/hyBtB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hyBtB.png" alt="enter image description here" /></a></p>
<p>Given a Morse presentation of the unknot, it seems to me that one can always simplify it to the trivial unknot diagram without ever introducing a zig-zag (so, never introducing new strands). So the last move could be used in the simplifying direction only. Is this fact known and if so, how does the proof go? If not, I would be interested in a counter-example.</p>
| M. Ozawa | 46,903 | <p>I think this was Question 3.5 in "Thin position in the theory of classical knots" by Martin Scharlemann, and a counterexample was given by Zupan.</p>
<p><em>Zupan, Alexander</em>, <a href="http://dx.doi.org/10.2140/agt.2011.11.1097" rel="nofollow noreferrer"><strong>Unexpected local minima in the width complexes for knots</strong></a>, Algebr. Geom. Topol. 11, No. 2, 1097-1105 (2011). <a href="https://zbmath.org/?q=an:1227.57015" rel="nofollow noreferrer">ZBL1227.57015</a>.</p>
|
4,521,774 | <p>In many posts on MSE, it is discussed that Cauchy sequences can't be defined in General topological spaces and in a typical topology book it is discussed what converging sequences are, but, what I don't understand is, why, on an abstract level, does convergence generalize even without a metric while cauchy-ness doesn't?</p>
<p><strong>My issue with the posts <a href="https://math.stackexchange.com/questions/3839837/cauchy-sequence-is-not-a-topological-notion">(eg)</a> is that they show that Cauchyness is not a topological property using specific sequences. But what I wish to know is the abstract idea behind it.</strong></p>
| jacktrnr | 901,848 | <p>Here's my attempt to explain.</p>
<p>Consider this definition of convergence that does not require a notion of metric:</p>
<p><em>A sequence <span class="math-container">$\{x_n\}_{n\in\mathbb N}$</span> converges to <span class="math-container">$x$</span> if and only if for each open neighborhood <span class="math-container">$U$</span> of <span class="math-container">$x$</span> there is an <span class="math-container">$m_U \in \mathbb{N}$</span> such that <span class="math-container">$x_n \in U$</span> whenever <span class="math-container">$n \geq m_U$</span>.</em></p>
<p>Now, try to imagine an analogous definition for Cauchy sequences. In the convergence definition, we don't need to shift <span class="math-container">$U$</span> in anyway. In the Cauchy case, we would have to say something like:</p>
<p><em>A sequence <span class="math-container">$\{x_n\}_{n\in\mathbb N}$</span> is Cauchy if and only if for each open neighborhood <span class="math-container">$U$</span> of the origin, there is an <span class="math-container">$m_U \in \mathbb{N}$</span> such that <span class="math-container">$x_j,x_k \in (U + x_{m_U})$</span> whenever <span class="math-container">$j,k \geq m_U$</span>.</em></p>
<p>Hence, the difference is the Cauchy definition has this implicit "shifting" of open sets, and such a mechanism requires an "origin" and a metric.</p>
|
2,209,034 | <blockquote>
<p>The question asks to compute:
<span class="math-container">$$\sum_{k=0}^{n-1}\dfrac{\alpha_k}{2-\alpha_k}$$</span>
where <span class="math-container">$\alpha_0, \alpha_1, \ldots, \alpha_{n-1}$</span> are the <span class="math-container">$n$</span>-th roots of unity.</p>
</blockquote>
<p>I started off by simplifying and got it as:</p>
<p><span class="math-container">$$=-n+2\left(\sum_{k=0}^{n-1} \dfrac{1}{2-\alpha_k}\right)$$</span></p>
<p>Now I was stuck. I can rationalise the denominator, but each <span class="math-container">$\alpha_k$</span> has both a real and an imaginary part, so the sum isn't simplified by rationalising term by term. What else can be done?</p>
| Jaideep Khare | 421,580 | <p>Since <span class="math-container">$\alpha_0,\alpha_1,\alpha_2, \dots , \alpha_{n-1}$</span> are roots of the equation</p>
<p><span class="math-container">$$x^n-1=0 ~~~~~~~~~~~~~ \cdots ~(1)$$</span> </p>
<p>You can apply <a href="https://en.wikipedia.org/wiki/Polynomial_transformation" rel="nofollow noreferrer">Transformation of Roots</a> to find a equation whose roots are<span class="math-container">$$\frac{1}{2-\alpha_0} , \frac{1}{2-\alpha_1},\frac{1}{2-\alpha_2}, \dots , \frac{1}{2-\alpha_{n-1}}$$</span></p>
<p>Let <span class="math-container">$P(y)$</span> represent the polynomial whose roots are <span class="math-container">$\frac{1}{2-\alpha_k}$</span></p>
<p><span class="math-container">$$y=\frac{1}{2-\alpha_k}=\frac{1}{2-x} \implies x=\frac{2y-1}{y}$$</span></p>
<p>Put in <span class="math-container">$(1)$</span></p>
<p><span class="math-container">$$\Bigg(\frac{2y-1}{y}\Bigg)^n-1=0 \implies (2y-1)^{n}-y^{n}=0$$</span></p>
<p>Use <a href="https://en.wikipedia.org/wiki/Binomial_theorem" rel="nofollow noreferrer">Binomial Theorem</a> to find coefficient of <span class="math-container">$y^n$</span> and <span class="math-container">$y^{n-1}$</span>.You will get sum of the roots using <a href="https://en.wikipedia.org/wiki/Vieta%27s_formulas" rel="nofollow noreferrer">Vieta's Formulas</a>.</p>
<p>Hope it helps!</p>
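<p>Carrying the recipe through (my own continuation; the closed form below is derived from the answer's polynomial, not stated in it): by Vieta's formulas the sum of the roots of <span class="math-container">$(2y-1)^{n}-y^{n}$</span> is <span class="math-container">$\frac{n\,2^{n-1}}{2^n-1}$</span>, so the original sum equals <span class="math-container">$-n+\frac{n\,2^n}{2^n-1}=\frac{n}{2^n-1}$</span>. A quick numerical check:</p>

```python
import cmath

def direct(n):
    # evaluate sum alpha_k / (2 - alpha_k) over the n-th roots of unity
    roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    return sum(a / (2 - a) for a in roots)

for n in range(1, 12):
    closed = n / (2**n - 1)
    assert abs(direct(n) - closed) < 1e-9
```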
|
20,784 | <p>I have a set of 3-space coordinates for the atoms of a molecule (I could also transform them into spheres with radii corresponding to the atoms they represent). I would like to place this molecule into the tightest possible 3D rectangular bounding box and determine the coordinates for the box vertices. Is there a graphics processing routine in Mathematica that will do this automatically for me, or do I need to implement an algorithm from the literature?</p>
<p>To be more specific - I would like to obtain the minimum bounding box for my molecule allowing rotation etc. I was able to find a solution for the special case of the molecule I'm interested in using a sort of lame trick with a rotational symmetry. I'm asking this question because I'd like to have a general-case solution.</p>
<p>One way to proceed would be for me to perform a set of random rotations, calculate the volume of the bounding box Mathematica generates, and keep going until I obtain a sufficiently tight fit. Is there a way to get the coordinates for the Graphics3D bounding box vertices?</p>
| Daniel Lichtblau | 51 | <p>Here is an idea for an approximate method. Center the data, compute the singular value decomposition, and use the right factor rotation matrix to align the singular values axes with the coordinate axes (I'm not saying that very well but it gives the rotation matrix we want). Now take min and max values along coordinate axes. This gives the dimensions of what could be a fairly tight bounding box. Also the mean and rotation matrix provide a way to go back to the original coordinates via geometric transformation.</p>
<p>Here is an example. It may work better than your actual ones though, because it uses "nice" random values.</p>
<pre><code>SeedRandom[11111];
dmat = DiagonalMatrix[{4, 7, 13}];
rmat = RandomReal[{-1, 1}, {3, 3}];
mat = dmat + rmat + Transpose[rmat]
(* Out[115]= {{3.05494151932,
1.22535650049, -0.103665150251}, {1.22535650049, 5.67393092991,
1.38468670897}, {-0.103665150251, 1.38468670897, 12.2854081966}} *)
</code></pre>
<p>One easily checks that the eigenvalues are all positive, hence this can be used as correlation matrix for generating random normals in 3D.</p>
<pre><code>vals = RandomVariate[MultinormalDistribution[{0, 0, 0}, mat], 10^4];
</code></pre>
<p>We'll have a look.</p>
<pre><code>ListPointPlot3D[vals]
</code></pre>
<p><img src="https://i.stack.imgur.com/Co8hq.png" alt="enter image description here"></p>
<p>Now find the "best" rotation matrix via SVD. I remark that this should be done differently if one is on a 32 bit or otherwise memory-limited machine, since it will take substantial memory if done as below.</p>
<pre><code>means = Mean[vals];
newvals = Map[# - means &, vals];
{uu, ww, vv} = SingularValueDecomposition[newvals];
</code></pre>
<p>Now we can rotate these centered points.</p>
<pre><code>rotatedvals = newvals.vv;
ListPointPlot3D[rotatedvals]
</code></pre>
<p><img src="https://i.stack.imgur.com/lsYqB.png" alt="enter image description here"></p>
<p>Finally we find max and min values for a plausible bounding box.</p>
<pre><code>mins = Map[Min[rotatedvals[[All, #]]] &, Range[3]]
maxes = Map[Max[rotatedvals[[All, #]]] &, Range[3]]
(* Out[125]= {-12.6346155672, -9.21373247283, -6.11807437861}
Out[126]= {13.2151960928, 9.81786657885, 5.87492886205} *)
</code></pre>
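<p>For readers working outside Mathematica, here is a minimal NumPy transcription of the same recipe (a sketch I added; the variable names and the random test cloud are mine): center the points, rotate by the right singular factor, then take per-axis extrema:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
vals = rng.multivariate_normal(np.zeros(3), np.diag([4.0, 7.0, 13.0]), size=10_000)

means = vals.mean(axis=0)
centered = vals - means
# the rows of vt are the principal directions; centered @ vt.T aligns them
# with the coordinate axes (the analogue of newvals.vv above)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
rotated = centered @ vt.T

mins, maxes = rotated.min(axis=0), rotated.max(axis=0)
print(maxes - mins)  # edge lengths of the (approximate) oriented bounding box
```

<p>As in the Mathematica version, <code>means</code> and <code>vt</code> give the geometric transformation back to the original coordinates.</p>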
|
20,784 | <p>I have a set of 3-space coordinates for the atoms of a molecule (I could also transform them into spheres with radii corresponding to the atoms they represent). I would like to place this molecule into the tightest possible 3D rectangular bounding box and determine the coordinates for the box vertices. Is there a graphics processing routine in Mathematica that will do this automatically for me, or do I need to implement an algorithm from the literature?</p>
<p>To be more specific - I would like to obtain the minimum bounding box for my molecule allowing rotation etc. I was able to find a solution for the special case of the molecule I'm interested in using a sort of lame trick with a rotational symmetry. I'm asking this question because I'd like to have a general-case solution.</p>
<p>One way to proceed would be for me to perform a set of random rotations, calculate the volume of the bounding box Mathematica generates, and keep going until I obtain a sufficiently tight fit. Is there a way to get the coordinates for the Graphics3D bounding box vertices?</p>
| Carl Woll | 45,431 | <p>In versions M10.4+ you can use the function <a href="http://reference.wolfram.com/language/ref/BoundingRegion" rel="nofollow noreferrer"><code>BoundingRegion</code></a>. Taking @DanielLichtblau's example:</p>
<pre><code>SeedRandom[11111];
dmat=DiagonalMatrix[{4,7,13}];
rmat=RandomReal[{-1,1},{3,3}];
mat=dmat+rmat+Transpose[rmat]
vals=RandomVariate[MultinormalDistribution[{0,0,0},mat],10^4];
</code></pre>
<p>we find:</p>
<pre><code>bound = BoundingRegion[vals, "MinOrientedCuboid"]
</code></pre>
<blockquote>
<p>Parallelepiped[{9.85391, 3.54298, -12.6616}, {{-10.3723, -13.6539, -0.378494}, {-0.0336508, -0.679714, 25.4423}, {-9.41092, 7.14414, 0.178415}}]</p>
</blockquote>
<p>Comparing volumes:</p>
<pre><code>Volume[bound]
Volume[Cuboid[mins, maxes]] (* from @DanielLichtblau's answer *)
</code></pre>
<blockquote>
<p>5158.23</p>
<p>5900.12</p>
</blockquote>
<p>And a visualization:</p>
<pre><code>Graphics3D[{FaceForm[Opacity[.2]], bound, Red, Point[vals]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/GOLMY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GOLMY.png" alt="enter image description here"></a></p>
|
3,491,978 | <blockquote>
<p>Let (X,d) be a compact metric space. For every open cover, show there exists ε > 0 such that for every x ∈ X, B(x,ε) is contained in some member of the cover.</p>
</blockquote>
<p>My attempt:</p>
<p>(X,d) is compact. Therefore there exists a finite subcover of X.</p>
<p>Any element x in X must lie in some member of the cover, say x ∈ Ui. Otherwise they would not constitute a cover.</p>
<p>Since Ui is open, by definition every point is interior, so there exists ε > 0 such that B(x,ε) is contained in Ui. </p>
<p>I haven't used the fact the subcover is finite, or the fact X is a metric space rather than just topological space, so I feel my reasoning is flawed. </p>
<p>Any help is greatly appreciated!</p>
| Henno Brandsma | 4,280 | <p>Let <span class="math-container">$\mathcal{A}$</span> be an open cover of <span class="math-container">$X$</span>. For each <span class="math-container">$x \in X$</span> we can find <span class="math-container">$A_x \in \mathcal{A}$</span> such that <span class="math-container">$x \in A_x$</span>, and as <span class="math-container">$A_x$</span> is open there is some <span class="math-container">$r(x)>0$</span> such that <span class="math-container">$$B(x,2r(x)) \subseteq A_x\tag{1}$$</span></p>
<p>(Note that we use <span class="math-container">$2r(x)$</span> to leave ourselves some room: openness guarantees some <span class="math-container">$s>0$</span>, and we just use half of it.)</p>
<p>Then <span class="math-container">$\{B(x, r(x)): x \in X\}$</span> is an open cover for <span class="math-container">$X$</span>, so by compactness we have a finite subcover <span class="math-container">$\{B(x_1, r(x_1)), \ldots, B(x_n, r(x_n))\}$</span>.</p>
<p>Now define <span class="math-container">$\delta=\min_{i=1}^n r(x_i) > 0$</span> (a minimum of finitely many positive reals) and I claim this <span class="math-container">$\delta$</span> is as required: let <span class="math-container">$x \in X$</span>, and we have to show <span class="math-container">$B(x, \delta)$</span> is contained in some member of <span class="math-container">$\mathcal{A}$</span>.</p>
<p>Firstly, note that <span class="math-container">$x \in B(x_i, r(x_i))$</span> for some <span class="math-container">$i \in \{1,\ldots, n\}$</span> as the subcover is a cover. So <span class="math-container">$d(x,x_i) < r(x_i)$</span> and if now <span class="math-container">$y \in B(x, r(x_i))$</span> is arbitrary, <span class="math-container">$d(y,x) < r(x_i)$</span> and the triangle inequality then tells us that</p>
<p><span class="math-container">$$d(y,x_i) \le d(y,x)+d(x,x_i) < r(x_i) + r(x_i)=2r(x_i)\text{, so } y \in B(x_i, 2r(x_i))$$</span></p>
<p>And as <span class="math-container">$y$</span> was arbitrary, <span class="math-container">$B(x, r(x_i)) \subseteq B(x_i, 2r(x_i))$</span>. Now it's obvious that <span class="math-container">$\delta \le r(x_i)$</span> and so</p>
<p><span class="math-container">$$B(x, \delta) \subseteq B(x, r(x_i)) \subseteq B(x_i, 2r(x_i)) \subseteq A_{x_i} \in \mathcal{A}$$</span> and this finishes the proof.</p>
|
3,491,978 | <blockquote>
<p>Let (X,d) be a compact metric space. For every open cover, show there exists ε > 0 such that for every x ∈ X, B(x,ε) is contained in some member of the cover.</p>
</blockquote>
<p>My attempt:</p>
<p>(X,d) is compact. Therefore there exists a finite subcover of X.</p>
<p>Any element x in X must lie in some member of the cover, say x ∈ Ui. Otherwise they would not constitute a cover.</p>
<p>Since Ui is open, by definition every point is interior, so there exists ε > 0 such that B(x,ε) is contained in Ui. </p>
<p>I haven't used the fact the subcover is finite, or the fact X is a metric space rather than just topological space, so I feel my reasoning is flawed. </p>
<p>Any help is greatly appreciated!</p>
| Milo Brandt | 174,927 | <p>You have proved that for all <span class="math-container">$x$</span>, there exists a <span class="math-container">$\varepsilon$</span> so that <span class="math-container">$B(x,\varepsilon)$</span> is contained in some member of the cover - which does not rely on the fact that the chosen subcover is finite or on compactness. This is a much weaker statement than the one you wished to prove. Indeed, you can convince yourself that a proof has to be more intricate by considering that, even if your cover were finite, that does not mean it has the desired property without compactness; for instance, in the space <span class="math-container">$[0,1/2)\cup (1/2,1]$</span>, the cover consisting of the two components does not satisfy the desired property. Since your proof apparently would apply to any finite cover, you can see that something must be wrong with it; more generally, we can see that taking a finite subcover of the cover we were given is probably not going to help us.</p>
<p>A very fast way to do this, however, is to start with your open cover <span class="math-container">$\mathscr U$</span>. For each <span class="math-container">$x\in X$</span>, you can define <span class="math-container">$$f(x)=\max \{\varepsilon > 0 : B(x,\varepsilon) \subseteq U\text{ for some }U\in \mathscr U\}$$</span>
You are trying to show that <span class="math-container">$f(x)\geq \varepsilon$</span> for some <span class="math-container">$\varepsilon$</span> for all <span class="math-container">$x$</span>. It's worth taking a moment to consider why this can be assumed to be a maximum rather than supremum and to convince yourself that this function is continuous. You can also convince yourself that this function is positive everywhere, since every <span class="math-container">$x$</span> has some <span class="math-container">$B(x,\varepsilon)$</span> contained in some <span class="math-container">$U$</span>.</p>
<p>A standard way to finish this proof is to apply the extreme value theorem to <span class="math-container">$f$</span> and find a minimum - but there's an easier way that uses the definition of compactness directly. In particular, let <span class="math-container">$A_{n}$</span> be the set of <span class="math-container">$x\in X$</span> that satisfy <span class="math-container">$f(x)>1/n$</span>. Clearly, every <span class="math-container">$x$</span> is in some <span class="math-container">$A_n$</span> - so the set <span class="math-container">$\{A_1,A_2,\ldots\}$</span> is a cover of <span class="math-container">$X$</span>. Note also that <span class="math-container">$A_1\subseteq A_2\subseteq A_3 \subseteq \ldots$</span>, so the union of any finite subset of this cover is an element of the cover. By compactness, this has a finite subcover - and by the prior note, this means that <span class="math-container">$A_n=X$</span> for some <span class="math-container">$n$</span>. Then you have the desired statement for <span class="math-container">$\varepsilon = 1/n$</span>.</p>
<p>Generally, this is a good approach to have to such questions: if you're trying to prove a quantity can be taken to be uniform across the space (i.e. to turn "<span class="math-container">$\forall\exists$</span>" into <span class="math-container">$"\exists\forall"$</span>), you can often proceed by looking at sets where some fixed constant suffices, and seeing if you can use compactness (or whatever other property you have) to show that one of those sets is the whole space. Note also that here we are taking a finite subcover of a cover <em>we</em> constructed - it would not help us to take a finite subcover of <span class="math-container">$\mathscr U$</span>, even though it seems like a good idea.</p>
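As a concrete illustration of the function <span class="math-container">$f$</span> (with a made-up one-dimensional example — the space, cover, and grid below are my choices, not from the answer): on <span class="math-container">$X=[0,1]$</span>, for an open interval <span class="math-container">$(a,b)$</span> the largest <span class="math-container">$\varepsilon$</span> with <span class="math-container">$B(x,\varepsilon)\subseteq(a,b)$</span> is <span class="math-container">$\min(x-a,\,b-x)$</span>, so <span class="math-container">$f$</span> is a maximum of such terms over the cover.

```python
# Sketch of the Lebesgue-number function f on X = [0, 1] with the
# hypothetical open cover {(-0.1, 0.6), (0.4, 1.1)}.
cover = [(-0.1, 0.6), (0.4, 1.1)]

def f(x):
    # Radius of the largest ball around x contained in some cover element.
    return max(min(x - a, b - x) for a, b in cover)

# Sample f on a fine grid of X: its minimum stays bounded away from 0,
# which is exactly the uniform epsilon the compactness argument produces.
grid = [i / 1000 for i in range(1001)]
lebesgue_number = min(f(x) for x in grid)
print(lebesgue_number)  # 0.1 (up to floating point), attained at x = 0.5
```

Here <span class="math-container">$A_n = \{x : f(x) > 1/n\}$</span> equals all of <span class="math-container">$[0,1]$</span> once <span class="math-container">$1/n < 0.1$</span>.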
|
2,379,955 | <p>Assume I want to minimise this:
$$ \min_{x,y} \|A - x y^T\|_F^2$$
then I am finding best rank-1 approximation of A in the squared-error sense and this can be done via the SVD, selecting $x$ and $y$ as left and right singular vectors corresponding to the largest singular value of A.</p>
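That rank-1 step can be sketched numerically (a NumPy illustration of my own, not part of the question):

```python
import numpy as np

# Best rank-1 Frobenius approximation of A via the SVD: take the left and
# right singular vectors for the largest singular value.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))

U, s, Vt = np.linalg.svd(A)
x = s[0] * U[:, 0]   # top left singular vector, scaled by sigma_1
y = Vt[0, :]         # top right singular vector

err = np.linalg.norm(A - np.outer(x, y), 'fro') ** 2
# By the Eckart-Young theorem, the error equals the sum of the squared
# remaining singular values.
print(np.isclose(err, np.sum(s[1:] ** 2)))  # True
```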
<p>Now instead, is it possible to solve:
$$ \min_{x} \|A - x b^T\|_F^2$$
with $b$ fixed?</p>
<p>If this is possible, is there also a way to solve:
$$ \min_{x} \|A - x b^T\|_F^2 + \|C - x d^T\|_F^2$$
where I think of $x$ as the best "average" solution between the two parts of the cost function.</p>
<p>I am of course longing for a closed-form solution but a nice iterative optimisation approach to a solution could also be useful.</p>
| Royi | 33 | <p>This is a Convex Optimization Problem and you can easily solve it using CVX:</p>
<pre><code>numRows = 5;
mA = randn([numRows, numRows]);
vB = randn([numRows, 1]);
cvx_begin('quiet')
cvx_precision('best');
variable vX(numRows)
minimize( norm(mA - vB * vX.', 'fro') )
cvx_end
disp([' ']);
disp(['CVX Solution Summary']);
disp(['The CVX Solver Status - ', cvx_status]);
disp(['The Optimal Value Is Given By - ', num2str(cvx_optval)]);
disp(['The Optimal Argument Is Given By - [ ', num2str(vX.'), ' ]']);
disp([' ']);
</code></pre>
<p>If you formulate your expression using the Trace Operator you'll also be able to solve it easily in other simple methods.</p>
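There is in fact a closed form here (my own derivation, not stated in the answer above): with $b$ fixed, $\|A - xb^T\|_F^2$ is an ordinary least-squares problem in $x$ whose gradient $-2(A - xb^T)b$ vanishes at $x = Ab/\|b\|^2$, and the two-term objective is minimised by $x = (Ab + Cd)/(\|b\|^2 + \|d\|^2)$.

```python
import numpy as np

# Closed-form minimisers for min_x ||A - x b^T||_F^2 and for the two-term
# cost ||A - x b^T||_F^2 + ||C - x d^T||_F^2, verified via stationarity.
rng = np.random.default_rng(1)
A, C = rng.standard_normal((5, 4)), rng.standard_normal((5, 4))
b, d = rng.standard_normal(4), rng.standard_normal(4)

x1 = A @ b / (b @ b)                      # minimiser of ||A - x b^T||_F^2
x2 = (A @ b + C @ d) / (b @ b + d @ d)    # minimiser of the combined cost

# The gradients vanish at the claimed minimisers:
print(np.allclose((A - np.outer(x1, b)) @ b, 0))                              # True
print(np.allclose((A - np.outer(x2, b)) @ b + (C - np.outer(x2, d)) @ d, 0))  # True
```

Since both objectives are strictly convex quadratics in $x$, the stationary point is the unique global minimiser.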
|
3,555,084 | <blockquote>
<p>Let
<span class="math-container">$$f(z) = e^z (1+\cos\sqrt{z} ) $$</span>
<span class="math-container">$\Omega=\{z\in\Bbb C: |z|\gt r\}$</span>, <span class="math-container">$r\gt 0$</span>. What is <span class="math-container">$f(\Omega)$</span>?</p>
<p>where <span class="math-container">$\sqrt{z}=\exp{(\text{Log }z/2)}, \text{Log }z=\log|z|+i\arg z,\arg z\in(-\pi,\pi]$</span></p>
</blockquote>
<p><span class="math-container">$\infty$</span> is an essential singularity, so by Picard's great theorem <span class="math-container">$\Bbb C\setminus f(\Omega) $</span> contains at most one point. Is <span class="math-container">$f(\Omega)$</span> all of <span class="math-container">$\Bbb C$</span>? </p>
<p>When <span class="math-container">$ z\in\Bbb R $</span>, <span class="math-container">$f(z)\geq0$</span>. By the <a href="https://en.wikipedia.org/wiki/Schwarz_reflection_principle" rel="nofollow noreferrer">Schwarz reflection principle</a> and Picard's great theorem, every <span class="math-container">$x+iy$</span> with <span class="math-container">$y\ne0$</span> belongs to <span class="math-container">$f(\Omega)$</span>. How can one show that all negative real numbers also belong to <span class="math-container">$f(\Omega)$</span>? Thanks a lot!</p>
<p><strong>Is there a simpler, more elementary proof</strong> than Conard's?</p>
| lab bhattacharjee | 33,337 | <p><span class="math-container">$$x^2(x^2+6)=(x^2+3)^2-3^2$$</span></p>
<p>WLOG <span class="math-container">$x^2+3=3\csc t,t\to0^+$</span></p>
<p><span class="math-container">$$\lim_{t\to0^+}3(\csc t-1)(3\csc t-3\cot t)$$</span></p>
<p><span class="math-container">$$=\lim\dfrac{9(1-\sin t)(1-\cos t)}{\sin^2t}$$</span></p>
<p><span class="math-container">$$=9\lim\dfrac{1-\sin t}{1+\cos t}=\dfrac92$$</span> using <span class="math-container">$\dfrac{1-\cos t}{\sin^2t}=\dfrac1{1+\cos t}$</span>, which is valid since <span class="math-container">$1-\cos t\ne0$</span> for <span class="math-container">$t\ne0$</span> near <span class="math-container">$0$</span>.</p>
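A quick numerical sanity check of this limit (pure Python, my addition): the expression should approach <span class="math-container">$9\cdot\frac{1-0}{1+1}=\frac92$</span> as <span class="math-container">$t\to0^+$</span>.

```python
import math

# Evaluate 9(1 - sin t)(1 - cos t)/sin^2 t for small positive t;
# the values should settle near 9/2 = 4.5.
def g(t):
    return 9 * (1 - math.sin(t)) * (1 - math.cos(t)) / math.sin(t) ** 2

for t in (1e-2, 1e-3, 1e-4):
    print(g(t))  # tends to 4.5
```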
|
408,128 | <p>Suppose a vector $y$ and a <em>symmetric</em> matrix $M$ are given.</p>
<p>\begin{equation}
\forall x; \quad x^Ty=0 \implies x^TMx \ge 0
\end{equation}</p>
<p>Prove that $M$ has at most one negative eigenvalue.</p>
| user1551 | 1,551 | <p><strong>Hint.</strong> As $M$ is real symmetric, it has a orthonormal eigenbasis. Suppose $(\lambda_1,v_1),(\lambda_2,v_2)$ are two eigenpairs of $M$ with $\lambda_1,\lambda_2<0$ and $v_1\perp v_2$. Use a dimension argument to show that $\mathrm{span}\{y\}^\perp\cap\mathrm{span}\{v_1,v_2\}\neq0$. That is, there exists a nonzero vector $w\in\mathrm{span}\{v_1,v_2\}$ such that $w\perp y$. Why is this a contradiction?</p>
|